About mlconfound

The lack of rigorous non-parametric statistical tests of confounder effects significantly hampers the development of robust, valid and generalizable predictive models in many fields of research. The package mlconfound implements the partial and full confounder tests [1], which build on a recent theoretical framework for conditional independence testing [2] and test the null hypotheses of an unbiased and a fully biased model, respectively. The proposed tests make no assumptions about the distribution of the predictive model's output, which is often non-normal. As shown by theory and simulations, the tests are statistically valid, robust, and display high statistical power.

Usage
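The partial confounder test asks whether the model's predictions carry confounder information beyond what the prediction target itself explains. The following NumPy-only sketch illustrates that idea with a naive (unconditional) permutation scheme; it is not the package's implementation, which relies on the conditional permutation approach of [2], and all variable names (y, yhat, c) are chosen here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: the confounder c drives both the target y
# and the model output yhat, i.e. the "model" is confounded.
n = 500
c = rng.normal(size=n)                                      # confounder
y = c + rng.normal(size=n)                                  # target
yhat = 0.5 * y + 0.5 * c + rng.normal(scale=0.5, size=n)    # biased predictions


def partial_corr(a, b, z):
    """Correlation of a and b after linearly regressing out z from both."""
    ra = a - np.polyval(np.polyfit(z, a, 1), z)
    rb = b - np.polyval(np.polyfit(z, b, 1), z)
    return np.corrcoef(ra, rb)[0, 1]


# Observed partial correlation between predictions and confounder, given y.
obs = partial_corr(yhat, c, y)

# Null distribution from permuted confounders (naive permutation;
# the actual test permutes c conditionally on y).
perm = np.array([partial_corr(yhat, rng.permutation(c), y) for _ in range(999)])
p = (1 + np.sum(np.abs(perm) >= np.abs(obs))) / (1 + len(perm))

print(f"partial r = {obs:.3f}, p = {p:.3f}")
```

With the strongly confounded data simulated above, the permutation p-value is small, flagging the model as biased. For real analyses, use the tests provided by the package itself, which are distribution-free and handle the conditional null correctly.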

References

[1] T. Spisak, "Statistical quantification of confounding bias in predictive modelling," arXiv:2111.00814, 2021.

[2] Berrett, T. B., Wang, Y., Barber, R. F., and Samworth, R. J. (2020). The conditional permutation test for independence while controlling for confounders. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(1):175–197.

Contact / bug report

Please report bugs and feature requests via GitHub Issues.

Author

Tamas Spisak

tamas.spisak@uk-essen.de

PNI-Lab, University Hospital Essen, Germany
