Danilo Bzdok, RWTH Aachen University Hospital

Data-analysis regimes for neuroscientific conclusions with large data and without p-values

Neuroscience datasets are steadily growing in resolution, sample size, number of modalities, and complexity of meta-information. This opens the brain-imaging field to a more data-driven machine-learning regime (e.g., mini-batch optimization, structured sparsity, deep learning), even while analysis methods from classical statistics remain dominant (e.g., ANOVA, Pearson correlation, Student's t-test).
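
As a minimal sketch of this contrast, the following Python snippet places a per-feature null-hypothesis test next to a sparsity-penalized classifier judged by out-of-sample accuracy rather than a p-value. The synthetic data, variable names, and parameter settings are illustrative assumptions, not material from the talk itself.

    # Contrast the classical and the learning regime on synthetic data;
    # the simulated "brain features" are purely illustrative.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 100))   # 200 subjects, 100 brain features
    y = rng.integers(0, 2, 200)           # two hypothetical groups
    X[y == 1, :5] += 0.5                  # signal confined to 5 features

    # Classical regime: per-feature null-hypothesis test yielding a p-value
    t, p = ttest_ind(X[y == 0, 0], X[y == 1, 0])
    print(f"t-test on feature 0: t={t:.2f}, p={p:.4f}")

    # Learning regime: L1-penalized (structured-sparsity) classifier judged
    # by cross-validated prediction accuracy instead of a p-value
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"5-fold cross-validated accuracy: {acc:.2f}")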

Special interest may lie in the statistical learning of scalable generative models that explain brain function and structure. Instead of merely solving classification and regression tasks, such models could explicitly capture properties of the data-generating neurobiological mechanisms. Python-implemented examples of such supervised and semi-supervised machine-learning techniques will be demonstrated on the currently largest neuroimaging dataset, from the Human Connectome Project (HCP) data-collection initiative, as well as on the prospective epidemiological UK Biobank. The emphasis will be on the feasibility of deep neural networks and semi-supervised architectures in imaging neuroscience.
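
To make the semi-supervised idea concrete, here is a hedged sketch in which most subjects' labels are hidden (marked -1) yet still shape the fitted model via label spreading in scikit-learn. The data dimensions, the 10% labeling rate, and the model choice are illustrative assumptions, not the actual HCP or UK Biobank analyses.

    # Semi-supervised learning sketch: unlabeled subjects (label -1)
    # participate in fitting through the neighborhood graph.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 50))    # 500 subjects, 50 imaging features
    y_true = rng.integers(0, 2, 500)
    X[y_true == 1, :3] += 1.0             # class-dependent signal

    y = np.full(500, -1)                  # -1 marks unlabeled subjects
    labeled = rng.choice(500, size=50, replace=False)
    y[labeled] = y_true[labeled]          # only 10% of labels are observed

    model = LabelSpreading(kernel="knn", n_neighbors=7)
    model.fit(X, y)
    acc = (model.transduction_ == y_true).mean()
    print(f"accuracy over all subjects (incl. unlabeled): {acc:.2f}")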

The successful extraction of structured knowledge from current and future large-scale neuroimaging datasets will be a critical prerequisite for our understanding of human brain organization in healthy populations and psychiatric disease.

Organized by

Henrik Walter/Margret Franke
