Efficient physical embedding of information processing networks
Brains are often compared to computers, but apart from the trivial fact that both process information using a complex physical pattern of connections, it has been unclear whether this is more than just a metaphor. We uncover novel quantitative organizational principles that underlie the network organization of the human brain, high-performance computer circuits, and the nervous system of the nematode C. elegans. Through a topological and physical analysis of connectivity data, we find that each of these systems is cost-efficiently embedded in physical space; they are organized as economical modular networks, paying a modest premium in wiring cost for the functional advantages of high-dimensional topology. We also show that the fractal properties of human brain network connectivity can be used to explain allometric scaling relations between grey and white matter volumes in the brains of a wide range of differently sized mammals - from mouse opossum to sea lion - further suggesting that these principles of nervous system design are highly conserved across species. This work suggests that market-driven human invention and natural selection have negotiated trade-offs between cost and complexity in the design of information-processing networks and have convergently come to similar conclusions.
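The cost-topology trade-off described above can be illustrated with a toy network model (a sketch using the Watts-Strogatz generator from networkx; the parameters are illustrative and unrelated to the connectivity data analysed in this work): rewiring a ring lattice with a few long-range shortcuts sharply reduces the characteristic path length while incurring only a modest premium in total wiring cost.

```python
import networkx as nx

def ring_wiring_cost(G, n):
    """Total wiring cost: each edge costs the ring distance it spans,
    treating the n nodes as evenly spaced points on a circle."""
    return sum(min(abs(u - v), n - abs(u - v)) for u, v in G.edges())

n, k = 100, 4
lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)            # pure ring lattice
rewired = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # a few long-range shortcuts

for name, G in [("lattice", lattice), ("small-world", rewired)]:
    print(f"{name:12s} path length {nx.average_shortest_path_length(G):.2f}  "
          f"clustering {nx.average_clustering(G):.2f}  "
          f"wiring cost {ring_wiring_cost(G, n)}")
```

The rewired graph pays a higher total wiring cost than the lattice, but its average path length drops far more steeply - the "modest premium for high-dimensional topology" in miniature.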
Slow oscillations orchestrating system-consolidation of memory during sleep
Slow-wave sleep (SWS) facilitates the consolidation of declarative memory (for facts, episodes), i.e., a system-level consolidation process assumed to involve the redistribution of the memory representations from temporary hippocampal to neocortical long-term storage sites. Evidence will be provided indicating that this consolidation relies on a dialogue between neocortex and hippocampus which is essentially orchestrated by the <1 Hz EEG slow oscillation (SO). The SOs characterising SWS originate from neocortical networks. Their amplitude depends partly on the use of these networks for encoding of information, i.e., the more information is encoded during waking, the higher the SO amplitude during subsequent SWS. The SOs temporally group neuronal activity into up-states (of strongly enhanced activity) and down-states (of neuronal silence). This grouping is induced not only in the neocortex but also, via efferent pathways, in other structures relevant to consolidation, i.e., in the thalamus, generating 10-15 Hz spindles, and in the hippocampus, generating sharp-wave ripples which are known to accompany a replay of newly encoded memories taking place in hippocampal circuitries during SWS. The feedforward synchronizing effect of the SO enables memory-related inputs to be synchronously fed back from these (hippocampus, thalamus) and other structures to the neocortex. The co-occurrence in the neocortex of these feedback-inputs possibly plays a critical role for the long-term storage of memories in neocortical networks. Indeed, induction of slow oscillations during NonREM sleep (but not during REM sleep or waking) by slowly alternating transcranial current stimulation not only enhances and synchronizes spindle activity but also improves the consolidation of declarative memory.
Evidence and mechanisms of multistability in cortical rhythms
Spontaneous large-scale cortical rhythms exhibit erratic bursts between multistable temporal modes. We combine the analysis of a simple family of dynamical systems with simulations of a complex, detailed biophysical model of corticothalamic activity to elucidate the mechanisms underlying these phenomena. In particular, we focus on two features of the data that candidate models should exhibit: firstly, the expression of power fluctuations should be temporally partitioned into distinct exponential distributions, and secondly, the cortex should dynamically "dwell" in each of these modes according to long-tailed, stretched exponential forms. Analysis of the simple dynamical system shows that the expression of bistable temporal modes occurs when weakly correlated stochastic fluctuations interact multiplicatively with dynamical states in the presence of a particular type of nonlinear instability, namely a subcritical Hopf bifurcation. The algebraic construction of this model affords an opportunity to relate the form of the long-tailed dwell times to the interaction between diffusive and nonlinear flow terms in this model. Of note, this interaction takes a classic diffusion process as its input and yields relaxation processes whose forms are dramatically non-classical, with memory stretched by several orders of magnitude. Informed by these insights, we then turn to the full cortical field model and locate a suitable nonlinear instability with a physiologically plausible limit cycle attractor and realistic biophysical parameters. We hence observe that stochastic fluctuations expressed through this model induce erratic, long-tailed bistable bursts whose statistics accord well with those derived from empirical data.
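The core mechanism can be sketched with a minimal stochastic amplitude equation (an illustrative reduction with arbitrary parameters, not the corticothalamic model used in this work): near a subcritical Hopf bifurcation, a stable low-amplitude state coexists with a stable large-amplitude limit cycle, and noise drives erratic, burst-like switching between the two temporal modes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic amplitude equation near a subcritical Hopf bifurcation:
#   dr = (mu*r + r**3 - r**5) dt + sigma dW
# For mu slightly below zero, r = 0 (quiescence) coexists with a stable
# large-amplitude limit cycle; noise induces erratic switching between them.
mu, sigma, dt, steps = -0.15, 0.12, 0.01, 200000
r = np.empty(steps)
r[0] = 0.05
for t in range(1, steps):
    drift = mu * r[t-1] + r[t-1]**3 - r[t-1]**5
    # abs() reflects the amplitude at zero (r is non-negative by definition)
    r[t] = abs(r[t-1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal())

in_low = r < 0.5   # crude partition into low/high-amplitude modes
print("fraction of time in low-amplitude mode:", in_low.mean())
```

Histogramming the dwell times of each mode in such a simulation is one way to inspect the long-tailed distributions the abstract refers to.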
Functional brain graphs
This talk will focus on graph theoretical methods for modeling functional brain networks in human neuroimaging data. Using illustrative examples from functional MRI and MEG studies, I will discuss some of the key issues of brain graph construction, including the fundamental questions: what is a node? And what is an edge? I will explore how functional network parameters can be related both to "higher" cognitive performance and to more elementary imaging statistics, such as fMRI time series variability and correlations. I will introduce some of the pitfalls, and the potential interest, of using graph theoretical measures to quantify functional network disorganization in patients with schizophrenia.
Introduction to dynamical systems theory II
The second part of the Introduction to Dynamical Systems will primarily focus on the impact of noise in nonlinear dynamics. The central question is: what is the probability of finding the system in a certain state at a given time? After a brief review of basic principles of probability theory, the Fokker-Planck equation (FPE) is formally derived as the equation describing the diffusively spreading probability density of a stochastic dynamics. The FPE will be illustrated for single nonlinear oscillators by disentangling amplitude and phase dynamics through mere averaging. The concept of diffusive processes will be extended toward neural population dynamics using the seminal example of the Kuramoto model with dynamic noise. This yields a description of so-called nonlinear FPEs, that is, diffusion equations that are nonlinear in the probability density.
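A concrete illustration of a stationary FPE solution (a sketch with illustrative parameters): for the Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW, the stationary solution of the Fokker-Planck equation is a Gaussian with variance sigma**2/(2*theta), which can be checked against a direct simulation of the stochastic dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck process: dx = -theta*x dt + sigma dW.
# Its Fokker-Planck equation has the stationary Gaussian solution
#   p(x) ~ exp(-theta * x**2 / sigma**2),  with variance sigma**2 / (2*theta).
theta, sigma, dt, steps = 1.0, 0.5, 0.01, 500000
x = np.empty(steps)
x[0] = 0.0
noise = rng.standard_normal(steps)
for t in range(1, steps):
    x[t] = x[t-1] - theta * x[t-1] * dt + sigma * np.sqrt(dt) * noise[t]

var_pred = sigma**2 / (2 * theta)   # 0.125 from the stationary FPE
print("empirical variance:", x.var(), " FPE prediction:", var_pred)
```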
Modelling the Role of Local Oscillations in Resting Brain Correlations
During rest the brain exhibits organized activity characterized by spatio-temporal patterns of slow correlated fluctuations (<0.1 Hz) of the Blood Oxygenation Level Dependent (BOLD) fMRI signal. These correlations, referred to as functional connectivity, yield large-scale maps constituting so-called resting-state networks (RSN), which correspond to the same networks typically seen during attentional tasks. The origin of such organisation into different networks remains unclear. It has been speculated that it is due to the structural topology. Nevertheless, dynamics should play a crucial role in shaping the partition into such networks, because the transmission of information between cortical areas is not instantaneous, owing to axonal conduction and synaptic transmission times. To investigate how the large-scale structure of the brain, together with the dynamics at the local level, contributes to the range of dynamic phenomena observed in the brain at different temporal and spatial scales, we built a large-scale model that takes into account the long-range anatomical connectivity of the human brain. The nodes of the resulting network represent neuronal populations, each constituted by a network of excitatory and inhibitory neurons whose dynamical state is above the onset of self-sustained oscillations. This scenario was used in a previously proposed model with Wilson-Cowan units oscillating in the gamma-frequency range as node models (Deco et al., 2009). However, due to the complexity of each unit, it has been difficult to reach a detailed understanding of the nonlinear model. Here, in order to comprehensively explore the dynamics underlying this scenario, we employ a simpler node model. When an oscillator - like the Wilson-Cowan unit - is engaged in a limit cycle, its dynamics can be reduced to that of a phase oscillator, and its behaviour in a network is described by the Kuramoto interaction model.
Simulations performed using this computational framework revealed, in a region of the parameter space, the emergence of patterns of slow fluctuations of the BOLD signal that correlate positively with experimental measures obtained in humans.
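The Kuramoto reduction mentioned above can be sketched in a few lines (a toy simulation with global mean-field coupling, Gaussian natural frequencies and dynamic noise; the parameters are illustrative and omit the anatomical connectivity and conduction delays of the actual model): above a critical coupling strength the order parameter r rises sharply, signalling the emergence of collective oscillations.

```python
import numpy as np

def kuramoto_order(K, N=200, dt=0.05, steps=4000, seed=2):
    """Globally coupled Kuramoto phase oscillators with dynamic noise;
    returns the order parameter r, time-averaged over the second half."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases
    r_vals = []
    for t in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter r*exp(i*psi)
        # mean-field form of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        theta += 0.05 * np.sqrt(dt) * rng.standard_normal(N)  # dynamic noise
        if t >= steps // 2:
            r_vals.append(np.abs(z))
    return float(np.mean(r_vals))

print("weak coupling   K=0.5:", kuramoto_order(0.5))
print("strong coupling K=4.0:", kuramoto_order(4.0))
```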
Neuroplasticity and Aging
Aging exerts major reorganization and remodeling at all levels of brain structure and function, which is paralleled by a progressive decline of mental and physical abilities. On the other hand, it is now well-documented that age-related changes are not a simple reflection of degenerative processes, but a complex mix of plastic, adaptive and compensatory mechanisms, suggesting that brain plasticity remains operational into old age. Considering the current demographic changes in many civilizations, there is an urgent need for measures permitting an independent lifestyle into old age. Therefore, strategies for interventions such as training, exercising, practicing and stimulation, which make use of neuroplasticity principles, have been developed to maintain health and functional independence throughout the lifespan. In particular, the effectiveness of physical exercise programs on cognitive and sensorimotor performance in elderly individuals suggests a critical role of mild stress responses in stimulating neurotrophin expression and enhancing neuroprotective functions, which has important implications for lifestyle-induced modification of neuroplasticity mechanisms. In this talk I will summarize studies of aged animals and human elderly individuals illustrating the behavioural and neural effects of aging in the sensorimotor domain. While behaviour is degraded in old age, cortical alterations observed during aging often resemble those seen in young adults in association with learning, such as receptive field enlargement, map expansion and excitability enhancement. To explain this atypical relation, it has been suggested that an age-related reduction of intracortical inhibition causes cortical processing to disintegrate. Accordingly, age-related cortical alterations are assumed to reflect specific forms of reorganization associated with aging processes, which differ qualitatively from the learning-related reorganization occurring in young and adult subjects.
I will also show data from recent experiments demonstrating a remarkable efficacy of stimulation and training procedures to ameliorate cortical and behavioral age-related degradation, which corroborates that aging effects are not irreversible but treatable. These studies show, however, that elderly individuals cannot be rejuvenated. Instead, restoration of function becomes possible through the emergence of new processing strategies. This has been taken as evidence that remodeling in the aging brain occurs twice: during aging, and during treatment of age-related changes.
Probing System-Level Consolidation in Humans
Hippocampal lesions cause temporally graded retrograde amnesia, suggesting a time-limited role of the hippocampus in memory retrieval. This phenomenon forms the basis of the standard model of system-level consolidation, which proposes that the hippocampus is part of a retrieval network for recent memories, but that memories gradually become dependent on neocortical circuits alone. More specifically, while the hippocampus links posterior representational areas when recent memories are retrieved, the medial prefrontal cortex might take over this pointer function for remote memories. Given these ideas, system-level consolidation is characterized by network changes rather than local activity changes. Therefore, analyses of connectivity dominate recent fMRI studies probing memory consolidation in humans. We used standard approaches like psychophysiological interaction analysis to probe the neural consequences of consolidation at retrieval. However, it is more difficult to probe neural correlates of consolidation directly, because its time course is unknown. Therefore, we used model-free methods like interregional partial correlations to probe functional connectivity more directly associated with consolidation during a rest period following encoding. To vary consolidation experimentally, we contrasted recent and remote memories in some experiments and slow and fast consolidation in others. We manipulated the speed of system-level consolidation by manipulating the degree to which new information can be assimilated into existing mental schemata. These new behavioral paradigms, combined with adequate connectivity analyses, appear to provide the instrumentation to study the neural underpinnings of how we retain and integrate new information for long-term use.
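The interregional partial-correlation approach mentioned above can be illustrated on synthetic data (a sketch; the "regions" and parameters are invented for illustration): partial correlations, computed from the precision matrix, remove apparent coupling between two regions that is in fact driven by a common third region.

```python
import numpy as np

rng = np.random.default_rng(3)

# Partial correlation from the precision (inverse covariance) matrix P:
#   pcorr_ij = -P_ij / sqrt(P_ii * P_jj)
# Toy example: regions A and B are both driven by region C, so their
# marginal correlation is high but their partial correlation
# (controlling for C) is near zero.
T = 20000
C = rng.standard_normal(T)
A = C + 0.5 * rng.standard_normal(T)
B = C + 0.5 * rng.standard_normal(T)
X = np.column_stack([A, B, C])

P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)

marg = np.corrcoef(X, rowvar=False)
print("marginal corr(A,B):   ", marg[0, 1])
print("partial corr(A,B | C):", pcorr[0, 1])
```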
Attention modulates brain-wide functional connectivity assessed through 252-channel monkey electrocorticography
We have proposed that effective interactions among brain areas are subserved by rhythmic synchronization. In order to test this, we recorded the electrocorticogram (ECoG) from 252 sites distributed across the left hemisphere of macaque monkeys. In two monkeys, the ECoG covered, among other areas, V1, V2, V4, parietal, somatosensory, motor and premotor cortex, and the frontal eye field (FEF). The monkeys performed a selective visual attention task in which two stimuli were presented in the two visual hemifields, but in any given trial only one of them was behaviorally relevant. When the right stimulus was relevant, synchronization among left hemisphere areas was enhanced, in some cases strongly. Synchronization clearly delineated several networks, including a visual gamma network and an FEF-parietal-visual beta network. These results strongly support the notion that functional interactions among brain areas are subserved by beta- and gamma-band synchronization.
Recently, a related morphometry-based connection concept has been introduced, using local mean cortical thickness and volume to study the underlying complex architecture of brain networks. In this article, surface area is employed as a morphometric descriptor to study concurrent changes between brain structures and to build binarized connectivity graphs. The statistical similarity in surface area between pairs of regions was measured by computing the partial correlation coefficient across 186 normal subjects of the Cuban Human Brain Mapping Project. We demonstrated that the connectivity matrices obtained follow a small-world behavior for two different parcellations of the brain gray matter. The properties of these connectivity matrices were compared to those of the matrices obtained using mean cortical thickness for the same cortical parcellations. The topologies of the cortical thickness and surface area networks were statistically different, demonstrating that the two descriptors capture distinct properties of the interaction, or different aspects of the same interaction (mechanical, anatomical, chemical, etc.), between brain structures. This finding could be explained by the fact that each descriptor is driven by distinct cellular mechanisms as a result of a distinct genetic origin. To our knowledge, this is the first time that surface area has been used to study the morphological connectivity of brain networks.
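The construction of binarized connectivity graphs from across-subject similarity can be sketched as follows (synthetic data with an invented modular structure; only the subject count matches the study, and the region count and threshold are illustrative): regions whose morphometric values covary across subjects become connected, and the resulting graph shows far higher clustering than a density-matched random graph.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

# 1. Simulate a morphometric descriptor for 90 regions across 186 subjects,
#    with correlated modules of 10 regions each (purely illustrative).
n_subj, n_reg, mod = 186, 90, 10
common = np.repeat(rng.standard_normal((n_subj, n_reg // mod)), mod, axis=1)
data = common + 0.8 * rng.standard_normal((n_subj, n_reg))

# 2. Correlate regions across subjects, threshold, and binarize.
R = np.corrcoef(data, rowvar=False)
np.fill_diagonal(R, 0)
G = nx.from_numpy_array((R > 0.3).astype(int))

# 3. Compare clustering with a random graph of the same density.
rand = nx.gnp_random_graph(n_reg, nx.density(G), seed=4)
print("clustering: real", nx.average_clustering(G),
      " random", nx.average_clustering(rand))
```

The study itself additionally compares path lengths against random benchmarks to establish small-world behavior; this sketch shows only the clustering side of that comparison.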
Modelling plasticity in perceptual learning with dynamic causal models
The suppression of neuronal activity to a repeated event is a ubiquitous phenomenon in neuroscience. In this talk, we show how repetition of auditory events can induce connectivity changes over time - or plasticity - both between distant cortical areas, and within an area belonging to a cortical network. We use Bayesian inversion of dynamic causal models (DCM) for event-related potentials (ERPs), to examine the temporal evolution of experience-dependent plasticity with repeating stimuli. Intrinsic, or within-source connections, showed fast biphasic changes, whereas extrinsic, or between-source connections, showed monotonic decreases with repetition. This suggests that learning an auditory perceptual model from the environment is associated with repetition-dependent plasticity in the human brain.
Fronto-Parietal Coherence During Visual Working Memory
It is well established that perceptual and cognitive processes involve the coordinated activity of large populations of neurons distributed over multiple cortical regions. However, we remain surprisingly ignorant of the spatio-temporal organization of these processes, their underlying neuronal mechanisms, and their relation to behavior. This gap in understanding stems largely from the complex and non-stationary nature of distributed cortical activity and from technical limitations in our ability to make appropriate electrophysiological measurements. In my presentation, I will present new experimental findings, from large scale multi-electrode recordings in macaque monkeys, that provide insight into the dynamics of cortico-cortical interactions during visual perception and visual working memory.
Michael D. Greicius
Altered Connectivity in Neurodegenerative Disease
Converging evidence from distinct modalities supports the hypothesis that specific neurodegenerative diseases target specific distributed brain networks. This talk will consider the data that support this hypothesis. Studies investigating network abnormalities in preclinical populations will also be discussed. Possible physiologic mechanisms that might drive degeneration along a brain network will be posited. The following question is posed to promote dialogue: should neurodegeneration invariably lead to reduced functional connectivity in a network, or might a dying network show reduced amplitude but normal or even enhanced connectivity?
Multivariate decoding in neuroimaging
Multivariate decoding has recently emerged as a novel and powerful analysis tool in functional neuroimaging. The application of multivariate pattern recognition techniques to the analysis of fMRI and EEG signals has several important advantages over more conventional analyses based on ‘mass-univariate’ approaches. Pattern recognition can help increase the sensitivity for detecting experimental effects. It can assess the amount of information ‘encoded’ in a particular brain region under various cognitive tasks, even for fine-grained representations that are often assumed to be inaccessible to current neuroimaging techniques. It provides a more powerful framework for analysing neural representations that takes into account their distributed nature. It can also be extended to reveal the encoding of similarity structures and representational spaces. Furthermore, its increased sensitivity makes simple forms of ‘brain reading’ possible, where mental states are decoded from neuroimaging signals. This opens up a window for potential applications such as brain-computer interfacing, biofeedback, clinical diagnostics, or even the detection of deception and neuromarketing. After an overview, this session showcases the usefulness of decoding techniques for the study of human behavior, clinical diagnostics and neurotechnological applications.
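The sensitivity advantage of multivariate decoding over mass-univariate analysis can be illustrated with a small simulation (a sketch using scikit-learn; the data and parameters are invented): a weak effect distributed over many voxels supports reliable classification even though each voxel individually carries only weak information.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Two 'mental states' differ only by a weak pattern spread across 50 voxels;
# the joint pattern is decodable even though per-voxel effects are small.
n_trials, n_vox = 200, 50
pattern = 0.3 * rng.standard_normal(n_vox)     # weak distributed effect
y = rng.integers(0, 2, n_trials)               # condition labels
X = rng.standard_normal((n_trials, n_vox)) + np.outer(y, pattern)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print("cross-validated decoding accuracy:", acc)
```

Accuracy well above the 50% chance level here comes from pooling weak evidence across voxels - the distributed-representation argument made in the abstract.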
Introduction to Connectivity
A brief introduction to the areas covered by the workshop. The big picture: analyzing and understanding anatomical, functional and effective connectivity of neuronal elements requires complex mathematical methods that include dynamical systems theory, graph theory and neural modeling.
Introduction to dynamical systems theory I
In this first part of the Introduction to Dynamical Systems, we discuss the types of nonlinear dynamics expressed by systems with one and two state variables, as well as the mathematical and computational tools to study them. In particular, we introduce the notions of attractors (fixed points and limit cycles), their stability and linear stability analysis, bifurcations, as well as phase portraits and potential functions. We will discuss selected examples that play an important role in the neurosciences, including the FitzHugh-Nagumo and Morris-Lecar neuron models. Finally, we briefly touch upon the concept of attractive manifolds, relevant for dimension reduction and emergence in self-organized systems.
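As a worked example of these concepts (a sketch using the standard textbook parameter set), the FitzHugh-Nagumo model has a stable fixed point at low input, and raising the input past a Hopf bifurcation destabilizes it so that a limit cycle appears:

```python
import numpy as np

# FitzHugh-Nagumo model:
#   dv/dt = v - v**3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
# At I = 0 the fixed point is stable (rest); at I = 0.5 it has lost
# stability through a Hopf bifurcation and a limit cycle appears.
def simulate(I, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=200000):
    v, w = -1.2, -0.6
    vs = np.empty(steps)
    for t in range(steps):
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * eps * (v + a - b * w)
        vs[t] = v
    return vs

rest = simulate(I=0.0)[-50000:]    # settles onto the stable fixed point
spike = simulate(I=0.5)[-50000:]   # sustained oscillation (limit cycle)
print("amplitude at I=0.0:", rest.max() - rest.min())
print("amplitude at I=0.5:", spike.max() - spike.min())
```

Plotting v against w for the two input values gives the corresponding phase portraits: a spiral into the fixed point versus a closed orbit.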
Multivariate decoding of disease from MRI data
Pattern recognition methods are finding their way into clinical practice. A basic understanding of the disease under study and of current diagnostic approaches should help computer scientists to improve feature extraction as well as the output of computer-based methods. With a focus on dementia, I will discuss the requirements for these methods to become clinically useful and attempt to illustrate the viewpoint of a clinician. Early and accurate diagnosis, prediction of the future disease course, and convenient integration into the workflow are desired characteristics. In addition, I will highlight current strategies for feature extraction and classification.
Optimization of cortical hierarchies with continuous scales and ranges
Few scientific studies have been able to bridge the tremendous gap in scale and complexity between local neural microcircuitry on the one hand and cortico-cortical connectivity and the activity of entire brain regions on the other hand. In 1991 a seminal study by Felleman and Van Essen (1) provided an anatomical link, by relating the way axons attach to different brain layers to their rank in a processing hierarchy, using tracer data for visual areas from macaque monkey. Their findings about the visual hierarchy were summarized in what must be one of the most reproduced figures in neuroscience. However, in 1996 Hilgetag et al. (2) demonstrated that the very same data support many other hierarchies with an even greater degree of confidence. The work by Reid et al. (3) brings this research to a new level. On one hand it clarifies further anatomical relations between layer-specific connectivity and processing level. On the other hand it abolishes the notion of discrete rank, opting for a continuous scale instead. This allowed the use of powerful optimization algorithms for finding the hierarchy most compatible with extant data. These novel methods were so impressive that the paper received the NeuroImage Editors' Choice Award “Methods and Modeling Section” 2008/9 which was presented at the opening ceremony of the Human Brain Mapping Meeting 2009. Finally, the visualization of their results is inspired: By coloring anatomical regions of the macaque brain according to rank, one can immediately see that visual processing spreads out from V1 to other areas roughly according to spatial proximity. These compelling illustrations were also featured on the title page of that issue of NeuroImage.
1. Felleman DJ, Van Essen DC (1991) Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1(1): 1–47.
2. Hilgetag CC, O’Neill MA, Young MP (1996) Indeterminate organization of the visual system. Science 271(5250): 776–777.
3. Reid AT, Krumnack A, Wanke E, Kötter R (2009) Optimization of cortical hierarchies with continuous scales and ranges. NeuroImage 47(2): 611–617.
How does history influence our perception?
It is well known that we can selectively attend to the things that are most important to us. More recent evidence shows that our attention is drawn to features that we have recently attended (such as on preceding trials), even in violation of our intentions. I will briefly summarize evidence from psychophysics and neurological patient studies which shows this in terms of consistency in feature, space and time. I will then discuss recent neuroimaging evidence with regard to such history effects. These neuroimaging results show how this benefit of consistency may involve complex interactions between early visual areas and attentional networks, which may influence the operation of the earlier sites as a function of what has occurred on previous trials. I argue that these studies open up some intriguing avenues for studying connectivity in the brain in future.
Electrical Microstimulation and fMRI
Electrical stimulation (ES) of the brain has been performed for over 100 years, and although some might say it is a crude technique for understanding the detailed mechanisms underlying different neural computations, microstimulation has made significant contributions to our knowledge in both basic and clinical research. Recently there has been a resurgence in its use in the context of electrotherapy and neural prostheses. For example, ES has made it possible to at least partially restore hearing to deaf patients by delivering pulses via implanted electrodes to different regions of the cochlea. Stimulation of the basal ganglia is remarkably effective in restoring motor function to Parkinson’s patients, and microstimulation of the geniculostriate visual pathway is regarded by some as a very promising (future) method for making the blind see again.
Yet, the methodology still suffers from at least two fundamental problems; (a) we do not always know exactly what is being stimulated when we pass currents through the tissue; and (b) stimulation causes activation in a large number of areas even outside the stimulation site, making it difficult to isolate and evaluate the behavioral effects of the stimulated area itself. Microstimulation during fMRI (esfMRI) could provide a unique opportunity to visualize the networks underlying electrostimulation-induced behaviors, to map neuromodulatory systems, or to develop electrotherapy and neural prosthetic devices. Moreover esfMRI is an excellent tool for the study of the effects of regional synaptic plasticity, e.g. LTP in hippocampus, on cortical connectivity. Last but not least, esfMRI can offer important insights into the functional neurovascular coupling. In my talk, I shall discuss findings from recent and on-going work on signal propagation during electrical stimulation, as well as data related to effective connectivity.
Brain signal variability and development
Brain development carries with it a large number of structural changes at the local level which impact the functional interactions of distributed neuronal networks. Such changes enhance information processing capacity, moving the brain from a more deterministic system to one that is more stochastic. The evidence thus far suggests that this stochastic property results from an increase in the number of possible functional network configurations for a given situation. This is captured in the variability of evoked responses. In empirical data from infants and children, signal variability increases with maturation and correlates positively with stable behaviour. Importantly, the variability is best explained through increased distributed entropy between cortical sources with a concomitant decrease of local entropy. These data, along with extant modeling work (e.g., Ghosh et al., 2008, PLoS Comput Biol), suggest that maturational changes in signal variability represent the enhancement of the brain's dynamic repertoire.
Machine learning for neurotechnology
Brain Computer Interfacing (BCI) – a modern instantiation of Neurotechnology – aims at making use of brain signals for, e.g., the control of objects, spelling, gaming and so on. This talk will first provide a brief overview of Brain Computer Interfaces from a machine learning and signal processing perspective. In particular, it shows the wealth, the complexity and the difficulties of the data available – a truly enormous challenge: a multivariate, strongly noise-contaminated data stream must be processed and neuroelectric activities accurately decoded in real time. Emphasis is put on a novel computational method for alleviating non-stationarity in data, namely stationary subspace analysis (SSA). Finally, I report in more detail on the Berlin Brain Computer Interface (BBCI), which is based on EEG signals, and take the audience all the way from the measured signal, through preprocessing, filtering and classification, to the respective application. BCI as a new channel for man-machine communication is discussed in a clinical setting and for gaming. This is joint work with Benjamin Blankertz, Michael Tangermann, Claudia Sanelli, Carmen Vidaurre, Thorsten Dickhaus, Steven Lemm, Paul von Bünau, Frank Meinecke, Wojciech Wojcikiewicz (TU Berlin), Guido Nolte, Andreas Ziehe (Fraunhofer FIRST, Berlin), Gabriel Curio, Vadim Nikulin (Charite, Berlin) and further members of the Berlin Brain Computer Interface team; see www.bbci.de.
Neural models and their connections to experiments: A friendly reminder that fancy mathematics can never trump physical and biological principles
Physicists and mathematicians have developed many mathematical models of neural interaction over the past 50 years. Some models attempt connections to genuine brain data; others seem to pursue more abstract goals. The former group is further distinguished by the spatial scales of the dependent variables and by physiological time scales underlying predicted dynamic behavior. Here I emphasize experimental EEG connections made by large scale neocortical models with basic time scales due to PSP (local) and/or axonal (global) delays. The conceptual framework supporting this discussion involves “networks” (cell assemblies) embedded in global synaptic action fields; this adoption of field variables greatly facilitates experimental connections to EEG.
Our discussions address three basic questions: 1) Why do we even bother to develop neural models that can represent only hollow representations of real brains? 2) What pitfalls await brain theoreticians attempting connections to genuine data? 3) How does the proposed conceptual framework (networks embedded in synaptic fields) impact the design of cognitive experiments aimed at finding the neural correlates of mental activity? For example, functional cortical connections associated with mental activity form and dissolve on time scales typically in the range of hundreds of milliseconds. Yet, white matter axons form fixed corticocortical and thalamocortical connections, which must constrain but cannot dominate the observed functional connections.
-Nunez PL, Neocortical Dynamics and Human EEG Rhythms, New York: Oxford University Press, 1995.
-Nunez PL, Toward a quantitative description of large scale neocortical dynamic behavior and EEG, Behavioral and Brain Sciences 23: 371-437 (invited target article,18 commentaries, and author responses), 2000.
-Nunez PL, and Srinivasan R, Electric Fields of the Brain: The Neurophysics of EEG, Second Edition, New York: Oxford University Press, 2006.
-Nunez PL, Brain, Mind, and the Structure of Reality, New York: Oxford University Press, 2010.
Integrative Brain Modeling Using Neural Field Theory
The aim of brain modeling is to interrelate stimuli, neural activity, and measurements to help unravel the workings of the brain. Modeling brain activity and resulting measurements requires dynamics at many scales to be incorporated simultaneously, and the results to be integrated into a unifying framework. Thus, integrative models take what is known at various individual temporal and spatial scales and put these pieces together to address the whole brain. Such models should be based closely on physiology, but cannot include all details at all levels if they are to be tractable. Hence, integrative modeling concentrates on including the main features at multiple levels. A fruitful approach to integrative modeling is via neural field theory in which microscopic neural properties are incorporated in a way that enables multiscale computations to be performed to make realistic predictions for comparison with experiment, plus interpretations of real data in terms of physiology and anatomy. In particular, neural field models provide unified theories of multiple phenomena, and make quantitative predictions of many types of observations.
This talk will introduce some of the main ideas of neural field theory, spanning from synapses to the whole brain, with parameters measuring quantities such as synaptic strengths, neural densities, signal delays, cellular time constants, and neural ranges, all of which are constrained by independent physiological measurements. It will outline how to extract quantitative predictions of phenomena, and a number of applications will be reviewed, including comparisons with EEG spectra, evoked responses to stimuli, epileptic seizures, activity correlations, and arousal dynamics. Fitting of the model to experimental data to infer physiological parameters in normal and abnormal conditions will also be described.
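As a concrete, heavily simplified illustration of the field-theoretic setting, the sketch below integrates a rate-based neural field of the Amari/Wilson-Cowan type on a one-dimensional ring; the Mexican-hat kernel, sigmoid, and all parameter values are illustrative assumptions, not the specific model of the talk. A localized external drive produces a localized bump of elevated activity.

```python
import numpy as np

# A heavily simplified, rate-based neural field on a 1-D ring
# (Amari / Wilson-Cowan form); all parameters are illustrative.
n, L = 128, 10.0                      # grid points, domain length
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
tau, dt, steps = 0.1, 0.001, 2000     # time constant, step, iterations

# Mexican-hat kernel: short-range excitation, longer-range inhibition
d = np.minimum(x, L - x)              # ring distance from grid point 0
w = 2.0*np.exp(-(d/0.5)**2) - 1.0*np.exp(-(d/1.5)**2)

def S(u, beta=5.0, theta=0.5):        # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-beta*(u - theta)))

I = np.exp(-((x - 5.0)/0.7)**2)       # localized external drive
phi = np.zeros(n)                     # field activity
wf = np.fft.rfft(w)                   # kernel spectrum for fast convolution
for _ in range(steps):
    conv = np.fft.irfft(wf * np.fft.rfft(phi), n) * dx
    phi += dt/tau * (-phi + S(conv + I))

# Activity peaks at the driven location and stays low far from it
print(round(float(x[np.argmax(phi)]), 2), round(float(phi.max()), 2))
```

Real neural field models of this kind add axonal propagation delays, separate excitatory and inhibitory populations, and corticothalamic loops; the fixed-point iteration above only conveys the flavor of a multiscale field computation.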
Perceptual bi-stability: implications for brain connectivity and dynamics
When an ambiguous stimulus has two (or more) distinct interpretations, perception alternates back and forth between them; this is known as perceptual bi-stability (or multi-stability). The alternations seem haphazard, but closer inspection of their dynamics reveals systematic properties that are common to many bi-stable phenomena. We have studied those commonalities with a combination of experimental and modeling approaches, for a variety of bi-stable and multi-stable stimuli. Our results challenge several long-held beliefs about the mechanisms underlying bi-stability, and also have potential implications for how the brain deals with the more general ambiguity that is inherent in all sensory stimuli. In particular, we propose that: (i) Neural competition plays a major, ongoing role in creating perceptual experience. Until recently, competition-based models have been used only for binocular rivalry, whereas for other bi-stable phenomena the alternations were commonly thought to arise from passive decay (‘fatigue’) of the ongoing percept. Our results suggest that /all/ bi-stable phenomena involve active competition between the neural populations representing the different possible interpretations of the stimulus. (ii) Rather than being driven by passive decay, the alternations in perception are due to ongoing noise that is occasionally strong enough to disturb one quasi-stable pattern of neural activity and make the system settle into another quasi-stable pattern of activity. Furthermore, although at a reductionist level the noise may be aimless, at a computational level it serves an important function: it allows the brain to “sample” the space of possible stimulus interpretations, and to allocate perceptual time in a way that matches their ecological plausibility.
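A minimal caricature of the noise-driven account (not the competition model itself) is a particle in a double-well potential: the two wells stand in for the two quasi-stable percepts, and noise occasionally carries the state over the barrier, producing irregular alternations. Everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Caricature of noise-driven switching: a particle in a double-well
# potential V(x) = x**4/4 - x**2/2, whose two wells stand in for the
# two quasi-stable percepts. All parameter values are illustrative.
def simulate(sigma=0.6, dt=1e-3, steps=200_000):
    x = np.empty(steps)
    x[0] = 1.0                                   # start in one percept
    for t in range(1, steps):
        drift = x[t-1] - x[t-1]**3               # force -dV/dx
        x[t] = x[t-1] + drift*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    return x

x = simulate()
percept = np.sign(x)                             # which well we are in
switches = np.count_nonzero(np.diff(percept) != 0)
print(switches)
```

Without the noise term the state would stay in its initial well forever; the irregular dominance durations arise entirely from the stochastic forcing, mirroring the proposal that noise, not passive decay, drives the alternations.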
Analysis of Complex Brain Networks: Measures and their Interpretation
In this talk I will review a series of recently developed methods for the analysis of complex brain networks. Most of the measures operate on graphs, descriptions of brain data that take the form of sets of nodes and edges. Three classes of measures can be distinguished on the basis of the information they provide about the brain: measures that capture anatomical or functional segregation, integration, and influence. Not all graph measures are equally well suited to be applied to structural or functional connectivity – I will outline some of the differences and discuss the range and limits of interpretation of various graph metrics.
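To make two of these measure classes concrete, the toy sketch below computes a segregation measure (mean clustering coefficient) and an integration measure (characteristic path length) on a small hypothetical network: two fully connected 4-node clusters joined by a single bridge edge. The network and node labels are invented for illustration.

```python
from collections import deque
import itertools

# Hypothetical 8-node binary undirected network: two complete 4-node
# clusters joined by one bridge edge (nodes 3 and 4).
edges = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3),      # cluster A
         (4,5),(4,6),(4,7),(5,6),(5,7),(6,7),      # cluster B
         (3,4)]                                    # bridge
n = 8
adj = {i: set() for i in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

def clustering(i):
    """Fraction of node i's neighbour pairs that are themselves connected."""
    nb = adj[i]
    k = len(nb)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in itertools.combinations(nb, 2) if v in adj[u])
    return 2*links / (k*(k - 1))

def distances(i):
    """Breadth-first-search path lengths from node i."""
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

C = sum(clustering(i) for i in range(n)) / n                  # segregation
L = sum(d for i in range(n)                                   # integration
        for j, d in distances(i).items() if j != i) / (n*(n - 1))
print(round(C, 3), round(L, 3))
```

High clustering with short paths is the signature usually summarized as "small-world" organization; on real structural or functional connectivity data these raw values would be compared against matched random networks before interpretation.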
Pedro A. Valdes-Sosa
The experimental validation of neural mass and field theories
Neuroimaging techniques have evolved to the point where they can provide detailed experimental verification of mesoscopic neural mass and neural field models (NMs). This endeavor consists of three components:
- Developing forward models that predict observable EEG/MEG/fMRI from NMs.
- Solving the corresponding inverse problem from observables to NMs.
- Evaluating the evidence for competing NMs given specific individual data.
This presentation outlines results obtained in all three directions, with emphasis on the model comparison component. The specific example for discussion will be the comparison of models of the alpha rhythm.
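The evidence-evaluation step can be illustrated with a toy calculation, using the Bayesian information criterion (BIC) as a crude stand-in for full Bayesian model comparison. The synthetic "alpha-like" 10 Hz signal and the two candidate models below are purely illustrative, not the neural mass or field models of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic alpha-like signal: 10 Hz sinusoid in Gaussian noise
fs, T = 200, 2.0                          # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1/fs)
y = np.sin(2*np.pi*10*t) + 0.3*rng.standard_normal(t.size)

def bic(y, yhat, k):
    """BIC for a Gaussian-residual model with k free parameters."""
    n = y.size
    rss = np.sum((y - yhat)**2)
    return n*np.log(rss/n) + k*np.log(n)

# Candidate 1: 10 Hz oscillation, free amplitude and phase (2 parameters)
X1 = np.column_stack([np.sin(2*np.pi*10*t), np.cos(2*np.pi*10*t)])
yhat1 = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]

# Candidate 2: slow cubic trend, no oscillation (4 parameters)
X2 = np.vander(t, 4)
yhat2 = X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]

# Lower BIC favours the oscillatory model for this signal
print(round(bic(y, yhat1, 2), 1), round(bic(y, yhat2, 4), 1))
```

In the real setting the candidates are dynamical NMs fitted through a forward model of EEG/MEG/fMRI, and the evidence is evaluated per subject, but the logic of penalized comparison between competing generative models is the same.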
Machine learning in psychophysical research
Understanding perception and the underlying cognitive processes on a behavioral level requires a solution to the feature identification problem: Which are the features on which sensory systems base their computations, and what techniques can we use to extract them? Thus one of the central challenges in psychophysics is to infer the critical features, or cues, that human observers make use of when they see or hear: for real-world, complex stimuli, what aspect of the visual or auditory stimulus actually influences behaviour? Over recent years, my laboratory has developed exploratory, data-driven, non-linear system identification techniques based on modern machine learning methods to infer the critical features from human behavioural judgments. I will present these methods and show what their benefits are over the traditional “classification image” and “bubbles technique” approaches.
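For concreteness, here is a sketch of the classical classification-image baseline that such data-driven methods improve upon: a simulated observer decides from a single linear template, and correlating responses with the stimulus noise recovers that critical feature. The observer, template, and trial counts are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated linear-template observer: responds "yes" when the noise
# stimulus projects strongly onto a hidden critical feature.
n_trials, n_pix = 5000, 64
template = np.zeros(n_pix)
template[24:40] = 1.0                     # the hypothetical critical feature
template /= np.linalg.norm(template)      # unit norm

noise = rng.standard_normal((n_trials, n_pix))          # stimulus noise
decision = noise @ template + 0.5*rng.standard_normal(n_trials)
responses = decision > 0                                # observer's judgments

# Classification image: mean noise on "yes" trials minus "no" trials
cimg = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# Cosine similarity between the recovered image and the true template
similarity = float(np.dot(cimg, template) / np.linalg.norm(cimg))
print(round(similarity, 2))
```

This response-triggered averaging only recovers linear templates from Gaussian noise; the machine-learning approaches discussed in the talk are aimed at precisely the non-linear, complex-stimulus cases where this simple estimator breaks down.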