
??May2023 initial draft, ?date? initial posting
video script: Grossberg's consciousness

Video script for Stephen Grossberg 2021 "Conscious Mind, Resonant Brain"

"... ??insert a quote, saying, quip?? ..."

Table of Contents



Basis of this video - style, tools

While this video presents some core ideas from Grossberg's book "Conscious Mind, Resonant Brain", its greatest value may be as an example framework that viewers can use to create their own videos, mixing their own information with the lists of [chapter, section, figure, table, selected Grossberg quotes]s that can be downloaded from this website (links are provided). Other webPages on this site contain additional information that may be handy, but of course online sources will be most important.

Much of this video consists of direct quotes, mostly from (Grossberg 2021), but also including a few other sources. As much as possible, I prefer that the reader see the actual wording of authors, rather than my own interpretations. I do provide commentary where I felt it would add a different perspective.

Rather than produce one very long video, I have produced video segments of hopefully 5 to 10 minutes maximum length, which makes it much easier for the viewer to pick and choose topics. Video viewers may want to follow the video by viewing or creating "theme" listings. Viewers may also collect their own comments in files (one or more files from different people, for example), to include in listings of [chapter, section, figure, table, selected Grossberg quotes, my comments]s. These files of lists are my basis for providing much more detailed information. While this is FAR LESS HELPFUL than the text of the book or its index alone, it can complement the book index. Rather than just watch this video, you can follow it by reading the script and following its links, once I write it...


What is consciousness? Popular concepts

What is consciousness? I will start with a simple definition concentrated on how our [awareness of [environment, situation, self, others], expectations, feelings about a situation] arise from essentially non-conscious cognitive, emotional, and motor processes, including muscle control. "Awareness", "Expectations", and "Emotions" lead to "Actions". "Actions" include muscle actions, language communications, striving towards a goal, reactions to the current situation, directing [perception, cognition], and other processes. "Learning" in a robust, stable, and flexible manner is an essential part of this, given that the environment forces us to learn and adapt to new situations and to modify our [conscious, sub-conscious] understanding where it is wrong or insufficient. Some other components of consciousness are provided in the remainder of this video, but there are many, many more in the literature. Of interest to philosophers such as David Chalmers are qualia and phenomenal experiences.

Consciousness and its various aspects have long fascinated people over the course of history. Most people don't have to think much about it, because we use it all of the time. We may intuitively have a better feeling for consciousness than all of the theories. However, concepts of consciousness may have something to offer that helps those who are interested in knowing more about it.

How does consciousness arise? It is helpful to discuss this in the context of BOTH conscious and sub-conscious processes. There are many theories that tend to focus on general principles and models, and most seem to be somewhat vague. This is much like research in the area of neural networks, where much effort is put into powerful "general learning algorithms" that can explain, or at least perform, a broad range of tasks. However, statistically based Large Language Models like chatGPT typically produce extremely large networks that are a challenge to understand. I feel one cannot ignore the fine-grained neural architecture and its core role in brain function and processes, but essentially none of the popular theories of consciousness address that at even a functional level.

There are not so many theories that substantially link a concept for what consciousness is to how and why it arose during evolution.



What is consciousness and WHY does it arise? Grossberg

Grossberg has proposed what is, by far, the best and most coherent definition of consciousness that I have seen, as well as of why it has evolved :
Howell 22Jul2023 : Obviously, a key Grossberg concept is brain "resonances and their synchrony", and how ALL conscious processes are resonant processes, but not all resonant processes are conscious processes. At present, it's not clear that there is a definitive experimental measure to distinguish which resonances are conscious, and which are not. Even if those signals are easy to distinguish "in isolation", that might not be so in the context of everything else going on in the brain?

p040 Howell 24Jul2023, Grossberg 2021 p040c2h0.80 : It appears to me that the same [microcircuit, modal architecture] in the brain may be conscious in one situation (for example when producing results for simple decision-making), and non-conscious in another (when the outputs must be considered in the context of more complex, multiple [modes, environmental scenarios]?).

p040 Howell 24Jul2023, Grossberg 2021 p040c2h0.80 : so I must sometimes be cutting off my thinking at a sub-conscious level, which would explain my "disastrous consequences"

Stephen Grossberg's theories of consciousness are, to the best of my current knowledge, the only consciousness theory :
How much of consciousness is inherited? A key interest of mine is how both conscious and subconscious capabilities arise from heredity, via DNA, RNA, etc, in both a Lamarckian sense (epigenetic changes during an individual's life) and a Mendelian sense (DNA set at conception, for sexual reproduction). In other words, learning and adaptation in individuals is probably STRONGLY dependent on what we start with. Instinct is one example that is vastly under-rated in my opinion, and often achieves things that our theories cannot. Grossberg is one of very few researchers who refers to this in a non-trivial way. His work does not get to the coding level, and it's not clear that we're anywhere near advanced enough for that?

Dorsal "Where" stream, ventral "What" stream - complementary computing and resonances

Howell: Note that the "resonance of consciousness" of Grossberg's theory of consciousness does have some data support. But keep in mind that ???



What is-NOT consciousness?

It is almost as important to describe what IS-NOT conscious as to describe what is. Grossberg is pretty specific, but it is not easy to remember. However, there are a huge number of other theories of consciousness that vary as to what is [, not] conscious, and almost none of those theories provide any neural [architecture, function, process] or signal basis like resonance for segregating conscious from non-conscious, albeit many are based on [EEG, fMRI] signal analysis.

I like to keep in mind Grossberg's comment : "... p040sec Why did consciousness evolve? In particular, the book will show how and why evolution provides a way to hide detail so that we can focus on what is important without distractions? ...".
At times, that comment may be the deciding basis for guessing what is conscious or not. As a "rule of thumb", it's probably better than most distinctions that are made between conscious and non-conscious.

Here is my best current guess, influenced by [Grossberg, Robert Hecht-Nielson, others] :

To be, or not to be, conscious (the [non-conscious, conscious] columns are not yet filled in) :
process, mode [see, hear, cognition, etc] | non-conscious | conscious
ART Matching Rule, focus attention on expected critical features | |
boundary, shroud formation | |
filling-in | |
invariant recognition | |
[randomize, search, select] multiple hypotheses | |
cognition, emotion, motor | |

Grossberg provides a relatively clear concept of consciousness that is tightly [composed of, linked to] his neural [micro-circuits, modal architectures] for non-conscious processes. It's easy to blur the lines, and, perhaps more important for me, while consciousness is important, it tends to be dwarfed by the MUCH larger, and often more beautiful, framework that Grossberg provides. For example, the [architecture, function, process]s for many subconscious processes seem far more [challenging, powerful, fun] than the simplistic [rational, logical, scientific] thinking (cognition) that we are used to. The "Hard problem of Consciousness" seems more like a pathetic joke.


Conscious versus non-conscious

As quoted from Grossberg 2021 page ??? :
p451sec From streaming to speech: Item-list resonances for recognition of speech and language
  • p451sec Top-down attentive matching during speech and language using the ART Matching Rule
  • p451sec Phonemic restoration: How the future can influence what is consciously heard in the past
  • p452sec Conscious speech is a resonant wave: Coherently grouping the units of speech and language
  • p452sec When is a "gray chip" a "great ship"?
  • p453sec Resonant transfer from "gray" to "great": Longer silence intervals allow greater habituation
  • p455sec Phonemic restoration in a laminar cortical model of speech perception
  • p459sec lisTELOS: Storing, learning, planning, and performing eye movement sequences
  • p460sec Spatial Item-Order-Rank working memory stores sequences of saccadic target positions
  • p461sec How does the brain know before it knows? Balancing reactive vs. planned movements
  • p462sec Choosing reactive and planned movements using frontal-parietal resonances
  • p463sec From TELOS to lisTELOS: Basal ganglia coordination of multiple brain processes
  • p464sec Three basal ganglia gates regulate saccadic sequence storage, choice, and performance
  • p472sec Adaptive resonance in lexical decision tasks: Error rate vs. reaction time data
  • p473sec Semantic vs. visual similarity
  • p473sec Auditory-visual interactions are needed to model semantic relatedness
  • p474sec Explaining chunk data from the tachistoscopic condition using ART
  • p474sec Explaining data from the reaction time condition using ART: List item error trade-off
  • p477sec Back to LIST PARSE: Volitionally-controlled variable-rate sequential performance
  • p515sec From survival circuits to ARTSCAN Search, pART, and the Where's Waldo problem
  • p549sec START: Spectrally Timed Adaptive Resonance Theory
  • p552sec nSTART: neurotrophic Spectrally Timed Adaptive Resonance Theory
  • p579sec Spatial navigation uses a self-stabilizing ART spatial category learning system
  • p600sec An ART spatial category learning system: The hippocampus IS a cognitive map!
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
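Howell: the "two against one" logic of the ART Matching Rule can be sketched in a few lines of code. This is my own hypothetical toy illustration, not Grossberg's actual shunting on-center off-surround equations: a feature stays active only when bottom-up input and top-down expectation agree, while a volition parameter lets a top-down expectation fire on its own (as in visual imagery).

```python
import numpy as np

def art_match(bottom_up, top_down_expectation, volition=0.0):
    """Toy sketch of the ART Matching Rule ("two against one").
    A feature survives only with BOTH bottom-up input and top-down
    expectation support; top-down alone is modulatory (subthreshold)
    unless a volitional signal raises its gain, as in imagery.
    Hypothetical simplification, not the book's differential equations."""
    bu = np.asarray(bottom_up, dtype=float)
    td = np.asarray(top_down_expectation, dtype=float)
    # Off-surround: suppress features lacking top-down support.
    matched = bu * (td > 0)
    # Modulatory on-center: top-down alone fires only if willed.
    imagery = volition * td * (bu == 0)
    return matched + imagery
```

For example, with input [1, 1, 0] and expectation [1, 0, 1], only the first feature resonates ("two against one"); setting volition=1.0 also activates the third, expected-but-absent, feature.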
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
    Related book index entries :
    conscious, resolution of complementarity and uncertainty 619
    conscious ARTWORD, see cARTWORD
    conscious audition, singing 479
    conscious emotions, 516
    conscious minds, experiences of 618-19
    conscious minds, hypothesis linking resonant brain dynamics and, 44-45
    consciousness, xi, 152-53
    consciousness, approaches to understanding, 47-9
    consciousness, brain organization, 619
    consciousness, evolution of, 40-41
    consciousness, extra degree of freedom, 41
    consciousness, hard problem of, xii
    consciousness, link between movement and, 39t
    consciousness, type of, 42t
    consciousness, What and Where cortical processing streams, 28-29
    consciousness, without qualia, 88
    Consciousness Explained (Dennett), 48, 135
    conscious perception, motion explaining, 325-26
    conscious resonance, extra degree of freedom, 41
    conscious seeing vs recognition, 5
    conscious speech, grouping speech and language, 452
    conscious states, resonant states, xii, 38-9, 42-3
    conscious visibility, 372
    unconscious inference, 4, 41, 154, 335, 355
    unconscious inference, Helmholtz's theory 69


    The unifying perspective of autonomous adaptive intelligence

    As quoted from Grossberg 2021 page xx :
    "... It is important to realize that the words mind and brain need not be mentioned in the derivations of many of the book's design principles and mechanisms. At bottom, three words characterize the kind of understanding to which this book contributes: autonomous adaptive intelligence. The theories in this book are thus just as relevant to the psychological and brain sciences as they are to the design of new intelligent systems in engineering and technology that are capable of autonomously adapting to a changing world that is filled with unexpected events.

    Mind and brain become relevant because huge databases support the hypothesis that brains are a natural physical embodiment of these principles and mechanisms. In particular, the hypothesis that I use in gedanken, or thought, experiments to derive brain models of cognition and cognitive-emotional interactions describe familiar properties of environments that we all experience. Coping with these environmental constraints is important for any autonomous adaptively intelligent agent, whether natural or artificial. Indeed, Chapter 16 notes that the processes which the book describes can be unified into an autonomous intelligent controller for a mobile agent. ..."





    Mind-body problem: Brain theories assemble laws and modules into modal architectures

    This slide quotes directly from Grossberg's book (Grossberg 2021 pages xi, xvi), and is applicable to BOTH conscious and subconscious processes. Too bad I don't have a recording of Grossberg reading this out... :

    "... Indeed, one can explain and predict large amounts of psychological and neurobiological data using a small set of mathematical laws, such as laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM), and a somewhat larger set of characteristic microcircuits, or modules, that embody useful combinations of functional properties, such as properties of learning and memory, decision-making, and prediction. Thus, just as in physics, only a few basic laws, or equations, are used to explain and predict myriad facts about mind and brain, when they are embodied in modules that may be thought of as the "atoms" or "molecules" of intelligence.

    Specializations of the laws in variations of these modules are then combined into larger systems that I like to call modal architectures, where the word "modal" stands for different modalities of intelligence, such as vision, speech, cognition, emotion, and action. Modal architectures are less general than a general-purpose von Neumann computer, but far more general than a traditional AI algorithm. Modal architectures clarify, for example, why we have the five senses of sight, sound, touch, smell, and taste, and how they work. Continuing with the analogy from physics, modal architectures may be compared with macroscopic objects in the world.

    These equations, modules, and modal architectures underlie unifying theoretical principles and mechanisms of all the brain processes that the book will discuss, and that my stories will summarize. ..."




    Cognitive impenetrability and the failure of classical AI

    As quoted from (Grossberg 2021 page xvii)

    "... A related factor that makes the mind-body problem so hard to solve is that our conscious experiences do not give us direct introspective access to the architecture of our brains. We need additional tools to bridge this gap, which is often called the property of cognitive impenetrability. Cognitive impenetrability enables us to experience the percepts, thoughts, feelings, memories, and plans that we use to survive in a rapidly changing world, without being needlessly distracted by the intricate brain machinery that underlies these psychological experiences.

    Said in a different way, brain evolution is shaped by behavioural success. No matter how beautiful your ancestor's nerve cells were, they could all too easily have ended up as someone else's meal if they could not work together to generate behaviour that was capable of adapting quickly to new environmental challenges. Survival of each species requires that its brains operate quickly and effectively on the behavioural level. It is a small price to pay that we cannot then also solve the mind-brain problem through introspection alone. ..."


    "... One of the problems faced by classical AI was that it often built its models of how the brain might work using concepts and operations that could be derived from introspection and common sense. Such an approach assumes that you can introspect internal states of the brain with concepts and words that we use to describe objects and actions in our daily lives. It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works, and thus failed to imitate the results of brain processing on the level of our psychological awareness, rather than probing the mechanisms of brain processing that give rise to these results. They fell victim to the problem of cognitive impenetrability. ..."

    Howell 21Jul2023 : To this I add the question :
    Do general neural network approaches succumb to cognitive impenetrability? The phrase "Artificial Intelligence" has been generalized to include Computational Intelligence, including neural networks. High-profile modern AI systems, like Large Language Models (Google BARD, OpenAI-Microsoft chatGPT, etc), are based on "Transformer Neural Network" (TrNN) architectures, the current favourite in a progression from Recurrent, to Convolutional, to Transformer Neural Networks. These systems are strongly dependent on information theoretic and statistical methods as well as growing architectures, and of course gradient descent learning from errors. Because they are data driven, they avoid some of the pitfalls of classical AI that Grossberg mentions, but after learning, do they automatically develop appropriate neural modules and modal architectures? My guess is that this is unlikely, to the best of my very limited knowledge. But even if TrNNs currently fail, that still leaves the question as to whether they might provide a base for back-integrating concepts like Grossberg's, either by modifying the TrNN itself and retraining the system, or by evolving mature TrNN-LLMs automatically with concepts like Grossberg's. Neither of these approaches sounds easy.



    The Hard problem of consciousness

    As quoted from Grossberg 2021 page 43 :

    "... My preceding remarks make the claim that, in effect, this book is proposing major progress towards solving what is known as the Hard Problem of Consciousness; namely, it is summarizing an emerging theory of the events that occur in our brains when we have specific conscious experiences. Before turning to the scientific results themselves, it may be helpful to summarize what the Hard Problem of Consciousness is, and the limits to what such a theory can achieve.

    First, what is the 'hard problem of consciousness'? Wikipedia says: '... The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences - how sensations acquire characteristics, such as colors and tastes'. David Chalmers, who introduced the term 'hard problem' of consciousness, contrasts this with the 'easy problems' of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. As (Chalmers 1995) has noted: 'The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As (Nagel 1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the feeling of a stream of conscious thought. What unites all these states is that there is something it is like to be in them. All of them are states of experience.'


    "... The Internet Encyclopedia of Philosophy goes on to say: 'The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is something it is like for a subject in conscious experience, why conscious mental states light up and directly appear to the subject.' ..."

    Howell 23Jul2023 : one example of how Chalmers' Hard Problem is approached by Grossberg's concepts
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
    Howell 21Jul2023 :
    I really don't see the hard problem of consciousness as being much different from the supposedly 'easy problems', which actually aren't easy at all except when treated by philosophical, rational-logical descriptions. The hard problem may very well be to utilise the modular architectures that Stephen Grossberg, or the neuroscience [neuron spike trains, EEG, fMRI, etc] approaches of others, developed some time ago. In any case, Grossberg's concepts already [explain, model] examples that Chalmers cites, often from long before the time that Chalmers made many of his statements (~1995?). Is consciousness really any different or any harder, and if so, how so? IF my thinking were restricted to a philosophical background and modes of thinking, I might easily believe so. Chalmers does not, to me, provide that in the simple statements cited here, and does not seem aware of work by Grossberg and others. It is almost as if his thinking stops at an arbitrary level. Perhaps I would have to dig much deeper, and so perhaps should Chalmers. Or maybe not...

    An "Even Harder Problem" is to understand the experimental data from neuroscience, neurophysiology, psychology, etc. To me, experiments come before theories. Of course, explaining this data is what Grossberg and others are trying to do, but I think there is a problem if one only looks at the theories or models, without actually understanding the data.

    Stephen Grossberg goes into further details and questions on the "Hard Problem", but I will leave it to you to read his book.

    To me, the "Hard Problem of Consciousness" has now become that of understanding, and being able to program, software implementing Grossberg's consciousness theories, which is far, far more difficult than all of the philosophical discussions, and is already far more rewarding, even though I have yet to start in earnest.




    ART - Adaptive Resonance Theory

    As quoted from Grossberg 2021 page ??? :
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
        red - cognitive-emotional dynamics
        green - working memory dynamics
        black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white - general microcircuit : a possible component of ART architecture
    lime green - sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet? some are conscious (decision-quality? or must interact with conscious cognitive?), others not
    light blue - post-perceptual cognition?
    pink - "the feeling of what happens" and knowing what event caused that feeling
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
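Howell: a minimal sketch of instar and outstar learning (my simplification, not the book's differential equations): both rules drive weights toward a sampled pattern, with the weight change gated by the sampling node's activity. Instar weights converging on an active category node learn the bottom-up feature pattern; outstar weights fanning out from an active source node learn the top-down expectation pattern.

```python
import numpy as np

def instar_update(w, x, y, lr=0.1):
    """Instar (bottom-up filter) learning sketch:
    dw_ji = lr * y_j * (x_i - w_ji)
    Weights into an active category node y_j track input pattern x;
    learning is gated by the postsynaptic (category) activity."""
    return w + lr * y[:, None] * (x[None, :] - w)

def outstar_update(w, y, x, lr=0.1):
    """Outstar (top-down expectation) learning sketch:
    dw_ji = lr * y_j * (x_i - w_ji)
    Weights fanning out from an active source node y_j track the
    pattern x at the border cells; gated by presynaptic activity."""
    return w + lr * y[:, None] * (x[None, :] - w)
```

With repeated presentations, only the rows gated by active nodes converge to the sampled pattern; weights of inactive nodes are untouched, which is part of why ART learning is stable.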
  • p184 Howell: grepStr 'conscious' - "... I claim that a common set of brain mechanisms controls all of these processes. Adaptive Resonance Theory, or ART, has been incrementally developed to explain what these mechanisms are, and how they work and interact, since I introduced it in 1976 [Grossberg, 1976a, 1976b] and it was incrementally developed in many articles to the present, notably with the help and leadership of Gail Carpenter, as I will elaborate on below. There are many aspects of these processes that are worth considering. For example, we need to understand between... ..." [Grossberg 2021 p184]
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • p190sec Computational properties of the ART Matching Rule 205
  • p195sec Mathematical form of the ART Matching Rule 210
  • p206sec ART cycle of hypothesis testing and category learning 215
  • p227sec ART direct access solves the local minimum problem 220
  • p240sec Converting algebraic exemplar models into dynamical ART prototype models 230
  • p241sec Explaining human categorization data with ART: Learning rules-plus-exceptions 235
  • p246sec Self-normalizing inhibition during attentional priming with the ART Matching Rule 240
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A? 300
  • As stated in [Grossberg 2021 p13c1h1.0] : "... This range of applications is possible because ART models embody general-purpose properties that are needed to solve the stability-plasticity dilemma in many different types of environments. In all these applications, insights about cooperative-competitive dynamics also play a critical role. ..." 300
  • p208sec ART links synchronous oscillations to attention and learning 305
  • p249sec Many kinds of psychological and neurobiological data have been explained by ART 310
  • p358sec Intracortical but interlaminar feedback also carries out the ART Matching Rule 315
  • p365sec ART Matching Rule in multiple cortical modalities 320
  • p600sec An ART spatial category learning system: The hippocampus IS a cognitive map! 325
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    || 330
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input -> orienting arousal; BU+TD mismatch -> arousal and reset. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance 335
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off. 340
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning; superset recoding 345
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype 350
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let a past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing. 355
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity -> head direction cells -> stripe cells (small-scale 1D periodic code, ECIII) -SOM-> grid cells (small-scale 2D periodic code, ECII) -SOM-> place cells (larger-scale spatial map, DG/CA3) -> place cells (CA1) -> conjunctive-coding cells (EC V/VI) -> top-down feedback back to stripe cells (small-scale 1D periodic code, ECIII). Also: stripe cells (ECIII) -> place cells (CA1). 410
  • add content of subSection "Multiple applications of ART to large-scale problems in engineering and technology" 800
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987) 900
  • p420sec SPINET and ARTSTREAM: Resonant dynamics doing auditory streaming 905
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
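    To give viewers a concrete feel for the ART Matching Rule and vigilance-controlled reset described in the figure notes above, here is a minimal toy sketch. This is my own illustration with hypothetical names (art_match, vigilance), not Grossberg's published equations; it only captures two qualitative properties: the top-down expectation is modulatory (it can suppress unexpected features but cannot create features lacking bottom-up support), and a mismatch below vigilance triggers reset.

    ```python
    def art_match(bottom_up, top_down, vigilance=0.8):
        """Toy ART Matching Rule on binary feature vectors.

        Returns (matched_pattern, reset). The matched pattern is the
        intersection of bottom-up input and top-down expectation: top-down
        signals cannot create something out of nothing. Reset fires when
        degree of match < vigilance (mismatch -> arousal -> category reset).
        """
        matched = [int(b and t) for b, t in zip(bottom_up, top_down)]
        active = sum(bottom_up) or 1          # active bottom-up features
        degree_of_match = sum(matched) / active
        reset = degree_of_match < vigilance
        return matched, reset

    # Close match: resonance persists, no reset.
    print(art_match([1, 1, 1, 0], [1, 1, 1, 1]))  # -> ([1, 1, 1, 0], False)
    # Poor match: most of the F1 pattern is suppressed, reset is triggered.
    print(art_match([1, 1, 1, 0], [1, 0, 0, 1]))  # -> ([1, 0, 0, 0], True)
    ```

    Raising the vigilance parameter makes the system demand closer matches, so it learns narrower categories; lowering it allows broader generalization, which is the qualitative trade-off the figures above describe.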


    ???

    As quoted from Grossberg 2021 page ??? :
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."