#] #] *********************
#] "$d_web"'ProjMini/TrNNs_ART/Grossbergs ART- Adaptive Resonance Theory workCore.html'
# www.BillHowell.ca 23Jul2023 initial
# view in text editor, using constant-width font (eg courier), tabWidth = 3

going from "$d_web"'ProjMini/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html'
to an ordered listing of theme-specific points, eg :
	"$d_web"'ProjMini/TrNNs_ART/Grossbergs ART- Adaptive Resonance Theory workCore.html'

add '#] nnn ...' at the start of each line to be extracted, where nnn is a three-digit number, i.e. \([0-9]\{3\}\)
very rough initial numbering classes :
	000-099 computing principles [[in, out]star, TD-BU, Complementary, Cooperative-Competitive, ART Matching Rule, Laminar, etc]
	100-199 architecture & functionality
	200-299 mathematics & functionality
	300-399 [neuro-[bi, physi], psych]ology - explain, experiment, features
	400-499 non-[neuro, psych] applications (eg engineering, medical, ???)
	500-599 special [function, property]
	800-899 [anticipate, explain] [experiment, theory]
	900-999 leads into other themes
multiple class numbers per line are possible, eg : '#] nnn mmm ...'
note that extractions currently pull out only the first (later - all)

# $ grep "^#] \([0-9]\{3\}\)" "$d_web"'ProjMini/TrNNs_ART/Grossbergs ART- Adaptive Resonance Theory workCore.html' | sed "s/^#\] //" | sort >"$d_web"'ProjMini/TrNNs_ART/Grossbergs ART- Adaptive Resonance Theory workCull.html'

need a script to transfer the output of applying grepStr to new files, eg : additions to new files listed in
	"$d_web"'ProjMini/TrNNs_ART/1_TrNNS_ART list of txt files to search.txt'
#48************************************************48
grepStr = ART\|cART\|pART\|ARTMAP\|ARTSTREAM\|ARTPHONE\|ARTSCAN\|dARTSCAN\|pARTSCAN\|ARTSCENE\|ARTWORD\|cARTWORD\|LAMINART\|PARSE\|SMART\|START\|nSTART
-->
  • p012sec Cooperative-competitive-learning dynamics: ART in the brain and technology
  • p038sec How does object attention work? ART matching rule
  • p041sec The road to ART: Helmholtz, James, and Gregory

    Credibility from non-[bio, psycho]logical applications of Grossberg's ART

    #] 300
  • As stated in [Grossberg 2021 p13c1h1.0] : "... This range of applications is possible because ART models embody general-purpose properties that are needed to solve the stability-plasticity dilemma in many different types of environments. In all these applications, insights about cooperative-competitive dynamics also play a critical role. ..." #] 410
  • add content of subSection "Multiple applications of ART to large-scale problems in engineering and technology" #] 200
  • p190sec Computational properties of the ART Matching Rule
  • p190sec ART matching: Expectation and attention during brightness perception
  • p193sec Neurobiological data for ART matching by corticogeniculate feedback
  • p194sec Neurobiological data for ART matching in visual, auditory, and somatosensory cortex #] 205
  • p195sec Mathematical form of the ART Matching Rule
  • p205sec How does ART stabilize learning? #] 210
  • p206sec ART cycle of hypothesis testing and category learning #] 300
  • p208sec ART links synchronous oscillations to attention and learning
  • p212sec A thought experiment that leads to ART
  • p222sec Multiple applications of ART to large-scale problems in engineering and technology
  • p225sec Catastrophic forgetting without the top-down ART Matching Rule #] 215
  • p227sec ART direct access solves the local minimum problem
  • p228sec Learning of fuzzy IF-THEN rules by a self-organizing ART production system
  • p228sec ART provides autonomous solutions for Explainable AI
  • p229sec SMART: A laminar cortical hierarchy of spiking neurons #] 220
  • p240sec Converting algebraic exemplar models into dynamical ART prototype models #] 230
  • p241sec Explaining human categorization data with ART: Learning rules-plus-exceptions
  • p242sec Self-supervised ARTMAP: Learning on our own after leaving school #] 235
  • p246sec Self-normalizing inhibition during attentional priming with the ART Matching Rule #] 305
  • p249sec Many kinds of psychological and neurobiological data have been explained by ART
  • p315sec ART again: Choosing an object's motion direction and speed
  • p315sec ART again: Dynamically stabilizing learned directional cells also solves the aperture problem
  • p319sec ART Matching Rule explains induced motion
  • p350sec Solving the aperture problem for heading in natural scenes using the ART Matching Rule #] 310
  • p358sec Intracortical but interlaminar feedback also carries out the ART Matching Rule #] 315
  • p365sec ART Matching Rule in multiple cortical modalities #] 900
  • p420sec SPINET and ARTSTREAM: Resonant dynamics doing auditory streaming
  • p422sec From SPINET processing of sound spectra to ARTSTREAM creation of multiple streams
  • p443sec LIST PARSE: A laminar model of variable-length sequences
  • p451sec Top-down attentive matching during speech and language using the ART Matching Rule
  • p468sec ARTPHONE: Rate-sensitive gain control creates rate-invariant working memories
  • p474sec Explaining chunk data from the tachistoscopic condition using ART
  • p474sec Explaining data from the reaction time condition using ART: List item error trade-off
  • p477sec Back to LIST PARSE: Volitionally-controlled variable-rate sequential performance
  • p515sec From survival circuits to ARTSCAN Search, pART, and the Where's Waldo problem
  • p549sec START: Spectrally Timed Adaptive Resonance Theory
  • p552sec nSTART: neurotrophic Spectrally Timed Adaptive Resonance Theory
  • p579sec Spatial navigation uses a self-stabilizing ART spatial category learning system #] 320
  • p600sec An ART spatial category learning system: The hippocampus IS a cognitive map! #] 020
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
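    A minimal Python sketch of the ART Matching Rule just illustrated (illustrative algebra only, not Grossberg's shunting ODEs): bottom-up input alone can fire cells, a top-down expectation alone is only modulatory ("one against one" loses to the off-surround), and cells receiving both bottom-up and top-down input are amplified ("two against one" wins) while unexpected features are suppressed.

      import numpy as np

      def art_match(bottom_up, top_down=None, gain=1.5, suppress=0.2):
          # No top-down expectation: the bottom-up pattern passes through.
          if top_down is None:
              return bottom_up.copy()
          expected = top_down > 0
          # Two against one: matched features (BU + TD) are amplified;
          # one against one: unmatched BU features are suppressed.
          out = np.where(expected, gain * bottom_up, suppress * bottom_up)
          # Modulatory on-center: TD alone cannot create activity.
          return np.where(bottom_up > 0, out, 0.0)

      I = np.array([1.0, 1.0, 1.0, 0.0])    # bottom-up feature pattern
      E = np.array([1.0, 0.0, 1.0, 1.0])    # top-down expectation (prototype)
      print(art_match(I))                   # [1. 1. 1. 0.] : BU alone fires
      print(art_match(I, E))                # [1.5 0.2 1.5 0. ] : attention focuses
      print(art_match(np.zeros(4), E))      # [0. 0. 0. 0.] : TD alone is subthreshold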
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s. #] 025
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells #] 030
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!) #] 325
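    The two learning rules named above, in a minimal Python sketch (gated steepest descent; names and rates are illustrative). Instar and outstar are algebraic twins: in the instar, learning is gated by the postsynaptic category cell, so bottom-up weights learn to filter the input pattern; in the outstar, learning is gated by the presynaptic sampling cell, so top-down weights learn to read out the expected pattern.

      import numpy as np

      def instar(W, x, y, lr=0.5):
          # Bottom-up adaptive filter (LTM): category j's weight vector W[j]
          # tracks the input pattern x only while category cell y[j] is active.
          return W + lr * y[:, None] * (x[None, :] - W)

      def outstar(V, x, y, lr=0.5):
          # Top-down expectation (LTM): sampling cell j's outgoing weights V[j]
          # learn the pattern x at its target cells only while y[j] fires.
          return V + lr * y[:, None] * (x[None, :] - V)

      x = np.array([1.0, 0.5, 0.0])           # feature pattern (STM)
      y = np.array([1.0, 0.0])                # winner-take-all category activity
      W = instar(np.zeros((2, 3)), x, y)      # only the active category learns
      V = outstar(np.zeros((2, 3)), x, y)     # it also learns its readout pattern
      print(W); print(V)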
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    || #] 800
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987) #] 330
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
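    The reset criterion in the caption above ("F2 is reset if degree of match < vigilance"), as a one-test Python sketch using the fuzzy-ART match measure |I ∧ w| / |I| (the book states the rule at the level of shunting on-center off-surround networks; this algebra is only a stand-in):

      import numpy as np

      def matched(I, w, rho):
          # Degree of match between bottom-up input I and top-down prototype w;
          # fuzzy AND is the componentwise minimum. A mismatch (match < rho)
          # activates the orienting system A: arousal burst, reset, search.
          return np.minimum(I, w).sum() / I.sum() >= rho

      I = np.array([1.0, 1.0, 0.0, 0.0])
      w = np.array([1.0, 0.0, 1.0, 0.0])
      print(matched(I, w, rho=0.4))    # True : resonance can develop
      print(matched(I, w, rho=0.8))    # False : reset and memory search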
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes the minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
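    Match tracking from the caption above, as a minimal sketch (the epsilon increment and names are illustrative): after a predictive error at the ARTMAP map field, vigilance rises from its baseline to just above the match ratio that allowed the error, forcing a search for a finer category. This is the minimax principle: minimize predictive error while maximizing generalization.

      def match_track(match_ratio, rho_baseline, eps=1e-3):
          # Wrong prediction: raise vigilance just above the offending match
          # ratio, so the current category fails the match test and search resumes.
          return max(rho_baseline, match_ratio + eps)

      rho = 0.5                        # baseline vigilance
      rho = match_track(0.72, rho)     # predictive error at match ratio 0.72
      print(rho)                       # 0.721 : stricter matching criterion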
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and of the many variants that we and other groups have developed, in large-scale applications in engineering and technology, a stream of work that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites #] 335
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D ⊂ C ⊂ A; B ⊂ A; B ∩ C = ∅; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfies the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off. #] 340
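    A toy Python demonstration of this instability (illustrative patterns and update rules; the book's exact sets and proofs differ). With the ART Matching Rule shut off, the most similar category is simply recoded by each input, so cyclic presentation keeps overwriting earlier learning; restoring match-based reset (vigilance rho) lets the code stabilize within a pass or two.

      import numpy as np

      A = np.array([1., 1., 0., 0., 0.])   # illustrative nested patterns:
      B = np.array([1., 1., 1., 0., 0.])   # A subset of B, B subset of C
      C = np.array([1., 1., 1., 1., 0.])
      D = np.array([1., 0., 0., 0., 0.])

      def run(seq, rho=None, n_cat=5, passes=3):
          W = np.ones((n_cat, 5))          # uncommitted templates
          codes = []
          for _ in range(passes):
              for x in seq:
                  T = np.minimum(x, W).sum(1) / (0.01 + W.sum(1))  # choice
                  for j in np.argsort(-T):                         # search order
                      if rho is not None and np.minimum(x, W[j]).sum() / x.sum() < rho:
                          continue         # mismatch reset: try the next category
                      # Rule off: winner fully recoded by x (fast learning).
                      # Rule on: template refines toward the matched features.
                      W[j] = x.copy() if rho is None else np.minimum(x, W[j])
                      codes.append(j); break
          return codes

      print(run([A, B, C, A, D]))            # one category keeps recoding everything
      print(run([A, B, C, A, D], rho=0.9))   # code stabilizes after the first passes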
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites by nonspecific thalamus #] 240
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A? #] 345
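    One way to see this point numerically (a sketch with linear similarity; published exemplar models use nonlinear similarity, where the correspondence is only approximate): the summed similarity of a test item to every stored category-A exemplar equals its similarity to a single prototype, the exemplar average. A local network therefore only needs to store the prototype.

      import numpy as np

      rng = np.random.default_rng(0)
      exemplars = rng.random((5, 4))       # stored exemplars of category A
      x = rng.random(4)                    # novel test item

      exemplar_response = sum(x @ e for e in exemplars)    # exemplar model
      prototype = exemplars.mean(axis=0)                   # local prototype
      prototype_response = len(exemplars) * (x @ prototype)
      print(np.isclose(exemplar_response, prototype_response))   # True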
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
    || similar illustration to Figure 06.03, with some changes to arrows
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit implements a top-down, modulatory on-center, off-surround network that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Boothe 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. weights from [angle to disparity-gradient] cells - learned while viewing 3D image; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes #] 350
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
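    The model explanation in (1. TL) above can be sketched in a few lines of Python (illustrative parameters): items are stored in working memory as a primacy gradient of activities, recall repeatedly selects the most active item, and activation noise makes adjacent items, whose activities are closest, the most likely to swap, which yields the observed dominance of neighboring-item transpositions.

      import numpy as np

      rng = np.random.default_rng(1)
      items = list("ABCDEF")                       # six-letter list, as in the ISR data
      primacy = np.linspace(1.0, 0.5, len(items))  # primacy gradient in working memory

      def recall(noise=0.06):
          # Noisy stored activities; recall in order of descending activity.
          act = primacy + rng.normal(0.0, noise, len(items))
          return "".join(items[i] for i in np.argsort(-act))

      print([recall() for _ in range(5)])   # errors are mostly neighbor swaps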
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.68. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala. #] 100
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component rank increases.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component rank increases.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch]. #] 905
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation. #] 355
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
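    Each SOM stage in the hierarchy above (stripe cells -> grid cells -> place cells) can be caricatured by a single competitive-learning step in Python (a sketch; the model's actual dynamics are shunting equations with learned transformations between spatial scales): the most activated category cell wins and tunes its weights to the current input via the instar rule.

      import numpy as np

      def som_step(W, x, lr=0.2):
          # Winner-take-all choice over category cells (eg place cells reading
          # their grid-cell inputs), then instar learning by the winner only.
          j = int(np.argmax(W @ x))
          W[j] += lr * (x - W[j])
          return j

      rng = np.random.default_rng(2)
      W = rng.random((10, 6))     # 10 spatial category cells, 6 input cells
      x = rng.random(6)           # current input pattern
      print(som_step(W, x))       # index of the winning spatial category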
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS. The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. If using a new file, it should be added to the bash search script. #] 040
  • p184 Howell: grepStr 'conscious' - "... I claim that a common set of brain mechanisms controls all of these processes. Adaptive Resonance Theory, or ART, has been incrementally developed to explain what these mechanisms are, and how they work and interact, since I introduced it in 1976 [Grossberg, 1976a, 1976b] and it was incrementally developed in many articles to the present, notably with the help and leadership of Gail Carpenter, as I will elaborate on below. There are many aspects of these processes that are worth considering. For example, we need to understand between... ..." [Grossberg 2021 p184]
  • p402fig11.44 Howell: amazing results from the 3D LAMINART model, Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes. But could simple parallax processing produce much of the same?
  • p??? Howell: grepStr 'Explainable AI' - Grossberg explains how ART easily leads to IF-THEN rules... somewhere in book