Formation of chains of sparse distributed codes in a macrocolumn

Macrocolumn using sequential sparse codes
  • The animation shows a sequence of spatial inputs (frames) over a small region of the visual field, as notionally represented in the LGN.
  • The sequence corresponds to a corner of a rectangle passing through the region.
  • On each frame, an L2 code becomes active in the V1 hypercolumn (macrocolumn) (pink) that sees the central (white) aperture.
  • These L2 codes (ensembles, or cell assemblies) are sparse distributed codes. Each code consists of one active neuron (black) per minicolumn. For the moment, assume that the choice of winner in each minicolumn is completely random, and therefore that the choice of each code as a whole is random.
  • This tiny macrocolumn instance consists of only Q=7 minicolumns, each containing only K=7 principal neurons (which we take to correspond to the layer 2/3 pyramidals). However, real neocortical macrocolumns consist of Q~70 minicolumns, each with K~20 layer 2/3 pyramidals; V1 macrocolumns (hypercolumns) may be about twice as large.
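The winner-per-minicolumn scheme above can be sketched in a few lines. This is a toy illustration, not the actual model code; the names (`random_code`) and the use of uniform random selection follow the simplifying assumption stated above.

```python
import random

# Toy sketch of random code selection: exactly one winner per minicolumn,
# chosen uniformly at random (sizes match the animation's toy macrocolumn).
Q = 7    # minicolumns per macrocolumn (real cortex: Q ~ 70)
K = 7    # L2/3 principal cells per minicolumn (real cortex: K ~ 20)

def random_code(Q, K, rng=random):
    """Return an SDR code as the index of the single winner in each minicolumn."""
    return tuple(rng.randrange(K) for _ in range(Q))

code = random_code(Q, K)
# Exactly Q of the Q*K cells are active (one per minicolumn), and the
# code space holds K**Q = 7**7 = 823543 distinct codes even at this toy size.
```

Note that even this tiny instance has a large code space, which is what makes storing many codes in superposition feasible.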
  • On each frame, the active L1 (LGN) code (which is not a sparse distributed code) is associated, in both the bottom-up (U) and top-down (D) directions, with the L2 code [the set of black (active) units] that becomes active. Only a tiny sample of these vertical (U or D) associations is shown (gray lines).
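The vertical associative step can be sketched as a binary Hebbian weight increase in both directions. The sizes and the active-unit lists below are illustrative assumptions, not values from the animation.

```python
# Hypothetical sketch of the vertical (U and D) associative learning step:
# the active L1 (LGN) units and the active L2 code are linked in both
# directions by a binary Hebbian weight increase.

L1_size, L2_size = 12, 49                       # toy sizes (assumed)
U = [[0.0] * L2_size for _ in range(L1_size)]   # bottom-up weights, L1 -> L2
D = [[0.0] * L1_size for _ in range(L2_size)]   # top-down weights,  L2 -> L1

active_L1 = [0, 3, 7]                           # active LGN units this frame (example)
active_L2 = [2, 9, 16, 23, 30, 37, 44]          # one winner per minicolumn (example)

for i in active_L1:
    for j in active_L2:
        U[i][j] = 1.0    # strengthen bottom-up association
        D[j][i] = 1.0    # strengthen top-down association
```

After this step, presenting the same L1 input drives exactly the stored L2 code bottom-up, and activating the L2 code reconstructs the L1 input top-down.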
  • In addition, though not shown here, the L2 neurons connect, via a horizontal synaptic matrix, to all other L2 neurons in the same and nearby macrocolumns (with a distance-dependent fall-off in connectivity rate). The neurons comprising the L2 code active at T increase their weights onto the neurons comprising the L2 code that becomes active at T+1.
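The horizontal chaining rule can be sketched as follows. This is a minimal illustration under the stated assumptions (binary weight increases, a single macrocolumn, random codes); the names `H` and `random_code` are ours, not the model's.

```python
import random

# Sketch of horizontal Hebbian chaining: units of the L2 code active at T
# strengthen their weights onto units of the L2 code active at T+1.
Q, K = 7, 7
N = Q * K                                   # total L2 units in the macrocolumn

def random_code(rng):
    # one winner per minicolumn, expressed as global unit indices
    return [m * K + rng.randrange(K) for m in range(Q)]

rng = random.Random(0)
frames = [random_code(rng) for _ in range(4)]   # codes active at T = 0..3

H = [[0.0] * N for _ in range(N)]           # horizontal weights, initially zero
for prev, nxt in zip(frames, frames[1:]):
    for i in prev:
        for j in nxt:
            H[i][j] = 1.0                   # binary Hebbian increase

# Every unit of the code at T+1 now receives full input (Q) from the code at T,
# so reactivating frames[0] preferentially drives frames[1], and so on: a chain.
assert all(sum(H[i][j] for i in frames[0]) == Q for j in frames[1])
```

Because the weight matrix is shared, all such chains are stored in superposition; with random codes and a large code space, crosstalk between chains stays low.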
  • Thus, we see (if we visualize the horizontal weight increases) the formation of a sparse distributed spatiotemporal memory trace—a chain of SDRs (sparse distributed representations), i.e., a Hebbian "phase sequence" of cell assemblies—in this particular macrocolumn in response to the occurrence of a natural space-time pattern (moving edge). Note that all those SDRs are stored in superposition in the hypercolumn (more specifically, in the hypercolumn's primary coding field, which, as stated above, we take to be the L2/3 portion of the hypercolumn) and may generally intersect.
  • In the real brain, this scenario would be taking place in the context of a much larger network with many more hierarchical levels, corresponding to the progression of visual cortical areas along the ventral and dorsal pathways. This animation shows a slightly more complex scenario: learning proceeds simultaneously in macrocolumns across multiple hierarchical stages. Cells (and therefore the sparse distributed codes to which they belong) at higher cortical stages have larger spatial receptive fields (RFs), due to more stages of connectional divergence/convergence, and larger temporal RFs, due to longer activation durations (persistences) (see here and here), for which there is substantial evidence (cf. Uusitalo et al., 1996; Hasson et al., 2008, and the huge working memory literature).
  • The longer persistence of a code at level J allows/causes it to associate with multiple successive codes at level J-1 (effecting a hierarchical temporal nesting). This implements a chunking mechanism for sequence learning, and thus for compression.
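The nesting induced by longer persistence can be sketched as follows. The function name `chunk` and the persistence value are illustrative assumptions; the point is only that one level-J code binds several successive level-(J-1) codes.

```python
# Sketch of hierarchical temporal nesting via longer persistence at level J:
# a level-J code stays active for `persistence` level-(J-1) time steps and
# so associates with every level-(J-1) code active during that span.

persistence = 3   # level-J code persists across 3 level-(J-1) frames (assumed)

def chunk(lower_codes, persistence):
    """Group successive level-(J-1) codes under one level-J code each."""
    return [lower_codes[t:t + persistence]
            for t in range(0, len(lower_codes), persistence)]

lower = ["a", "b", "c", "d", "e", "f"]   # six successive level-(J-1) codes
chunks = chunk(lower, persistence)
# → [['a', 'b', 'c'], ['d', 'e', 'f']]: each level-J code binds 3 lower codes,
# compressing the sequence description by the persistence factor.
```

Stacking this across levels yields the multiplicative growth of temporal RFs described above: a level-(J+1) code with the same persistence factor would span nine level-(J-1) frames.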
  • Note: the LGN is shown partitioned into receptive fields (RFs, green hexagons), each associated with a V1 hypercolumn, only one of which is shown here. We assume complete connectivity between the units comprising an LGN RF and all units comprising the associated hypercolumn. In reality, the RFs of hypercolumns overlap, as seen here for example, and the borders between V1 hypercolumns are not abrupt. These simplifications are just to facilitate explanation.