The video below illustrates recall of a 13-frame sequence by a single Sparsey coding field (CF), or "mac" (short for macrocolumn). The CF consists of Q=10 winner-take-all (WTA) competitive modules (CMs), each having K=18 binary units (analogs of L2/3 pyramidals). The input is 10x10 pixels, and each frame is a random set of five active pixels (analogs of LGN projection cells). During the single learning trial, as each frame was presented, a sparse distributed code (SDC), or just "code", was chosen and the bottom-up (U) synaptic weights from the active pixels to the active code units (black) were increased to 1. In addition, on each frame except the first, the horizontal (H) weights (green arcs) from the previously active code onto the currently active code were increased to 1. Finally, on each frame, the top-down (D) weights (magenta) from the active code units to the active pixels were also increased to 1. Thus, the engram (essentially, the episodic memory) of the sequence was formed at full strength on the basis of a single presentation.

To demonstrate recall, we present the first input frame, which causes its code to activate. This code sends H signals out via the recurrent H matrix; these arrive back at the CF on the next frame and cause the second code to activate. That code then sends: a) D signals (magenta) down to the pixels to activate the correct set of pixels for that frame; and b) H signals (green) that cause the next code to activate, and so on. On each frame, the currently active coding units are black, the previously active ones are gray, and the green arcs show the H synapses carrying the recurrent signals. The video is fast: you can pause it and then click through frame by frame.
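The learning and recall mechanics described above can be sketched in a few dozen lines. This is a minimal illustrative model, not Sparsey's actual implementation: codes for a novel sequence are chosen purely at random (standing in for the G-modulated choice described below), and the D readout uses an assumed threshold (a pixel turns on only if it receives D input from the entire active code), which suppresses crosstalk from code units shared across frames.

```python
import numpy as np

rng = np.random.default_rng(0)

Q, K = 10, 18          # Q WTA competitive modules (CMs), K binary units each
N_PIX = 100            # 10x10 input surface
T = 13                 # sequence length

# Random 13-frame input: each frame is 5 active pixels.
frames = np.zeros((T, N_PIX), dtype=bool)
for t in range(T):
    frames[t, rng.choice(N_PIX, size=5, replace=False)] = True

# Binary weight matrices, all initially 0.
U = np.zeros((N_PIX, Q * K), dtype=bool)   # bottom-up: pixels -> code units
H = np.zeros((Q * K, Q * K), dtype=bool)   # horizontal: code(t-1) -> code(t)
D = np.zeros((Q * K, N_PIX), dtype=bool)   # top-down: code units -> pixels

def random_code():
    # One winner per CM; random, as for a completely unfamiliar input.
    return np.array([q * K + rng.integers(K) for q in range(Q)])

# --- Single learning trial: one-shot increases of U, H, D weights to 1 ---
codes, prev = [], None
for t in range(T):
    code = random_code()
    pix = np.flatnonzero(frames[t])
    U[np.ix_(pix, code)] = True            # active pixels -> active code units
    D[np.ix_(code, pix)] = True            # active code units -> active pixels
    if prev is not None:
        H[np.ix_(prev, code)] = True       # previous code -> current code
    codes.append(code)
    prev = code

# --- Recall: present frame 0, then chain through H, reading out via D ---
recalled = [frames[0]]
code = codes[0]                            # frame 0 reactivates its code
for t in range(1, T):
    h_in = H[code].sum(axis=0)             # H input count at every unit
    # In each CM, the unit with maximal H input wins.
    code = np.array([q * K + np.argmax(h_in[q * K:(q + 1) * K])
                     for q in range(Q)])
    d_in = D[code].sum(axis=0)             # D input count at every pixel
    recalled.append(d_in == Q)             # pixel on iff the whole code supports it

print(all((r == f).all() for r, f in zip(recalled, frames)))
```

Because each correct next-code unit receives H input from all Q previously active units, while a distractor would need accidental links from all Q of them, the chain reinstates each stored code and its frame with near certainty at this loading.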
The input here is a completely random sequence. Sparsey's algorithm for choosing codes preserves similarity, in particular spatiotemporal similarity. It does this by computing the familiarity, G (in [0,1]), of each spatiotemporal moment, i.e., the current frame in the context of the full sequence of frames leading up to it, and adding noise inversely proportional to G into the choice of winners in the CMs. In this case, with a random sequence, G is near zero for each frame, which causes each code to be essentially random. Thus, the set of 13 codes assigned to these 13 frames has approximately the maximum expected Hamming distance between its members. This spreading out of codes in the codespace maximizes storage capacity because it reduces crosstalk interference during retrievals.
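The G-modulated choice of winners can be sketched as follows. This is an illustrative simplification, not Sparsey's exact procedure: G is taken as the mean over CMs of each CM's best normalized input match, and the "noise inversely proportional to G" is modeled by a softmax whose gain (the 20.0*G schedule here is an assumption) grows with G, so a familiar moment reinstates its stored code deterministically while a novel moment yields a near-random code.

```python
import numpy as np

rng = np.random.default_rng(1)

Q, K = 10, 18   # Q WTA competitive modules, K units each

def choose_code(V, rng):
    """Pick one winner per CM, with noise inversely proportional to G.

    V: (Q, K) array of each unit's normalized input summation in [0, 1],
       its evidence that the current spatiotemporal moment is familiar.
    """
    # Global familiarity: mean over CMs of the best match in each CM.
    G = V.max(axis=1).mean()
    winners = np.empty(Q, dtype=int)
    for q in range(Q):
        # Gain grows with G: G -> 1 approaches argmax (stored code wins),
        # G -> 0 approaches a uniform random draw (novel input).
        beta = 20.0 * G                  # assumed gain schedule
        p = np.exp(beta * V[q])
        p /= p.sum()
        winners[q] = rng.choice(K, p=p)
    return G, winners

# Novel moment: no stored code matches, V is near 0 everywhere.
V_novel = rng.uniform(0.0, 0.1, size=(Q, K))
G0, code0 = choose_code(V_novel, rng)

# Familiar moment: one unit per CM (the stored code) matches perfectly.
V_fam = rng.uniform(0.0, 0.1, size=(Q, K))
stored = rng.integers(K, size=Q)
V_fam[np.arange(Q), stored] = 1.0
G1, code1 = choose_code(V_fam, rng)

print(round(G0, 2), round(G1, 2))        # low vs. high familiarity
print((code1 == stored).all())           # familiar input recovers its code
```

For the random 13-frame sequence above, every moment looks like the novel case: G stays near zero, so each of the 13 codes is drawn essentially uniformly, spreading them apart in the codespace.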