Neurithmic Systems
Finding the Fundamental Cortical Algorithm of Intelligence

Notional Mapping of Sparsey® onto Cortex

The animation describes how we think Sparsey maps onto the cortex (the indicated brain locations and types of features represented are approximate, and broadly consistent with current thinking in neuroscience). The annotations describe the flow of signals and aspects of the features represented at the various levels. It is important to realize that Sparsey's core algorithm, the Code Selection Algorithm (CSA) (described in other pubs referenced from this web site), runs in every macrocolumn (mac) whose input meets certain criteria, at every cortical level, on every frame (sequence item) presented. During learning, the CSA combines the bottom-up (U), top-down (D), and horizontal (H) signals arriving from the input level and other macs, and selects a sparse distributed code (SDC) in a mac-holistic way, i.e., as a function of the entire pattern of H, U, and D input to the mac. Specifically, the mac computes G, a measure of the overall (spatiotemporal) familiarity of the mac's total H,U,D input (which we refer to as a spatiotemporal moment), and injects an amount of noise (randomness) into the code selection process that is inversely proportional to G. This yields the property that the similarity of chosen codes is proportional to the similarity of input moments.
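The G-modulated selection idea can be sketched in code. The following is a minimal, purely illustrative toy, not the actual CSA: it models a mac as a set of winner-take-all competitive modules (CMs), computes a familiarity score G from how well the input matches stored weights, and makes winner selection noisier as G falls. All function and parameter names here are our own inventions for this example.

```python
import random

def select_sdc(input_signals, weights, num_cms, units_per_cm, rng=None):
    """Toy sketch of G-modulated code selection (NOT the actual CSA).

    The mac is modeled as num_cms winner-take-all competitive modules
    (CMs), each with units_per_cm units.  weights[c][u] is the binary
    weight vector of unit u in CM c.  G, the familiarity, is the mean
    of the best normalized match in each CM; the chance of a random
    (rather than best-match) winner rises as G falls.
    """
    rng = rng or random.Random(0)
    active = sum(input_signals) or 1  # avoid division by zero
    # Per-unit support: fraction of active inputs matched by learned weights.
    support = [[sum(w_i * x_i for w_i, x_i in zip(w, input_signals)) / active
                for w in cm_weights]
               for cm_weights in weights]
    # G: overall familiarity of the total input across the whole mac.
    G = sum(max(cm) for cm in support) / num_cms
    code = []
    for cm in support:
        if rng.random() < G:
            # Familiar moment: reuse the best-matching unit (deterministic).
            code.append(max(range(units_per_cm), key=lambda i: cm[i]))
        else:
            # Novel moment: choose uniformly, likely yielding a new code.
            code.append(rng.randrange(units_per_cm))
    return G, code
```

With a perfectly familiar input (G = 1), every CM deterministically reactivates its best-matching unit, so the stored code is reinstated; with a wholly novel input (G near 0), winners are chosen nearly at random, so a new, roughly uncorrelated code is laid down. Intermediate G yields codes whose overlap with stored codes tracks input similarity, the property the paragraph above describes.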

We emphasize that Sparsey's use of G and other particulars of the CSA mean that codes are chosen as complete functions of the spatiotemporal input, i.e., the model is not limited to learning spatiotemporally separable functions. This stands in crucial distinction to Numenta's HTM/Grok algorithm (the "CLA"), which formally pipelines, and thus separates, the computation of the temporal and spatial functions of the input (its "temporal pooler" and "spatial pooler") in determining the winners comprising the SDC.

Start of Hyperessay