Neurithmic Systems
Finding the Fundamental Cortical Algorithm of Intelligence

These are examples of 64x64 snippets of varying lengths that we produced from the viHASi database. These are used in many of the simulations we are now running. See Experiment "64x64 Episodic Event Recognition" in "Results" menu.

At right, we have an original 36-frame snippet at upper left.  This snippet focuses on a "Leg-Crossing" event which occurs as part of a "Walking" event.  The next three videos are -10, 10, and 20 degree in-plane rotations of the original snippet.  The second row shows the same underlying events from a different camera azimuth.

The blue grids are 4x4 pixels and correspond approximately to a V1 mac receptive field (RF), though actual V1 RFs are hexagonal/circular and overlapping.  Close analysis would show that even though the snippets are of the exact same underlying event, the precise spatiotemporal pattern that occurs in any given RF varies greatly across the 8 snippets.

This montage shows a set of 40 variations where varying pixel-wise amounts of noise are added, in addition to the in-plane rotations and azimuth angles.  This only exacerbates the within-class (in fact, within-instance) variation mentioned above.

To have any hope of recognizing the event class, "Leg-Crossing-During-Walking", higher-level, more abstract representations, or codes, of the snippets must be computed and remembered (stored).  In Sparsey, such codes are computed/stored in the macs at progressively higher levels (and spatiotemporal scales).  When a novel instance of the event is presented, the network automatically computes higher-order abstract codes in the higher-level macs as the novel instance unfolds.  If the novel instance is similar enough to one or more of the learned instances, then those newly generated abstract codes will be similar to learned codes.  Such similarity (within some tolerance) is the definition of recognition.
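This notion of recognition-as-code-similarity can be sketched in a few lines. In the sketch below, a mac's sparse code is represented simply as a set of active-unit indices; the function names, the fixed code size, and the tolerance value are illustrative assumptions, not Sparsey's actual implementation or API.

```python
def code_similarity(code_a: set[int], code_b: set[int]) -> float:
    """Fraction of active units shared by two equal-size sparse codes."""
    assert len(code_a) == len(code_b), "codes are assumed equal-size"
    return len(code_a & code_b) / len(code_a)

def recognizes(novel_code: set[int], stored_codes: list[set[int]],
               tol: float = 0.8) -> bool:
    """Recognition: the novel code matches some stored code within tolerance."""
    return any(code_similarity(novel_code, c) >= tol for c in stored_codes)

# Example: codes of 10 active units; the novel code differs in one unit,
# so its similarity to the stored code is 0.9, above the 0.8 tolerance.
stored = [{0, 11, 22, 33, 44, 55, 66, 77, 88, 99}]
novel = {0, 11, 22, 33, 44, 55, 66, 77, 88, 98}
print(recognizes(novel, stored))  # True
```

The key design point the sketch captures is that recognition requires no explicit search over raw inputs: only the compact codes are compared, and "similar enough" is a single tolerance parameter.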

If a Sparsey network's parameters are appropriate, and the novel input is similar enough to one or more learned inputs, then the average similarity of the newly generated and stored codes should increase with level, i.e., higher-level classes are represented in macs at higher levels.
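The per-level claim above can be checked with a simple aggregate: for each level, average the overlap between newly generated codes and their stored counterparts. The helper below is an illustrative sketch (the data layout and function name are assumptions, not part of Sparsey).

```python
def avg_similarity_by_level(new_codes: dict, stored_codes: dict) -> dict:
    """new_codes/stored_codes map level -> list of equal-size sparse codes
    (sets of active-unit indices), paired by position."""
    return {
        level: sum(len(a & b) / len(a)
                   for a, b in zip(new_codes[level], stored_codes[level]))
               / len(new_codes[level])
        for level in new_codes
    }

# Toy example: the level-2 codes match exactly while the level-1 codes
# differ in one unit, so average similarity rises with level.
new = {1: [{1, 2, 3, 4}], 2: [{5, 6, 7, 8}]}
old = {1: [{1, 2, 3, 9}], 2: [{5, 6, 7, 8}]}
print(avg_similarity_by_level(new, old))  # {1: 0.75, 2: 1.0}
```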