Unitary, Constant-Time Update of the Entire Probability Distribution over Stored Spatiotemporal Hypotheses

  • This animation shows results of a simulation in which three preprocessed (including temporally decimated) Weizmann snippets, totaling 40 frames, were presented once each during learning.
  • The model has two internal levels: L1 has 12x18=216 macs and L2 has 6x9=54 macs.
  • During that learning, L1 mac 91 became active 30 times, i.e., on 30 of those 40 frames. An SDR (code) was stored in the mac on each of those 30 occasions, for a total of 30 stored SDRs, the majority of which occurred as parts of chains.
  • Similarly, during that time, L2 mac 23 became active 15 times, resulting in storage of 15 codes. Note that L2 codes persist for twice as long as L1 codes, i.e., for two time steps.
  • The receptive field (RF) of L1 mac 91 is shown in the input level; it's a patch of 102 pixels. In Sparsey, a mac activates if a criterion number of features (for L1 macs, pixels) are active in its RF; here the criterion was that the number be from 10 to 12 pixels (see the first sketch after this list).
  • L2 mac 23's RF is also shown. Its direct (immediate) RF is a patch of 25 L1 macs. Its indirect (input-level) RF is the large cyan patch, which shows the union of the RFs of the 25 L1 macs comprising its direct RF. An L2 mac becomes active if a criterion number of the L1 macs in its direct RF are active, and again that criterion may be a range.
  • Shown here is a replay of one of the three learning snippets.
  • The yellow callout for L1 mac 91 shows the likelihood distribution over the 30 inputs that were stored (as SDRs) in that mac. Each of those inputs was a particular, context-dependent spatiotemporal moment. What you see is that distribution being updated on each frame (of the snippet being replayed) on which L1 mac 91 is active during the replay. The callout for L2 mac 23 shows similar information. The distribution is blank on frames on which the associated mac was not active on the corresponding training trial.
  • In each distribution, the dark blue bar represents the code of the most likely of the previously learned (stored) inputs, and in most cases it is the highest bar, which is appropriate since the replay is identical to the training trial. The reason the dark blue bar always steps to the right is that the stored inputs (codes) in the distribution are ordered by when they occurred during learning, which, again, was a single presentation of each of three sequences.
  • Sparsey's core algorithm, the Code Selection Algorithm (CSA), executes independently in each mac, on each frame (on which the mac is active) during learning. The CSA (more typically, a simplified variant of it) is also used on each frame during retrieval, which is what is shown here in this replay.
  • The main point underlying this animation is that on each frame during retrieval, the CSA (or its simpler variant) updates the entire probability distribution over the codes stored in a mac (based on the new input presented on that frame) with a number of computational operations, i.e., a runtime, that does not depend on the number of stored codes. In other words, in practice, the CSA does "belief update" with constant time complexity (see the second sketch after this list).
  • From the standpoint of quantum computing (QC), the CSA does a unitary update of the superposition (over stored hypotheses) in constant time.
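
First sketch: a minimal illustration of the mac activation criterion described above. This is illustrative only, not Sparsey's actual code; the function name, parameters, and the default [10, 12] bounds (taken from the L1 mac 91 example) are assumptions made for the example.

```python
def mac_activates(num_active_in_rf: int, lo: int = 10, hi: int = 12) -> bool:
    """A mac turns on only if the count of active features in its RF
    (pixels for an L1 mac; active L1 macs in the direct RF for an L2 mac)
    falls within the criterion range [lo, hi]."""
    return lo <= num_active_in_rf <= hi

# E.g., L1 mac 91 with 11 of its 102 RF pixels active:
print(mac_activates(11))   # True: within [10, 12], so the mac activates
print(mac_activates(4))    # False: too few active pixels, the mac stays off
```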
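Second sketch: the constant-time claim in the last two bullets, made concrete. It assumes, as in Sparsey, that a mac consists of Q winner-take-all competitive modules (CMs) of K cells each, that a code is one winner per CM, and that all stored codes are superposed in a single binary weight matrix. One frame's update then touches each weight once, so its cost is fixed by the mac's dimensions, not by the number of stored codes. All names below (Q, K, W, u_norm, etc.) are illustrative, not Sparsey's API, and the per-code likelihood readout at the end shows only how an explicit bar chart like the one in the animation's callouts could be produced for display; the model itself never iterates over stored codes.

```python
import numpy as np

Q, K = 8, 8        # fixed mac size: Q WTA modules x K cells (assumed values)
N_IN = 102         # input dimension, e.g., the 102-pixel RF of L1 mac 91

def update_superposition(W, x):
    """One frame of the update: normalized input summations for all Q*K cells.

    W: (Q*K, N_IN) binary weight matrix; every stored code lives here,
       superposed in the same weights (set to 1 by Hebbian learning).
    x: (N_IN,) binary input vector for the current frame.

    Cost is O(Q*K*N_IN): fixed by the mac's dimensions, hence independent
    of how many codes have been stored.
    """
    u = W @ x                      # raw input summation for every cell
    return u / max(x.sum(), 1)     # normalize by the number of active inputs

def code_likelihood(code_cells, u_norm):
    """Explicit likelihood of ONE stored code (used only for display):
    the mean normalized input of that code's Q cells."""
    return u_norm[code_cells].mean()

# Usage: one constant-cost update implicitly re-weights ALL stored codes.
rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(Q * K, N_IN))            # stand-in learned weights
x = (rng.random(N_IN) < 0.1).astype(int)              # stand-in input frame
u_norm = update_superposition(W, x)
one_code = rng.choice(Q * K, size=Q, replace=False)   # stand-in stored code
print(code_likelihood(one_code, u_norm))
```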