My 2023 COSYNE abstract clearly describes, in a concise mathematical fashion, a fundamentally new principle for how spikes transmit information. This is the second time it has been rejected: my COSYNE 2019 abstract (essentially the same as this one) was also rejected; the 2019 reviewer comments are here. In my opinion, the 2019 comments are shallow and frankly incompetent, and the comments for this 2023 submission are similarly either vacuous (Reviewer 2) or appallingly shallow and incompetent (Reviewers 1 and 3). None of the reviewers actually engages with the details of the described principle/mechanism, even though it is presented in a quantitatively rigorous and self-contained way.

============================= Reviewer 1 Comments =============================

* Overall: 4 * Confidence: 4 * Comments: The abstract presents a storage scheme and a representation of the likelihood function. The model depends strongly on the specific sparseness pattern in the "cell assembly"; this correlated activity in the assembly needs a very specific learning rule. The structure does not sound biologically plausible and to my knowledge, there is no evidence for such cell assemblies in the brain. On the other hand, simple RNNs and multi-layer networks can easily perform the described task. The abstract also suggests that the activity of a neuron in the target field may represent the likelihood of an instance. This has been one of the earliest suggestions for probability representations in the brain and has been studied extensively (see for example, the review by Ma and Jazayeri, Annu. Rev. Neurosci. 2014). The abstract has been written clearly and the figure explains the concept. Nevertheless, the limitations of the model reduce the significance of the abstract.

My response to Rev 1: The "very specific learning rule" that Reviewer 1 says my model requires is in fact about the most generic Hebbian law possible: if a synapse's pre- and postsynaptic cells are co-active, increase the weight to its maximum value, 1 (this is clearly stated in the abstract). Reviewer 1 then claims there "is no evidence for such cell assemblies in the brain." Does Reviewer 1 really think that every percept/concept represented in the brain is represented by an individual neuron? It is a completely gratuitous remark. Reviewer 1 next asserts that RNNs and multi-layer networks can easily perform the task. The "task" is to transmit the full likelihood distribution, i.e., the likelihoods of all stored items, at once, in a single volley of simultaneous spikes from the co-active source neurons. There is no paper in the literature demonstrating any such capability in any RNN or multi-layer net. The shallowness of thought, or simple laziness, that this remark reveals is embarrassing. Finally, Reviewer 1 says that I claim "the activity of a neuron in the target field may represent the likelihood." No! That is expressly the opposite of what I propose. I propose that a set of co-active neurons collectively represents (the magnitude of) an item's likelihood. Moreover, since the sets representing the various stored items will in general intersect, the likelihoods of all stored items can be simultaneously active in superposition. That has NOT "been studied extensively" by anyone; it is fundamentally new. Reviewer 1 has utterly failed to understand a clearly written, quantitatively self-contained description of a completely new and powerful concept/principle. It is extremely disappointing that this would pass muster with COSYNE.
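To make the two mechanisms I just described concrete, here is a minimal sketch of (a) the generic binary Hebbian rule (co-active pre/post pair → weight set to its maximum, 1) and (b) a set-based readout in which intersecting target assemblies carry the likelihoods of all stored items in superposition. This is an illustrative toy, not the abstract's actual model: the field sizes, the two example assemblies, the integration threshold, and the fraction-active readout are my own assumptions for demonstration.

```python
import numpy as np

N_SRC, N_TGT = 20, 20  # illustrative field sizes (assumption)

def store(W, src_assembly, tgt_assembly):
    """Binary Hebbian storage: every co-active pre/post pair -> weight = 1."""
    for i in src_assembly:
        for j in tgt_assembly:
            W[i, j] = 1.0
    return W

# Two stored items, each represented by a sparse cell assembly (a set of
# co-active units).  Note the assemblies intersect, so the code can carry
# graded likelihoods of both items at once.
item_A_src, item_A_tgt = {0, 1, 2, 3}, {0, 1, 2, 3}
item_B_src, item_B_tgt = {2, 3, 4, 5}, {2, 3, 4, 5}

W = np.zeros((N_SRC, N_TGT))
store(W, item_A_src, item_A_tgt)
store(W, item_B_src, item_B_tgt)

def likelihoods(W, active_src, assemblies, threshold):
    """One volley of simultaneous source spikes drives the target field;
    the fraction of each item's target assembly that becomes active is
    read out as that item's (collectively coded) likelihood."""
    x = np.zeros(N_SRC)
    x[list(active_src)] = 1.0
    drive = x @ W                                  # one integration step
    active_tgt = set(np.flatnonzero(drive >= threshold))
    return {name: len(active_tgt & a) / len(a) for name, a in assemblies.items()}

# Present item A's exact source assembly in a single spike volley.
L = likelihoods(W, item_A_src, {"A": item_A_tgt, "B": item_B_tgt}, threshold=3)
# Item A's full target assembly activates; the overlapping half of item B's
# assembly also activates, so both likelihoods are simultaneously present.
```

The single matrix-vector product plus threshold stands in for one wave of synaptic integration; the point of the sketch is only that the likelihoods of multiple stored items coexist in one active target set because their assemblies share units.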

============================= Reviewer 2 Comments =============================

* Overall: 7 * Confidence: 2 * Comments: I find the proposed coding scheme interesting; I would've been glad to see a bit more of the explanation/evidence for its improved efficiency or plausibility. The discussion on the similarity scheme was hard to parse but intriguing nonetheless.

My response to Rev 2: This review simply has no content. The reviewer would "have been glad to see a bit more of the explanation/evidence for its improved efficiency." The quantitative result is right there in front of them: 4.58 bits is more than 2 bits! Actually do your job and READ the abstract! Why wouldn't the COSYNE review process just throw this review out and find another reviewer willing to do the work? This review is so egregiously lazy that surely it should have been flagged. Perhaps COSYNE should enact a policy of simply not accepting reviews with low confidence. With a confidence of 1 or 2, the reviewer is essentially saying they don't understand the material. In that case, why should their rating count at all?

============================= Reviewer 3 Comments =============================

* Overall: 4 * Confidence: 1 * Comments: The authors claim to have proposed a new class of coding in the brain that involves cell assemblies and atemporal coding. It seems to me that most if not all the ideas presented here were in fact already proposed and partly accepted by the community. For example, Gerstner (and others before him) discusses in his book the importance of having neural populations to more reliably and quickly encode stimuli. Similarly, the ideas of distributed codes and winner-take-all circuits were already present in the work of Fusi and Maas. Overall, I think that in this abstract the possible novelty is heavily overstated.

My response to Rev 3: Reviewer 3 says "most if not all the ideas presented here were in fact already proposed and partly accepted by the community," e.g.: a) the idea that neural populations can encode stimuli more quickly and reliably; b) distributed codes; and c) winner-take-all (WTA) circuits. Yes, of course neural populations, distributed codes, and WTA are established concepts in the literature. The point is that I am describing a fundamentally NEW kind of population (distributed) code that achieves a fundamentally new and more powerful capability. My formalism happens to use WTA. So what? Reviewer 3 might just as well have said, "Well, the author is using the concept of a neuron, which is already present in the literature, e.g., in the work of Ramón y Cajal, so nothing new here." Honestly, Reviewer 3 does not seem educated enough to review the material. Am I being reviewed by ChatGPT?