I just binge-read Jeff Hawkins’ latest book, A Thousand Brains, recommended and discussed by luminaries like Bill Gates, Peter Norvig, and Richard Dawkins. I absolutely loved it: it’s the long-overdue book that explains to laypeople like us what we actually know about the brain. If you’re looking for an overview of the Thousand Brains theory, hear it from Jeff himself at Google, in this blog, or in this podcast with Lex Fridman.
This article jumps straight to the point of why I think machine/deep learning algorithms may be closer to the Thousand Brains theory than the author believes, especially with regard to reference frames. My background: I’m a PhD student at USC and an avid reader of high-level neuroscience inspirations for machine intelligence. I had the good fortune of semester-long discussions with Dr Irving Biederman on cognitive neuroscience, and I credit Marvin Minsky’s Society of Mind with helping me see how the hard problem of consciousness is “not a problem at all,” to quote Jeff Hawkins from the book. Oh, and I found a working Palm Pilot in a garage on my first visit to the United States, minutes after learning about that technology for the first time. I’ve been in awe of Jeff’s work and have followed it ever since.
According to Numenta’s theory of reference frames, the neocortex (the new brain) is made up of copies of the same underlying module, the cortical column, each of which learns a map of some concept or perception. They posit (and others’ experiments have recently supported) that the cortical column contains neuron structures similar to the grid and place cells of the entorhinal cortex, a part of the old brain responsible for figuring out where the body is. Reference frames generalize these grid and place cells into a more abstract way of representing any information.
Jeff suggests that complex concepts like democracy and government may be represented in terms of similar reference frames, except that we do not know what specific dimensions (and how many of them) are used to grid the semantic world. The book even cites an fMRI experiment that found similar grid cells firing when participants were asked to compositionally imagine an unseen bird by generalizing across two dimensions: neck length and leg length.
The book has a fascinating example of a map of several towns split into horizontal and vertical grid cells to denote where things are, plus place markers to denote what is found there. This what-and-where framework closely mirrors the two widely acknowledged pathways in the visual part of the brain. The what pathway helps identify what is being seen, while the where pathway positions it in the environment. Patients with damage to one of the pathways show a remarkable tendency to retain the other skill almost intact; for example, a patient with a damaged where pathway can still tell you what an object is, but will not be able to point to or reach for it.
While reading about these higher-dimensional reference frames, the link to the embedding space that neural networks learn (not only for perceptual inputs like pixels but also for abstract concepts like democracy) seemed obvious to me. I was therefore shocked to read in the next chapter that Jeff believes reference frames are entirely missing in today’s AI! True, most networks do not learn by actively exploring the environment the way these sensory-motor cortical columns do, but we’ll come back to that later.
My hypothesis is therefore this: neural networks learn a multi-dimensional space (a grid) to represent concepts or perceptions, in the same way that cortical columns learn a reference frame using grid and place cells. The 768 dimensions of BERT (a language model that learns by predicting words) may thus correspond to the latitude, longitude, height, and so on, of any given concept.
An important distinction (or is it?) in artificial neural networks is that the reference frame, i.e., the grid or embedding space, is randomly initialized. Before training, the word democracy is mapped to an arbitrary position in this 768-dimensional grid. What matters, as in the case of neural cortical columns, are the connections between the place cells (input) and the grid (embedding space).
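A minimal sketch of this setup, assuming numpy and a toy three-word vocabulary (real BERT uses a vocabulary of roughly 30,000 tokens): before any training, each word is just an arbitrary point in a 768-dimensional grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, hypothetical vocabulary standing in for a real tokenizer's.
vocab = {"democracy": 0, "government": 1, "red": 2}

# Randomly initialized embedding table: each word starts at an
# arbitrary position in a 768-dimensional "grid".
embedding_dim = 768
embeddings = rng.normal(size=(len(vocab), embedding_dim))

# Before training, "democracy" occupies an arbitrary position;
# training only adjusts the connections, not the grid itself.
democracy_vec = embeddings[vocab["democracy"]]
print(democracy_vec.shape)  # (768,)
```

The position itself is meaningless at this point; it is the learned connections into and out of this table that come to carry meaning.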
I tread carefully in this domain where I have little expertise, but the book led me to believe that it is not yet known how a grid forms in the cortical column. While a 2D or 3D coordinate system might seem an obvious candidate for physical positioning, the grids that represent mathematics and language, says the book, are likely higher-dimensional and differ from person to person. To an ML researcher, the answer is obvious: random initialization does the trick. You do not need an intelligent grid system; you only need it to be consistent and fairly uniformly spread, i.e., not all objects clustered at the same location along any given dimension.
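As a quick sanity check of that intuition, here is a small numpy sketch (sample size chosen arbitrarily) showing that randomly initialized high-dimensional vectors are already fairly uniformly spread: random directions in 768 dimensions are nearly orthogonal to one another, so no two concepts start out clustered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample random 768-dimensional "grid" positions, as a randomly
# initialized embedding table would, and normalize to unit length.
dim = 768
vecs = rng.normal(size=(100, dim))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# Pairwise cosine similarities between distinct vectors: in high
# dimensions these concentrate near zero, i.e., near-orthogonality.
cos = vecs @ vecs.T
off_diag = cos[~np.eye(len(vecs), dtype=bool)]
print(np.abs(off_diag).max())  # small: random directions barely overlap
```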
The place-and-grid dichotomy seems like a great framework for understanding how networks learn. When learning to map a perceptual signal such as location, the grid cells are randomly initialized, but the place cells (the real inputs) carry some underlying pattern. Neurons that fire together wire together: when the same set of arbitrary neurons frequently fires on the same pattern, say red light on the retina, the arbitrary grid connections to the red-detecting place cells learn to fire on anything red. The network has learnt a meaningful embedding (grid) space thanks to patterns in its observations.
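A toy sketch of that Hebbian story, with made-up layer sizes and a hypothetical “red detector” place cell: repeated co-firing strengthens exactly the connections between the active input and whatever arbitrary grid pattern accompanies it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_place, n_grid = 8, 16                              # toy layer sizes
W = rng.normal(scale=0.01, size=(n_place, n_grid))   # arbitrary wiring

# Place cell 0 is our hypothetical red detector. Each time red light
# hits the retina it fires together with whatever grid pattern happens
# to be active at the time.
red_input = np.zeros(n_place)
red_input[0] = 1.0
grid_pattern = rng.random(n_grid)  # the arbitrary grid response to red

# Hebbian rule ("fire together, wire together"): strengthen the
# connections between co-active place and grid cells.
lr = 0.1
for _ in range(50):                    # repeated exposures to red
    W += lr * np.outer(red_input, grid_pattern)

# The red detector's outgoing connections have grown far stronger than
# everyone else's, so the grid now responds selectively to red.
print(np.abs(W[0]).sum() > np.abs(W[1:]).sum())  # True
```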
The reverse may happen when learning a seemingly random set of symbols. When learning a new language such as Hindi, you may have no idea what “laal” means (it means red), but you already have a well-defined grid for identifying colors. After lots of practice, your brain learns to associate this arbitrary new symbol “laal” with red, and it will eventually fire whenever you wish to communicate about, say, a red shirt.
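That association can be sketched as gradient descent pulling a randomly placed new symbol toward a position the grid has already established (dimensions and learning rate here are arbitrary, chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 32
red = rng.normal(size=dim)    # well-established embedding for red
laal = rng.normal(size=dim)   # new symbol starts at a random position

# With practice, gradient descent on the squared distance pulls "laal"
# toward the position "red" already occupies in the grid.
lr = 0.1
start = np.linalg.norm(laal - red)
for _ in range(100):
    laal -= lr * 2 * (laal - red)   # gradient of ||laal - red||^2
end = np.linalg.norm(laal - red)

print(end < 0.01 * start)  # True: "laal" now sits next to red
```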
Grids are amazing because they yield an exponentially expressive space of possible concepts using a very finite set of connections. Imagine you had two coordinates (x and y) and three possible values for each dimension (-1, 0, 1). This lets you accurately represent 3^2 = 9 distinct positions. Each can be associated with at most one concept, like the eight planets, the seven colors of a rainbow, or nine members of your extended family. The reference frame theory helps me imagine how this may very well be how we represent concepts in our brains! While the first two sets of concepts have a natural order (the VIBGYOR spectrum and distance from the sun), the last example allows more flexibility. You and your sibling may have arranged the same nine family members in very different coordinate systems, depending perhaps on how they were randomly initialized!
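The arithmetic above can be checked in a few lines (the family names are, of course, hypothetical):

```python
from itertools import product

# Two coordinates, three values each: 3**2 = 9 distinct positions.
# In general, d dimensions with v values yield v**d positions, which
# is why capacity grows exponentially with dimensions.
values = (-1, 0, 1)
positions = list(product(values, repeat=2))
print(len(positions))  # 9

# Each position holds at most one concept, e.g. nine family members.
family = ["grandma", "grandpa", "mom", "dad", "aunt",
          "uncle", "cousin", "sister", "brother"]
frame = dict(zip(positions, family))

# A sibling's frame might shuffle this mapping entirely; both are
# equally valid as long as each is internally consistent.
```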
Much of formal education is about aligning reference frames across different brains, or more specifically, aligning kids’ brains to the fixed standard laid out by adults. No one reference frame is inherently better or worse, but we need a common language to discuss things and collaborate; hence we are all schooled about, say, historical events or scientific discoveries in a way that favors one kind of frame over the rest.