
Same Concept, Different Scale - Intelligence (Part - 1)

  • Writer: Kaan Bıçakcı
  • Nov 21, 2024
  • 5 min read

Updated: Feb 23

Phew. Been a while since my previous post. In my first post about Deep Learning and AGI, I explained how current AI systems struggle with generalization. I defined generalization as:

Generalization is the ability to efficiently apply knowledge across diverse contexts, bridging familiar, novel, and unseen situations through adaptive cognitive processes.

In this blog post, I'll try to show you how other biological systems and beings achieve what our most advanced AI systems still struggle with. This will be a shorter and simpler post, in which I'll try to give you the intuition.


The Scale Paradox - Beyond Size and Numbers

The relationship between intelligence and physical scale presents an intriguing paradox. While deep learning models grow ever larger in pursuit of AGI, nature shows us that raw size and computing power aren't everything.


The Brain Size Myth

Many people believe brain size is the key to intelligence. This seems logical - after all, humans have relatively large brains for their body size. However, this simple logic breaks down when we look closer:

  • Elephants have brains around 5kg (compared to our 1.5kg)

  • Whales have even larger brains, some weighing up to 8kg


Yet none of these creatures match human cognitive capabilities. Instead, we need to look at the cerebral cortex (which I'll explain in a later section). For example, the cerebral cortex of an elephant contains only about one-third of the neurons found in a human's cerebral cortex.


| Species | Total Neurons (Billions) | Brain Mass (kg) | Encephalization Quotient (EQ) | Neurons in Cerebral Cortex (Billions) |
|---|---|---|---|---|
| Human | 86 | 1.5 | 7.4 | 16 |
| Elephant | 257 | 5 | 2 | 5.6 |
| Chimpanzee | 28 (estimated) | 0.39 | 2.2 | 6.7 |
Given this, one simple hypothesis would be that the neuron count in the cerebral cortex is what matters: more cortical neurons, more cognitive ability.


But no!

Long-finned pilot whale

A long-finned pilot whale has about 37.2 billion neurons in its cerebral cortex, more than twice the human count. So the total number of neurons in the neocortex does not directly explain humans' advanced cognitive skills.
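To make the counterexample concrete, here is a tiny Python check of the naive hypothesis, using the cortical neuron counts from the table above plus the pilot whale figure. It's only an illustration of the argument, not a scientific analysis:

```python
# Cortical neuron counts (billions), from the table above plus the pilot whale figure.
cortical_neurons = {
    "Human": 16.0,
    "Elephant": 5.6,
    "Chimpanzee": 6.7,
    "Long-finned pilot whale": 37.2,
}

# The naive hypothesis: rank species by cortical neuron count
# and treat that ranking as a ranking of cognitive ability.
ranking = sorted(cortical_neurons, key=cortical_neurons.get, reverse=True)
print(ranking)
# ['Long-finned pilot whale', 'Human', 'Chimpanzee', 'Elephant']
# The whale "wins", so raw cortical neuron count alone can't be the whole story.
```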


We need to look at other factors.


What are Cognitive Abilities?

(Note: I use LLMs as the example because they are trending and are widely seen as the road to AGI.)


Perception

Perception is the ability to organize, identify, and interpret sensory information in order to understand and interact with the environment.

  • Humans: We can (really) see, hear, smell, taste, and touch with (complex) sensory integration.

    • Example: Recognizing someone's face in a crowd while hearing their voice.


  • Spiders: Highly specialized sensory systems - excellent vibration detection, some have good vision.

    • Example: Detecting prey on web through vibrations, some jumping spiders can recognize specific shapes.


  • LLMs: No real perception, just processing of the input data.

    • Example: When "shown" an image, it just processes pixel data (see the sketch below).
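To see what "just processing pixel data" means in practice, here is a minimal Python sketch (assuming Pillow and NumPy are installed, and using a hypothetical image file) of what an image looks like from the model's side: an array of numbers, nothing more.

```python
import numpy as np
from PIL import Image  # assuming Pillow is installed

# To a model, an "image" is just numbers: load it and look at what it actually receives.
img = Image.open("cat.jpg").convert("RGB")   # hypothetical file name
pixels = np.asarray(img)                     # shape: (height, width, 3)

print(pixels.shape, pixels.dtype)            # e.g. (480, 640, 3) uint8
print(pixels[0, 0])                          # one pixel: three integers in 0-255

# A vision-language model typically normalizes these numbers and turns them into
# patch embeddings; at no point does it "see" in the biological sense.
patches = pixels[:224, :224].reshape(-1, 3) / 255.0
```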


Memory Functions

We can divide this into 3 simple sub-groups.

Encoding: Converting received information into a suitable format so that it can be stored.

Storage: Maintenance of the information over some time.

Retrieval: Accessing the stored information when needed.


  • Humans: Short and long-term memory, emotional memories, and procedural memories.

    • Example: Remembering where you parked while also remembering how to drive.


  • Spiders: Have basic memory capabilities - remember web locations, successful hunting spots, potential mates.

    • Example: Remembering which parts of their territory have good prey


  • LLMs: No persistent memory; each interaction starts fresh. There are recent features that attempt to add "memory" capabilities, but these are fundamentally different from biological memory. While natural memory adaptively forgets less important information over time, these models either remember everything or nothing at all (see the sketch below).
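To illustrate the contrast, here is a minimal, deliberately naive Python sketch of memory with the three functions above (encoding, storage, retrieval) plus adaptive forgetting. The TinyMemory class and its decay rule are invented for illustration and don't model any real biological or AI system:

```python
import time

class TinyMemory:
    """Toy memory with encoding, storage, and retrieval plus adaptive forgetting."""

    def __init__(self, decay_per_second=0.1):
        self.store = {}                 # key -> (value, strength, last_access_time)
        self.decay = decay_per_second

    def encode(self, key, value):
        # Encoding + storage: keep the item with an initial strength.
        self.store[key] = (value, 1.0, time.time())

    def retrieve(self, key):
        # Retrieval: accessing an item reinforces it; unused items fade away.
        if key not in self.store:
            return None
        value, strength, last = self.store[key]
        strength -= self.decay * (time.time() - last)
        if strength <= 0:
            del self.store[key]         # forgotten
            return None
        self.store[key] = (value, min(1.0, strength + 0.5), time.time())
        return value

memory = TinyMemory()
memory.encode("parking_spot", "level 2, row C")
print(memory.retrieve("parking_spot"))  # recent and reinforced -> still remembered
```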


Literacy

Literacy is the ability to read, write, and comprehend written language. It also includes an understanding of symbols, syntax, and meaning.


It involves both the technical ability to decode written symbols and the cognitive capacity to extract and process meaning from text.


  • Humans: Can read, write, and deeply understand meaning and context.

    • Example: Reading a novel and feeling emotional about characters


  • Spiders: No literacy capabilities, but can "read" environmental cues.

    • Example: Can interpret web vibrations to identify if a visitor is prey, friend, or threat, and even assess the prey's size and strength.


  • LLMs: Process text without true understanding.

    • Example: Can generate text about quantum physics without understanding it. They can also generate text that sounds plausible but contains wrong information.


Abstract Thinking

It can be defined as a level of cognitive development where an individual can think about intangible concepts, ideas, and possibilities that exist beyond the concrete, physical world.


  • Humans: Can understand and create abstract concepts.

    • Example: Understanding metaphors, creating (real) art.


  • Spiders: Limited to concrete situations, but show some flexibility.

    • Example: Can adapt hunting strategies to new prey types.


  • LLMs: Can work with complex abstract concepts and make interesting connections, but ONLY within the boundaries of their training data.

    • Example: Can discuss laws (which are quite complicated for people who aren't lawyers) but can't create genuinely new legal concepts.


Cerebral Cortex - Organization

If I had to explain it in one sentence, it would be the brain's "executive suite": it handles your "higher" thinking and processes sensory information.


(IMO) Processing sensory information will be one of the key aspects of future AGIs. Current SOTA models don't have a world model and can't learn continuously. I've covered this in my previous blog posts.


The cerebral cortex is organized into different regions called "lobes," each specialized for different functions - kind of like departments in that executive suite:


Visual Processing - Occipital Lobe

The occipital lobe initially processes raw visual information and then coordinates it with other lobes. Let's expand on this a little with an example (a toy version of this flow is sketched in code after the list). Suppose you see an object on the table:

  • Visual signals move to temporal lobe from occipital lobe for object recognition.

  • Information travels to the parietal lobe for spatial awareness and possibly movement coordination.

  • The frontal lobe receives processed visual data to plan appropriate motor responses.
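As a rough mental model only, here is a toy Python sketch of that simplified feed-forward flow. The functions are made-up stand-ins for brain regions, not an actual neural simulation:

```python
# A toy, purely illustrative sketch of the simplified feed-forward story above.

def occipital_lobe(raw_image):
    return {"edges": "...", "colors": "..."}            # raw visual features

def temporal_lobe(features):
    return {"object": "coffee mug", **features}          # object recognition

def parietal_lobe(features):
    return {"location": "30cm ahead, slightly left", **features}  # spatial awareness

def frontal_lobe(features):
    return f"reach for the {features['object']} at {features['location']}"  # plan action

# Sequential version of the flow described in the list above:
plan = frontal_lobe(parietal_lobe(temporal_lobe(occipital_lobe("pixels"))))
print(plan)

# In the real cortex these steps are not a one-way pipeline: regions run in
# parallel, feed information back to each other, and are densely interconnected.
```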


The example above is simplified. The point I'm trying to make is that there is hierarchical processing and integration: the cortex processes information in hierarchical layers, with each level responsible for something different. There are other things we need to consider:

  • Dense Interconnectivity

  • Lateral Frontoparietal Network

  • Neuroplasticity and Learning

  • ...


I think it's now clearer why the example above is "simplified". It's not just about sequential processing, but about simultaneous, interactive processing across regions; this highly interconnected, parallel, and flexible organization determines how cognitive abilities are formed :)


Conclusion

Understanding intelligence across different scales challenges our current approaches to AI development. As I concluded in my previous post:

Advanced AI isn't just about scaling up these models or feeding them more data. It's about developing systems that can generalize, creating truly adaptive models that can navigate the dynamic, ever-changing landscape of real-world problems.

The path forward might lie in learning from these natural systems - not just mimicking their structure, but understanding the principles that allow them to achieve such remarkable generalization and adaptation across different scales.


To me, two things are particularly fascinating:

  • What spiders can do with their tiny brains

  • How efficiently they can do it


If we compare them with current AI systems, we can clearly see that there is something fundamentally missing.


I also think current LLMs can accelerate science. For example, I am not an expert in psychology, but I can prompt an LLM to give me ideas about the domain (which I should verify nevertheless).


In conclusion, the challenge for current A(G)I development isn't just to make bigger models, but to create systems that can use these principles of natural intelligence across all scales. Only then might we approach the kind of general intelligence we see in nature.


You can also check our paper, Rewiring AGI.



You can reach out to me by filling this form


© 2024 by Kaan Bicakci. All rights reserved.
