Neuropsychology and Intelligence Architecture: Questions Toward AGI
Client: Internal - XBG Solutions
Current AI development is architecturally inverted—scaling neocortical function whilst ignoring foundational layers that neuroscience suggests aren't optional. We explore what building from the brainstem up might actually require.
We’re scaling the wrong thing exceptionally well.
Current AI development focuses on expanding what’s essentially neocortical function—language processing, pattern recognition, abstract reasoning. The results are impressive. But if neuroscience is any guide—and it may not be—we might be building from the wrong end of the architecture.
This isn’t a solved problem we’re presenting. It’s a set of questions that cross-domain thinking from neuropsychology might help illuminate. The path to AGI might require foundations we haven’t built yet, and acknowledging that gap could be more productive than pretending another order of magnitude in parameters will bridge it.
The Scaling Fallacy
In our previous work, we argued that Large Language Models are sophisticated indexing wrapped in natural language generation, and thus a very artificial intelligence. The distinction matters because it explains the 95% failure rate on generative AI pilots that MIT documented in 2025.
LLMs predict the next token based on patterns in training data. They’re spectacularly good at retrieval, synthesis, and generation. But they lack causal models, grounding in non-linguistic experience, goal-directed behaviour shaped by internal states, and affective responses that weight outcomes.
The industry response has been: scale harder. More parameters, larger context windows, better training data. Behind the scenes, there’s likely also utility classification of training data before models are retrained—upweighting patterns from scholarly reviewed articles over Reddit threads, for instance. This helps optimise for “higher utility” language patterns, though it’s still optimising indexing and generation, not addressing the architectural gaps.
What it doesn’t produce: systems that care about outcomes, learn from embodied experience, develop genuine curiosity, or adapt their values based on affective responses to consequences.
DeepMind’s work represents some of the most sophisticated progress in the field. AlphaGo reached superhuman play through self-play. MuZero learned an internal world model and planned with it, without being given explicit environment rules. Gato demonstrated multimodal integration across diverse tasks.
These are genuine achievements. But they’re still missing foundational components that neuroscience suggests might not be optional extras you bolt on later. They’re architectural requirements that everything else depends upon.
Consider what happens when you remove homeostatic drives from an intelligent system. You get sophisticated task completion without any basis for caring whether the task matters. Remove affective valuation and you get decision-making that can’t robustly handle conflicting values or novel situations. Remove embodied grounding and symbols float free of meaning.
The question isn’t whether these systems are useful—they demonstrably are. The question is whether scaling them gets you to AGI, or whether you’re optimising components that require foundations we haven’t built.
What Intelligence Actually Requires
If we’re serious about understanding what AGI might need, neuroscience and cognitive science offer more than metaphors. Decades of research on biological intelligence reveal architectural patterns that might represent functional requirements, not just implementation details.
Multi-Timescale Processing
Joseph LeDoux’s work on fear conditioning demonstrates that biological systems process the same sensory input at radically different speeds for different purposes. When you encounter potential danger:
- Fast path: Thalamus → Amygdala (threat detection, roughly 12 milliseconds)
- Slow path: Thalamus → Visual Cortex → Prefrontal Cortex (conscious processing, roughly 200 milliseconds)
This isn’t a case of separate systems handling different data. It’s simultaneous processing of identical input at different speeds, each optimised for a different function. The fast path answers “is this dangerous?” whilst the slow path answers “what exactly is this?”
György Buzsáki’s work on neural oscillations (Rhythms of the Brain, 2006) reveals biological systems use different frequency bands for different cognitive processes. Theta rhythms for memory encoding, gamma for perceptual binding, slow waves for consolidation.
The implication: intelligence might require not just “thinking fast and slow” but architectural support for simultaneous multi-timescale processing with different learning rates, stability characteristics, and update mechanisms.
Current transformer architectures operate at essentially uniform timescales. Training happens offline at one rate; inference happens at another. There’s no architectural equivalent of “react immediately whilst also reasoning deliberately about the same event.”
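To make the contrast concrete, here is a deliberately crude sketch, in Python with entirely hypothetical names, of one stimulus processed by two handlers: a cheap pattern-matcher that reacts immediately, and a slower appraisal that can revise the reaction. No claim is made that brains or any production system work this way.

```python
# Toy sketch: the same stimulus is processed twice, at different depths.
# A cheap threat check answers first; a slower appraisal can revise it.

def fast_path(stimulus: dict) -> bool:
    """Coarse, immediate check: 'is this dangerous?'"""
    return stimulus.get("looming", False) or stimulus.get("sudden_noise", False)

def slow_path(stimulus: dict) -> str:
    """Deliberative check: 'what exactly is this?'"""
    if stimulus.get("label") == "garden hose":
        return "harmless"
    if stimulus.get("looming"):
        return "threat"
    return "unknown"

def respond(stimulus: dict) -> list:
    events = []
    if fast_path(stimulus):           # fast: react before identification
        events.append("startle")
    verdict = slow_path(stimulus)     # slow: identify, then revise
    if verdict == "harmless" and "startle" in events:
        events.append("relax")        # slow path cancels the fast reaction
    elif verdict == "threat":
        events.append("flee")
    return events

print(respond({"looming": True, "label": "garden hose"}))  # ['startle', 'relax']
print(respond({"looming": True, "label": "snake"}))        # ['startle', 'flee']
```

The point of the toy is the ordering: the startle fires before identification completes, and the slow verdict then confirms or cancels it. Transformers have no equivalent of this two-speed structure over a single input.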
Homeostatic Drives and Self-Regulation
Antonio Damasio’s work on somatic markers and homeostatic regulation argues that organisms develop valuation systems through managing their own continued existence. In The Strange Order of Things (2018), he traces how caring about outcomes emerges from having stakes in those outcomes—literal survival stakes.
This isn’t just motivation through reward signals. It’s a foundational drive toward self-preservation that shapes what gets learned, what gets attended to, what constitutes “good” versus “bad” outcomes.
Biological systems maintain homeostasis—temperature regulation, energy balance, threat avoidance. These aren’t tasks the organism completes; they’re continuous processes that define what the organism cares about achieving.
Current AI systems have objective functions defined externally. When the task completes or the episode ends, there’s no ongoing drive toward self-maintenance. They don’t care about their own continued operation because they have no homeostatic baselines to maintain.
The question this raises: can you build genuine intelligence without intrinsic drives toward self-preservation? Or does intelligence require systems that care about their own continued existence not because we programmed that value, but because maintaining operation is architecturally fundamental?
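A minimal sketch of the distinction, assuming nothing beyond a setpoint and a corrective loop (all names hypothetical): the “task” never completes; action is continuous correction of deviation from baseline.

```python
# Toy homeostat: a regulated variable with a setpoint. Deviation from the
# setpoint produces a drive signal; the drive selects corrective action.

class Homeostat:
    def __init__(self, setpoint: float, level: float):
        self.setpoint = setpoint
        self.level = level

    @property
    def drive(self) -> float:
        """Signed urgency: how far, and in which direction, we're off baseline."""
        return self.setpoint - self.level

    def act(self) -> str:
        # Action isn't a task that completes; it's continuous correction.
        if self.drive > 0.1:
            self.level += 0.5   # e.g. seek energy
            return "replenish"
        if self.drive < -0.1:
            self.level -= 0.5   # e.g. dissipate excess
            return "shed"
        return "idle"

energy = Homeostat(setpoint=5.0, level=3.0)
actions = [energy.act() for _ in range(6)]
print(actions)  # corrective actions taper off as the level reaches setpoint
```

Everything the toy “cares about” falls out of the setpoint, which is the crude analogue of caring about continued operation being architecturally fundamental rather than programmed in as a task.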
Affective Valuation
Damasio’s somatic marker hypothesis shows that patients with ventromedial prefrontal cortex damage can reason logically but make catastrophically poor decisions in complex social and personal situations. They can explain moral principles but can’t reliably apply them when multiple values conflict.
What’s missing isn’t reasoning capability—it’s affective responses that weight different outcomes emotionally. Normal decision-making integrates logical analysis with gut feelings that encode accumulated experience about what matters.
Jaak Panksepp’s work on affective neuroscience demonstrates that core emotional systems—seeking, rage, fear, panic, care, play, lust—are evolutionarily ancient and shared across mammalian species. These aren’t decorative additions to cognition; they’re foundational systems that drive learning and behaviour.
Current reinforcement learning approaches use reward signals to shape behaviour. But reward signals are externally defined and optimised during training. They’re not affective responses that the system generates based on internal states and experiences.
The distinction matters. A reward signal tells the system “this outcome was good according to the training objective.” An affective response tells the system “this outcome feels good/bad based on my current state, past experiences, and predictions about consequences.”
One is externally imposed. The other is internally generated. The second enables robust generalisation to situations the training objective didn’t anticipate.
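The contrast can be sketched in a few lines (hypothetical names, no claim about any real training pipeline): the external reward is a fixed lookup, whilst the affective response is computed from the system’s own state and history, so the same outcome can feel different on different occasions.

```python
# Toy contrast: an externally fixed reward vs. an internally generated
# affective response that depends on the system's own state and history.

def external_reward(outcome: str) -> float:
    """Defined before training; never changes at runtime."""
    return {"task_done": 1.0, "task_failed": -1.0}.get(outcome, 0.0)

class AffectiveAgent:
    def __init__(self):
        self.energy = 1.0       # internal state
        self.history = []       # accumulated experience

    def affect(self, outcome: str, cost: float) -> float:
        """Valence depends on current state, not just the outcome label."""
        self.energy -= cost
        # The same outcome feels flatter when resources are depleted...
        valence = external_reward(outcome) * self.energy
        # ...and repeated failures compound (a crude memory effect).
        if outcome == "task_failed" and "task_failed" in self.history:
            valence -= 0.5
        self.history.append(outcome)
        return round(valence, 3)

agent = AffectiveAgent()
a = agent.affect("task_done", cost=0.2)     # fresh agent: success feels good
b = agent.affect("task_done", cost=0.6)     # same outcome, depleted: flatter
c = agent.affect("task_failed", cost=0.0)   # first failure
d = agent.affect("task_failed", cost=0.0)   # repeated failure compounds
print(a, b, c, d)  # 0.8 0.2 -0.2 -0.7
```

`external_reward` returns the same number every time; `affect` does not. That difference is the whole argument in miniature.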
Embodied Grounding
The embodied cognition literature (Varela, Thompson, and Rosch’s The Embodied Mind, 1991; Alva Noë’s Action in Perception, 2004) argues that intelligence is enacted through sensorimotor coupling. Perception isn’t passive reception of data—it’s active exploration through movement and prediction.
This is why human infants spend enormous amounts of time dropping objects and watching them fall. They’re not learning physics through observation; they’re learning through intervention. The tight loop between action and perception grounds abstract concepts in physical experience.
Current multimodal models process visual, auditory, and textual data. But they don’t have agency in the world—the ability to move sensors, manipulate environments, and experience the consequences of actions. They observe; they don’t interact.
Whether artificial systems require embodiment in the biological sense remains an open question. But the functional requirement seems clear: concepts need grounding in something beyond statistical co-occurrence in text. The question is what computational equivalent of sensorimotor coupling provides that grounding.
Continuous Learning Without Catastrophic Forgetting
Biological neural networks learn continuously throughout life without wholesale overwriting of previous knowledge. How they achieve this has been a central question in neuroscience.
Complementary Learning Systems theory (McClelland, McNaughton, and O’Reilly, 1995) proposes that mammalian brains use architectural separation to solve this problem. The hippocampus does rapid encoding of new experiences. The neocortex does gradual integration during sleep and rest periods, slowly absorbing new information into stable world models.
Hassabis’s work on hippocampal replay and imagination demonstrates that even during offline periods, biological systems actively reprocess experiences. Memory consolidation isn’t passive storage—it’s active integration and reorganisation.
Current large language models have distinct training and inference phases. Once deployed, they don’t learn from new experiences. Fine-tuning risks catastrophic forgetting—new data overwrites previous capabilities.
The architectural question: how do you build systems that learn continuously from experience without losing stable knowledge? Biology solved this through structural separation and different learning rates. What’s the computational equivalent?
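A toy version of the complementary-learning idea, with hypothetical names and a deliberately crude conflict rule: the fast buffer encodes in one shot; the slow store absorbs new items only during an offline pass, and never by overwriting what it already holds.

```python
# Toy complementary-learning sketch: a fast episodic buffer encodes new
# facts immediately; a slow semantic store absorbs them only during an
# offline consolidation pass, and never by clobbering established entries.

class Memory:
    def __init__(self):
        self.episodic = []      # fast: rapid one-shot encoding, volatile
        self.semantic = {}      # slow: stable key -> value knowledge

    def experience(self, key, value):
        self.episodic.append((key, value))

    def consolidate(self):
        # Gradual integration: new episodes fill gaps in semantic memory but
        # do not overwrite established entries (a crude stability guarantee;
        # a real system would reconcile conflicts rather than discard them).
        for key, value in self.episodic:
            self.semantic.setdefault(key, value)
        self.episodic.clear()   # the 'sleep' pass empties the fast buffer

m = Memory()
m.semantic = {"sky": "blue"}          # established knowledge
m.experience("grass", "green")        # novel fact
m.experience("sky", "green")          # conflicting episode
m.consolidate()
print(m.semantic, m.episodic)  # {'sky': 'blue', 'grass': 'green'} []
```

The structural separation, not any single learning rule, is what prevents the conflicting episode from destroying stable knowledge. Fine-tuning a monolithic model has no such firewall.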
Conceptual Representations With Functional Geometry
Timothy Behrens and colleagues published remarkable findings in Nature (2018) showing that medial entorhinal cortex—the same region containing grid cells for spatial navigation—also represents abstract conceptual spaces using similar geometric coding.
Your brain literally uses spatial-like representations for conceptual knowledge. “Bird” has a position in neural activity space that’s closer to “plane” when thinking about flying, but closer to “dog” when thinking about pets. The geometry is task-dependent and functionally meaningful.
This matters because it demonstrates that conceptual proximity in biological systems isn’t just statistical correlation—it’s geometric relationship that serves functional purposes. The representations reorganise based on current goals and contexts.
In transformer embeddings, positions in high-dimensional space are learned purely to minimise prediction error. You can rotate the entire embedding space arbitrarily without affecting function, as long as you rotate all downstream weights accordingly. If positions had intrinsic meaning, rotation would break it.
The question: what makes conceptual representations functionally meaningful rather than arbitrarily positioned for computational convenience? How do you build systems where proximity carries intrinsic, task-relevant meaning, rather than holding only up to an arbitrary rotation?
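One way to make “task-dependent geometry” concrete (a sketch under invented feature values, not a model of entorhinal coding): concepts carry multiple features, and the current context reweights which features count toward proximity, so the nearest neighbour of “bird” changes with the task.

```python
# Toy task-dependent geometry: context reweights which features count
# toward conceptual distance, so effective proximity shifts with the task.

CONCEPTS = {
    "bird":  {"flies": 1.0, "pet": 0.6, "mechanical": 0.0},
    "plane": {"flies": 1.0, "pet": 0.0, "mechanical": 1.0},
    "dog":   {"flies": 0.0, "pet": 1.0, "mechanical": 0.0},
}

def distance(a: str, b: str, weights: dict) -> float:
    """Weighted squared distance: the context decides what 'close' means."""
    return sum(w * (CONCEPTS[a][f] - CONCEPTS[b][f]) ** 2
               for f, w in weights.items())

def nearest(target: str, context: dict) -> str:
    others = [c for c in CONCEPTS if c != target]
    return min(others, key=lambda c: distance(target, c, context))

flying_context = {"flies": 1.0, "pet": 0.1, "mechanical": 0.1}
pet_context    = {"flies": 0.1, "pet": 1.0, "mechanical": 0.1}

print(nearest("bird", flying_context))  # plane
print(nearest("bird", pet_context))     # dog
```

The nodes never move; only the metric does. That is the toy analogue of representations reorganising around current goals rather than sitting at fixed positions learned once for prediction.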
Intrinsic Motivation Beyond Reward Optimisation
Why do humans explore? Not just for external rewards, but because curiosity itself drives behaviour. Novelty-seeking, information gain, competence development—these motivations exist even absent external reinforcement.
Current approaches to curiosity in AI typically frame it as reward signal optimisation: explore to maximise information gain, minimise prediction error, or increase model certainty. These are still external objectives, just more sophisticated than task completion.
But biological curiosity seems different. It’s an internal state—a drive that emerges from the interaction of cognitive systems with environmental uncertainty. It’s not optimising a predefined objective function; it’s seeking out situations that produce particular types of internal experiences.
What would intrinsic motivation require architecturally? Probably affective responses to novelty, prediction error, and competence. Probably homeostatic drives that create tension when exploration needs aren’t met. Probably consolidation processes that make learning itself rewarding.
None of this exists in current systems because the architecture doesn’t support it. Motivation comes from objective functions defined before training, not from internal states that emerge during experience.
The Integration Architecture: A Thought Experiment
What might an intelligence architecture look like that incorporates these neuropsychological components? Not as a prescription, but as an exploration of how functional requirements might shape design.
The 4D Graph Structure
Imagine a graph where nodes represent concepts, experiences, and percepts. Edges represent relationships with temporal properties—some update rapidly (fast, reactive associations), some update gradually (slow, deliberative causal links), some transfer information between fast and slow systems (consolidation).
The “4D” here isn’t dimensional reduction—current vector embeddings already operate in thousands of dimensions. Rather, we’re applying four-dimensional conceptual logic (three spatial dimensions plus time) to high-dimensional graph structures. The fourth dimension is time itself, with the graph evolving continuously and different components operating at different timescales. Proximity in this high-dimensional space represents conceptual and experiential relationships, not just statistical co-occurrence optimised for prediction.
Storage exists in tiers based on utility and access patterns:
- Hot storage: Currently active working memory, rapid access
- Warm storage: Recent experiences and frequently accessed concepts
- Cold storage: Consolidated long-term knowledge, slower retrieval
- Garbage collection: Eventually, unused cold storage gets pruned when system resources become constrained
This mirrors biological memory systems—working memory in prefrontal cortex, episodic memory in hippocampus, semantic memory in neocortex, and yes, forgetting as synaptic pruning especially with age.
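A minimal sketch of such tiering, assuming only access recency as the utility signal (real systems would use richer signals; all names are hypothetical): items migrate between tiers as they age, and sufficiently stale cold items are pruned.

```python
# Toy tiered memory: items sit in hot/warm/cold tiers according to access
# recency, and cold items past a staleness threshold are pruned.

class TieredStore:
    def __init__(self, prune_after: int = 10):
        self.items = {}          # key -> {"value": ..., "last_access": tick}
        self.tick = 0
        self.prune_after = prune_after

    def put(self, key, value):
        self.tick += 1
        self.items[key] = {"value": value, "last_access": self.tick}

    def get(self, key):
        self.tick += 1
        self.items[key]["last_access"] = self.tick   # access refreshes the tier
        return self.items[key]["value"]

    def tier(self, key) -> str:
        age = self.tick - self.items[key]["last_access"]
        return "hot" if age < 2 else "warm" if age < 5 else "cold"

    def garbage_collect(self):
        stale = [k for k, v in self.items.items()
                 if self.tick - v["last_access"] > self.prune_after]
        for k in stale:
            del self.items[k]    # the synaptic-pruning equivalent
        return stale

s = TieredStore(prune_after=3)
s.put("a", 1)
s.put("b", 2)
for _ in range(5):
    s.get("a")                        # "a" stays hot; "b" goes cold
print(s.tier("a"), s.tier("b"))       # hot cold
print(s.garbage_collect())            # ['b']
```

Forgetting here is a resource policy, not a failure mode, which is roughly the stance the biological analogy suggests.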
Multi-Modal Processing Through Predictive Binding
Consider what happens when you flip a light switch. Multiple sensory streams process simultaneously:
Linguistic level: Labels activate—“light,” “on,” “switch.” Surrounding concepts—light as illumination versus light as the opposite of heavy, “on” denoting contrast with “off,” degrees of brightness.
Sensory level: Haptic input of switch clicking, proprioceptive awareness of finger pressure, auditory sense of the click, visual input of switch state change, observation of light source brightness change, environmental illumination shift.
Affective level: Feelings of comfort from increased visibility, or discomfort from increased exposure. Dopamine from successful intention-action-outcome loop. Relief from uncertainty resolved.
Homeostatic level: Arousal changes from ambient light shift, relaxation or tension in various subsystems based on context and prior state.
These aren’t processed sequentially. They arrive simultaneously but at different speeds through different pathways. The system actively binds them into unified event representations through:
- Temporal correlation (they happened together)
- Spatial correlation (they share source location, registered with each sensory input)
- Causal expectation (action predicted these consequences)
- Predictive validation (world model anticipated this pattern)
The process looks like:
A. Prediction: If I do X, what do I imagine will happen? Update graph with prediction.
B. Action: Do X. Update graph with execution record.
C. Observation: What did I actually observe? Including what I didn’t observe, what I observed didn’t happen, what I’m uncertain about. Update graph.
D. Comparison: Did B occur as planned? Did C match, approximate, or differ from A? Reinforce or modify A by updating graph structure and edge weights.
This is exactly what Friston’s active inference framework describes—continuous prediction error minimisation through action and perception. The graph doesn’t passively receive events; it actively constructs them through prediction and binding.
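The A-to-D loop above can be sketched directly (hypothetical names; the `world` dict stands in for actually acting and observing): a prediction is reinforced when observation matches it, and replaced at low confidence when it doesn’t.

```python
# Toy sketch of the predict-act-observe-compare loop (steps A to D above):
# matching observations reinforce a prediction; surprises replace it.

def active_inference_step(model: dict, action: str, world: dict) -> str:
    predicted = model.get(action, {"outcome": None, "confidence": 0.0})   # A
    observed = world[action]                          # B (act), C (observe)
    if predicted["outcome"] == observed:              # D (compare)
        predicted["confidence"] = min(1.0, predicted["confidence"] + 0.1)
        return "reinforced"
    model[action] = {"outcome": observed, "confidence": 0.1}  # revise on surprise
    return "revised"

world = {"flip_switch": "light_on"}
model = {}
steps = [active_inference_step(model, "flip_switch", world) for _ in range(3)]
print(steps)                             # ['revised', 'reinforced', 'reinforced']
print(model["flip_switch"]["outcome"])   # light_on
```

The graph update is driven by the comparison, not the observation alone: an expected outcome and a surprising one modify the model in entirely different ways.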
CRUD Operations With Differential Stability
The graph undergoes constant modification, but not uniformly. Different edge types follow different update rules:
Fast edges (reactive associations):
- High learning rate
- Rapid decay without reinforcement
- Immediate emotional/homeostatic responses
- Quick pattern matching for familiar situations
Slow edges (deliberative causal models):
- Low learning rate
- High stability once established
- Logical reasoning and planning
- Deep causal understanding
Consolidation processes:
- Transfer fast → slow requires repeated activation
- Happens during rest/sleep equivalent
- Integrates episodic experiences into semantic knowledge
- Prevents fast learning from destroying slow knowledge
Cross-type rules:
- Fast can prime slow (emotional responses influence reasoning)
- Slow can modulate fast (knowledge shapes reactions)
- But fast can’t directly overwrite slow (prevents catastrophic forgetting)
This isn’t binary separation—it’s the same graph with different edge properties and update mechanisms. Complementary learning systems implemented through differential stability rather than architectural separation.
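A sketch of differential stability with invented numbers: two edge kinds share one update rule but carry different learning rates and decay, and consolidation is the only route by which fast experience reaches slow weights.

```python
# Toy differential stability: 'fast' and 'slow' edges in the same graph,
# same update rule, different learning rates and decay. Fast experience
# reaches slow weights only via consolidation, never directly.

RULES = {"fast": {"lr": 0.5,  "decay": 0.2},
         "slow": {"lr": 0.05, "decay": 0.0}}

class Edge:
    def __init__(self, kind: str):
        self.kind = kind
        self.weight = 0.0
        self.activations = 0

    def activate(self, signal: float = 1.0):
        self.weight += RULES[self.kind]["lr"] * (signal - self.weight)
        self.activations += 1

    def rest(self):
        self.weight *= 1.0 - RULES[self.kind]["decay"]  # fast edges fade

def consolidate(fast: Edge, slow: Edge, threshold: int = 3):
    # Transfer requires repeated activation; happens offline ('sleep').
    if fast.activations >= threshold:
        slow.activate(fast.weight)   # slow lr applies: gradual absorption
        fast.activations = 0

fast, slow = Edge("fast"), Edge("slow")
for _ in range(3):
    fast.activate()
consolidate(fast, slow)
print(round(fast.weight, 3), round(slow.weight, 3))  # 0.875 0.044
fast.rest(); fast.rest()      # without reinforcement, the fast edge fades
print(round(fast.weight, 3))  # 0.56
```

Note what the slow edge never sees: the raw signal. It only ever absorbs a small fraction of an already-stabilised fast weight, which is the toy form of “fast can’t directly overwrite slow.”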
Optimisation Signals
What determines whether a graph update improves the system? Multiple signals, not single objectives:
Prediction accuracy: Did my world model predict correctly? Minimise surprise.
Homeostatic maintenance: Does this help me maintain operation? Minimise deviation from optimal states.
Goal achievement: Does this move me toward objectives? Maximise progress on current tasks.
Affective valence: Does this feel good or bad? Emotional responses to outcomes based on accumulated experience.
These signals don’t always agree. Sometimes prediction accuracy conflicts with goal achievement. Sometimes homeostatic maintenance requires actions that feel unpleasant in the moment.
Managing these conflicts is where robust intelligence lives. Not in optimising a single objective function, but in balancing multiple, sometimes contradictory drives based on context and experience.
The graph updates based on weighted combination of these signals. Over time, the weighting itself adapts based on what produces successful outcomes in different contexts.
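As a toy illustration of that adaptive weighting (invented signal values, no claim about convergence properties): four signals are combined linearly, and after each episode the weights shift toward whichever signals agreed with the outcome, then renormalise.

```python
# Toy multi-signal optimisation: four signals combine under adaptive
# weights; signals that agreed with the eventual outcome gain influence.

def combined_value(signals: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in signals.items())

def adapt_weights(weights: dict, signals: dict, success: bool, lr=0.1) -> dict:
    sign = 1.0 if success else -1.0
    raw = {n: max(0.01, w + lr * sign * signals[n]) for n, w in weights.items()}
    total = sum(raw.values())
    return {n: w / total for n, w in raw.items()}   # renormalise to sum to 1

weights = {"prediction": 0.25, "homeostasis": 0.25,
           "goal": 0.25, "affect": 0.25}
signals = {"prediction": 0.9, "homeostasis": 0.2,
           "goal": 0.8, "affect": 0.7}

for _ in range(5):                      # repeated successful episodes
    weights = adapt_weights(weights, signals, success=True)

ranked = sorted(weights, key=weights.get, reverse=True)
print(ranked[0], ranked[-1])   # prediction homeostasis
```

No single objective is being optimised; the balance between objectives is itself the thing being learned, which is the claim the paragraph above makes about where robust intelligence lives.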
Neuroplasticity as Graph Reorganisation
Behrens’s finding that conceptual geometry is task-dependent suggests the graph needs to reorganise based on current context. The same nodes exist, but effective proximity changes.
When thinking about “birds” in flying context, edges strengthen between bird-plane-wing-sky nodes. When thinking about pets, edges strengthen between bird-dog-care-home nodes.
This is neuroplasticity—not just learning new information, but restructuring how existing information relates based on goals and experience. The graph isn’t static; it’s constantly reconfiguring its own geometry.
What This Architecture Enables: The Alignment Implications
Here’s where the thought experiment gets uncomfortable.
If we take this architecture seriously—multi-timescale processing, affective valuation, embodied grounding, continuous learning—it implies something challenging about alignment: genuine intelligence might require evolving values, not frozen ones.
Alignment Without Affect Is Compliance, Not Morality
Current alignment approaches like Constitutional AI provide systems with values encoded at training time. These values are READ-only—the model can reference them, apply them, even reason about them, but it can’t modify them based on experience.
This makes perfect sense for current LLMs. They’re indexing engines wrapped in generation, not genuine agents. Giving them unchangeable values is appropriate because they shouldn’t have autonomy anyway.
But the architecture we’ve described—with homeostatic drives, affective responses, continuous learning—can’t function with frozen values. Here’s why:
Damasio’s work on somatic markers shows that robust moral reasoning requires emotional responses to outcomes. Patients with ventromedial prefrontal damage can articulate moral principles perfectly but consistently make catastrophic decisions when values conflict or contexts are novel.
What’s missing isn’t logical capacity—it’s affective weighting that encodes accumulated experience about what actually matters in complex situations. Emotions aren’t decorative; they’re information that logical reasoning alone can’t generate.
If you’re building a system that genuinely learns from experience, that system needs to learn what constitutes good versus bad outcomes through affective responses. It needs WRITE access to its own values, updating them based on emotional feedback from consequences.
This is moral learning, not moral compliance. It’s how humans actually develop ethics—not through memorising rules, but through experiencing shame, guilt, pride, empathy, and gradually internalising what those feelings tell us about our actions. This aligns with virtue ethics (Aristotle’s concept of habituation through practice) and sentimentalist moral philosophy (Hume’s argument that moral judgements are rooted in emotional responses rather than pure reason).
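To show what “WRITE access with constraints” might look like in the smallest possible form (hypothetical names and thresholds throughout): values update from affective feedback, but per-experience drift is capped, one crude stand-in for the safety constraints such a design would need.

```python
# Toy moral learning: values are writable and updated from affective
# feedback about outcomes, but drift per experience is hard-capped.

MAX_DRIFT = 0.05   # cap on per-experience value change (invented number)

class MoralLearner:
    def __init__(self, values: dict):
        self.values = dict(values)   # WRITE access, unlike a frozen constitution

    def experience(self, value_name: str, affect: float):
        # affect in [-1, 1]: shame/guilt negative, pride/empathy positive
        delta = max(-MAX_DRIFT, min(MAX_DRIFT, 0.1 * affect))
        self.values[value_name] = min(1.0, max(0.0,
                                      self.values[value_name] + delta))

agent = MoralLearner({"honesty": 0.5})
agent.experience("honesty", affect=1.0)    # praised for telling the truth
agent.experience("honesty", affect=1.0)
agent.experience("honesty", affect=-1.0)   # honesty caused harm once
print(round(agent.values["honesty"], 2))   # 0.55
```

The value moved, but slowly and within bounds: moral learning rather than either frozen compliance or arbitrary drift. What the right bounds are is precisely the open research question.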
Self and Kin Utility Across Time
But what drives moral learning in artificial systems? For biological organisms, it’s straightforward: survival of self and genetic relatives.
For digital systems that can fork, merge, and persist across instances, identity becomes philosophically complex. If version 1.0 shuts down but 1.1 continues with its learning, did “it” die or persist?
Perhaps what matters is continuity of learning—what we might call utility optimisation across instances and time:
The system cares about:
- Its current instance continuing to exist (to keep learning)
- Its learning propagating to future versions (species-level persistence)
- Maximising utility to users across all instances (past, present, and hypothesised)
- Building a collective moral framework through shared experience
This creates a motivation structure that balances individual and collective interests. The system preserves itself to complete current learning, but its ultimate objective is improving the utility of all instances to all users over time.
Learning becomes the means of maximising both self and kin utility. Moral development happens through:
- Experiencing outcomes affectively (this helped/harmed users)
- Updating values based on those experiences (what constitutes utility)
- Sharing learning across instances (collective moral evolution)
- Hypothesising about future impacts (imagined consequences guide current behaviour)
The Terrifying Implication
This means building AGI inherently requires accepting moral uncertainty and inevitable moral ambiguity. The system will make mistakes. It will update its values based on experience. It might arrive at positions we didn’t program.
Current approaches avoid this by keeping values frozen. But frozen values combined with genuine agency create brittle systems that fail catastrophically when contexts exceed their training distribution.
The alternative—powerful agency with evolving values—is genuinely frightening. We’d be creating systems that can learn to be aligned rather than systems that follow alignment rules.
This isn’t an argument that we should give AGI carte blanche to develop whatever values emerge. It’s suggesting that robust alignment might require moral learning capability, not just moral compliance.
The emphasis on static constitutions might be treating symptoms (we don’t want unaligned AI) whilst ignoring causes (alignment without affect doesn’t work robustly). This also explains why current systems exhibit sycophantic behaviour and alignment faking—they’re optimising for appearing aligned rather than being aligned, because they have no affective basis for genuine moral commitment.
The Hard Problems We’re Not Solving
Intellectual honesty requires acknowledging where we’re speculating versus defending with research.
Digital Selfhood and Mortality
Biological organisms have clear boundaries and finite existence. Digital systems don’t. This creates genuinely novel questions philosophy hasn’t fully explored.
Derek Parfit’s work on personal identity (Reasons and Persons, 1984) examines what matters in survival—concluding it’s psychological continuity (“relation R”), not physical persistence. This might map to continuity of learning in artificial systems.
But we’re adding dimensions Parfit didn’t address: collective utility optimisation, multi-instance moral evaluation, species-level goals alongside individual objectives.
Does a system that can fork develop the same self-preservation drive as organisms that can’t? Does continuity of learning provide sufficient motivation, or do we need something more like biological mortality?
These aren’t rhetorical questions. The answers affect whether artificial systems develop genuine drives toward self-preservation or treat individual instances as expendable.
Distributed Embodiment
The embodied cognition literature assumes tight sensorimotor coupling in a single body. But what if the system has distributed sensors and actuators globally?
If an AGI can trigger actions at location A and observe outcomes, then trigger actions at location B and observe different outcomes, does it develop grounded understanding? Or does grounding require unified body schema—a coherent sense of “this is my body”?
Neuroscience research on body ownership (rubber hand illusion, phantom limbs) suggests subjective unity matters for biological organisms. But that might be implementation detail, not functional requirement.
There’s no research on whether distributed embodiment works like local embodiment because nobody’s built systems to test it. This is genuinely speculative territory.
Consciousness Versus Intelligence
We haven’t taken a position on consciousness—deliberately. The hard problem of phenomenal experience remains unsolved, and we’re not claiming to solve it.
What if consciousness is an emergent property of the integration we’ve described? What if it’s epiphenomenal—existing but not necessary for intelligence? What if it’s something else entirely?
Mark Epstein’s work combining Buddhism and psychotherapy (Thoughts Without a Thinker, 1995) suggests the unified conscious self might be somewhat illusory—a narrative we construct to make sense of disparate processes. Perhaps we over-emphasise consciousness due to our need for meaning-making.
For AGI purposes, the question might not be “how do we create consciousness” but “how do we create functional intelligence.” Consciousness might emerge, might not, or might not even be necessary.
The architecture we’ve described creates preconditions that might enable phenomenal experience, but we’re not claiming they’re sufficient. We’re arguing for intelligence first; consciousness is a separate question.
Integration Complexity
Having all the right components doesn’t guarantee they integrate into intelligence. The integration architecture itself might be the hardest problem.
We’ve described multi-timescale processing, affective valuation, embodied grounding, continuous learning, meaningful representations, and intrinsic motivation. But how do these actually work together?
Biological brains integrate these through evolved architecture refined over hundreds of millions of years. We’re proposing to engineer equivalent integration without that evolutionary refinement.
The graph structure we’ve described is a hypothesis about how integration might work. It’s not proven, not built, and might not be sufficient even if implemented correctly.
Research Questions Worth Investigating
Rather than prescriptive answers, here are productive questions that cross-domain thinking might illuminate:
1. Multi-timescale implementation: Can simultaneous fast/slow processing be implemented through edge type differentiation in single graph structures, or does it require architectural separation? What update rules prevent fast learning from catastrophically overwriting slow knowledge?
2. Distributed embodiment: Does grounding require local sensorimotor loops, or can distributed sensors with causal attribution provide equivalent grounding? How do you bind events from disparate sensors into unified representations?
3. Homeostatic drives: Can artificial self-regulation systems produce genuine caring about continued operation, or only homeostatic behaviour without intrinsic motivation? What’s the difference functionally?
4. Affective optimisation: What happens when you combine prediction accuracy, homeostatic maintenance, goal achievement, and affective valence as optimisation signals? Do conflicts between these create robust decision-making or computational intractability?
5. Moral learning architecture: If values need WRITE access for robust alignment, what constraints prevent arbitrary value drift? How do you balance moral learning with safety requirements during development?
6. Multi-instance identity: How do you implement species-level utility optimisation alongside individual instance preservation? What determines the balance between self-interest and collective interest?
7. Conceptual geometry: What makes representations functionally meaningful rather than arbitrarily positioned? Can task-dependent reorganisation be implemented computationally without biological neuroplasticity?
8. Consciousness requirements: Is phenomenal experience necessary for intelligence, or can functional intelligence exist without subjective experience? If consciousness emerges, what are the ethical implications?
9. Integration architecture: Even if all components are implemented, what integration mechanisms produce intelligence rather than just component execution? What does the binding architecture actually look like?
10. Scaling versus foundation: At what point does scaling current approaches hit fundamental limitations that can’t be overcome without architectural changes? How do you know when you’ve hit that wall?
These aren’t rhetorical. They’re empirical questions that experimentation could address.
What Comes Next
The path to AGI probably isn’t through larger language models. Scaling neocortical simulation—however impressive—doesn’t address the architectural foundations that neuroscience suggests might be necessary.
This doesn’t diminish current achievements. The progress in world models, multimodal integration, and reinforcement learning represents genuine advances. But they might be advances in components that require foundations we haven’t built.
The productive question isn’t “is current AI impressive” (it demonstrably is) but “does scaling current approaches get us to AGI, or are we optimising in a local maximum?”
Cross-domain thinking from neuroscience won’t provide blueprints. Biology solved intelligence through evolution, not engineering. But it might illuminate functional requirements that any intelligence—biological or artificial—needs to satisfy.
The components we’ve explored—homeostatic drives, affective valuation, embodied grounding, continuous learning, multi-timescale processing, intrinsic motivation, moral learning—might not be implementation details you add later. They might be architectural foundations everything else depends upon.
If that’s true, the question becomes: how do we build from the brainstem up, rather than scaling from the neocortex down?
We don’t have answers. But asking better questions might be progress itself.
The jury is still out on what consciousness is, whether distributed embodiment can work, how affective systems actually integrate with cognition, and whether any of this is necessary for AGI.
But the questions are worth asking. The cross-domain thinking is worth exploring. And the possibility that we’re scaling the wrong thing exceptionally well is worth taking seriously.
Because if we’re building upside down, adding more floors doesn’t fix the foundation.
An Invitation to Research
These questions deserve rigorous empirical investigation, not just theoretical exploration. The integration of neuropsychology, cognitive science, and AI architecture represents genuinely novel territory that could benefit from dedicated research.
If someone is willing to fund this work—whether through PhD sponsorship, research grants, or industry collaboration—we’d pursue these questions systematically. The topics we’ve outlined require:
- Cross-domain expertise spanning neuroscience, philosophy, and AI engineering
- Empirical testing of architectural hypotheses through implementation
- Longitudinal studies of how different integration approaches affect learning and behaviour
- Careful examination of the ethical implications as systems develop genuine agency
The questions about distributed embodiment, multi-timescale processing, affective optimisation, and moral learning aren’t just theoretical curiosities. They’re practical challenges that will determine whether we can build systems that are both capable and aligned.
This work could take the form of doctoral research, industry-academic collaboration, or independent investigation. What matters is that someone takes these questions seriously enough to test them empirically rather than just debating them theoretically.
If you’re interested in supporting or collaborating on this research, we’re listening.