Danyal Akarca
Principles of intelligence in brains and machines
About me
I lead a small but growing group at Imperial College London that aims to discover principles of efficient intelligence shared by natural and artificial systems.
In 2024, I was awarded a joint Imperial College Research Fellowship and Schmidt Sciences Fellowship, in addition to a Nature Computes Better Opportunity Seed from the UK’s Advanced Research and Invention Agency (ARIA). In 2022, I completed my PhD at the Cognition and Brain Sciences Unit, University of Cambridge. I previously trained as a medical doctor with a research focus on neurosurgery and neuroanatomy.
The core belief running through our research is that by inferring the latent principles of efficient computation seen in nature and reverse-engineering them, we can fuel entirely new paradigms of intelligence. Modern architectures powerfully exploit one core principle: scale through immense parallelism. What new principles will fuel future generations of efficient intelligent systems?
My work has been featured by outlets including the University of Cambridge and Nature Machine Intelligence. My primary interests include:
Building neural networks that operate under spacetime constraints (e.g., spatially and temporally embedded neural networks); a minimal sketch of the spatial case follows this list.
Understanding how the deep, multi-scale heterogeneity and asynchrony intrinsic to neural systems widen the expressivity of possible architectures and computational primitives.
Instantiating energy-efficiency and systems-level principles common in nature within scalable AI architectures.
Understanding the early stages of human neural and cognitive development through constructivist simulation (e.g., brain development via generative methods).
Using network science and dynamical systems theory to improve AI safety and interpretability.
Leveraging technology, in particular AI, to accelerate scientific discovery.
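To make the first interest above concrete, here is a minimal PyTorch sketch of one way a spatial embedding can be imposed on a recurrent network: hidden units are assigned fixed coordinates, and a distance-weighted L1 penalty on the recurrent weights acts as a wiring cost added to the task loss. The class name, random coordinates, and regularisation strength are illustrative assumptions for this sketch, not the exact formulation used in the published work.

```python
import torch
import torch.nn as nn

class SpatiallyEmbeddedRNN(nn.Module):
    """Illustrative sketch: an RNN whose hidden units sit at fixed 3D
    coordinates, with a distance-weighted L1 penalty on recurrent weights
    acting as a wiring cost (a simplification, not the published model)."""

    def __init__(self, n_inputs=10, n_hidden=100, n_outputs=2):
        super().__init__()
        self.rnn = nn.RNN(n_inputs, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_outputs)
        # Assign each hidden unit a random, fixed position in a unit cube
        # and cache the pairwise Euclidean distances between units.
        coords = torch.rand(n_hidden, 3)
        self.register_buffer("distances", torch.cdist(coords, coords))

    def forward(self, x):
        h, _ = self.rnn(x)              # h: (batch, time, n_hidden)
        return self.readout(h[:, -1])   # decision from the final time step

    def wiring_cost(self):
        # Long-range recurrent connections are penalised more heavily.
        return (self.distances * self.rnn.weight_hh_l0.abs()).sum()

# Training step: combine the task loss with the wiring cost.
model = SpatiallyEmbeddedRNN()
x = torch.randn(8, 20, 10)                    # 8 sequences, 20 steps, 10 features
targets = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), targets) + 1e-3 * model.wiring_cost()
loss.backward()
```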
Team members
Danyal Akarca
Group Lead
My goal is to uncover the unifying principles of intelligence found in natural systems, such as the brain, and distill them into artificial systems of intelligence.
Link: Google Scholar
Email: d.akarca@imperial.ac.uk
Mentored by Dr Daniel Goodman in the Neural Reckoning Group

Pengfei Sun
Post-doctoral Research Scientist
Pengfei works on developing brain-inspired neural networks, with a particular focus on delay learning and neuromorphic applications.
Link: Google Scholar
Email: p.sun@imperial.ac.uk
Supervised by Dr Danyal Akarca and co-supervised by Dr Daniel Goodman

Jatin Sharma
PhD Student
Jatin works on interpretable machine learning applied to neuroscience, including spatially embedded neural networks.
Email: j.sharma24@imperial.ac.uk
Supervised by Dr Danyal Akarca and co-supervised by Dr Daniel Goodman
My team sits within the Intelligent Systems and Networks Group in the Department of Electrical and Electronic Engineering, and within I-X, Imperial’s flagship AI initiative.
We work in very close collaboration with both Dr Daniel Goodman’s Neural Reckoning Group and Dr Jascha Achterberg.
Interested in joining us? Send me an email at d.akarca@imperial.ac.uk.
Current Master’s Students
2024 - 2025: Robert Crossen @ Cambridge, Co-Supervised with Prof Petra Vértes.
2024 - 2025: Guillermo Adell Vega @ Imperial, Co-Supervised with Dr Daniel Goodman.
Previous Master’s Students
2022 - 2023: Cornelia Sheeran @ Cambridge, Co-Supervised with Dr Jascha Achterberg, now pursuing a PhD at University of Michigan.
2022 - 2023: Andrew Ham @ Cambridge, Co-Supervised with Dr Jascha Achterberg, now pursuing Medicine at Harvard University.
Some highlights
Papers. To get a flavour of some of my interests:
Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence.
A generative network model of neurodevelopmental diversity in structural brain organisation. Nature Communications.
Towards computational neuroconstructivism: a framework for developmental systems neuroscience. Trends in Cognitive Sciences.
Homophilic wiring principles underpin neuronal network topology in vitro. bioRxiv (in press at eLife).
Computational Principles of Brain Network Development. PhD Thesis, Pembroke College, University of Cambridge.
Here are links to my Google Scholar and GitHub.
Writing & talks. I give talks and write, and occasionally these are recorded or published:
2025. If scale is the answer, what is the question? Institute of Art and Ideas.
2024. Spatially embedded neural networks. What can they teach us? Hamburg Institute of Computational Neuroscience.
2024. AI, the human brain and embodied constraints. Institute of Art and Ideas.
2021. Generative models in network neuroscience. Edinburgh Salvesen Mindroom.
Contact me
You can email me at d.akarca@imperial.ac.uk or chat to me @DanAkarca.
I’m particularly interested in speaking with:
People who are interested in collaborating at the intersection of brain-inspired neural computation, AI and hardware.
Funders who are interested in scaling novel architectures in AI via unconventional hardware acceleration.
People who are interested in accelerating scientific discovery in the natural sciences.