AI & Evolution: Learning to do More with Less | David Ha | Episode 146

David Ha is the Head of Strategy at Stability AI, and one of the top minds working in AI today. He previously worked as a research scientist in the Brain team at Google.

David is particularly interested in evolution and complex systems, and his research explores how intelligence may emerge under resource constraints. He joins the show to discuss the advantages of open-source models, modelling AI as an emergent system, why large language models are bad at maths and MUCH more!

Important Links:
- Teaching Machines to Draw - https://blog.otoro.net/2017/05/19/teaching-machines-to-draw/ (2017)
- Weight Agnostic Neural Networks - https://weightagnostic.github.io (2019)

00:00 Start
01:15 Main Podcast
02:36 Why David joined Stability AI
07:08 The advantages of open source models
16:06 We cannot predict the inventions of tomorrow
23:33 Making memes with generative AI
25:41 The centaur approach to AI
29:27 An introduction to large language models
39:28 The relationship between complex systems & resource constraints
47:57 Large language models are bad at maths
58:18 Modelling AI as an emergent system
01:02:22 Understanding different perspectives
and much more!