AGI vs Narrow AI
Who doesn’t want an AI like Jarvis!
Let’s have a look at some equally (or more) powerful and conscious AIs in fiction:
- Transcendence (2014) - My favourite when it comes to a super-powerful, pervasive AI
- I, Robot (2004) - Sonny and VIKI, AIs with a plan for the world, built on Isaac Asimov’s Three Laws of Robotics
- Westworld - The TV series (and the 1973 movie) where AIs evolve consciousness
- Avengers: Age of Ultron (2015) - Don’t forget the evolving Ultron
- Anukul (2017) - A concept similar to Sonny, based on Satyajit Ray’s short story
- Ex Machina (2014) - Reimagining the Turing test
- Her (2013) - An emotional OS
- ...and others like Tron, The Matrix, WALL-E, Interstellar (TARS), Star Wars (R2-D2), etc.
OK, now to the two aspects required to make these kinds of AI:
- Interface - humanoid form, voice commands, etc.
- Intelligence - self-evolving, meta-learning, etc.
State-of-the-art AI research is already close to providing the technological know-how we need. The knowledge is scattered across various artefacts, but the ingredients exist:
- Siri/Alexa/Cortana/Google Assistant/Bixby - basically the ability to look up facts on the internet, paired with a voice-command interface
- Replika - the homely conversation you might want, a chit-chat bot, now with a voice-calling feature too
- Sophia/Harmony - the physical appearance you might want it to have
- Boston Dynamics robots - for that extra dose of mechanical movements
Combining these into a single entity would make a great interface!
Now, to the brain.
Most of today’s AI focuses on what’s called Narrow AI: systems trained for specific tasks, e.g. Deep Blue’s chess, IBM Watson’s Jeopardy!, OpenAI Five’s Dota 2, DeepMind’s AlphaGo, etc. However, most of these require huge computing hardware for their marvels.
Some of the early pioneers of AI (Turing, McCarthy, Minsky, Solomonoff) had a vision of an Artificial General Intelligence (AGI). It wasn’t feasible on the hardware of that era (and is perhaps not feasible even today). But evolving a program was quite possible in some of those early languages, like LISP (and its dialect Scheme). The framework existed.
With the advent of research on Artificial Neural Networks (ANNs), we now have a better understanding of ‘learning’ complex associations. Yet most ANNs are trained on a fixed topology with a specific dataset. This brings us to the current focus on neural plasticity - the ability to extend learned capabilities to other domains - through transfer learning, active learning, lifelong learning, etc., and through what are called Topology and Weight Evolving Artificial Neural Networks (TWEANNs), where the network’s structure evolves along with its weights. Uber AI Labs is doing some fascinating work on this topic.
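To make the TWEANN idea concrete, here is a toy sketch (nothing like Uber AI’s actual systems, and far simpler than NEAT): a genome maps each non-input node to its incoming connections, and mutation either perturbs a weight or splits a connection with a new hidden node, so topology and weights evolve together. All names and parameters are illustrative.

```python
import math
import random

INPUTS = 2   # node ids 0 and 1 are inputs
OUTPUT = 2   # node id 2 is the output; hidden nodes get ids 3, 4, ...
XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
       ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def activate(genome, node, x, cache):
    """Recursively evaluate a node over the connection DAG."""
    if node < INPUTS:
        return x[node]
    if node not in cache:
        total = sum(w * activate(genome, src, x, cache) for src, w in genome[node])
        cache[node] = math.tanh(total)
    return cache[node]

def error(genome):
    """Sum of squared errors on the XOR task."""
    return sum((activate(genome, OUTPUT, x, {}) - y) ** 2 for x, y in XOR)

def mutate(genome):
    child = {n: list(conns) for n, conns in genome.items()}
    node = random.choice(list(child))
    i = random.randrange(len(child[node]))
    if random.random() < 0.9:            # usually: perturb one weight
        src, w = child[node][i]
        child[node][i] = (src, w + random.gauss(0, 0.5))
    else:                                # sometimes: grow the topology by
        src, w = child[node].pop(i)      # splitting src->node into src->new->node
        new = max(child) + 1
        child[new] = [(src, 1.0)]
        child[node].append((new, w))
    return child

def evolve(generations=2000, seed=0):
    """(1+1) hill climber: keep a mutant only if it is no worse."""
    random.seed(seed)
    best = {OUTPUT: [(0, random.uniform(-1, 1)), (1, random.uniform(-1, 1))]}
    for _ in range(generations):
        child = mutate(best)
        if error(child) <= error(best):
            best = child
    return best

best = evolve()
print(f"final XOR error: {error(best):.3f}, nodes: {len(best) + INPUTS}")
```

A start genome with no hidden nodes cannot represent XOR, so any progress beyond a certain point has to come from the add-node mutation - which is exactly the point of evolving topology, not just weights.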
Also, recent research on Spiking Neural Networks (SNNs) and memristor-based neuromorphic accelerators brings ANNs closer to biological realism.
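For a flavour of that biological realism, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking networks. The parameters are illustrative, not taken from any particular chip or paper.

```python
def simulate_lif(current, steps=100, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dv/dt = (-v + I)/tau; emit a spike when v crosses threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-current_free(v, current)) / tau if False else dt * (-v + current) / tau
        if v >= v_thresh:
            spikes.append(t)   # record the spike time...
            v = v_reset        # ...and reset the membrane potential
    return spikes

print(len(simulate_lif(2.0)))  # strong input -> regular spiking
print(simulate_lif(0.5))       # sub-threshold input -> no spikes at all
```

Unlike a tanh unit, the neuron communicates only through discrete spike times, and stronger input shows up as a higher spike rate rather than a larger output value.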
On the other hand, there are rigorous mathematical models of AGI - Jürgen Schmidhuber’s Gödel Machine and Marcus Hutter’s AIXI. Implementing these self-improving systems is highly non-trivial.
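AIXI, for instance, fits in a single (uncomputable) equation. Roughly following Hutter’s formulation, the action at cycle k is chosen by weighing every program q that could explain the observation/reward history by its algorithmic simplicity:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \big( r_k + \cdots + r_m \big)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over environment programs, and ℓ(q) is the program length - shorter (simpler) explanations of the world get exponentially more weight. The sum over all programs is what makes AIXI incomputable and only approximable in practice.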
I believe these are the ingredients required for the ‘brain’ part.
Yes, as humanity, we can make Jarvis!
We just don’t have a single Tony Stark!
P.S. - Making Iron Man is way easier with jet packs :P
Let’s actually make one
- Platform: Mycroft AI https://github.com/MycroftAI
- Wikipedia and Wolfram Alpha https://medium.com/@salisuwy/build-an-ai-assistant-with-wolfram-alpha-and-wikipedia-in-python-d9bc8ac838fe
- Replika-style chit-chat via CakeChat (a Mycroft fallback skill) https://github.com/TREE-Ind/skill-fallback-cakechat
- Desktop control https://github.com/TREE-Ind/desktop-control
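The glue logic tying these together is essentially fallback routing, in the spirit of Mycroft’s fallback-skill ordering: try the precise knowledge sources first, and only then hand off to open-ended chit-chat. A minimal sketch, with hypothetical stubs standing in for the real Wolfram Alpha, Wikipedia, and CakeChat back ends:

```python
def ask_wolfram(query):
    """Stub: a real version would call the Wolfram Alpha API."""
    facts = {"2+2": "4"}
    return facts.get(query)

def ask_wikipedia(query):
    """Stub: a real version would fetch an article summary from Wikipedia."""
    summaries = {"alan turing": "Alan Turing was a pioneer of computer science."}
    return summaries.get(query.lower())

def chit_chat(query):
    """Stub fallback: a real version would hand off to a chat model like CakeChat."""
    return "Interesting! Tell me more."

def answer(query):
    # Precise sources first; open-ended conversation is the last resort.
    for source in (ask_wolfram, ask_wikipedia):
        result = source(query)
        if result:
            return result
    return chit_chat(query)

print(answer("2+2"))          # -> 4
print(answer("how are you"))  # falls through to the chit-chat stub
```

Swapping the stubs for real API calls (and wiring the result into Mycroft’s speech output) turns this routing skeleton into the assistant described above.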