Matt Turck has a fascinating article on Artificial Intelligence, with insights on the progress that has been made and where we are right now. I found it a good read, as it touches on the various models being used to work toward "general" artificial intelligence. It seems that in the last year a lot of progress has been made on the hardware front, which has led to real gains, but we may still be far off from the robots taking over.
GANs, or “generative adversarial networks”, are a much more recent method, directly related to unsupervised deep learning, pioneered in 2014 by Ian Goodfellow, then a PhD student at the University of Montreal. GANs work by creating a rivalry between two neural nets, trained on the same data. One network (the generator) creates outputs (like photos) that are as realistic as possible; the other network (the discriminator) compares the photos against the data set it was trained on and tries to determine whether each photo is real or fake; the first network then adjusts its parameters for creating new images, and so on and so forth. GANs have had their own evolution, with multiple versions of GAN appearing just in 2017 (WGAN, BEGAN, CycleGAN, Progressive GAN).
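To make the rivalry concrete, here is a minimal sketch of the adversarial loop described above, boiled down to a toy 1-D problem: the generator is a single linear map, the discriminator a logistic unit, and both are updated with hand-derived gradients. All of the parameters and the target distribution are illustrative choices, not anything from the article; real GANs use deep networks and an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5). The generator starts far away.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.02

for step in range(3000):
    # Discriminator step: learn to score real high, fake low.
    x_real = real_batch(32)
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -[log D(real) + log(1 - D(fake))] w.r.t. (w, c).
    gw = (-(1 - d_real) * x_real + d_fake * x_fake).mean()
    gc = (-(1 - d_real) + d_fake).mean()
    w -= lr * gw
    c -= lr * gc

    # Generator step: adjust (a, b) so fakes fool the discriminator.
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    gx = -(1 - d_fake) * w           # gradient of -log D(fake) w.r.t. x
    a -= lr * (gx * z).mean()
    b -= lr * gx.mean()

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float((a * rng.normal(0.0, 1.0, 1000) + b).mean())
print(round(fake_mean, 2))
```

The key structural point survives even at this scale: neither network is given the target distribution directly; the generator only ever sees the discriminator's gradient, and the discriminator only ever sees samples.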
How close is AlphaZero to AGI? Demis Hassabis, the CEO of DeepMind, called AlphaZero’s playstyle “alien”, because it would sometimes win with completely counterintuitive moves like sacrifices. Seeing a computer program teach itself the most complex human games to a world-class level in a mere few hours is an unnerving experience that would appear close to a form of intelligence. One key counter-argument in the AI community is that AlphaZero is an impressive exercise in brute force: AlphaZero was trained via self-play using 5,000 first-generation TPUs and 64 second-generation TPUs; once trained, it ran on a single machine with 4 TPUs. AI researchers also point out that, in reinforcement learning, the system has no idea what it is actually doing (like playing a game) and is limited to the specific constraints that it was given (the rules of the game). Here is an interesting blog post disputing whether AlphaZero is a true scientific breakthrough.
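The "it only knows the rules" point is easiest to see in a stripped-down self-play setup. The sketch below is emphatically not AlphaZero's method (no search tree, no neural network); it is tabular Q-learning via self-play on a tiny Nim game, where players alternately take 1 or 2 stones and whoever takes the last stone wins. The game, hyperparameters, and reward scheme are all illustrative. The agent is handed nothing but the legal moves and a win signal, yet it recovers the optimal strategy (leave the opponent a multiple of 3), without any notion of what "winning" means.

```python
import random

random.seed(0)

Q = {}  # (stones_left, action) -> estimated value for the player to move

def moves(n):
    return [a for a in (1, 2) if a <= n]

def best(n):
    return max(moves(n), key=lambda a: Q.get((n, a), 0.0))

alpha, eps = 0.3, 0.2
for episode in range(20000):
    n = 10
    history = []  # (state, action) for each move, players alternating
    while n > 0:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        a = random.choice(moves(n)) if random.random() < eps else best(n)
        history.append((n, a))
        n -= a
    # The player who took the last stone wins. Walk the game backwards,
    # flipping the reward's sign at each step (alternating perspectives).
    reward = 1.0
    for (s, a) in reversed(history):
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (reward - old)
        reward = -reward

# Optimal play leaves a multiple of 3: from 5 take 2, from 4 take 1.
print(best(5), best(4))
```

This also illustrates the counter-argument: swap in a game with different rules and every learned value is useless, because nothing the agent stores generalizes beyond the exact constraints it was trained under.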
Transfer learning has mostly been challenging to make work – it works well when the tasks are closely related, but becomes much more complex beyond that. Still, this is a key area of focus for AI research, and DeepMind made significant progress with its PathNet project (see a good overview here).
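The "closely related tasks" caveat can be shown with a toy experiment (my own construction, not from the article or PathNet): train a logistic model on a data-rich task A, then freeze its learned weights and retrain only the bias on a handful of examples from a related task B that shares the same decision direction but a shifted threshold. The tasks, data sizes, and learning rates here are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Task A: plenty of data; label is 1 when x0 + x1 > 0.
XA = rng.normal(0, 1, (2000, 2))
yA = (XA.sum(axis=1) > 0).astype(float)

# Train a logistic model (w, b) on task A from scratch.
w, b = np.zeros(2), 0.0
for _ in range(500):
    g = sigmoid(XA @ w + b) - yA
    w -= 0.1 * (XA.T @ g) / len(XA)
    b -= 0.1 * g.mean()

# Task B: only 20 labelled points; label is 1 when x0 + x1 > 1.
# Related to task A: same direction, different threshold.
XB = rng.normal(0, 1, (20, 2))
yB = (XB.sum(axis=1) > 1).astype(float)

# Transfer: freeze w (the "features" learned on A), retrain only the bias.
b2 = 0.0
for _ in range(500):
    b2 -= 0.5 * (sigmoid(XB @ w + b2) - yB).mean()

# Evaluate the transferred model on fresh task-B data.
Xt = rng.normal(0, 1, (1000, 2))
yt = (Xt.sum(axis=1) > 1).astype(float)
acc = float(((sigmoid(Xt @ w + b2) > 0.5).astype(float) == yt).mean())
print(round(acc, 3))
```

With 20 labelled points the transferred model does well because task A's weights already point in the right direction; if task B instead used an unrelated rule (say, x0 > x1), the frozen weights would be worthless, which is the gap projects like PathNet try to close by learning which parts of a network to reuse.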