2018 is the year AI got its eyes
Computer scientists have spent more than two decades teaching, training and developing machines to see the world around them. Only recently have the artificial eyes begun to match (and occasionally exceed) their biological predecessors. 2018 has seen marked improvement in two areas of AI image processing: facial-recognition technology in both commerce and security, and image generation in — of all fields — art.
In September of this year, a team of researchers at DeepMind published a paper describing their newest generative adversarial network. Dubbed BigGAN, this image-generation engine harnesses Google's massive cloud-computing power to create extremely realistic images. Better still, the system can be coaxed into producing dreamlike, almost nightmarish visual mashups of objects, symbols and virtually anything else it has been trained on. Google has already released pretrained BigGAN generators into the wilds of the internet, letting creators anywhere in the world borrow its processing power and use the system as they wish.
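Those "mashup" images come from the fact that BigGAN is a class-conditional generator: instead of feeding it a single category, you can feed it a weighted blend of several. The sketch below, a minimal illustration in plain NumPy (the constants and the `generator` call are assumptions for illustration, not DeepMind's actual API), shows how such a blended conditioning vector is built:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 1000   # BigGAN is trained on ImageNet's 1,000 classes
LATENT_DIM = 128     # assumed size of the noise vector fed to the generator

def one_hot(class_index: int) -> np.ndarray:
    """Build a one-hot class-conditioning vector."""
    y = np.zeros(NUM_CLASSES)
    y[class_index] = 1.0
    return y

def blend_classes(indices, weights) -> np.ndarray:
    """Mix several class vectors into one soft conditioning vector.

    Feeding a blend like this, rather than a single one-hot class,
    to a class-conditional generator is what yields 'mashup' images.
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()  # normalize so the blend sums to 1
    return sum(w * one_hot(i) for w, i in zip(weights, indices))

# A 50/50 blend of two arbitrary class indices, plus a noise vector:
y = blend_classes([2, 949], [0.5, 0.5])
z = rng.standard_normal(LATENT_DIM)
# images = generator(z, y)  # hypothetical call; the real one depends on the library
```

The actual generator call would go through whatever framework hosts the released model; the point is simply that both a noise vector and a (possibly blended) class vector go in, and an image comes out.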
“I’ve been really excited by all of the interactive web demos that people have started to turn these algorithms into,” Janelle Shane, a research scientist in optics by day and a neural-network programmer by night, told Engadget. She points out that in the past, researchers would typically publish their findings and call it a day. You’d be lucky to find even a YouTube video on the subject.
“But now,” she continued, “they will publish their model, they’ll publish their code and what’s even greater for the general creative world is that they will publish a kind of web application where you can try out their model for yourself.”
This is exactly what Joel Simon, the developer of GANbreeder, has done. His web app lets users generate and remix BigGAN images over multiple generations, producing truly unique results. “With Simon’s web interface, you can look at what happens when you’re not generating pictures of just symbols, for example,” Shane points out. “But you’re generating something that’s a cross between a symbol and a comic book and a shark, for example.”
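Remixing "over multiple generations" can be pictured as breeding: each image corresponds to a vector, two parent vectors are interpolated to make children, and a favorite child becomes a parent in the next round. The following NumPy sketch is a hypothetical illustration of that loop (the vector size, `mix` and `mutation` parameters are assumptions, not GANbreeder's actual mechanics):

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 128  # assumed vector size, for illustration only

def crossbreed(parent_a, parent_b, mix=0.5, mutation=0.1):
    """Produce a child vector from two parents.

    Linear interpolation mixes the parents; a small random perturbation
    ('mutation') keeps each new generation from being a pure average.
    """
    child = (1 - mix) * parent_a + mix * parent_b
    return child + mutation * rng.standard_normal(child.shape)

# Start from two random "images" (vectors) and breed one generation:
a = rng.standard_normal(LATENT_DIM)
b = rng.standard_normal(LATENT_DIM)
children = [crossbreed(a, b, mix=m) for m in (0.25, 0.5, 0.75)]
# Each child vector would then be decoded by the generator into an image,
# and the user picks which children to breed next.
```

Repeating this pick-and-breed cycle is what lets users drift from ordinary categories toward the symbol-comic-book-shark hybrids Shane describes.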