How Artist João Beira Uses AI to Create Masterpieces
“I think that our generation still differentiates the physical world and the digital world,” says artist João Beira. “I think that the next generations will stop doing that.”
If that’s the case, it won’t solely be because of new technologies and the scientists and engineers who create them, but also due to the artists and creators, like Beira, who use those technologies in elegant, unexpectedly human ways. Born in Porto, Portugal, and now based in Austin, Texas, Beira (along with Datagrama, the international arts collective he co-founded) uses artificial intelligence techniques such as deep learning and neural style transfer, as well as augmented reality technology such as 3D projection mapping, in his stunning art installations, immersive environments and performance collaborations to blur the lines between the physical and digital realms.
Though his work often incorporates cutting-edge technology to summon the latter, it is his commitment to the former—namely exploring the human body and brain—that elevates it beyond some computerized parlor trick.
In his 2014 performance work Biomediation, for example, Beira used an EEG headset (which monitors electrical activity in the brain) on a dancer to transpose what was going on in the performer’s mind into the physical realm via 3D visuals and sound. As a trip into the psychic dimension, it was, well, a trip. A body and its brain waves playing off of each other in beautiful, occasionally frightening ways. Depending on how you saw it, it was a reminder that the mind is an eternally mysterious place, an argument that The Matrix might be closer to a documentary, or a living example that all worlds, seen and unseen, are more connected than we think.
“I’d done visual projections for commercial projects and what is called ‘VJ-ing’ for a long time. But nothing really felt artistic about it, to be honest,” Beira says. “When I started to work with live dancers, and having that mixed reality experience with real-time generative [visual] design, for me was a real breakthrough moment. As an artist I found something that made sense.”
Though he arrived relatively late to performance art, his interest in digital technology came early. “My dad was highly involved in technology and computers from the early days. He was in computer science,” Beira says, “so I grew up in computers from the get-go.” A child of the ’80s, Beira entered his teenage years amid the rise of the world wide web. “My first interaction with computers as a preteenager was making music. That’s how I understand the dynamics of editing, and eventually [I] started working with imagery.”
While he was studying fine art in college, “everything kind of came together when I started using a computer as my medium,” he says. “Which was not very common at that time in a traditional fine art school.” Using his rudimentary knowledge of coding (and enlisting the help of others more adept), he saw his work begin to stand out. Pursuing his master’s degree in multimedia arts, Beira began to “go a little more into the technical and the more nerdy parts” and learned more about programming and human-computer interaction research.
A scholarship allowed Beira to move to Austin to pursue his Ph.D. at the University of Texas in 2010. That same year, Microsoft released the motion-sensing camera Kinect as an add-on to its Xbox 360 console, in an effort to compete with Nintendo’s massively popular Wii. The relatively inexpensive device, coupled with the open source software released soon after, was quickly integrated into non-gaming applications in fields like robotics and medicine. The Kinect became a key element in Beira’s artistic journey, opening a whole new avenue of research for him. “Using that cheap $50 device allowed me to do very ambitious motion tracking in a 3D environment,” he says. “Traditional cameras capture light. With an infrared camera, you’re capturing data.”
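The distinction Beira draws, a depth camera returning a distance per pixel rather than brightness, is what makes his motion tracking tractable. A minimal sketch of the idea in Python (hypothetical and simplified, not Beira’s actual pipeline): with depth values, a performer can be isolated from the stage by simply thresholding distance, with no light or color analysis needed.

```python
# Toy illustration (not Datagrama's software): a Kinect-style depth frame
# stores a distance in metres for each pixel, so segmenting a performer
# reduces to checking whether each pixel's depth falls in a known range.
def segment_performer(depth_frame, near=0.5, far=2.5):
    """Return a mask marking pixels whose depth is in the performer's range."""
    return [[near <= d <= far for d in row] for row in depth_frame]

# A tiny fake 3x4 depth frame: the performer stands about 1.5 m from the
# camera, while the back wall of the stage sits about 4 m away.
frame = [
    [4.0, 4.0, 1.5, 4.0],
    [4.0, 1.5, 1.5, 4.0],
    [4.0, 1.5, 1.5, 1.5],
]
mask = segment_performer(frame)
print(sum(v for row in mask for v in row))  # 6 pixels belong to the performer
```

A real system would run this on a 640x480 depth stream at 30 frames per second and feed the resulting silhouette into the visual engine, which is what made a consumer device viable for ambitious stage work.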
Having access to that data allowed Beira to create digital art in real time during live performances. “We see people connect so much with live music; there’s something beautiful about that spontaneity,” he says. “And now, visual media and visual technologies are able to express themselves in real time as well.”
For example, the 2013 performance piece he designed, titled 3D [Embodied], used a Kinect to capture the movement of dancers who then became extended agents of 3D video mapping. The result, something like an IRL scene from Tron, was dancers moving the architecture of the 3D-rendered geometric “set” around with their bodies: pushing walls, lifting floors up to make them ceilings and creating entirely new environments generated by their movements.
The unpredictable nature of using AI to create real-time visualization is a huge part of the appeal for Beira. “We don’t do things and record and sell them,” he says. “Our software is meant to be performed in real time. It’s an algorithm that is running, not media that is playing. [It] reacts with the world as it plays.” When that algorithm uses machine learning and other elements of AI, Beira believes the work begins to have more of a soul.
“Using deep learning technologies [like DeepDream and neural networks], they will replicate some unique sense of the world based on what they learn and what they see. And every single element, every pixel becomes a seed and that seed will open up a new image,” he says. “In our case, we work with sound, with image and with video… It will open up a new piece of content that will change and generate another piece of content. Ultimately, [the result] is a surprise. We’re always mesmerized.”
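The feedback loop Beira describes, where each piece of content seeds the next one in real time, can be sketched in a few lines of Python. This is a deliberately toy illustration (all names are hypothetical, and Datagrama’s actual software uses deep learning rather than this simple rule): each frame is transformed by a rule with a random element, and the output becomes the input for the next frame, so the result is never fully predictable.

```python
# Illustrative sketch (not Datagrama's code): a generative feedback loop in
# which every frame seeds the next, loosely mirroring Beira's description of
# each pixel opening up a new image.
import random

def next_frame(frame):
    """Blend each cell with a random neighbour, plus a little noise."""
    size = len(frame)
    new = []
    for i, value in enumerate(frame):
        neighbour = frame[(i + random.choice([-1, 1])) % size]
        new.append((value + neighbour) / 2 + random.uniform(-0.05, 0.05))
    return new

random.seed(42)
frame = [random.random() for _ in range(8)]  # the initial "seed" (1-D toy image)
for _ in range(100):
    frame = next_frame(frame)  # each frame generates the next, live

print(len(frame))  # the structure persists; the content keeps surprising
```

The point of the sketch is the architecture, not the rule: because the algorithm runs rather than a recording playing, the same setup never produces the same show twice.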
Technological marvels aside, Beira’s approach to his art is, in the end, surprisingly old-fashioned: he focuses on live performance that is unique to the place, the time and the audience that sees it, so that the experience is shared collectively, the Magic of Live Theater, as it were. And that connection might be more possible for the next generations, who won’t have to live their lives looking down at a screen all of the time, because the world itself will be the screen.
Thinking about that future, Beira says, “my hope is [that] within the next years or decades, we will connect more and more with reactive environments that will be generated. Almost like nature is always growing in front of and around us. I think there are amazing benefits to be seen in using these self-regulating systems. For me the beauty will be in discovering how we experience things as a collective. To me that is the most powerful thing about AI, to be honest.”
However, as artificial intelligence continues to push the possibilities of what machines are capable of, it’s worth considering what its ultimate limitations are. A computer will always be able to draw a more perfect circle than a human hand, for instance, but will it ever feel the need to draw one unprompted—that is, to make art simply because? Beira, for one, remains unconvinced. “I think that’s a unique human feature—that urge to create. A computer can be programmed to have those principles, so it can be designed to be like that. But is it genuine? Probably not.”