FN Meka got signed by Capitol Records, then dropped within a week amid strong backlash. For those who haven't dug too deep into this, FN was supposed to be an AI-powered virtual rapper, although his TikTok and overall social media presence made him more influencer than rapper. My discovery of the character kickstarted an article I published over a year ago, in which I placed then-current developments into a long - 100-year - history of robotic influences in music. Back then, the PR surrounding FN explained that the music and lyrics were generated by an unnamed AI program, but the voice was human - for now [this is significant, remember this bit]. As others have pointed out in the last week, there don't actually seem to be many - if any - AI tools involved in the music related to FN. And the human rapper who gave FN a real voice has just come out to say he's been completely cut off from the project after being promised equity and the like.
So, this isn't pretty, especially because of the racial implications of the character, but also because, beyond that, this seems a tone-deaf project in every way. It's pure marketing, it's purely money-driven - I mean, they sold an NFT of a toilet for 4 ETH once, and well done Don Diablo on that purchase. But there were other reactions, too, other types of backlash. Krayzie Bone, The Game, and Lil Mama, for example, came out to say that these kinds of avatars or robots will take the jobs of real rappers. And that's not how I look at AI or ML tools at all. Instead, we should see them as technologies that can aid in the creative process.
I wrapped up my piece on robot rappers last year on that notion of creativity. I focused on Shimon, an actual robot that is trained on a variety of datasets and uses a creativity algorithm to genuinely surprise human players when they play together. Of course, we could say that Shimon takes the place of a real human marimba player, but we can also think about how the robot impacts the way humans play together, and explore that. I still believe this is a better approach than being afraid. As Mat Dryhurst and Holly Herndon put it in their discussion of AI-driven image generators such as DALL·E:
“The easier it is to generate artworks, the more challenging it will be to generate distinction and meaning, as it ever was. Great Art, like AI, is very often what hasn’t been explored yet.”
In other words, giving meaning to something is the distinctly human element; it's what will distinguish what someone or something creates from anything else. It's about context and stories.
Enter Fi, an AI-powered artist who has just entered our world. Of course, there are humans behind Fi. The character came out of a collaboration between Thunderboom Records - a label that dubs itself 'the robot record label' - and Reblika, who make virtual characters. Fi is, in a sense, the antithesis to FN Meka. Thunderboom is a non-profit. Fi is a fluid artist who changes their sonic identity as well as their appearance over time. Fi won't be an autonomous artist, but will remain a digital presence that other artists can potentially make use of or work with - all the tech will be made available through open-source projects.
Basically, we shouldn't look at Fi as an artist at all. Instead, it's a collection of technologies. There are dozens of music and visual 3D technologies involved, from Magenta and Vocaloid to Musia. The human(s) involved are basically nothing more than one part of that set of technologies. But meaning only comes through Fi's interaction with artists and musicians. The idea is that Fi will jam with artists, preferably upcoming artists, and help foster creative exchange. Part of that process will be the ever-changing nature of Fi. The backstory has been processed and retrieved through GPT-3, and following iterations will also be generated with that same tool. For now, Fi is named after the first mouse astronaut and has come to our planet from space. As for Thunderboom, their focus is mainly to explore whether this collection of technologies can be brought to life - whether it can be given meaning - without major budgets. To put that differently: how can we give more artists the experience of working with an avatar who can fly through Fortnite, without requiring the budgets that the likes of Ariana Grande or Travis Scott, or Epic, bring to the table?
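To make that backstory-generation step a little more concrete: a minimal sketch of how a GPT-3-style pipeline could draft (and later re-draft) a character's origin story. The prompt wording, the `build_backstory_prompt` helper, and the character traits below are my own illustration, not Thunderboom's actual setup; the commented-out call shows roughly what the request to the model would look like.

```python
# Hypothetical sketch of prompting a text model for a virtual artist's
# backstory. The helper and traits are illustrative, not Fi's real pipeline.

def build_backstory_prompt(name: str, traits: list[str]) -> str:
    """Assemble a text prompt describing the character for the model."""
    trait_list = ", ".join(traits)
    return (
        f"Write a short origin story for {name}, a virtual musician. "
        f"Key traits: {trait_list}. "
        "Keep it under 100 words and leave room for future revisions, "
        "since the character's identity will keep changing."
    )

prompt = build_backstory_prompt(
    "Fi",
    ["named after the first mouse astronaut",
     "arrived on Earth from space",
     "fluid sonic and visual identity"],
)
print(prompt)

# The actual model call (requires an API key, so commented out here)
# would look roughly like this with the OpenAI completions API:
#
# import openai
# openai.api_key = "..."
# story = openai.Completion.create(
#     model="text-davinci-003", prompt=prompt, max_tokens=150
# ).choices[0].text
```

Because each iteration of the character can feed a fresh set of traits into the same prompt, the backstory can evolve alongside Fi's identity rather than being fixed once.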
There are more experiments, of course. In last year's article I already talked about Holly Herndon's Spawn, which has since evolved into the Holly+ project, which allows other artists to sing through Herndon's twin voice. Holly+ runs through a DAO, and there are more experiments involving AI generation and music in the Web3 ecosystem. One of these is WVRPS [disclaimer: I own WVRPS], which pairs a set of virtual avatars with AI composition technologies to create the music these avatars pump out. As with Fi, WVRPS is almost more a collection of technologies, with the added experiment of seeing how this works within a community of humans. While the focus has so far been firmly on the four avatars, it's shifting towards more human-involved play elements through, for example, The Lab. On this website, people can play around with the sounds and visuals of each avatar, encouraging play and creativity.
AI-powered robot artists are coming, and what they will be like, or what their role within a creative ecosystem will be, is still up for discussion and experimentation. Unfortunately, there will be more disasters of the FN Meka x Capitol variety. Let's not let that curb our enthusiasm about the future of music and AI technologies. Follow experiments like Fi and see what happens when the tech meets the artists.