Tech News
It was a scandal. Third-party contractors, many working from places where data protection legislation is lax (or even non-existent), were listening to and transcribing recordings of conversations from people who wrongly believed robots were doing all the work. Conversations about work, finance, and romance were snooped on, analyzed, and then typed up.
I’m not talking about Facebook, or Apple, or Microsoft, or Google, or Amazon.
In fact, I’m not talking about any of the big tech companies, all of whom are currently under scrutiny over their use of humans in AI transcription.
Instead I’m talking about Spinvox. You might not be familiar with this now-defunct British startup. During the late 2000s, it claimed to have an accurate, automatic transcription service for voicemails (back when such a thing was genuinely technologically impressive). In the process, it raised more than $200 million from gullible investors.
[Image: The Spinvox site, circa 2009]
However, almost exactly ten years ago, things fell apart. Thanks to some dogged investigative journalism, it transpired that the company relied almost exclusively on third-party contractors based in the developing world to transcribe messages into text.
The AI bit? It was just marketing. The real work took place in halogen-lit offices across South Africa and the Philippines.
It’s interesting to see how the same issues we’re facing today played out almost ten years ago. Users of the service were aghast: they hadn’t been informed that their messages, many of which were deeply sensitive, were being listened to by ordinary workers. Journalists, most notably the BBC’s veteran technology correspondent Rory Cellan-Jones, suggested the company was breaking European data protection rules.
There was also a genuine sense of betrayal from users, who felt profoundly misled, much as people do now; we’re seeing the same dynamic play out with the current tech giants. They were promised one thing and got another. It was, by any definition of the term, a bait-and-switch.
In the end, Spinvox sold for a fraction of its peak value to Nuance, the US voice-recognition titan.
This leads into a bigger point. One that transcends individual companies, like Spinvox, or Facebook, or Apple, or Microsoft, or Google, or Amazon.
We need to have a wide and open conversation about how AI works, and crucially, how it’s marketed. At the moment, the prevailing layman’s perspective is that it’s an entirely machine-driven process, with humans completely removed from the equation.
The reality is completely different. Humans create the models. Humans help train the models (using real-world data). And humans do the quality-assurance work that keeps those models on track, checking their output and correcting them when they get things wrong.
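To make that concrete, here’s a minimal sketch, in Python, of the human-in-the-loop routing that services like this tend to depend on. Everything in it is hypothetical: the function names, the 0.85 confidence threshold, and the stub implementations are illustrative assumptions, not Spinvox’s (or anyone’s) actual code.

```python
# A hypothetical "automatic" transcription pipeline that quietly routes
# low-confidence results to human reviewers. Illustrative only.
from dataclasses import dataclass


@dataclass
class Transcript:
    text: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0


def model_transcribe(audio: bytes) -> Transcript:
    # Stand-in for a real speech-to-text model; here it always hedges.
    return Transcript(text="(machine guess)", confidence=0.4)


def human_review(audio: bytes, draft: Transcript) -> Transcript:
    # Stand-in for the contractor step: a person listens to the audio and
    # corrects the draft. This is the part the marketing copy leaves out.
    return Transcript(text="(human-corrected text)", confidence=1.0)


CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; a real system would tune this


def transcribe(audio: bytes) -> Transcript:
    draft = model_transcribe(audio)
    if draft.confidence < CONFIDENCE_THRESHOLD:
        # Low-confidence audio is escalated to a human; the corrected
        # (audio, text) pair typically also becomes new training data.
        return human_review(audio, draft)
    return draft


if __name__ == "__main__":
    result = transcribe(b"\x00" * 16)  # dummy audio bytes
    print(result.text, result.confidence)
```

The telling design choice is that, from the caller’s perspective, transcribe() looks entirely automatic; the human step is invisible, which is exactly the disclosure problem at issue here.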
In that sense, you could call the term “artificial intelligence” a bit of a misnomer, as AI applications are fundamentally grounded in human intuition and knowledge. You could even argue that consumers are being routinely misled, thanks to the business practices of the companies operating within this burgeoning space.
Right now, the AI sector suffers from a dearth of informed consent. People don’t know how their data is being processed within AI applications. For example, the EULA for the Microsoft-owned Skype Translator made no explicit mention of the fact that recordings might be listened to by humans.
And that matters because these tools are being routinely used in the most commercially sensitive, and even romantically intimate, contexts.
Artificial intelligence presents an enormous opportunity for humanity. It makes our lives more convenient and holds tremendous promise in sectors like healthcare and transportation. But I worry that the next scandal could produce massive public outrage, followed by an inevitable legislative crackdown that arrests that progress.