When medtech doesn’t let you say what you want
AI voice clone platforms like those from ElevenLabs are a boon for people who’ve lost their voices due to motor neuron disease.
However, it turns out that users aren’t allowed to say whatever they want.

Medtech censorship?: An ElevenLabs user from the U.K. lost access to the platform after apparently using lewd language at home while speaking to her husband.
- MIT Technology Review interviewed the woman, who was first warned and then banned from the platform for using “inappropriate language.”
- While she doesn’t remember exactly what she said, she recounts the flagged conversation as “normal British banter between a couple getting ready to go out.”
After escalating the issue to ElevenLabs, the woman received an apology and had her account reinstated, but she is still unsure why she was banned in the first place.
ElevenLabs has since stated that language like the woman’s is no longer restricted. On its website, the company even featured another user’s comedy set, performed with his voice clone, that contained many curse words.
Rules are necessary: ElevenLabs’ own prohibited use policy makes sense when you think of ElevenLabs as a communication platform.
- Some of its rules prohibit threatening child safety, engaging in illegal behavior, providing medical advice, impersonating others, and interfering with elections. These rules are common to many online platforms and shield them from liability.
- However, these rules don’t apply to voice banking, the typical alternative to tools like ElevenLabs’. That approach encourages users to record set phrases in their own voice before it deteriorates too much, including whatever language they would like to use.
AI language woes: As concerns about the language AI tools output persist, especially in medical applications, we’re bound to see more case studies of AI tools saying harmful things we’d rather they didn’t.
- This story offers an interesting counterpoint: an unintentional over-correction against language chosen not by the AI tool, but by the human user.
- It’s also an interesting consequence of the privacy users must sometimes relinquish to benefit from innovation. By bringing a medtech tool into their intimate conversations, they risk being misunderstood not just by their conversational partner but also by the human and machine moderators at the developer’s HQ.