Sony has reportedly moved to remove more than 135,000 AI-generated deepfake tracks from streaming platforms, after a huge number of fake songs imitating real artists were found circulating online.
And honestly, that number alone says enough.
This is way past the point of being a weird little internet experiment. We are now talking about more than a hundred thousand fake uploads built to sound like real artists, released into the same spaces where people actually go to discover music every day.
That is where it starts getting messy.
Because this is not only about copyright anymore. It is also about identity, trust and how easily fake music can blend into real artist ecosystems before anyone notices.
A few years ago, this kind of thing would have sounded futuristic. Now it is already happening at scale, and labels are clearly starting to realise just how fast this can become a serious problem if nobody gets ahead of it.
The bigger issue is that this probably is not even close to the full picture.
If one major company has already had to deal with more than 135,000 fake tracks, there is a very real chance that far more is sitting across platforms right now, either unnoticed or not yet properly challenged.
And that is where the conversation starts becoming a lot bigger than just Sony.
Because once AI gets good enough to closely imitate tone, voice, style and artist identity, the whole music space enters a very strange area. One where the line between inspiration, imitation and outright misuse gets harder and harder to draw.
AI in music is not slowing down. If anything, it is only getting more advanced, more accessible and easier to use.
So the real question now is not whether this will keep happening.
It is how far it goes before platforms, labels and the wider industry put proper systems in place to deal with it.
Because if they do not, this is only going to get much bigger from here.