Microsoft scrambles to update its free AI software after Taylor Swift deepfakes scandal
By Shannon T.
Microsoft has cracked down on misuse of its free AI software after the tool was linked to the creation of the sexually explicit deepfake images of Taylor Swift that swamped social media – and raised the specter of a lawsuit from the infuriated singer.
The tech giant pushed an update to its popular tool, called Designer – a text-to-image program powered by OpenAI’s DALL-E 3 – that adds “guardrails” intended to prevent the tool from being used to create non-consensual images, the company said.
The fake photos – showing a nude Swift surrounded by Kansas City Chiefs players in a reference to her highly publicized romance with Travis Kelce – were traced back to Microsoft’s Designer AI before they began circulating on X, Reddit and other websites, the tech-focused site 404 Media reported on Monday.
“We are investigating these reports and are taking appropriate action to address them,” a Microsoft spokesperson told 404 Media, which first reported on the update.
“We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users,” the spokesperson added, noting that per the company’s Code of Conduct, any Designer users who create deepfakes will lose access to the service.
Representatives for Microsoft did not immediately respond to a separate request for comment.