OpenAI now creates fake videos. Hollywood and politicians tremble
OpenAI, the Californian company that wowed the world with ChatGPT, has left everyone speechless again with Sora, an artificial intelligence capable of generating realistic videos from text alone.
Just as ChatGPT was trained on billions of texts, Sora was trained on a vast archive of videos. Based on what it has learned, Sora can transform user instructions – so-called “prompts” – into moving images.
The result is magical. In one of the videos published by OpenAI, a woman walks down a street in Tokyo. Every instruction given to the artificial intelligence is obeyed. "The woman is wearing sunglasses, a jacket, a red dress and black boots." Said, done. "Other people are moving behind her." Said, done. "The asphalt is wet and reflects the neon lights." Done, that too. The AI's improvised camera direction is just as impressive: it opens with a wide shot that establishes the urban setting.
Then it cuts to a close-up of the woman. In the one-minute clip – the maximum length Sora can produce for now – only two things betray the artificial nature of the images: a small OpenAI watermark, “proof” that the content comes from an artificial intelligence, and the occasionally unsteady footsteps of the woman in red. OpenAI has also used Italy to showcase its creation. Sora – which means “sky” in Japanese – can conjure an Amalfi-style seaside town filmed by a drone, or a dog trotting among the houses of Burano.
But these are just demonstrations. Sora, in fact, cannot yet be tried, and we don't know whether, once it opens to the public, it will respond to user requests in an equally surprising way. Even so, the demos were enough: OpenAI has just completed a deal that values the San Francisco company at $80 billion, tripling its valuation in 10 months. According to the New York Times, the company, which counts Microsoft among its backers, will sell existing shares in a tender offer led by the venture capital firm Thrive Capital.
OpenAI is currently assessing the risks of this technology, and there will be work to do. Used maliciously, a tool like Sora could dramatically multiply deepfakes, those videos in which algorithms make an individual – often a celebrity or politician – say (or do) things they never said or did. Filters and censorship, which are supposed to prevent the generation of violent, offensive and obscene content, are not enough. Pop star Taylor Swift knows something about this: she was recently "undressed" by Microsoft's artificial intelligence despite the safeguards the company had adopted. It is easy to imagine the misinformation Sora could stir up if someone found a way to use it to clone Joe Biden, Donald Trump, or anyone else running for the White House, or for a seat in the European Parliament, in 2024.
Sora worries even Hollywood and, more generally, anyone who earns a living making real videos. "Sam, please don't make me homeless," MrBeast, the world's most-followed YouTuber, wrote ironically to Sam Altman, CEO of OpenAI. But there is little to laugh about. A recent study from Carnegie Mellon University in Pittsburgh found that generating a single image with artificial intelligence consumes roughly the energy needed to charge a smartphone. How much will be needed for a video lasting a few seconds? Then there is the issue of copyright. Which videos did Sora "study"? OpenAI says it used publicly available, copyright-free videos. But the company has already faced several lawsuits in the past – one of them from the New York Times – for using data without permission.
Originally published on bota.al