Launched in May 2025 at Google’s annual I/O developer conference, Google Veo 3 is the tech giant’s direct challenge to Microsoft-backed OpenAI’s video generation model, Sora. Developed by Google DeepMind, the advanced model marks a major leap in generative AI, promising high-quality, realistic video creation from text or image prompts.
But in an age flooded with misinformation and deepfakes, a tool like Veo 3, with its ability to produce lifelike video and synchronised audio, raises pressing questions for journalism. It opens new creative possibilities, yes, but also invites serious challenges around credibility, misuse, and editorial control.
What is Google Veo?
Veo 3 touts itself as a “cutting-edge tool” offering “unmatched realism, audio integration, and creative control”. It comes at a high price, $249.99 a month under the AI Ultra plan, and is currently available in the US and 71 other countries, excluding India, the EU, and the UK. Ethical concerns loom, but Google pitches Veo as a powerful resource for filmmakers, marketers, and developers.
According to Google, Veo 3 can generate 4K videos with realistic physics, human expressions, and cinematic style. Unlike many competitors, it also produces synchronised audio (dialogue, ambient noise, background music), adding to the illusion of realism.
The model is designed to follow complex prompts with precision, capturing detailed scenes, moods, and camera movements. Users can specify cinematic techniques like drone shots or close-ups, and control framing, transitions, and object movement. A feature called “Ingredients” allows users to generate individual elements, such as characters or props, and combine them into coherent scenes. Veo can also extend scenes beyond the frame, adjust objects, and maintain visual consistency with shadows and spatial logic.
Google’s website features examples of Veo in action, including projects in marketing, social media, and enterprise applications. The Oscar-nominated filmmaker Darren Aronofsky used it to create a short film, Primordial Soup. On social media, AI artists have released viral Veo clips like Influenders, a satire featuring influencers at the end of the world.
Veo 3 is integrated into Google’s AI filmmaking tool Flow, which allows intuitive prompting. Enterprise access is available via Vertex AI, while general users in supported countries can use it through Google’s Gemini chatbot.
The journalism dilemma
Veo’s features raise alarms about potential misuse. It could facilitate the creation of deepfakes and false narratives, further eroding trust in online content. There are also broader concerns about its economic impact on creators, legal liabilities, and the need for stronger regulation.
The risks are not theoretical. As highlighted in a June 2025 TIME article, titled “Google’s Veo 3 Can Make Deepfakes of Riots, Election Fraud, War”, Veo was used to generate realistic footage of fabricated events, such as a mob torching a temple or an election official shredding ballots, paired with false captions designed to incite unrest. Such videos could spread rapidly, with real-world consequences.
A screen capture from a video depicting election fraud, generated by TIME using Veo 3. Realistic footage of fabricated events, paired with false captions designed to incite unrest, could spread rapidly with real-world consequences.
| Photo Credit: By Special Arrangement
Cybersecurity threats, like impersonating executives to steal data, are also plausible, alongside looming copyright issues. TIME reported that Veo may have been trained on copyrighted material, exposing Google to lawsuits. Meanwhile, Reddit forums cite personal harms, such as a student jailed after AI-generated images were falsely attributed to them.
There is also the threat to livelihoods. AI-generated content could displace human creators, particularly YouTubers and freelance editors, accelerating what some call the “dead internet”: a space overrun by AI-generated junk media.
To mitigate risk, Google claims that all Veo content includes an invisible SynthID watermark, with a visible one on most videos (though it can be cropped or altered). A detection tool for SynthID is in testing. Harmful or misleading prompts are blocked, but troubling content has still emerged, highlighting the limits of guardrails.
What should newsrooms do?
Despite the risks, Veo presents compelling opportunities for journalism, particularly for data visualisation, explainer videos, recreating historical events, or reporting on under-documented stories. It can help small newsrooms produce professional-quality videos quickly and affordably, even for breaking news.
Used responsibly, Veo could enhance storytelling: turning eyewitness accounts of a disaster into a visual narrative, for instance, or transforming dry data into cinematic sequences. Prototyping ideas before committing to full production becomes more feasible, especially for digital-first outlets.
But Veo’s strengths are also its dangers. Its ability to produce convincing footage of events that never happened could destabilise the information ecosystem. If deepfakes flood the news cycle, real footage may lose credibility. The visible watermark is easily removed, and Google’s SynthID Detector remains limited in scope, giving malicious actors room to operate undetected.
To maintain public trust, newsrooms must clearly disclose when content is AI-generated. Yet the temptation to pass off fabricated visuals as real, especially in competitive, high-pressure news environments, will be strong. And since AI outputs reflect their training data, biases could creep in, requiring rigorous editorial scrutiny.
There is also the human cost. Veo’s automation could eliminate roles for video editors, animators, and field videographers, especially in resource-strapped newsrooms. Journalists may need to learn prompt engineering and AI verification just to stay afloat.
The legal landscape is murky. If an outlet publishes an AI-generated video that causes harm, accountability is unclear. Ownership of Veo-generated content also remains opaque, raising potential copyright disputes.
And then there is the burden of verification. Fact-checkers will face a deluge of synthetic content, while reporters may find their own footage treated with suspicion. As the Pew Research Center reported in 2024, three in five American adults were already uneasy about AI in the newsroom.
A critical juncture
As Veo and tools like it become cheaper and more widely available, their impact on journalism will deepen. The challenge is not merely to resist the tide but to adapt: ethically, strategically, and urgently.
According to experts, newsrooms must invest in training, transparency, and detection tools to reap the creative rewards of AI while safeguarding credibility. Innovation and trust must evolve together. If journalism is to survive this next phase of disruption, it must do so with eyes wide open, they say.
(Research by Abhinav Chakraborty)