Viral AI image trends and their dark side
AI photo generation is largely used for entertainment, but it also raises concerns about safety, privacy, and the risk of biometric data being collected and stored.

Prompt: “Create a 4K HD portrait in a polaroid-style photo of me hugging Tom Cruise. Add soft lighting and a handwritten caption that says: ‘Forever Yours’. Make Tom wear a kurta pyjama and make me wear a retro Bollywood-style saree.”
This was a prompt that Sanya, an art director at a Mumbai-based ad agency, gave Google's Gemini 2.5 Flash Image model, which allows users to transform their photos into stylised artificial intelligence (AI)-generated images—from 3D figurines to retro Bollywood portraits. The result, says Sanya, was "better than expected", judging by the likes and comments the photo drew when she posted it on Instagram.
For the past month, social media has been flooded with images of girls in beautiful sarees, with perfect makeup and hair, or 3D figurines of people—alone or with friends or partners, or polaroid images of people hugging or kissing their favourite celebrities (from Shah Rukh Khan to Virat Kohli). The images look so real that the improbability of such a thing ever happening goes out of the window.
The internet's favourites have been Instagram posts in which girls have shared Gemini-generated photos of themselves with Conrad Fisher from the hit series The Summer I Turned Pretty, with captions like 'Because Connie deserves better, and that's me', alluding to the show's viral debate over the protagonist choosing between two brothers.
Another reason the trend has caught on is that it allows people to create lifelike photographs with those who are no longer with them. These images give people a sense of 'digital closure', allowing them to imagine moments they never got to experience.
The Nano Banana trend—Nano Banana being the nickname for Gemini 2.5 Flash Image—came after another viral AI image trend, Ghibli-style art, took the internet by storm. Instagram was filled with images of people rendered in their best Japanese animated form, set against beautiful backdrops and scenarios. Inspired by the whimsical and emotionally rich animation style of Japan's Studio Ghibli, the trend started on ChatGPT's paid version and was later picked up by Indian users, for free, on other AI tools like X's Grok. From Dilwale Dulhania Le Jayenge-style train scenes rendered in watercolour tones to portraits of grandparents against Ghibli-style village backdrops, the trend became a favourite. Public figures like cricketing legend Sachin Tendulkar and politician Shashi Tharoor also joined in, sharing Ghibli-style versions of personal milestones.
However, as with most things on social media, the dark and neglected side of these viral trends soon started surfacing.
The Ghibli trend faced backlash from those in the design and animation community, who said it was an insult to their work. Amid the furore, a video of legendary animator and Studio Ghibli co-founder Hayao Miyazaki also surfaced online which purportedly showed him slamming AI as “an insult to life”, though some users pointed out that it was taken out of context.
Concerns about the Nano Banana trend started when a woman posted a Gemini-generated photograph of herself that showed a mole on her arm—one that was not visible in the original image she had shared with the AI tool as a prompt. The woman called the experience "scary and creepy" and urged users to be careful while sharing images with AI.
Experts have raised red flags about the ethical, legal and psychological implications of these technologies. What began as a playful way to reimagine selfies or restore old family photos has evolved into a complex digital phenomenon, one that raises hard questions of safety and consent. Experts say that with millions of users uploading personal images to platforms like Google Gemini, concerns around data privacy, AI hallucinations and cybersecurity threats are becoming increasingly urgent. There is an ongoing debate on social media and on platforms like Reddit about whether tools designed to entertain could quietly collect sensitive biometric data.
Amid these concerns, Google said its Gemini AI image generator was designed with responsibility in mind and is consistent with its AI principles. "To ensure that there's a clear distinction between visuals created with Gemini and original human artwork, Gemini uses an invisible SynthID (a technology that embeds digital watermarks into AI-generated content) watermark, as well as a visible watermark to show that they are AI-generated," the company wrote on its website.
However, experts feel this is not enough. AI safety researcher Ben Coleman said in an interview that watermarking sounds like a noble and promising solution, but its real-world applications fail from the outset because watermarks can easily be faked, removed or ignored. Hany Farid, a professor in the department of electrical engineering and computer sciences at UC Berkeley, agrees. He wrote in a report that watermarking is not robust enough, especially in the age of deepfakes and identity theft.
First Published: Oct 06, 2025, 12:14