This Week In AI: Social Media’s Big Tobacco Moment and AI’s Power Shift

A jury holds Big Tech liable for addictive design; AI giants pivot sharply: OpenAI doubles down on infrastructure, Anthropic balances safety with scale, and India’s Sarvam draws global capital

Last Updated: Mar 27, 2026, 17:25 IST · 7 min read
(File) Meta CEO and Chairman Mark Zuckerberg (C) leaves the Los Angeles Superior Court after testifying in the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, on February 18, 2026. Photo by Apu Gomes / AFP
In a Nutshell
A US jury found Meta and YouTube liable for addictive social media design, marking a “Big Tobacco moment.” Meanwhile, OpenAI shifts to AI infrastructure, Anthropic balances safety and power, India’s Sarvam attracts major funding, and Alibaba unveils a new AI chip.

In the early days, social media felt harmless—even joyful. It helped us stay in touch, share milestones, reconnect with old friends. Many millennials still remember life before it: Conversations that ended when you left the room, photos that lived in albums instead of feeds, a clear divide between real life and online life. Social media was something we used, not something that shaped how we lived.

Over time, that balance began to shift. As platforms grew more powerful—and more profitable—the pull to stay online grew stronger. Today, the average person spends over two hours a day on social media, accounting for more than a third of all time spent online globally. Teenagers spend far more. In the US, teens now average nearly five hours a day on social platforms, with the heaviest users significantly more likely to report anxiety, depression, and suicidal thoughts.

India is no exception—if anything, the shift has been faster. A national LocalCircles survey found that 60 percent of Indian children aged 9 to 17 spend more than three hours a day on social media or gaming platforms, with a sizeable minority online for six hours or more. Mental health professionals across the country have linked prolonged, late night scrolling to sleep disruption, anxiety, attention issues, and rising depressive symptoms among teenagers—particularly girls. In less than a decade, social media in India has moved from novelty to a default environment for millions of young users.

What’s changed isn’t just the amount of time spent online, but how platforms are designed. Endless scrolling, algorithm driven recommendations, push notifications, and short video loops aren’t accidental features; they are engineered to keep users engaged for as long as possible. Increasingly, some young people find themselves living for their “reel life”—chasing likes, views, and validation—while real world routines, relationships, and boundaries quietly erode.

This week, those long running concerns crossed a critical threshold. A US jury ruled—for the first time—that social media companies can be legally responsible for harm caused by addictive design. In a landmark verdict, Meta and YouTube were found liable for building products that encouraged compulsive use and contributed to serious mental health consequences in a young user—a decision many are already calling the tech industry’s “Big Tobacco moment”.

A “Big Tobacco” Moment for Social Media

A US jury has delivered a landmark verdict in the first-ever social media addiction trial, holding Meta and YouTube legally responsible for designing products that harmed a child user. Jurors in Los Angeles ruled that features such as algorithm driven recommendations, constant notifications, and engagement maximising design created addictive usage patterns that significantly contributed to a young woman’s depression and suicidal ideation. The court awarded $3 million in compensatory damages and $3 million in punitive damages, with Meta shouldering 70 percent of the liability and YouTube 30 percent.

Crucially, the jury found both companies negligent and said they failed to adequately warn young users about potential risks—a finding advocates are already calling the tech industry’s “Big Tobacco moment”. While Meta and Google have said they will appeal, legal experts warn this verdict could open the floodgates to hundreds of similar lawsuits, potentially forcing major changes to how social platforms are designed, how risks are disclosed, and how children are protected online.

Read more: US’s social media addiction trial and similar bans around the world

Dario Amodei, co-founder and CEO of Anthropic. Photo by Julien de Rosa / AFP

Anthropic’s Two Track Strategy: Safer AI—and More Powerful AI

Anthropic is moving in two directions at once: Making AI agents safer to use while also pushing toward far more capable models. This week, the company detailed a new feature called “auto mode” for Claude Code, its AI tool for programmers. The idea is simple: Reduce the time developers spend clicking “approve” while still keeping guardrails in place.

Today, Anthropic says users approve 93 percent of permissions anyway, which leads to “approval fatigue”. Auto mode strikes a middle ground. Instead of asking for constant human sign off or allowing unrestricted access, it uses layered AI classifiers to decide when an action is safe. The system blocks risky behaviour—like deleting critical files, scraping credentials, or leaking data—while allowing routine coding tasks to run smoothly. Anthropic positions this as a practical way to deploy agentic AI without letting helpful models overstep user intent.
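The layered-check idea described above can be sketched in a few lines of code. This is not Anthropic’s implementation—every name, rule, and threshold below is an illustrative assumption—but it shows the general pattern: each proposed agent action passes through a stack of checks, where routine work is auto-approved, clearly dangerous behaviour is blocked outright, and sensitive cases are escalated to a human.

```python
# Illustrative sketch of layered permission gating for an AI coding agent.
# All action kinds, rules, and verdicts here are hypothetical assumptions,
# not Anthropic's actual "auto mode" logic.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    kind: str    # e.g. "edit_file", "run_command"
    target: str  # file path or command line

Verdict = str  # one of "allow", "block", "ask_user"

def block_destructive_ops(a: Action) -> Verdict:
    # First layer: hard rules against clearly dangerous behaviour,
    # such as recursive deletes or exfiltrating data over the network.
    if a.kind == "run_command" and ("rm -rf" in a.target or "curl" in a.target):
        return "block"
    return "allow"

def protect_credentials(a: Action) -> Verdict:
    # Second layer: anything touching secrets still needs a human sign off.
    if any(s in a.target for s in (".env", "id_rsa", "credentials")):
        return "ask_user"
    return "allow"

LAYERS: List[Callable[[Action], Verdict]] = [block_destructive_ops, protect_credentials]

def decide(action: Action) -> Verdict:
    # Run the layers in order; the first non-"allow" verdict wins.
    for layer in LAYERS:
        verdict = layer(action)
        if verdict != "allow":
            return verdict
    # Routine coding tasks fall through and proceed without a prompt.
    return "allow"
```

In this sketch, editing an ordinary source file returns "allow" with no prompt, while a destructive shell command is blocked and a touch on a credentials file escalates to the user—mirroring the middle ground between constant sign off and unrestricted access. A production system would replace the string-matching layers with trained classifiers.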

At the same time, reports suggest Anthropic is internally testing a much more powerful model, codenamed Mythos, after references to it appeared in a data leak. The company has described Mythos as a “step change” in capability, hinting at greater autonomy and general reasoning power. While no release timeline has been confirmed, the acknowledgement signals something important: AI makers are already preparing for systems that will need stronger safety, oversight, and governance frameworks from day one—not as an afterthought.

OpenAI is moving away from flashy consumer experiments and doubling down on becoming core infrastructure for the AI economy. Photo by Shutterstock

What’s Really Going On at OpenAI

At first glance, OpenAI’s past few weeks can seem chaotic: It’s shutting down products, hiring aggressively, and raising eye watering amounts of money all at once. But zoom out, and a clear strategy emerges.

OpenAI is moving away from flashy consumer experiments and doubling down on becoming core infrastructure for the AI economy.

Take Sora, its much hyped AI video product. Despite the buzz—and a planned $1 billion deal with Disney—Sora was expensive to run, legally messy, and soaked up huge amounts of computing power. OpenAI pulled the plug not because video isn’t impressive, but because it isn’t the best use of scarce GPUs right now. Those resources are being redirected to areas that bring predictable revenue and long term control: Enterprise AI tools, coding agents, and next generation foundation models.

At the same time, OpenAI is planning to nearly double its workforce to about 8,000 employees by the end of 2026. This isn’t about building more demos. The hiring is focused on engineers, product teams, sales, and “technical ambassadors”—people who actually help companies deploy AI inside banks, hospitals, software firms, and governments. In other words, OpenAI wants to be the AWS or Microsoft Office of AI, not just the company with the coolest chatbot.

Read more: Sam Altman On Future Of AI Models, India And Democratizing Power

All of this explains the $120 billion funding round, the largest in tech history. That money isn’t sitting idle. Running frontier AI now costs billions a year, largely due to computing power, data centres, and energy. The raise gives OpenAI enough capital to build massive infrastructure, lock in long term compute supply, and stay competitive with rivals like Anthropic and Google — both of whom are gaining ground, especially with business customers.

Seen together, the moves are less contradictory than they appear. OpenAI is pruning what doesn’t scale cleanly (like consumer AI video), investing heavily in what does (enterprise AI and infrastructure), and raising historic sums to make sure it can afford to operate at that scale.

Put simply: OpenAI is no longer acting like a research lab or a startup. It’s behaving like a critical global utility—and reorganising itself accordingly.

Read more: SaaS isn’t dead; The way we think about it will change: OpenAI’s Brad Lightcap

Pratyush Kumar, Co-Founder and CEO at Sarvam. Photo by Amit Verma

Sarvam’s Big Backers Signal India’s AI Moment

India’s homegrown AI ambitions may be getting their strongest validation yet. Sarvam AI, the Bengaluru based startup building large foundation models from scratch, is in talks to raise $200–250 million in a new funding round that could value it at around $1.5 billion, according to reports. The prospective investors include heavyweight names: Nvidia, venture capital firm Accel, and IT services major HCLTech.

If the round closes as planned, it would be the largest capital infusion into a pure play Indian AI startup to date and could make Sarvam India’s first unicorn of 2026. The roster of investors is telling. Nvidia’s interest reflects Sarvam’s close alignment with the chipmaker’s AI infrastructure, Accel brings global scaling and market access, and HCLTech’s involvement would mark a rare move by a large Indian IT services firm into taking a strategic stake in an AI startup—potentially accelerating real world enterprise deployments.

Alibaba has unveiled what it calls a next generation server chip built for agentic AI. Photo by Shutterstock

Alibaba Builds the Hardware for Agentic AI

Alibaba has unveiled what it calls a next generation server chip built for agentic AI, highlighting China’s push to tightly integrate AI software and hardware. The new processor, called XuanTie C950, runs at 3.2 GHz, is built on the open source RISC-V architecture, and is claimed to be the world’s highest performing RISC-V CPU, delivering more than three times the performance of its predecessor.

Developed by Alibaba’s semiconductor arm T-Head and showcased at a DAMO Academy conference, the chip is designed for high performance cloud computing and autonomous AI workloads—the kind needed to run AI agents that can plan, reason, and act independently. It pairs neatly with Alibaba’s recent launch of Wukong, an enterprise platform designed to orchestrate AI agents for business tasks.

The bigger picture is strategic: As AI model prices fall and competition intensifies, Chinese tech giants are moving down the stack, designing their own silicon to gain efficiency, control costs, and reduce dependence on foreign chipmakers.

First Published: Mar 27, 2026, 17:40

