India’s AI Future at Stake: Why the Proxy Culture Wars with Tech Companies Must End


Last week, a seismic shift rippled across the globe when a subsidiary of a Chinese hedge fund unveiled DeepSeek R1, a reasoning AI model that is not only partly open-source but also competes with some of the best models on key benchmarks. The frenzy surrounding DeepSeek quickly became a global talking point, including in India.

Some arguments online were valid, others misplaced. A common critique of Indian tech companies is that the country cannot develop frontier AI labs because its tech industry is built on cheap labour. Critics argue that this is why the country is stuck producing food delivery and quick commerce apps rather than pushing the boundaries of AI and research.

To some extent, this is fair. The rise of quick commerce and food delivery services is undeniably driven by the availability of cheap, scalable labour; there is no real counterpoint to that. However, the discussion should not end there. Indian quick commerce apps are among the best in the world in terms of design and user experience.

The lack of frontier AI labs is not the result of some inherent inability but rather a consequence of structural issues: researchers in India are grossly underpaid, and the country lacks the graphics processing unit (GPU) clusters necessary for training cutting-edge models. In other words, while Indian techies may not excel in one area, they do in others, such as app design and optimisation.


India will eventually develop frontier AI labs, especially as technological advances, architectural innovations, software optimisation, and economies of scale take hold and GPU shortages ease. But for that to happen, India must also confront and, in parallel, solve the deeper systemic issues that hold us back: the exploitation of cheap labour, inadequate pay for researchers, and the broader social biases and divisions that permeate our society.

In June 2024, Meta launched Meta AI in India, an AI assistant and chatbot powered by Llama 3, a leading foundational model that is open-source (a claim that remains debatable). However, within a week of its launch, the platform faced a backlash, with #BoycottMetaAI and #ShameOnMetaAI trending on X. The chatbot faced allegations of being Hinduphobic. The reason? Some users asked it to crack jokes about the Prophet of Islam, which it refused to do, but when prompted to joke about Hindu deities, it complied. The irony was hard to miss: those who took offence did so only after deliberately testing the chatbot against their own religious dogma.

What was even more surprising was the involvement of prominent figures in amplifying the hashtag. Among those who pushed it were Arun Yadav, the State head of social media for BJP Haryana, and Vishva Hindu Parishad leader Sadhvi Prachi. A closer analysis of the trend data suggests patterns indicative of a coordinated hashtag campaign.

Fig. 1: Tweet volume by hour on June 30, 2024 (GMT). The chart visualises the number of tweets posted per hour, showing a significant spike at 7 am GMT followed by a gradual decline throughout the day. The peak activity suggests possible coordinated engagement or a major event occurring early in the morning.
Data source: Twitter API | #BoycottMetaAI OR #ShameOnMetaAI
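How does one surface such a spike? Below is a minimal sketch of the kind of hourly-volume analysis described above, written in Python with pandas. It assumes a hypothetical export, tweets.csv, with a created_at timestamp column for tweets matching the two hashtags; the file name, column name, and spike threshold are illustrative assumptions, not the author's actual method.

```python
# Minimal sketch: bin tweet timestamps by hour to surface burst patterns.
# Assumes a hypothetical "tweets.csv" with a "created_at" column, e.g. an
# export of tweets matching #BoycottMetaAI OR #ShameOnMetaAI.
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# Count tweets per hour; Twitter API timestamps are in UTC (GMT).
hourly = tweets.set_index("created_at").resample("1h").size()

# Flag hours whose volume sits far above the day's typical level.
# A sudden, synchronised spike is one weak signal of coordinated posting.
threshold = hourly.median() + 3 * hourly.std()
print(hourly[hourly > threshold])
```

A volume spike alone proves nothing; researchers typically corroborate it with account creation dates, retweet networks, and near-identical post text before inferring coordination.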

Meta is not the only company to find itself embroiled in controversy over seemingly trivial matters. Two years earlier, OpenAI's ChatGPT faced similar outrage. On January 7, 2023, Mahesh Vikram Hegde, the founder of Postcard News, a platform with a history of spreading misinformation, shared a screenshot on X. According to Hegde's X bio, Prime Minister Modi follows him. The screenshot showed the chatbot agreeing to crack a joke about a Hindu deity but refusing to do so when asked about Prophet Muhammad or Jesus. Hegde captioned the post, "…Wonderful hatred towards Hinduism!" The post garnered 410.6K views over its lifetime.


Nearly 10 days later, one of India's most-watched Hindi private satellite TV channels aired a half-hour segment on ChatGPT during its prime-time slot, fuelled by the outrage that had erupted on X. Throughout the broadcast, tickers flashed provocative headlines such as "High-Tech Conspiracy Against the Hindu Religion", "ChatGPT is a Hub of Anti-Hindu Ideas", and "AI Spews Code Full of Venom".

It was hardly surprising to see that both the host and the correspondent presenting the so-called "evidence" of blasphemy had little to no understanding of the technology they were reporting on.

While Meta and OpenAI never publicly acknowledged these controversies, close observers of the sector could sense their underlying anxiety manifesting in subtler ways. OpenAI's first hire in India, for instance, was a policy role rather than a technical one. This is not to diminish the importance of non-technical hires, but it does reveal something about how things operate in a country like India. Even the world's most well-funded companies cannot escape the gravitational pull of outrage and must carefully navigate the terrain of policy, bureaucracy, and culture to keep everyone appeased.

The bullies

These dynamics are not unique to AI or large language models. It is well known that over the past decade, nearly every tech company, big or small, global or homegrown, has found itself at the heart of some cultural battleground in India. It is a sinister mechanism that does not rely on the full force of the government but instead operates through proxies to bully these tech companies and keep them in check. The crowd does the heavy lifting, and within that crowd are eager cheerleaders, always ready to spark the first flame. The government gauges the situation from afar and intervenes only when it senses genuine momentum building within these outrages and knows that its intervention will act as a source of validation. It puts the final nail in the coffin of outrage as a way to reward its ardent followers.

For instance, two years ago, when an e-commerce lingerie brand suffered a data breach, it was quickly given a communal spin by an X user, who falsely claimed that the leaked customer data contained details only of Hindu women and was being sold to young Muslim men. However, when journalists examined the dataset, it was clear this was untrue: the data included information from individuals of various faiths.

Despite the lack of evidence, a statutory government body swiftly took suo motu cognisance of the matter. What was even more striking was the language used in the official notice issued to the company, which claimed the data was being sold to "Islamic groups through the dark web for targeted harassment, love jehad, women trafficking, and abduction". This assertion was not only baseless but also more extreme than the original claims made online. Even the individuals who first flagged the breach had only implied such allegations; it was the government body that explicitly articulated them.

In doing so, the body did not just lend legitimacy to a factually unsupported narrative; it effectively amplified and validated it. By acting on an allegation backed by no material evidence beyond a few selective screenshots, the institution rewarded and reinforced communal hysteria, and it fundamentally blurred the line between governance and ideological bias.

The messaging is fundamentally inconsistent. If both the public and the government are genuinely committed to technological advancement, why do they continue to prioritise trivial matters? Data breaches, at most, should be treated as national security concerns and, at the very least, as failures of the basic protections that companies are obligated to guarantee their customers.

Narratives of polarisation

Framing such incidents through the lens of communal disharmony only distracts us from the real issue. After all, data can be misused by anyone, regardless of their background. It is also painfully clear which kinds of "allegations" receive serious attention: those that conveniently align with existing narratives of polarisation and division.

Some Indian tech companies have tried to capitalise on the chaos driven by culture wars, banking on the allure of nationalism and jingoism. However, time and again, these efforts have backfired, failing to deliver the intended impact. The flip side is that even the most fervent nationalists eventually see through these theatrics. They stop viewing such companies as serious players in crucial discussions, whether about homegrown AI, social media platforms, or electric vehicles.

If the government genuinely aims to foster technological advancement, it must first end its proxy war with tech companies over cultural issues; these conflicts serve no meaningful purpose. This is not a call for the government to regulate speech, which is an entirely separate debate. Rather, it is about recognising that the proxies it nurtures and rewards often hinder, rather than help, the cause of innovation.

Superficial rhetoric about building foundational models may create the illusion of progress, but without addressing deeper structural problems, such as the exploitation of labour, stagnant wages, systemic inequality, and the pervasive influence of hate and prejudice, any gains will be fragile and unsustainable. Injecting funds and artificially accelerating development can only go so far. Without a sturdy, organic foundation rooted in robust research ecosystems, critical thinking, and scientific inquiry, the cycle of short-lived technological leaps followed by stagnation will repeat itself. The growth we aspire to as a nation will remain elusive, with each new technological wave met by the same scramble to catch up.

Being politically correct

It is not as if other countries have resolved these issues, either. American tech companies often find themselves entangled in culture wars, with their AI models navigating complex terrain and attempting to offer responses they deem politically correct. The same goes for Chinese tech companies and their AI systems: questions about Taiwan, Tibet, or Tiananmen Square will inevitably yield answers aligned with the Chinese Communist Party's narrative.

However, in India's case, the culture war often takes centre stage, with public relations and political posturing steering the conversation while science and technology are relegated to the back seat. For India to catch up, it must reverse this dynamic. The focus should be on creating an environment where scientific research, critical reasoning, and technological inquiry are not just supported but prioritised. Only by addressing both systemic societal issues and the structural limitations within the tech ecosystem can India hope to achieve sustainable, meaningful progress.

Kalim Ahmed is a columnist and an open-source researcher with a focus on tech accountability, disinformation, and Foreign Information Manipulation and Interference.
