Exclusive | New AI law to secure rights of news publishers: Ashwini Vaishnaw – The Economic Times


The government is looking to frame a new law on artificial intelligence (AI) that will protect the interests of news publishers and content creators while also minimising user harm, union minister for electronics and information technology Ashwini Vaishnaw told ET.

The new law will be “very balanced” as well as “strong on securing the rights and sharing the proceeds” among news publishers, content creators and AI-enabled technologies such as large language models (LLMs), “while keeping good space for innovation”, Vaishnaw added.

It can be an independent legislation or part of the Digital India Bill, which is set to replace the 24-year-old Information Technology Act, 2000.

“There is a transition happening. Our position is that the transition should not be disruptive because lakhs of livelihoods are involved,” Vaishnaw said in an interview.

“Secondly, creativity has to be respected both in terms of intellectual property as well as financial and commercial implications. We have shared these views with tech players. More or less they are in agreement in principle.”
The companies have said that all countries are facing similar challenges and the industry needs to work with governments to find a solution.
“One thought is to form a self-regulatory body,” the minister said. “But we don’t think that would be enough. We think that this regulation should be done by legislative method. We have already consulted the industry. After elections, we will launch a formal consultation process and move towards legislation.”

This comes as demands for protecting the rights of publishers and content creators are gathering steam globally. In December, The New York Times sued Microsoft and OpenAI, alleging that millions of its copyrighted articles were used to train OpenAI's generative models and chatbots. Google was fined about $270 million by regulators in France for using news articles to train its AI model Gemini without notifying publishers. Apple has reportedly started negotiations with major news outlets to strike deals that will allow the technology giant to train its AI systems on copyrighted content.

Following the move by The New York Times, several authors, computer programmers and musicians moved court against big tech companies, including Microsoft, OpenAI, Meta and GitHub, for training their models on copyrighted information without compensation.

News publishers in India have been seeking changes to the information technology rules to ensure fair compensation for the use of their content by generative AI models, amid increasing AI copyright disputes worldwide.

The Digital News Publishers Association (DNPA) has sent a letter and made representations to the ministries of electronics and information technology, and information and broadcasting, seeking protection from likely copyright violations by AI models, ET reported in January.

DNPA represents 17 top media publishers in the country, including Bennett, Coleman & Co. Ltd. (BCCL) that publishes ET.

The way forward could be a framework or legislation that gets stakeholders together to thrash out the contours of a working contract for fair compensation, said Jaspreet Bindra, an expert on emerging technologies and the founder of Tech Whisperer.

“It will also be interesting to see how such contracts work between LLMs and individual creators as the process is easier said than done,” he said. “What is necessary, however, is some kind of enforcing regulation so that all the stakeholders are brought to the same negotiating table.”

Earlier this year, at the Munich Security Conference, several large technology firms including Google, Amazon, Microsoft, Adobe and Meta signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections that commits them to developing misinformation detection technology, curbing distribution of such content, and driving public awareness.

While the US has not yet enacted any federal regulation governing AI, companies there have signed multiple voluntary commitments to self-regulate.

Europe has taken the lead on AI regulation. The European Parliament last month adopted the AI Act, introducing strict guardrails for developers of high-risk AI systems. It mandates that AI models comply with EU copyright law and that detailed summaries of the data used to train such models be made available. The landmark legislation also requires content that has been artificially generated or altered to be labelled as such.
