Artificial intelligence is harmful, unreliable, and running out of data. – Softonic EN


A few days ago we reported that AI has surpassed human capabilities in almost every field, at least across a series of performance-based tasks. But in the world of artificial intelligence, not everything is positive, as we are about to see.

The AI Index 2024 report, recently published by the Human-Centered Artificial Intelligence (HAI) Institute at Stanford University, thoroughly examines the global impact of AI.

Now that AI is woven into so many aspects of our lives, we must take responsibility for how it is used, especially in critical sectors such as education, health, and finance.

Yes, incorporating AI can bring advantages, such as process optimization, productivity gains, and the discovery of new drugs, but it also carries risks.

In which fields should AI be measured?

According to the new AI Index report, truly responsible AI models must meet the public’s expectations in key areas: data privacy, data governance, security and protection, fairness and transparency, and explainability.

  • Data privacy safeguards an individual’s confidentiality, anonymity, and personal data. It includes the right to consent to, and be informed about, the use of one’s data.
  • Data governance includes policies and procedures that ensure data quality, focusing on ethical use.
  • Security includes measures that guarantee system reliability and minimize the risk of misuse of data, cyber threats, and inherent system errors.
  • Fairness means using algorithms that avoid bias and discrimination and that align with broader social concepts of equity.
  • Transparency means openly sharing data sources and algorithmic decisions, as well as considering how AI systems are monitored and managed from creation to operation.
  • Explainability refers to developers’ ability to explain the foundations of their AI-related decisions in understandable language.

For this year’s report, Stanford researchers collaborated with Accenture to survey 1,000 organizations worldwide and ask them what risks they considered relevant. The result was the Global State of Responsible AI survey.

In this survey, the data made it clear that, whatever else AI may be, it currently falls short in every one of those areas: data privacy, data governance, security and protection, fairness and transparency, and explainability.

The report even raises the prospect of AI running out of training data. Given the pace of advances in machine learning, an obvious question arises: will models run out of training data?

According to researchers from Epoch AI, who contributed data to the report, it is not a matter of if we will run out of training data, but when. They estimate that computer scientists could exhaust high-quality linguistic data as early as this year, low-quality data within two decades, and image data between the late 2030s and mid-2040s.

Chema Carvajal Sarabia

