Opinion | AI Garbage Is Already Polluting the Internet


Increasingly, mounds of synthetic A.I.-generated output drift across our feeds and our searches. The stakes go far beyond what’s on our screens: the entire culture is being affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate, especially within the field of A.I. itself.


[Chart: Adjectives associated with A.I.-generated text have increased in peer reviews of scientific papers about A.I. Line chart of adjective frequency per one million words in peer reviews, 2020 through 2024; “innovative,” “notable,” “commendable,” “intricate,” “versatile” and “meticulous” all rose sharply in 2024.]

Note: Peer reviews are for the International Conference on Learning Representations (ICLR), one of the largest A.I. conferences.

Source: “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”

By Taylor Maggiacomo

A new study this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” almost 3,400 percent more often than reviews had the previous year. Use of “commendable” increased by about 900 percent and “intricate” by over 1,000 percent. Other major conferences showed similar patterns.
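The measurement behind these figures is simple to reproduce in outline: count how often each adjective appears per one million words of review text in each year, then compare years. Here is a minimal sketch of that calculation. The file names, the short word list and the two-year comparison are illustrative assumptions; the study’s actual analysis used a larger, statistically derived vocabulary and more sophisticated estimates of A.I.-modified content.

```python
from collections import Counter
import re

# A few of the adjectives the study flagged; the real analysis
# covers a much larger vocabulary. (Illustrative subset.)
ADJECTIVES = ["meticulous", "commendable", "intricate",
              "innovative", "notable", "versatile"]

def frequency_per_million(text: str) -> dict[str, float]:
    """Count each target adjective per one million words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    total = len(words)
    counts = Counter(w for w in words if w in ADJECTIVES)
    return {adj: counts[adj] / total * 1_000_000 for adj in ADJECTIVES}

# Hypothetical usage: reviews_2022.txt and reviews_2023.txt would hold
# the concatenated peer-review text for each year (assumed file names).
if __name__ == "__main__":
    freqs = {}
    for year in (2022, 2023):
        with open(f"reviews_{year}.txt") as f:
            freqs[year] = frequency_per_million(f.read())
    for adj in ADJECTIVES:
        before, after = freqs[2022][adj], freqs[2023][adj]
        change = (after - before) / before * 100 if before else float("inf")
        print(f"{adj}: {before:.1f} -> {after:.1f} per million ({change:+.0f}%)")
```

A year-over-year jump of the size the study reports — “meticulous” up almost 3,400 percent — would show up plainly in output like this, which is what makes the frequency-per-million framing so legible in the chart above.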

Such words are, of course, among the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline a review was submitted, the more A.I. usage was found in it.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?).

