ST. PAUL, Minn. — Princess Kate shocked the world with news of her cancer diagnosis, which ended months of speculation about her health.
It was just one in a flurry of headlines involving Britain’s royal family over the past few weeks.
The family faced scrutiny after sharing two pictures that were digitally altered.
The episode left many wondering: How easy is it to doctor a photo or video? And how can you tell what’s real and what’s not?
Those questions were prompted by the picture the internet couldn’t stop talking about.
The Associated Press and Getty Images flagged a seemingly innocent photo of Princess Kate. To a trained eye, like that of Dr. Manjeet Rege, it signals a turning point of sorts for AI.
“Trust is something that is at stake here, not only of the royal family but also AI as a tool,” Rege said. “When I looked at that image, it looked so realistic that even I did not actually pick that.”
Rege directs the Center for Applied Artificial Intelligence at the University of St. Thomas.
He said recent advances have made generative AI tools more powerful than ever before.
“It is humanly not possible to keep up with all of these images and scrutinize them with our naked eye,” Rege said.
So much so, he said, that real and fake can look identical.
The tools are now so powerful, he said, that something needs to change as they keep improving.
“The responsibility is not only on the source provider, but also on the big tech companies,” Rege said. “Very little right now is being done at the federal level. We need to have laws in place that will compel tech companies to detect in a browser or on an app that you’re using and warn consumers.”