Can AI Promote the Greater Good? Student and Faculty Researchers Say Yes – Fordham News


At a spring symposium, Fordham faculty and students showed how they’re putting data science and artificial intelligence to good use: applying them to numerous research questions related to health, safety, and justice in society.

It’s just the sort of thing that’s supposed to happen at an institution like Fordham, said Dennis Jacobs, Ph.D., provost of the University, in opening remarks.

“Arguably, artificial intelligence is the most revolutionary technology in our lifetime, and it brings boundless opportunity and significant risk,” he said at the University’s second annual data science and AI symposium, held April 11 at the Lincoln Center campus. “Fordham’s mission as a Jesuit university inspires us to seek the greater good in all things, including developing responsible AI to benefit society.”

The theme of the day was “Empowering Society for the Greater Good.” Presenters included faculty and students—both graduate and undergraduate—from roughly a dozen disciplines. Their research ran the gamut: using AI chatbots to promote mental health; enhancing flood awareness in New York City; helping math students learn to write proofs; and monitoring urban air quality, among others.

The event drew dozens of students and faculty, who came to learn more about how AI is advancing research across disciplines at Fordham.

Student Project Enhances Medical Research

Deenan He, a senior at Fordham College at Lincoln Center, presented a new method for helping researchers interpret increasingly vast amounts of data in the search for new medical treatments. In recent years, “the biomedical field has seen an unprecedented surge in the amount of data generated” because of advancing technology, said He, who worked with natural sciences assistant professor Stephen Keeley, Ph.D., on her research.

From Granting Loans to Predicting Criminal Behavior, AI Must Be Fair

Keynote speaker Michael Kearns, Ph.D., a computer and information science professor at the University of Pennsylvania, spoke about bias concerns that arise when AI models are used for deciding on consumer loans, the risk of criminals’ recidivism, and other areas. Ensuring fairness requires explicit instructions from developers, he said, but noted that giving such instructions for one variable—like race, gender, or age—can throw off accuracy in other parts of the model.

Audits of models by outside watchdogs and activists—“a healthy thing,” he said—can lead to improvements in the models’ overall accuracy. “It is interesting to think about whether it might be possible to make this adversarial dynamic between AI activists and machine learning developers less adversarial and more collaborative,” he said.

Yilu Zhou, associate professor at the Gabelli School of Business, presented research on protecting children from inappropriate mobile apps.

Another presentation addressed the ethics of using AI in managerial actions like choosing which employees to terminate, potentially keeping them from voicing fairness concerns. “It changes, dramatically, the nature of the action” to use AI for such things, said Carolina Villegas-Galaviz, Ph.D., a visiting research scholar in the Gabelli School of Business, who is working with Miguel Alzola, Ph.D., associate professor of law and ethics at the Gabelli School, on incorporating ethics into AI models.

‘These Students Are Our Future’

In her own remarks, Ann Gaylin, Ph.D., dean of the Graduate School of Arts and Sciences, said, “I find it heartening to see our undergraduate and graduate students engaging in such cutting-edge research so early in their careers.”

“These students are our future,” she said. “They will help us address not just the most pressing problems of today but those of tomorrow as well.”

Photo caption: Keynote speaker Michael Kearns addressing the data science symposium.
