Can researchers produce trustworthy AI that serves national goals? – University World News


JAPAN

Universities are playing an active role in the Japanese government’s plan to lead global development of ‘responsible’ generative artificial intelligence. Political support for university research on generative AI is also primarily aimed at finding viable solutions to entrenched domestic issues, in particular the looming labour shortage threatening Japan’s long-term economic growth.

“The focus is on establishing reliable and high-performance AI knowledge that is critical for labour-strapped Japan and other national goals,” said Yutaka Matsuo, chair of the government’s AI Strategy Council and a professor of engineering at the University of Tokyo. He is also chair of the Japan Deep Learning Association and serves as an outside director of SoftBank Group, a leading telecommunications company.

The council, launched last June, described AI as the “arrival of a great opportunity for Japan”. Its members include corporate executives, university researchers and a lawyer, and it is tasked with formulating regulations and legislation to minimise risks related to generative AI.

It also serves as an accreditation body to certify companies that promote responsible AI.

In February the Cabinet Office earmarked JPY12 billion (US$79 million) for AI development for 2024, to strengthen Japan’s AI research capabilities in universities. The new funds are seen by experts as an opportunity for developing cutting-edge technologies to prevent misuse, as well as for pulling ahead in generative AI research and development, where Japan has been lagging globally.

The government’s action plan for new policies and organisations to promote AI has identified greater use of AI in medicine and healthcare, education, finance, manufacturing and administrative work, while ensuring it can be controlled by humans, according to the Cabinet Office website.

Generative AI refers to tools such as ChatGPT that are built on large language models (LLMs). These can power chatbots that analyse data, translate languages, develop curricula and devise strategies for organisations.

The spread of misinformation

But extracting highly accurate information from the massive data generated by AI bots remains frustratingly elusive, according to researchers. The rapid spread of generative AI has led to misinformation, impersonation of humans using computer-generated videos known as ‘deepfakes’, copyright infringement in the use of data to train bots, as well as bias and discrimination in generated results.

Containing these problems has become a research priority, as generative AI has made it easier for anyone to create misinformation, for example, to manipulate public opinion.

Shinichi Yamaguchi, a researcher in the Center for Global Communications at the International University of Japan, has looked at protective measures aimed at the ‘attention economy’ – the way X (formerly Twitter) encourages influencers to attract bigger audiences linked to higher advertising revenue – and at how this dynamic drives the spread of disinformation.

In a recent case in Japan, thousands of social media posts this year – reposted by more than two million users – inaccurately described the New Year’s Day earthquake on the Noto peninsula as having been artificially created by the military, according to Japan’s Asahi newspaper.

“Developing counter-technologies is essential for combating AI-generated disinformation, but I also believe it is important to concurrently develop and implement computer literacy programmes for the safe use of AI, as well as media and information literacy programmes for properly interacting with information,” Yamaguchi told University World News.

He said that his lab combines academic research with collaboration with society, reaching beyond the laboratory. For example, he has been part of a campaign with Google and Japan’s Ministry of Internal Affairs and Communications in which multiple YouTube creators produced and published short awareness videos on disinformation countermeasures.

Awareness courses for citizens on misinformation and disinformation were also conducted with the ministry, including related to COVID-19 vaccine misinformation.

In a 2022 paper published in the Social Science Research Network electronic journal, titled “The Effect of COVID-19 Vaccine Misinformation on Authenticity Identification and Vaccination Behaviour”, Yamaguchi linked rising public resistance to the COVID-19 vaccine in Japan to the online spread of false information about the vaccine’s potential and dangers.

New models of human-like AI

Distortions and other problems related to the use of generative AI, widely reported by researchers and users alike, have also led to research into different models of intelligence that do not rely on language bots and that could generate information in line with human values.

Yasuo Kuniyoshi is director of the Next Generation Artificial Intelligence Research Center at the University of Tokyo, which researches AI for public benefit, and a professor at the university’s Graduate School of Information Science and Technology. His research focuses on the redesign of human-like AI models that are not based on LLM bots.

“As large-scale language models such as GPTs [generative pre-trained transformers] demonstrate human-like intelligence, a strong need for fundamental principles is arising to make them align with human values and morals without loopholes,” he explained.

“Accelerating government funding for AI development is based on the policy that the technology benefits society,” he said, adding: “My research focuses on redesign of AI models based on human principles of justice and integrity.”

Kuniyoshi’s laboratory is currently conducting a ‘super-embodiment’ study that explores the interaction dynamics of the human body across multiple entities – internal organs and metabolism, the musculoskeletal body with its sensory organs, the nervous system, and the environment.

Such a system can address sensibilities, values and morals as part of ‘artificial humanity’, which he believes will be critically important for the next generation of AI.

“Our study explores, for example, the signs that the human brain of a baby entertains a sense of social justice,” he explained.

The research involves ‘Noby’, a robot modelled on the body of a nine-month-old human baby that can be carried around. It has two cameras and two microphones on its head and is equipped with some 600 touch sensors on its ‘skin’, allowing researchers to analyse whether AI can develop human-like intelligence that can then be aligned with human values and morals.

Embodiment, a technical term, refers to an AI model of the human that simulates body and brain together, with a focus on movements – such as walking – that involve both physical and psychological responses.

Kuniyoshi points out that the model is different from ChatGPT, which is refined through reinforcement learning from human feedback (RLHF). He said such a system can inherit flaws, because human language feedback shapes what the machine learns.

Governance of ethical use of AI

Research in areas such as general AI and ‘artificial humanity’ involves long-term projects. Meanwhile, there has been a global drive for ethical approaches to AI research.

Kohei Itoh, president of Keio University and an expert in quantum physics, will join the government-led Council for Science, Technology and Innovation, set up this month following the 2023 G7 Summit to promote responsible AI and strengthen guidance on AI governance.

Itoh told University World News that AI policies are evolving given the technology’s rapid development around the world. “Keio University prioritises the ethics of research,” he explained, noting that this means AI development should address risks in a multidisciplinary way. “Legal scholars, for example, play an active role in university AI research,” he said.

Keio University has set up a new generative AI Learning Center that facilitates AI study for students and is also an advanced research centre for developing generative AI in collaboration with companies. Among several ventures set up for this purpose is the AI and Society Laboratory established in 2016 to study the impact of AI and robot technologies on society.

Last June, Japan set the stage for its leadership in AI development with the ‘Hiroshima AI Process’ launched at the Group of Seven meeting in Hiroshima to “foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the benefits of the technology while mitigating its risks, for the common good worldwide,” according to the final communique.

The final document included the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, which highlight the critical importance of generative AI developers introducing technologies that “respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human centricity, in the design, development and deployment of advanced AI systems”.

It also stressed the need for an international code of conduct under which organisations share information by disclosing their AI governance and risk management policies. Another outcome was agreement on the need to involve industry, government, civil society and academia in developing advanced systems.

Japan’s Prime Minister Fumio Kishida pledged to continue to support the process under Italy’s G7 leadership this year.

“Japan is dedicated to promoting the Hiroshima AI process outcomes in particular promoting dialogue with [the] multi-stakeholder community,” Kishida told the media in Kyoto after the Summit.

Later he explained: “It is critical to prioritise generative AI in transforming Japan from a manufacturing economy to a service and innovation-based model.” He was referring to ongoing government efforts to use technologies to find solutions to issues stemming from Japan’s ageing society, such as a declining labour force and lower national competitiveness.
