Report highlights strategies to accelerate AI in research – University World News


GLOBAL

There is much policy and strategic action around generative AI and research the world over but scant exchange of knowledge between countries. A study by the globally influential International Science Council (ISC) aims to bridge this isolation and identify key issues around a technology that will massively impact research internationally.

There is a healthy appetite for engaging with other countries to exchange knowledge and compare experiences, said Dr Mathieu Denis, head of the Centre for Science Futures at the council and one of the authors of the study report, published last week and titled Preparing National Research Ecosystems for AI: Strategies and progress in 2024.

“More collaboration and coordination of AI strategies for science would increase our collective capacity to use AI for the benefit of science and society and to address global challenges such as climate change,” he told University World News.

Among other important findings is a disconnect between discussions about the impacts of AI at the global versus national levels, and gnarly technical policy questions that need an international approach. A survey of 12 mostly small- to medium-sized countries revealed great variation in AI approaches and surprisingly ambitious AI strategies in some.

“While global discussions on the impact of AI, including on science, are focusing on ethical developments around the use of AI, issues of ethics seem largely absent from national roadmaps for the uptake of AI in science, despite the implications for ethical data policy, research integrity, explainability of results etcetera,” said Denis, who is also senior director at the council.

“It would be important to ensure that these two agendas stop evolving in parallel and meet,” he stated.

Only some of the AI-related issues that the study found were important for research are also drivers of country plans for the uptake of AI in science. Rather, current plans are guided by a country’s overall approach to AI and try to support national economic, governance, digital and other ambitions attached to AI more generally.

This partial disconnect and pre-eminence of national strategies is understandable, but the council is worried that insufficient attention to the specific conditions for a successful uptake of AI in research will affect the quality of science. “It will be measured in poor research data policies, strengthened epistemic biases, insufficient capacity and ineffective institutional and regulatory environments. It will lead, in other words, to bad science,” the study states.

Further, Denis commented: “The work with countries is bringing to the fore a few policy questions that are quite technical in nature and very difficult for a single country to tackle alone: for example, research data policy and management for the entire research and development sector, and AI standards for research.”

One of the reasons for the study was to map out the terrain of policy and activity around AI and research internationally, so as to ascertain a path for future work of the council aimed at helping countries to respond to the opportunities and challenges of AI and science.

An ambitious project

The ISC is a Paris-based NGO that brings together more than 245 international, regional and national scientific organisations including science academies and research councils. It works at the global level to catalyse change on issues of major importance for science and society.

The study was undertaken by the ISC Centre for Science Futures and follows a discussion paper released by the council last year, evaluating AI and related technologies.

Aside from Denis, the report’s authors are Dr David Castle, a public administration professor at the University of Victoria and a science adviser to the Canadian prime minister, and ISC Science Officer Dr Dureen Samandar Eweis.

The study features a review of literature on generative AI and research, and insights from countries around the world through case studies. It offers a framework outlining key issues to consider when planning to integrate AI into research systems.

A second, more comprehensive report will be released later this year expanding the number of case studies and geographical representation, and offering “recommendations for more coordinated and collaborative science policies for AI”, states the study. Meanwhile, the ISC is consulting the science community worldwide, through regional workshops and other means.

The report targets a broad audience of science policy-makers involved in integrating new AI technologies into research systems, granting councils and philanthropies, AI specialists in companies, and scientists and science journalists.

The need for collaboration

The ISC study found that very little is known about how governments plan to accelerate the uptake of AI by research institutions, despite AI’s huge implications for national research and development systems.

“There is still little in the literature, and almost nothing in policy debates, on what countries are doing to prepare their science, technology and innovation [STI] systems for AI,” Denis told University World News.

“When you investigate, however, you do find significant reflections on the issue taking place in ministries and higher education institutions. In all countries that we approached we found people and teams actively developing and implementing a strategy for the uptake of AI by research ecosystems,” he said.

All of the authors of the 12 case studies are directly involved in delivering their countries’ roadmaps for integrating AI in science. The study is given depth by these short studies of countries from all world regions that are at various stages of integrating AI.

“Those reflections are taking place in countries but there is close to no exchange on that issue between countries. The working paper is an attempt to break this isolation, increasing our collective knowledge and identifying the key documents,” said Denis, stressing that the study is just the beginning of the conversation.

“Our ambition with this paper is not only to document current initiatives, but also to support the collective journey to better prepare for this critical technological transformation of science systems. Ultimately, this is about making sure that AI works for science,” he added.

Overall, the study seeks to gather knowledge and information about AI and research issues and current efforts; help countries to develop roadmaps for the uptake of AI in science systems; create regional and global networks of people involved in implementing AI for science; and help to shape a critical AI discussion among scientific and policy communities.

Among the key questions are how AI will influence research funding, research data standards and scientific outputs, and scientific careers. Also, what infrastructure investments will be needed to support AI uptake by science, and what legal adjustments will enable AI use while ensuring high standards in the responsible conduct of science?

The literature review and the big issues

A bibliometric study was undertaken in September 2023, in partnership with Nature Research Intelligence, to identify publications around the world that explore the impact of AI on science and research ecosystems.

The study identified 1,600 documents, refined down to a dataset of 317 documents published between 2018 and 2023. There are 123 journal articles, 59 book chapters, 51 preprints, 30 web pages, 20 conference proceedings, 18 policy documents, and 16 books and monographs.

“While 317 publications dealing with national plans to integrate AI in science and research ecosystems may seem relatively low, there was a tenfold steady increase in numbers of publications published annually between 2018 and 2022 (from nine to 88),” states the report.

The study ponders the low number of academic contributions, when compared to reflections and strategies developed nationally, and makes the point that the recent increase in publications related to AI suggests that discussions around AI and science are now picking up in academic journals and conferences. The council hopes that the study will contribute to greater engagement by academia with the uptake of AI in science.

The study identified a core set of 45 issues and topics that experts and observers thought critical for the integration and uptake of AI in research systems.

These issues are captured in a simplified version of the OECD framework for technology governance, with its three themes: research and development agenda setting, technology assessment, foresight and science advice; public engagement, science communication and public accountability; and regulation, standards, private sector governance and self-regulation.

The literature review reflects the many ways in which AI is influencing how science is produced, organised and funded. It should be of considerable use to countries as they develop and implement roadmaps for the uptake of AI in science and research, states the report.

The following are just a few examples.

It is important to identify strategic sectors for AI development and uptake in science. There are needs to build and retain AI skills in the research community, and to achieve diversity in the AI workforce and the right incentives for disciplinary and interdisciplinary AI.

The study warns against AI capacity replacing merit in science funding decisions, “closing off areas of research that do not use it”; against competition in research becoming less about merit than AI access; and against the problem of machine learning from material that reproduces old biases.

There are needs to develop cloud computing and data repositories that are appropriate for science, to work against digital divides in AI access and use, and to develop AI tools in ways that ensure science is not “driven solely by the AI and machine-learning communities, but rather developed jointly with all research communities”.

Importantly, states the paper, there is a need to assess variability in AI governance and data protection between countries with impacts for international research and collaboration. Also, countries should look at cooperating through regional AI centres and research networks “if they do not have the resources to do it on their own”.

Further: “AI may generate tensions between some of the core principles and values that define today’s science,” the report points out. “Such contradictions might include openness versus rigour; privacy and confidentiality versus open science; massive data versus high quality data; or explainability versus ‘black box’ results.”

There are concerns that much of the data needed to develop scientific AI will not fall within open data initiatives, which could result in high quality data being kept confidential.

Also, developing AI for science will require the harmonisation of practices and development of communities of practice, but: “Current norms and practices for the production and use of data differ between disciplines and institutions.” And what about legal liabilities of research done with AI, copyright protection or patenting for machine-generated creations, and the risk of text and data mining infringing copyright?

The country case studies

The country case studies were authored by experts who are at the forefront of integrating AI into their national science systems. The countries are Australia, Benin, Brazil, Cambodia, Chile, China, India, Malaysia, Mexico, Oman, Uruguay and Uzbekistan.

The council said it was important to consider the circumstances of countries of varying sizes, which are also major contributors to scientific advancements. The final report will feature another dozen or so countries with further geographical representation including: Canada, France, Jordan, Malawi, Morocco, Nigeria, Norway, United Arab Emirates, the United Kingdom, Panama, Romania, Rwanda, South Africa and the United States.

Denis told University World News: “The focus on small- to medium-sized countries provides some interesting surprises. Some countries (for example Bénin and Uzbekistan) have highly ambitious AI strategies for science, considering the size of their STI system.”

Several countries are mobilising their entire STI ecosystems – public funding, some higher education institutions and the private sector – around a few priorities, such as health and food systems. “Where this is the case, we can assume it reflects a country’s commitment and limited resources,” the report states.

Each case study references key documents framing the country’s AI approaches to research, most of which are not available in international publications.

The case of Malaysia

Malaysia is moving towards AI innovation to support research, said Nurfadhlina Mohd Sharef of the Academy of Sciences Malaysia, and author of that country’s case study.

Its key takeaways are the involvement of actors from different sectors in developing cross-cutting guidelines and policies on AI, and guidance from the Ministry of Higher Education and the Malaysian Qualifications Agency for responsible AI use in academia. The approach to AI for science is focused on innovation through technology, with AI upskilling led by both the academic and industry sectors.

At a regional International Science Council workshop last October, the Asian countries involved described the varying strategies they have adopted to adapt to the disruption that AI brings, Sharef told University World News. Mainly, AI has been used for various purposes in each country, from general business operations to facilitating scientific development.

“My main personal takeaway from the discussion is the importance of positioning and strategising specifically for AI in science, making sure of the creation and usage of responsible AI,” she said.

According to Sharef, numerous government-funded AI projects are underway, such as academic interventions supporting learning sciences, and in crop management and biodiversity. Industry-led projects cover the more expensive and wider AI operational impacts, for example at hospitals for brain haemorrhage analysis and oral lesions.

“These moves indicate a positive shift towards usage of AI to support evolution in how science is being managed,” Sharef told University World News. There has not yet been much reporting on new scientific discovery grounded in AI-based science processes, but AI immersion in business is providing new services to society that may also indirectly increase knowledge on science processes, and how AI is used responsibly to support it.

The Academy of Sciences Malaysia has initiatives to come up with a more coordinated AI for science implementation. “A positive AI for science can be managed by the academy and other entities in Malaysia. A guideline on AI governance and ethics has just been produced and will be released this month,” said Sharef. A request for proposals on AI for adaptive student intervention has been made, “which will help support learning sciences growth”.

What next?

During the course of this year, the council’s regional workshops and consultations will help to validate concepts outlined in the study and foster understanding of the priorities, successes and challenges encountered by countries in integrating AI into research systems.

Following the October 2023 regional workshop in Malaysia, which contributed to the report, another workshop for Latin America will be held in Santiago, Chile on 9 April 2024, and there will also be a regional engagement with Africa by mid-year.

The council has just received an important grant from the International Development Research Centre of Canada (IDRC) to continue this work in the coming years, with the networks of people and countries that it is engaging with.

Preparing National Research Ecosystems for AI confirms the ISC’s engagement in exploring the impact of AI on science and societies. Additional studies and initiatives will develop in the coming months and years, the study states.

Denis invited science leaders involved in preparing the uptake of AI in their institutions and countries to engage with the council to share approaches, experiences and questions.
