On 18 October, Frontiers’ publishing development department and Frontiers Policy Labs co-hosted a webinar on artificial intelligence (AI) and academic research. The 90-minute webinar was designed to show how AI is transforming different fields of academia and academic research, discuss approaches to regulation, and foster collaboration between disciplines.
Earlier this year, in May, Jean Claude Burgelman, editor-in-chief of Frontiers Policy Labs, published a commentary entitled “Getting a grip on data and Artificial Intelligence.” In it, Burgelman raised critical questions about the strategic goals of AI and called for regulatory oversight of AI tools. The piece generated considerable response and served as a conversation starter, a discussion the panelists continued in the recent webinar.
The panel took a closer look at the societal transformations driven by AI, given its profound effect on human thinking, knowledge, and productivity. The multidisciplinary nature of the topic presented a unique opportunity to gather diverse perspectives from fields ranging from engineering and informatics to political science. Panelists shared the ways in which AI is already changing their respective fields and weighed the advantages and disadvantages these changes bring to how science is conducted. They also discussed the response required to ensure AI best serves science, touching on some of the points Burgelman made in his piece.
Moderated by Mathieu Denis, head of the Centre for Science Futures at the International Science Council, the panel consisted of six researchers from various backgrounds and geographies:
Ruth Morgan, Editorial Board Member Policy Labs, Professor of Crime and Forensic Science and Vice Dean (Interdisciplinarity Entrepreneurship), University College London, UK
Barend Mons, Professor of Biosemantics, Human Genetics Department, Leiden University Medical Center, Netherlands
Chaomei Chen, Field Chief Editor of Frontiers in Research Metrics and Analytics, Professor of Information Science in the College of Computing and Informatics at Drexel University, USA
Izuru Takewaki, Field Chief Editor of Frontiers in Built Environment, Professor of Structural Engineering, Kyoto Arts and Crafts University, Japan
Nova Ahmed, Editorial Board Member Policy Labs, Professor of Computer Science, North South University, Bangladesh
Leslie Paul Thiele, Specialty Chief Editor of Frontiers in Political Science, Professor of Political Theory, University of Florida, USA
Each panelist had the opportunity to share more about the impact AI is having on their field. For some, such as forensic science, AI has long been relied on to assist with pattern recognition in fingerprint comparison. In other fields, like political science, AI is a relatively new addition, helping researchers analyze the content of political material and forecast public responses to certain decisions. Regardless of how it is used, panelists agreed that the benefits of AI include the ability to quickly process large amounts of data, formulate or visualize potential outcomes of a task or project, summarize information, and automate repetitive tasks.
Moreover, AI has paved the way for more interdisciplinary thinking, opening innovative routes to solving complex global challenges by taking a holistic view of data. Ruth Morgan, vice dean (Interdisciplinarity Entrepreneurship) at University College London, explained how the widespread impact of AI has opened up opportunities for conversation and allowed for more collaboration across industries, disciplines, geographies, and generations. She said: “In the past, our bottleneck has really been about the limitations we’ve had in terms of the breadth and depth of knowledge that we can bring to the table because we’re constrained by geography and time. And these constraints are now being lifted [by AI].”
While acknowledging the benefits, the panelists also took the time to speak to the potential drawbacks of AI. The perpetuation of stereotypes and discrimination from biased data or algorithms was a top concern. Nova Ahmed, professor of computer science at North South University in Bangladesh, explained: “It’s on us how much we are training the data, how we should include it, how we should make sure that there is representation, or, if there’s not enough representation, whether or not to use that data for making a critical decision.”
Other areas of concern included incorrect or misleading information produced by AI, ethical issues, data privacy and security, and deskilling. Deskilling can occur either through overreliance on AI or by employing AI in place of younger researchers who would otherwise have developed the necessary skills through work experience.
The panelists advocated for transparency to overcome some of these challenges. Transparency would enable humans to better oversee the outputs of AI, which is an essential step when relying on such systems. It would also allow researchers to see the exact steps the algorithm followed, and the sources of data consulted, making it possible to replicate the results and determine the reliability of the information provided.
As for the future of AI, panelists were optimistic but recognized the need for regulation. They agreed with Burgelman that some form of regulation is necessary to ensure AI is embraced responsibly. However, they acknowledged the difficulty of proposing specific rules around AI, given the multifaceted nature of the topic. They noted that regulation often addresses the intended consequences of good actors, not the unintended consequences or bad actors. The consensus was that regulation should reflect several attributes stressed throughout the webinar: transparency, replicability, and reliability.