SEI and DOD Center To Ensure Trustworthiness in AI Systems – Carnegie Mellon University

Advances in artificial intelligence (AI), machine learning and autonomy have created a proliferation of AI platforms. While these technologies show promise on the battlefield, developers, integrators and acquisition personnel must overcome engineering challenges to ensure safe and reliable operation. Currently, there are no established standards for testing and measuring calibrated trust in AI systems.

In 2023, Carnegie Mellon University’s Software Engineering Institute (SEI) and the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) launched a center aimed at establishing methods for ensuring trustworthiness in AI systems, with an emphasis on the interaction between humans and autonomous systems. The Center for Calibrated Trust Measurement and Evaluation (CaTE) aims to help the Department of Defense (DOD) ensure that AI systems are safe, reliable and trustworthy before they are fielded to government users in critical situations.

Since launching, CaTE has embarked on a multiyear project addressing the complexity and engineering challenges associated with AI systems. Drawing on software, systems and AI engineering practices, the center is developing standards, methods and processes for producing assurance evidence, along with measures for determining calibrated levels of trust.
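
The article does not describe CaTE’s specific measures. As a generic, minimal sketch of what a calibration measure can look like, the snippet below computes expected calibration error (ECE), a standard metric from the machine-learning literature that compares a model’s stated confidence with its observed accuracy. The function name and example data are hypothetical and are not drawn from CaTE’s work.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Illustrative (hypothetical) calibration measure: expected calibration error.

    confidences: model-reported probabilities for its predictions (0..1)
    correct: 1 if the corresponding prediction was right, 0 otherwise

    A well-calibrated model is right about 80% of the time when it reports
    80% confidence; ECE measures the gap between confidence and accuracy,
    weighted by how many predictions fall in each confidence bin.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the bin's share of samples
    return ece

# Example: a model that is systematically overconfident (hypothetical data).
conf = [0.9, 0.9, 0.9, 0.9, 0.6, 0.6]
hits = [1,   0,   1,   0,   1,   0]
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")  # ~0.300
```

A low ECE indicates that reported confidence tracks real-world accuracy, which is one quantitative way to ground the "calibrated trust" a human operator places in a system; CaTE’s actual standards and measures may differ.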

“The human has to understand the capabilities and limitations of the AI system to use it responsibly,” said Kimberly Sablon, the principal director for trusted AI and autonomy within OUSD(R&E). “CaTE will address the dynamics of how systems interact with each other, and especially the interactions between AI and humans, to establish trusted decisions in the real world. We will identify case studies where AI can be experimented with and iterated in hybrid, live, virtual and constructive environments with the human in the loop.”

CaTE is a collaborative research and development center that will work with all military branches on areas such as human-machine teaming and measurable trust. It is the first such hub led by a nongovernmental organization. CMU has been at the epicenter of AI, from the creation of the first AI computer program in 1956 to pioneering work in self-driving cars and natural language processing.

“Developing and implementing AI technologies to keep our armed forces safe is both a tremendous responsibility and a tremendous privilege,” said CMU President Farnam Jahanian. “Carnegie Mellon University is grateful to have the opportunity to support the DOD in this work and eager to watch CaTE quickly rise to the forefront of leveraging AI to strengthen our national security and defense.”

Together with OUSD(R&E) collaborators and partners in industry and academia, SEI researchers will lead the initiative to standardize AI engineering practices, assuring safe human-machine teaming in the context of DOD mission strategy.

“When military personnel are deployed in harm’s way, it’s of the utmost importance to give them not only the greatest capability but also the assurance that the AI and autonomous systems they depend on are safe and reliable,” said Paul Nielsen, SEI director and CEO. “Because of our work to define the discipline of AI engineering for robust, secure, human-centered and scalable AI, the SEI is uniquely positioned to support this effort.”

For more information about the SEI’s AI engineering research, visit sei.cmu.edu/our-work/artificial-intelligence-engineering/index.cfm.
