The new Assessing Risks and Impacts of AI (ARIA) program will assess the societal risks and impacts of artificial intelligence systems, including what happens when people interact with AI regularly in real-world settings. The program is intended to improve understanding of artificial intelligence’s capabilities and impacts. The results of ARIA will support the U.S. AI Safety Institute and help lay the groundwork for trustworthy AI systems.
This is one of several recent announcements NIST has made to mark the 180-day milestone of the Executive Order on trustworthy AI. ARIA builds on the AI Risk Management Framework, released in January 2023, which helps organizations assess AI risks and impacts through a set of methodologies and metrics for measuring how well a system can maintain safe functionality within societal contexts.