Responsible AI Toolbox
Creating an artificial intelligence system with the accuracy needed for a given application is already a difficult endeavor. Developing reliable field-ready AI systems is even more challenging because AI algorithms are brittle and vulnerable to both intentional attacks and naturally arising conditions that can make systems behave in unexpected ways. Improved techniques and software support are needed to help researchers engineer robust and resilient AI systems and rigorously evaluate these systems both before and during operational deployment.
To address this need, Lincoln Laboratory staff working on the AI Systems Engineering and Reliability Technologies (ASERT) and Robust AI Development ENvironment (RAIDEN) projects are developing the open-source Responsible AI Toolbox to equip Laboratory researchers and the broader Department of Defense (DoD) and academic communities with tools for designing, evaluating, and monitoring AI systems. Toolbox components are modular and easy to integrate into existing systems, giving the techniques they implement a lower barrier to entry, greater interoperability, and a longer shelf life than typical research-quality or academic implementations, which often are not designed to compose with other libraries.
The foundation of the toolbox is a component called “hydra-zen” that works with Facebook’s Hydra library to facilitate configurable, reproducible, and scalable workflows, such as orchestrating complex machine learning experiments. Additional components that contain tools for evaluating and enhancing both the robustness and the explainability of AI models are gathered in the “rAI-toolbox” library. We plan to expand the scope of the toolbox in the future to help address other areas that fall under the umbrella of responsible and ethical AI, including fairness and environmental sustainability.