

To develop an AI explainability tool that can dissect deep neural net computer vision (DNNCV) algorithms (including object detectors and segmenters) and provide detection justifications to users without AI expertise.
Explainable AI is necessary for fully understanding DNNCV algorithms and for designing systems that employ them. It indicates which features of a detected object drive classification and, for missed detections, can reveal which features led to the non-detection or misclassification. This has military implications for DNNCV algorithms being developed and fielded, as well as utility for efforts to develop countermeasures that defeat adversary AI. Commercial developers would benefit from the same kinds of tools for developing and implementing DNNCV algorithms, and there is additional commercial potential in exposing explainability to end users to build AI literacy and consumer trust.
AI explainability tools currently exist that help practitioners identify and analyze the features behind the classifications made by object detection algorithms. They can also reveal inputs that cause an algorithm to return erroneous predictions and help the user understand why. However, these tools are often designed to deliver in-depth analysis tailored to an audience of AI practitioners, which limits their user base to AI subject matter experts (SMEs). With the proliferation of AI tools, and in the interest of providing end users with explainable and responsible AI, explainability that is more widely understandable is essential. There is also a need to extend such tools to video, where unexpected temporal dynamics can lead to erroneous model predictions.
To address these limitations, the goal of this topic is to develop an AI explainability tool that can be used to dissect DNNCV algorithms and provide detection justifications that are accessible to general users without AI expertise. The user should be able to input a DNNCV algorithm and a ground-truth-labeled image or video data set and receive an explanation of the key features that lead to each object detection. The intent is to make AI explainability broadly accessible to non-experts.
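As a rough, non-prescriptive illustration of the kind of feature attribution this topic asks for, the Python sketch below estimates which image regions most influence a detection by occluding patches and measuring the drop in the model's confidence. The `detector` callable and its signature are assumptions made for the example only; no particular framework or interface is mandated by the topic.

```python
# Illustrative only: occlusion-based saliency for a single detection.
# `detector(image, target_class)` is a hypothetical callable returning a
# confidence score in [0, 1] for the target class on the given image.
import numpy as np

def occlusion_saliency(image, detector, target_class, patch=16, stride=8):
    """Estimate per-region importance by masking patches and measuring
    the drop in the detector's confidence for the target class."""
    h, w = image.shape[:2]
    baseline = detector(image, target_class)        # unmasked confidence
    heat = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0    # occlude one patch
            drop = baseline - detector(masked, target_class)
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)             # average importance per pixel
```

Regions with high average confidence drop are the ones the detector relied on most; this is one of many black-box attribution approaches a proposer might adapt.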
The tool should be a standalone software package that can be approved to run on a government workstation. Open-source code and toolboxes should be used when applicable. It should conform to DoD cybersecurity requirements and not require connection to any external services or resources.
It is preferred, though not required, to begin with an existing open-source toolkit (e.g., DARPA's Explainable AI Toolkit (XAITK)) and build on it as a foundation for refining the existing state of the art; this will help produce a working capability within the short Phase I schedule. If the proposer opts not to start with a pre-existing toolkit, the technical plan must clearly explain how the Phase I requirements will be met within the 6-month timeframe with minimal programmatic risk.
This topic is accepting Phase I proposals for a cost up to $250,000 for a 6-month period of performance.
Deliver a generalized algorithm dissection tool. The user should be able to input any deep neural net computer vision algorithm and an image or video data set (real or synthetic) for analysis. The tool should identify the features that lead to object detection and classification. For missed detections (false negatives), the tool, using ground truth labels, should identify what features led to the missed detection. The contractor shall develop a general interface between the tool and any type of AI object detector, and shall define the outputs needed from a general object detector for the tool to extract salient insights on the detector's function. In Phase I, there is no requirement to simplify communication of the features for a non-expert audience or to correlate features across images within the data set. The tool should be delivered as a standalone package that can be approved to run on a government workstation.
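The exact interface is left to the contractor. As an assumption-laden sketch of what the "outputs needed from a general object detector" might look like, the Python fragment below defines a minimal adapter contract; all names (Detection, DetectorAdapter, class_scores) are hypothetical illustrations, not a specification from this topic.

```python
# Hypothetical adapter contract between the dissection tool and any detector.
from dataclasses import dataclass
from typing import List, Protocol
import numpy as np

@dataclass
class Detection:
    box_xyxy: tuple   # (x1, y1, x2, y2) in pixel coordinates
    label: str        # predicted class name
    score: float      # confidence in [0, 1]

class DetectorAdapter(Protocol):
    """Outputs the dissection tool would need from any wrapped detector."""

    def detect(self, image: np.ndarray) -> List[Detection]:
        """Run inference on one image and return all detections."""
        ...

    def class_scores(self, image: np.ndarray, box_xyxy: tuple) -> dict:
        """Return per-class confidence for a region, enabling perturbation-based
        saliency methods that probe how scores change when the input changes."""
        ...
```

Wrapping each candidate detector behind such an adapter is one way to keep the dissection logic detector-agnostic.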
Deliver a generalized algorithm dissection tool that can combine analysis from images and videos across a data set and communicate it to an end user who is not an AI practitioner or SME (e.g., by providing simplified output such as a heat map or text description). The contractor shall develop a general, simple output that effectively communicates to non-SMEs the salient features of an image that any detector most heavily uses to make a detection determination. The tool should also allow AI practitioners to see a more detailed analysis (e.g., visualization of model gradients or individual CNN filters). The tool should be delivered as a standalone package that can be approved to run on a government workstation.
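One plausible form for the simplified, non-SME-facing output is a saliency heat map overlaid on the input image. The sketch below assumes a saliency array produced by an upstream attribution step (such as the occlusion example above); it is illustrative only, not a required output format.

```python
# Sketch of a simplified end-user view: overlay a normalized saliency map
# on the original image. Assumes `image` (H x W x 3) and `saliency` (H x W)
# come from an earlier attribution step; both are example inputs.
import matplotlib.pyplot as plt

def show_explanation(image, saliency, title="Regions the detector relied on most"):
    s = saliency - saliency.min()
    s = s / (s.max() + 1e-8)                 # normalize to [0, 1]
    plt.imshow(image)
    plt.imshow(s, cmap="jet", alpha=0.4)     # translucent heat map overlay
    plt.title(title)
    plt.axis("off")
    plt.show()
```

A plain-language caption or text summary could accompany such an overlay for non-SMEs, while the detailed practitioner view exposes the underlying gradients or filter activations.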
The end state for this effort is a standalone software tool that provides generalized algorithm dissection on an input object detector and image data set. It would directly support military S&T programs that are developing and implementing object detectors, as well as efforts to develop countermeasures to defeat adversary AI. It could be used on the developer side to support and improve the effectiveness of these tools, or be incorporated into them to provide outputs that are exposed to end users (e.g., Soldiers), improving their ability to provide good inputs and to trust the outputs.
There are two primary commercialization paths for a generalized algorithm dissection tool that can provide accessible AI explainability:
It could be marketed to developers of systems that include object detection, to help them understand and improve their algorithms and implementations. This spans hardware applications such as autonomous cars, unmanned aerial systems, and security cameras, as well as software applications such as Google Lens.
Alternatively, it could be marketed as a component that developers integrate into their products to expose the results to end users. This would build AI literacy and improve understanding of how such products function. It would help end users understand how object detection algorithms work, improving how they interact with AI, both by equipping them to provide better inputs to the algorithms and by building consumer trust in the outputs.
For more information, and to submit your full proposal package, visit the DSIP Portal.
View the SBIR Component Instructions. View the STTR Component Instructions.
SBIR|STTR Help Desk: usarmy.sbirsttr@army.mil
KEYWORDS: Explainable AI; Responsible AI; Object Detectors; Segmentation Algorithms; Computer Vision