The proposed effort aims to develop a Deep Learning Framework for Waveband-Specific Global Scene Background Generation to address the US Air Force’s need for a global, multispectral scene generation capability. Using deep neural networks and publicly available Geographic Information System (GIS) data, we propose to develop a tool for background database generation that automates the extraction of available GIS data at a user-defined Region of Interest (ROI) on Earth, decomposes radiometric properties and content from a range of sensor data, and transfers the radiometric properties of the sensor data to the GIS data extracted at the ROI.

The salient aspects of the proposed solution are (1) interfacing with GIS database tools for automated extraction of basemap data; (2) semantic segmentation of GIS imagery into material masks and association of known physical properties with those masks; (3) super-resolution applied to the source data to enhance imagery, simulating the reduced ground sample distance of the sensor data; and (4) automated transfer of radiometric properties from target-domain sensor data to the source GIS data. A minimal sketch of this pipeline is given below.

In Phase I, key components will be developed, including a Dataset Generation Module, an Image Enhancement and Segmentation Module, and a Waveband Transfer Module. Feasibility will be demonstrated through case studies of US Air Force interest, beginning with demonstrations of the framework’s functionality on labeled datasets for initial verification, followed by performance assessment on relevant sensor data provided by the Air Force. The Phase II effort will focus on capability extension, algorithm optimization, prototype maturation, integration of the tool into FLITES, extensive validation and demonstration, and technology insertion into Air Force workflows.
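The sketch below illustrates how the three proposed modules could compose into an end-to-end pipeline. It is illustrative only: every function and table name (extract_basemap, super_resolve, segment_materials, transfer_waveband, MATERIALS, EMISSIVITY) is a hypothetical placeholder assumed for this summary, not the framework’s actual interface, and NumPy stand-ins replace the GIS services and trained networks the proposal describes.

```python
"""Hypothetical sketch of the proposed three-module pipeline.

All names and interfaces are assumptions for illustration; NumPy arrays
stand in for real GIS tiles, trained networks, and radiometric models.
"""
import numpy as np

rng = np.random.default_rng(0)

# --- Dataset Generation Module (sketch) ------------------------------------
def extract_basemap(roi_bounds):
    """Stand-in for automated GIS extraction: returns an RGB basemap chip
    for an ROI given as (lon_min, lat_min, lon_max, lat_max)."""
    return rng.random((256, 256, 3)).astype(np.float32)

# --- Image Enhancement and Segmentation Module (sketch) --------------------
def super_resolve(image, scale=2):
    """Placeholder for a learned super-resolution model: nearest-neighbor
    upsampling that simulates a reduced ground sample distance."""
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

MATERIALS = {0: "vegetation", 1: "asphalt", 2: "water", 3: "soil"}

def segment_materials(image):
    """Placeholder for a semantic-segmentation network: assigns each pixel
    a material class and returns one boolean mask per material."""
    labels = rng.integers(0, len(MATERIALS), size=image.shape[:2])
    return {name: labels == k for k, name in MATERIALS.items()}

# --- Waveband Transfer Module (sketch) -------------------------------------
# Illustrative physical properties associated with each material mask.
EMISSIVITY = {"vegetation": 0.98, "asphalt": 0.95, "water": 0.99, "soil": 0.92}

def transfer_waveband(image, masks):
    """Placeholder for learned radiometric transfer: maps the RGB basemap
    to a single-band, radiance-like image via per-material emissivities."""
    out = np.zeros(image.shape[:2], dtype=np.float32)
    intensity = image.mean(axis=2)
    for name, mask in masks.items():
        out[mask] = EMISSIVITY[name] * intensity[mask]
    return out

if __name__ == "__main__":
    basemap = extract_basemap(roi_bounds=(-117.2, 32.7, -117.1, 32.8))
    enhanced = super_resolve(basemap, scale=2)
    masks = segment_materials(enhanced)
    ir_background = transfer_waveband(enhanced, masks)
    print(ir_background.shape, ir_background.dtype)  # (512, 512) float32
```

In the framework itself, each placeholder would presumably be replaced by its learned counterpart (a segmentation network behind segment_materials, a super-resolution model behind super_resolve, and a radiometric-transfer model behind transfer_waveband) while the orchestration shown here remains the same.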