SBIR/STTR Award attributes
Self-driving cars are coming. Building on the successes of the DARPA Grand Challenge and Urban Challenge autonomous vehicle competitions, commercial companies such as Tesla, Uber, and Google have invested hundreds of millions of dollars and accumulated millions of road hours of autonomous operation. Meanwhile, driver safety aids that detect lane departures and vehicles in the driver's blind spot, and that sense and avoid rear-end and back-up collisions, are available now in most new cars. Today's Army vehicles have no such capabilities to provide drivers with automated or aided obstacle avoidance, threat detection, and navigation. The Deep R-CNN for Infrared Video Exploitation (DRIVE) system will take advantage of recent advances in deep-learning Convolutional Neural Networks (CNNs) to achieve pixel-level scene labeling from a fused pipeline whose results exceed those of any single modality. DZYNE's approach will fuse sensor data at the feature level within the CNN, with proper adaptation, in order to exploit the diversity of information provided by IR imagery, depth maps derived from IR video, and other data sources when available. Accurate real-time scene labeling is a key enabling technology for Cognitive Processing for Autonomy applications.
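The feature-level fusion idea described above can be sketched in a minimal, self-contained form: each modality (here, a hypothetical single-channel IR frame and an IR-derived depth map) passes through its own convolutional feature extractor, the resulting feature maps are concatenated along the channel axis, and a 1x1 convolutional head maps the fused features to per-pixel class scores. All shapes, channel counts, and weights below are illustrative assumptions, not the DRIVE architecture.

```python
import numpy as np

def conv2d(x, w):
    """Naive same-padding 2-D convolution.
    x: (C_in, H, W) input feature maps; w: (C_out, C_in, kH, kW) filters."""
    c_out, c_in, kh, kw = w.shape
    pad = kh // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(kh):
                for dx in range(kw):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + H, dx:dx + W]
    return out

rng = np.random.default_rng(0)
H, W = 8, 8
ir    = rng.standard_normal((1, H, W))   # single-channel IR frame (toy data)
depth = rng.standard_normal((1, H, W))   # depth map derived from IR video (toy data)

# Modality-specific feature extractors (one 3x3 conv layer each, ReLU activation).
w_ir    = rng.standard_normal((4, 1, 3, 3)) * 0.1
w_depth = rng.standard_normal((4, 1, 3, 3)) * 0.1
f_ir    = np.maximum(conv2d(ir, w_ir), 0.0)
f_depth = np.maximum(conv2d(depth, w_depth), 0.0)

# Feature-level fusion: concatenate feature maps along the channel axis.
fused = np.concatenate([f_ir, f_depth], axis=0)   # shape (8, H, W)

# 1x1 conv head maps fused features to per-pixel class logits, then argmax
# over channels yields a pixel-level label map.
n_classes = 3
w_head = rng.standard_normal((n_classes, 8, 1, 1)) * 0.1
logits = conv2d(fused, w_head)                    # shape (n_classes, H, W)
labels = logits.argmax(axis=0)                    # shape (H, W)
```

In a real deep network each branch would be many layers and the fusion weights would be learned end-to-end; the point of the sketch is only that fusion happens on intermediate feature maps rather than on raw pixels or final decisions.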

