Patent attributes
The invention features a recognition environment that combines the complementary strengths of automated feature-signature analysis and human perception into a synergistic data- and information-processing system for scene structure modeling and testing, object extraction, object linking, and event/activity detection from multi-source sensor data and imagery in both static and time-varying formats. Scene structure modeling and testing use quantifiable, implementable human-language keywords. The invention implements real-time terrain categorization and situational awareness, together with dynamic ground control point selection and evaluation, in a Virtual Transverse Mercator (VTM) geogridded Equi-Distance system (ES) environment. The system can be applied to video imagery to define and detect objects/features, events, and activities. By adapting the video-imagery analysis technology to multi-source data, the invention fuses multi-source data without first registering the sources to one another via geospatial ground control points.
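The text does not specify the parameters of the VTM geogrid, so the following is only a minimal sketch of the general idea of grid-keyed fusion: detections from different sensors are mapped into cells of a fixed-spacing equidistant geogrid, and detections that land in the same cell are grouped without any pairwise registration step. All names, the grid spacing, and the equirectangular approximation used here are hypothetical illustrations, not the patent's actual VTM definition.

```python
import math
from collections import defaultdict

EARTH_RADIUS_M = 6371000.0  # mean Earth radius (approximation)

def geogrid_cell(lat, lon, spacing_m=1000.0, ref_lat=0.0):
    """Map a lat/lon (degrees) to an integer grid-cell index.

    Hypothetical stand-in for a VTM geogrid: an equirectangular
    (equidistant cylindrical) approximation with fixed cell spacing.
    """
    x = EARTH_RADIUS_M * math.radians(lon) * math.cos(math.radians(ref_lat))
    y = EARTH_RADIUS_M * math.radians(lat)
    return (math.floor(x / spacing_m), math.floor(y / spacing_m))

def fuse_by_cell(detections, spacing_m=1000.0):
    """Group detections from multiple sensors by shared geogrid cell.

    Each detection is (sensor_id, lat, lon); no image-to-image
    registration is performed -- the shared grid is the common frame.
    """
    cells = defaultdict(list)
    for sensor_id, lat, lon in detections:
        cells[geogrid_cell(lat, lon, spacing_m)].append(sensor_id)
    return dict(cells)

# Two sensors reporting nearby observations fall in the same cell:
detections = [
    ("video", 0.0005, 0.0005),   # ~55 m from origin
    ("radar", 0.0008, 0.0002),   # same 1 km cell
    ("video", 0.0100, 0.0100),   # a different cell
]
fused = fuse_by_cell(detections)
```

In this sketch the grid cell index itself serves as the common geospatial key, which is the sense in which sources can be associated without selecting ground control points and registering one source to another.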