Training simulations that currently support Small Unit Decision-Making (SUDM) training are laborious to configure and expensive to staff with live personnel, which limits the scope of the training they can deliver. Current simulations require numerous “pucksters” to control simulated entities, driving up the manpower cost of conducting simulation-based training. To fill the roles of pucksters in simulation-based training environments, Charles River Analytics proposes to design and demonstrate Simulated Teachable Agents for Training Environments (STATE). STATE features (1) procedural generation of virtual terrain and a terrain reasoning application programming interface (API) to create a robust testing environment for the virtual agents; (2) a computer-generated forces engine based on recognition-primed decision making; and (3) agent behavior learning algorithms that combine Bayesian reasoning, Monte Carlo Tree Search, and Deep Reinforcement Learning. The result of the STATE effort will be a suite of tools that empowers trainers to improve agent behaviors for simulation-based training and scales to meet future training needs at low cost.
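
To make feature (3) more concrete, the sketch below shows a minimal, self-contained UCT-style Monte Carlo Tree Search planner of the kind an agent behavior learner might build on: it selects a movement action for a single simulated entity on a toy grid. The names used here (GridState, Node, mcts_plan) and the grid world itself are illustrative assumptions for this sketch only, not part of STATE or its actual engine.

import math
import random
from dataclasses import dataclass, field

ACTIONS = ["north", "south", "east", "west", "hold"]
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0),
         "west": (-1, 0), "hold": (0, 0)}

@dataclass(frozen=True)
class GridState:
    # Toy stand-in for a simulated entity's situation: position on a small grid
    # with a fixed objective location.
    x: int
    y: int
    goal: tuple = (4, 4)
    size: int = 5

    def step(self, action):
        dx, dy = MOVES[action]
        nx = min(max(self.x + dx, 0), self.size - 1)
        ny = min(max(self.y + dy, 0), self.size - 1)
        return GridState(nx, ny, self.goal, self.size)

    def reward(self):
        # Higher (less negative) reward the closer the entity is to its objective.
        return -(abs(self.x - self.goal[0]) + abs(self.y - self.goal[1]))

@dataclass
class Node:
    state: GridState
    parent: "Node" = None
    action: str = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def untried_actions(self):
        tried = {c.action for c in self.children}
        return [a for a in ACTIONS if a not in tried]

def uct_select(node, c=1.4):
    # Pick the child balancing exploitation (mean value) and exploration.
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state, depth=10):
    # Random playout to estimate the value of a leaf state.
    for _ in range(depth):
        state = state.step(random.choice(ACTIONS))
    return state.reward()

def mcts_plan(root_state, iterations=500):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried_actions() and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried action as a new child.
        untried = node.untried_actions()
        if untried:
            action = random.choice(untried)
            child = Node(node.state.step(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout from the new node.
        result = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += result
            node = node.parent
    # Return the most-visited first action from the root.
    return max(root.children, key=lambda ch: ch.visits).action

print(mcts_plan(GridState(0, 0)))  # e.g. "north" or "east"

In a fuller system, the random rollout would typically be replaced or guided by learned value estimates (for example, from deep reinforcement learning), which is one way the techniques named above can be combined; that combination is not shown in this sketch.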