Human-Machine Teaming and Understanding

Building trust and establishing patterns of collaboration are core requirements for AI and autonomous agents to function effectively in the battlespace of the 21st century and beyond. The proliferation of sophisticated sensor and computing systems has created opportunities to augment human performance with new capabilities and to improve machines’ understanding of their human partners so that they can better safeguard their well-being and distribute workloads to execute missions efficiently.

AI/ML on Edge for Situational Awareness

This project seeks to develop algorithms and methodologies that provide greater reasoning and situational awareness through the use and contextualization of heterogeneous data modalities from various sensors and devices. Anticipated Outcomes: semantic understanding algorithms to explain how complex, real-world scenarios unfold, including how participants may experience changes of physical or emotional state; speech technologies, including speaker identification, capable of handling ARL-relevant scenarios; a demonstration of AI/ML on the edge for wide-area surveillance; a multimodal transformer trained via federated learning for object detection and tracking tasks; and novel training methods for personalized federated learning models.
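
As a rough illustration of the personalized federated learning idea above, the sketch below shows a FedAvg-style aggregation in which each client shares only the "body" of its model and keeps a personalized head on-device. The function, shapes, and client counts are illustrative assumptions, not details of the project's algorithms.

```python
import numpy as np

def fed_avg(client_bodies, client_weights):
    """Weighted average of the shared (body) parameters across clients."""
    total = sum(client_weights)
    return sum(w * p for w, p in zip(client_weights, client_bodies)) / total

# Hypothetical setup: three clients share a small body matrix; each keeps a
# personalized head that is never sent to the server.
rng = np.random.default_rng(0)
bodies = [rng.normal(size=(4, 2)) for _ in range(3)]
heads = [rng.normal(size=(2, 1)) for _ in range(3)]   # stays on-device
samples_per_client = [120, 80, 200]

global_body = fed_avg(bodies, samples_per_client)
bodies = [global_body.copy() for _ in range(3)]       # next local round starts here
```

Keeping the heads local is one common way to personalize federated models while still benefiting from shared representation learning across edge devices.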

Human-Machine Teaming

This project seeks to effectively combine human intelligence, particularly flexibility and adaptability, with machine performance, such as speed and accuracy, to create a partnership that can surpass the capabilities of either humans or machines in isolation. Anticipated Outcomes: the ability to specify complex goals using language, with domain randomization adapted to facilitate generalization across different environments; an object detection model fine-tuned using hierarchical reinforcement learning with simulated and real images; a pipeline for multi-agent reinforcement learning systems that leverages input from human trainers so that agents can communicate and share knowledge with one another; a graph-based architecture to learn low-level goals; and simulated and experimental demonstrations on real-world hardware at R2C2 for location and perspective matching, coordinated scanning, and lightweight scene reconstruction.
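
To make the human-in-the-loop aspect concrete, the sketch below shows one common way (not necessarily the project's) to fold a human trainer's scalar critique into a tabular Q-learning update used by multiple independent learners. The state and action names are hypothetical.

```python
from collections import defaultdict

def q_update(q, state, action, env_reward, human_feedback, next_state,
             actions, alpha=0.1, gamma=0.95, beta=0.5):
    """One TD update in which beta weights the (hypothetical) human critique."""
    shaped = env_reward + beta * human_feedback
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (shaped + gamma * best_next - q[(state, action)])

# Hypothetical usage: two agents, each with its own Q-table keyed by (state, action).
actions = ["advance", "hold", "flank"]
q_tables = [defaultdict(float), defaultdict(float)]
q_update(q_tables[0], state="ridge", action="flank",
         env_reward=0.2, human_feedback=1.0,      # trainer approves the maneuver
         next_state="cover", actions=actions)
```

Blending trainer feedback into the reward signal is only one of several ways human input can steer multi-agent learning; demonstrations and preference rankings are alternatives.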

Perception-Based Teaming

This project seeks to incorporate and evaluate new technologies and software modules for aerial video recognition into the ARL aerial autonomy stack and to implement them on edge hardware. Anticipated Outcomes: investigation of techniques based on differential simulation and improved sampling; generation and evaluation of synthetic datasets using deep learning methods; investigation of the domain gap between real and synthetic datasets; optimization of resource-constrained machine learning models for pre-deployment adaptation by leveraging hybrid sets of real and synthetic data; techniques to estimate the pose of objects and articulated bodies from aerial videos; lightweight and accurate mobile architectures with low memory and power overhead; and implementation and evaluation using multiple datasets with the ARL aerial autonomy stack.
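
A minimal sketch of the hybrid real/synthetic adaptation idea follows: batches are drawn from both pools at a fixed ratio before fine-tuning a deployed model. The ratio, batch size, and placeholder file names are assumptions for illustration only.

```python
import random

def hybrid_batches(real, synthetic, batch_size=8, real_fraction=0.5, seed=0):
    """Yield batches that mix real and synthetic samples at a fixed ratio."""
    rng = random.Random(seed)
    n_real = int(batch_size * real_fraction)
    while True:
        batch = rng.sample(real, n_real) + rng.sample(synthetic, batch_size - n_real)
        rng.shuffle(batch)
        yield batch

# Hypothetical usage with placeholder image identifiers.
real_images = [f"real_{i}.png" for i in range(100)]
synth_images = [f"synth_{i}.png" for i in range(1000)]
first_batch = next(hybrid_batches(real_images, synth_images))
```

Controlling the real-to-synthetic ratio during pre-deployment adaptation is one simple lever for managing the domain gap noted above.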

Metareasoning to Improve Team Performance

This project will develop new metareasoning algorithms and policies for autonomous agents and will use simulated and field experiments with air and ground robots to evaluate their performance in various scenarios. Anticipated Outcomes: metareasoning algorithms and policies that adapt planning and other algorithms on mobile ground and aerial robots; simulated and experimental results showing how metareasoning policies affect performance, including in the presence of obstacles and/or uncertainty; and an improved ability to efficiently use onboard computational resources with limited size, weight, and power for planning and reasoning.
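
The sketch below illustrates the flavor of such a metareasoning policy: picking which planner to run from simple runtime features such as remaining compute budget and perceived clutter. The planner names, costs, and utility model are illustrative assumptions, not the project's policies.

```python
def select_planner(planners, obstacle_density, cpu_budget_ms):
    """Pick the planner with the best expected plan quality that fits the budget."""
    feasible = [p for p in planners if p["expected_ms"] <= cpu_budget_ms]
    if not feasible:                                   # fall back to the cheapest option
        return min(planners, key=lambda p: p["expected_ms"])["name"]
    def utility(p):
        # Expected quality degrades with clutter unless the planner handles it well.
        return p["base_quality"] - obstacle_density * p["clutter_penalty"]
    return max(feasible, key=utility)["name"]

planners = [
    {"name": "greedy_local",   "expected_ms": 10,  "base_quality": 0.5, "clutter_penalty": 0.6},
    {"name": "lattice_search", "expected_ms": 60,  "base_quality": 0.8, "clutter_penalty": 0.4},
    {"name": "sampling_based", "expected_ms": 150, "base_quality": 0.9, "clutter_penalty": 0.1},
]
print(select_planner(planners, obstacle_density=0.5, cpu_budget_ms=80))  # lattice_search
```

In practice the utilities would be learned or estimated online, but the structure, reasoning about which reasoning to perform under size, weight, and power limits, is the same.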

Human-Machine Teaming & Effective Aggregation of Information in Complex Systems

This project will perform systems-engineering research integrating cognitive science, human factors, team sciences, and artificial intelligence to address challenges in directable AI and information integration. Specifically, it will develop testing methodologies and artifacts to assess the performance of human-machine teaming. Anticipated Outcomes: a prototype algorithm to rapidly communicate the state of an autonomous agent and the upcoming goals and actions of the agent such that a human could direct the agent when necessary; information displays to distill uncertain situational data in order to facilitate rapid comprehension and incorporation into situation awareness; and an interface with the ARL autonomy stacks and the ARL cross-reality common operating environment.
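
As a sketch of the kind of message a state-communication algorithm might emit to keep an agent directable, the data structure below captures current task, upcoming goals, and a flag prompting human intervention. The field names and schema are illustrative assumptions, not the project's interface.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AgentStatus:
    agent_id: str
    current_task: str
    next_goals: list = field(default_factory=list)   # ordered upcoming goals
    confidence: float = 1.0                           # agent's self-assessed confidence
    needs_direction: bool = False                     # prompts the human to intervene

status = AgentStatus("uav-3", "search sector B",
                     next_goals=["relay imagery", "return to rally point"],
                     confidence=0.62, needs_direction=True)
print(asdict(status))   # serialized for an operator display
```

A compact, regularly refreshed status message of this kind is one way to let an operator maintain situation awareness without monitoring raw telemetry.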

Inducing Intelligent Behaviors into Next Generation Combat Vehicles via Multi-Agent Reinforcement Learning

This project seeks to enable learning of novel tactical behaviors in collaborative team settings for command and control of next-generation combat vehicles using multi-agent reinforcement learning (MARL). Anticipated Outcomes: novel algorithms for learning with limited experience; novel collaborative behaviors among agents; novel policy-gradient algorithms for continuous state-action spaces; global optimality analysis; platform simulations; and real-world experiments at R2C2.
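
For readers unfamiliar with policy-gradient methods in continuous state-action spaces, the sketch below shows a single-agent REINFORCE update with a Gaussian policy and a linear mean; it is a simplified stand-in for the project's MARL algorithms, and all shapes and hyperparameters are assumptions.

```python
import numpy as np

def reinforce_step(W, trajectory, sigma=0.3, lr=1e-2, gamma=0.99):
    """trajectory: list of (state, action, reward); W maps state -> action mean."""
    returns, G = [], 0.0
    for _, _, r in reversed(trajectory):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    grad = np.zeros_like(W)
    for (s, a, _), G in zip(trajectory, returns):
        mu = W @ s
        grad += G * np.outer((a - mu) / sigma**2, s)   # grad of log N(a; mu, sigma^2)
    return W + lr * grad / len(trajectory)

# Hypothetical usage: 4-D state, 2-D continuous action (e.g., throttle, steering).
rng = np.random.default_rng(1)
W = np.zeros((2, 4))
traj = [(rng.normal(size=4), rng.normal(size=2), 1.0) for _ in range(10)]
W = reinforce_step(W, traj)
```

Extending this to multiple agents typically adds per-agent policies with a shared or centralized critic, which is where the collaborative behaviors named above arise.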

Causal Reasoning for Autonomous Systems

This project seeks to investigate AI inference techniques to find and interpret relevant causal relationships between the features and outcomes of various autonomous systems, which can then be exploited strategically to optimize outcomes under distributional shifts and adversarial conditions in which the enemy is unpredictable. Anticipated Outcomes: learning paradigms that use disentangled information of core and spurious features in their predictions; deep models using core and/or contextual features; techniques for multi-modal interpretations of model predictions; and robustness analysis of deep models against spurious correlations.
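
One simple diagnostic in this spirit (an assumed setup, not the project's method) is to measure how much a model's predictions move when a suspected spurious feature is shuffled: a model relying on core features should be nearly unaffected.

```python
import numpy as np

def spurious_sensitivity(predict, X, spurious_col, seed=0):
    """Mean absolute change in predictions after shuffling one feature column."""
    rng = np.random.default_rng(seed)
    X_perm = X.copy()
    X_perm[:, spurious_col] = rng.permutation(X_perm[:, spurious_col])
    return float(np.mean(np.abs(predict(X) - predict(X_perm))))

# Hypothetical linear model that leans heavily on the spurious column.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
w = np.array([1.0, 0.1, 0.0, 0.0, 3.0])       # column 4 is spurious but heavily weighted
print(spurious_sensitivity(lambda X: X @ w, X, spurious_col=4))
```

Low sensitivity under such interventions is a necessary (though not sufficient) sign that a model will hold up under the distributional shifts described above.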

Collaborative Decision Making Through Automated Reasoning Over Documents

This project will develop a new set of methods that integrate human and machine intelligence to augment decision-making through explainable automated reasoning. It will enable more efficient and accurate sorting and annotation of documents containing natural language text and image data. Anticipated Outcomes: novel techniques for robustly integrating external knowledge into explanations, including a deep learning explanation system that highlights background from external documents and knowledge sources with uncertainty; a user study to evaluate systems according to accuracy, reliance, and trust; and a user interface that collects user feedback to inform the training of the explanation system and lets users control what information is displayed.
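
A rough sketch of one piece of such a pipeline appears below: scoring sentences from an external document as candidate background for an explanation and attaching a crude uncertainty estimate. The scoring function (simple word overlap) is an assumption for illustration, not the project's deep learning explanation system.

```python
def rank_background(claim, document_sentences, top_k=3):
    """Return (sentence, relevance, uncertainty) tuples for the best matches."""
    claim_words = set(claim.lower().split())
    scored = []
    for sent in document_sentences:
        overlap = len(claim_words & set(sent.lower().split()))
        score = overlap / max(len(claim_words), 1)
        scored.append((sent, score, 1.0 - score))
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]

doc = ["The convoy route crosses two bridges.",
       "Bridge B was reported damaged last week.",
       "Weather is expected to be clear."]
print(rank_background("Is the route over bridge B passable?", doc))
```

Surfacing the retrieved background together with an uncertainty estimate gives users something concrete to accept, reject, or correct, which is the feedback the interface above is meant to collect.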

Conceptualizing and Assessing AI Technological Fluency

This project seeks to improve the measurement of a person’s ability to use and adapt to advanced technologies like machine-learning-enabled assistants, autonomous agents, and multi-agent systems. Anticipated Outcomes: a human interface to interact with existing ARL autonomy stacks in order to create assessments that can be administered on appropriate hardware; an assessment rubric based on a model of AI technological fluency; and an experimental design and candidate testbed for evaluating assessments.

Multi-Agent Reinforcement Learning for Command and Control

This project seeks to develop novel multi-agent reinforcement learning (MARL) techniques to learn optimal, generalizable strategies that can eventually be deployed in the field for command and control tasks. It aims to use augmentation, regularization, and state-of-the-art contrastive learning methods to develop generalizable strategies. Anticipated Outcomes: novel algorithms for posterior sampling-based MARL; novel collaborative behaviors among agents; computationally efficient posterior sampling-based reinforcement learning and Monte Carlo tree search algorithms; simulation results on OpenAI Gym; and real-world experiments at R2C2.
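
The posterior-sampling idea named above is illustrated below in its simplest form, a Bernoulli bandit (an assumption for brevity; the project targets full MARL): sample a model from the posterior, act greedily with respect to that sample, then update the posterior with the observed outcome.

```python
import random

def thompson_step(alpha, beta, pull_arm):
    """One posterior-sampling step for a Beta-Bernoulli bandit."""
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    arm = samples.index(max(samples))
    reward = pull_arm(arm)                  # observed 0 or 1
    alpha[arm] += reward
    beta[arm] += 1 - reward
    return arm, reward

# Hypothetical two-action task where action 1 succeeds more often.
alpha, beta = [1, 1], [1, 1]
true_p = [0.3, 0.7]
for _ in range(200):
    thompson_step(alpha, beta, lambda arm: int(random.random() < true_p[arm]))
```

In the full reinforcement learning setting the "model" sampled from the posterior is an entire MDP (or opponent/teammate model), and planning against that sample can be done with Monte Carlo tree search, which is why the two appear together in the outcomes above.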

Robust and Improved Visual Perception

This project seeks to improve the overall robustness and performance of autonomous perception under adversarial conditions arising from hardware, software, and/or environmental factors by training on degraded and low-quality images. Anticipated Outcomes: a systematic sensitivity analysis of the learned tasks of semantic segmentation and object recognition using RGB and lidar data; adversarial data augmentation and training to make these tasks more robust; and integration into the ARL ground autonomy stack.
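
As a compact illustration of adversarial data augmentation, the sketch below applies an FGSM-style perturbation to inputs of a linear logistic classifier; it is a stand-in for augmenting segmentation and recognition training data, and the toy model and dimensions are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, eps=0.1):
    """y in {-1, +1}; returns x perturbed to increase the logistic loss."""
    grad = -y * sigmoid(-y * (w @ x)) * w        # dL/dx for logistic loss
    return x + eps * np.sign(grad)

rng = np.random.default_rng(3)
w = rng.normal(size=16)                          # toy "model" weights
x, y = rng.normal(size=16), 1
x_adv = fgsm_example(x, y, w)                    # add to the training batch alongside x
```

Training on both the clean and perturbed samples is the basic recipe; for deep segmentation networks the gradient is obtained by backpropagation rather than in closed form.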

Visual Grounding of Navigational Concepts

This project seeks to develop the perceptual and motoric representations for grounding parts of speech that will enable meaningful dialogue between humans and robots, specifically for the purpose of recognizing concepts related to navigation. Anticipated Outcomes: a module for introducing feedback into the process of object classification by utilizing additional knowledge; a set of visual modules for recognizing affordances and object properties, along with a language resource; and a set of modules based on hyper-dimensional computing that learn attributes using curriculum learning.
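
The hyper-dimensional computing approach can be sketched in a few lines: attribute roles are bound to random bipolar hypervectors, bundled into a concept vector, and later queried by similarity. The symbols and dimensionality below are illustrative assumptions, not the project's modules.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(4)
def hv():                                  # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

symbols = {name: hv() for name in ["color", "red", "shape", "round"]}

# Bind roles to fillers (elementwise product), then bundle (sum and sign).
apple = np.sign(symbols["color"] * symbols["red"] + symbols["shape"] * symbols["round"])

# Query: unbind "color" and see which stored symbol the result resembles most.
probe = apple * symbols["color"]
similarity = {k: int(probe @ v) for k, v in symbols.items()}
print(max(similarity, key=similarity.get))   # expected: "red"
```

Because binding and bundling are cheap vector operations, such representations are attractive for learning object attributes incrementally on modest robot hardware.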

Closed-Loop Control of Spraying Actions Using Vision

This project seeks to provide legged robotic platforms with an enhanced capability to autonomously navigate heterogeneous terrain, including tall grass, low brush, and other terrain typically navigable by humans, by using an online self-supervised machine learning model trained on labeled datasets captured at R2C2. Anticipated Outcomes: an algorithm for detecting the height and stiffness of tall grass using cameras and lidars to determine navigability; the definition of appropriate traversability metrics that quantify the level of difficulty; a system design and implementation for an experimentation platform, which includes mounting sensors and computers on a Boston Dynamics Spot robot; and an evaluation of the integrated perception and planning system in varying light conditions and grass heights at R2C2.
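
A minimal sketch of the vegetation-height idea appears below: lidar returns are binned into grid cells and the per-cell maximum height is mapped to a coarse traversability cost for the planner. The grid resolution, height limit, and simulated data are assumptions, not the project's representation.

```python
import numpy as np

def traversability_cost(points, cell=0.5, max_height=0.8):
    """points: Nx3 array of (x, y, z) returns with z relative to the ground plane."""
    heights = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        heights[key] = max(heights.get(key, 0.0), z)
    # Normalize: 0 = open ground, 1 = vegetation at or above the robot's limit.
    return {k: min(h / max_height, 1.0) for k, h in heights.items()}

rng = np.random.default_rng(5)
pts = np.column_stack([rng.uniform(0, 5, 1000),
                       rng.uniform(0, 5, 1000),
                       rng.uniform(0.0, 1.2, 1000)])   # simulated grass heights (m)
cost_map = traversability_cost(pts)
```

Height alone is a weak proxy; the stiffness estimate noted above is what would distinguish pliable grass from rigid brush of similar height when assigning these costs.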