The Intelligent Systems Laboratory (ISL) conducts groundbreaking research and development of revolutionary intelligent systems technologies. We aim to lead the world in building robust intelligent systems.
We achieve research breakthroughs and continuously innovate upon them to engineer technologies and systems that can and will have a revolutionary impact on real-world missions.
Our work extends across four technology centers that focus on separate but complementary capabilities required by robust intelligent systems: Proficient Autonomy, Operational Autonomy, Knowledge Navigation and Augmented Intelligence. Together, these centers develop innovative, mission-critical capabilities for both unmanned and human-in-the-loop autonomous systems through research collaborations with our LLC Member companies, government customers, industry partners, and leading academic institutions. These capabilities are applied widely to national security and commercial missions, including intelligence, surveillance and reconnaissance; sensor exploitation; electronic warfare; air combat; space warfare; intelligence analysis; cybersecurity; commercial autonomous vehicles; and advanced manufacturing.
ISL’s goal is to solve important mission problems that don’t have a feasible, off-the-shelf, scalable, and assurable solution. We meet this challenge by understanding the mission’s needs and developing novel algorithmic systems that meet the data, computation and security requirements of the platforms, environments and users where they are deployed. Each of our technology centers focuses on a different, complementary aspect of these challenges, and our strength lies in bringing them together to create technologies that are ready (TRL 6/7) to be transitioned to the field in the cloud, at the edge, and in embedded systems.
The simple goal of the Proficient Autonomy Center (PAC) is to make autonomous systems more proficient. This means we create new autonomous machine capabilities and enhance their performance with a focus on effectiveness and efficiency. As autonomous systems are deployed with minimal to zero human guidance or intervention for prolonged periods of time, they are likely to encounter unexpected events ranging from novel parametric variations to known and unknown unknowns. There is a critical need for machines endowed with human-like or superhuman reasoning and metacognition capabilities, able to adapt seamlessly, without human guidance, to the scale and complexity of new situations in unstructured and potentially deceptive environments during deployment.
The current field of artificial intelligence (AI), dominated by Silicon Valley and driven mainly by deep learning and generative AI models such as large language models, has several foundational gaps, including its over-reliance on “big” annotated and curated data and its lack of reliability and trustworthiness for high-stakes, safety-critical operational missions and applications. With our unique focus on novel brain- and biologically-inspired neural system architectures from the interdisciplinary subfields of neuroscience, cognitive psychology and computer science, PAC is poised to make breakthrough advances in the quest for general-purpose AI that solves important problems for society and national security. We are currently focused on these cutting-edge research topics:
Our technologies will improve performance, reduce cost, and create and scale new mission capabilities for our LLC partners (Boeing and GM) and the U.S. Government.
The goal of the Operational Autonomy Center (OAC) is to make machine learning and artificial intelligence (AI) technologies robust and reliable in real-world mission settings. The deployment of AI systems in mission-critical applications requires the ability to quantify and bound their behavior to ensure they meet mission requirements. These systems require tools and techniques that can predict, monitor and assure their reliability in real-world environments. Current AI systems, heavily reliant on deep learning foundations, cannot be trusted for applications where safety and performance are critical requirements. OAC focuses on building tools for discovering weaknesses and vulnerabilities in autonomous systems and their machine learning components, assuring safe operation, and improving the overall robustness of the underlying algorithms and systems, even in adversarial settings.
Because OAC is focused on operationalizing machine learning and AI for autonomy in the real world, we welcome the opportunity to work with customers on practical, mission-oriented applications.
The goal of the Augmented Intelligence Center (AIC) is to enhance the richness of collaboration to the extent that the human-machine team (HMT) can solve important problems that neither could accomplish alone. Despite rapid advances in machine learning and artificial intelligence (AI), humans do not trust machines with full autonomy and still interact with them as recommenders and assistants, not true partners. In particular, a machine's state is uninterpretable to humans, and human intent is not predictable to machines. To effectively combine a human’s unique intuitive decision making with a machine’s fast, scalable computation, HMT systems for semi-autonomy need a two-way trusted collaboration.
AIC’s fundamental approach is to build algorithms and systems on both sides of the HMT to foster a rich and trusted collaboration. For machines to understand the dynamic context of the human and effectively communicate with them, we develop high-fidelity cognitive models and multimodal adaptive interfaces using insights from psychology, cognitive science, neuroscience and human factors along with advanced machine capabilities in language, reasoning and metacognition. We are working on discriminators in:
Taken together, our unique emphasis on both high-level cognition and metacognition (awareness and understanding of one’s own thought processes) gives us an advantage to achieve breakthroughs in machine autonomy and cohesive teaming with human partners in a variety of mission spaces.
The goal of the Knowledge Navigation Center (KNC) is to enable humans and machines to make robust, timely and explainable decisions using scalable algorithms that distill large volumes of complex input, context and domain knowledge. Today’s decision makers, both human and machine, must make effective choices in environments that are multi-domain, multi-modal, multi-scale, highly dynamic, and incomplete, and in situations that range from noisy to potentially deceptive and adversarial. They need to autonomously uncover knowledge from disparate data streams and connect those streams to provide transparent, interpretable explanations that play out the likely outcomes of different decisions at speed and scale. AI systems in operation now are powerful for extracting correlation, but miss the underlying causal connections required to make sound decisions.
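The gap between correlation and causation can be illustrated with a minimal simulation (an illustrative sketch, not ISL code — the variable names and the confounder setup are assumptions chosen for the example): a hidden confounder makes two quantities correlate strongly, yet intervening on one leaves the other unchanged, so a purely correlational model would recommend an ineffective action.

```python
# Illustrative sketch: correlation without causation.
# A hidden confounder z drives both x and y, so x and y correlate
# strongly in observational data, yet intervening on x (setting it
# independently of z) leaves y unchanged: x has no causal effect on y.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)              # unobserved confounder
x = z + 0.3 * rng.normal(size=n)    # x is driven by z
y = z + 0.3 * rng.normal(size=n)    # y is also driven by z, not by x

corr_obs = np.corrcoef(x, y)[0, 1]  # strong observational correlation

# Intervention do(x): choose x independently of z and observe y.
x_do = rng.normal(size=n)
corr_do = np.corrcoef(x_do, y)[0, 1]  # near zero under intervention

print(f"observed corr: {corr_obs:.2f}, interventional corr: {corr_do:.2f}")
```

A decision system trained only on the observational data would predict that changing x changes y; the interventional run shows it does not, which is the kind of causal distinction the center's decision algorithms must capture.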
To make decision systems both effective and efficient, we design and deploy a diverse set of algorithms from network science, generative AI, signal processing, and game theory. Core discriminators include: