Image Courtesy of Admiral John Richardson
Admiral John Richardson served as the 31st Chief of Naval Operations of the United States Navy. Prior to leading the U.S. Navy, he served as Director of the Naval Nuclear Propulsion Program, where he oversaw the design, construction, operation, and disposal of the Navy’s more than 100 nuclear power plants operating on warships deployed around the world. An engineer by training, he holds master’s degrees from MIT, the Woods Hole Oceanographic Institution, and the National War College. Since retiring from the Navy, Admiral Richardson has joined the Board of Directors for the Exelon Corporation, The Boeing Company, BWX Technologies, and SparkCognition Government Systems.
COGNITIVE TIMES: Admiral Richardson, thank you so much for speaking with Cognitive Times. You have recently stated that successful employment of AI is the key to future victory. As competition accelerates and leading AI developments now occur at a global scale, how can the Department of Defense maintain its edge and implement a practical, successful AI strategy?
Admiral John Richardson: Well, as you know, the Department has been moving aggressively to establish the structure, policy, and guidance to implement an approach to AI. I would recommend reading “Implementing Responsible AI in the DoD” (26 May 2021) and “DoD AI Ethical Principles” (21 February 2020) as a start to understanding the DoD’s approach.
If I can add anything to these strong documents, it would be that we now ALL have to get started doing things. We can expedite making our data useful for AI algorithms – past data where it can be useful, but certainly any data that we generate going forward. It seems to me that a “sandbox” for testing and experimenting with AI would be extremely useful. Work still left to do includes developing the testing, validation, and verification methodology for AI. To be relevant, this must occur at a much quicker pace than DoD is used to, but it must also result in trust and confidence in the outcomes. And finally, how do we “push” these tools out to the deployed force? So, a lot to do – time to move into the action phase. And not a moment to lose – we’re in a heated competition.
CT: There are many uses for AI, as you’ve cited. But one important area of impact is increasing the efficacy of existing systems, both in terms of their lethality and their readiness. How does artificial intelligence impact mission readiness for our Navy and increase safety for deployed personnel?
AJR: I think that this is a gold mine for AI! What better way to validate and build trust and confidence in a new AI tool than to check it against a system that we already know? We can start by developing AI systems that will help monitor the “health and welfare” of our own systems – the electrical, mechanical, and information systems that drive our platforms. Simply removing the cognitive burden of system monitoring from the operators will free up more brainpower to concentrate on the more sophisticated aspects of operations and warfighting. And then, of course, the AI system will help us glean even more performance out of our systems – optimizing and exploring new ways to run them. So a double benefit – more automatic monitoring and improved performance.
Another area where AI could be used to tremendous effect is in improving our safety performance. This was an idea that former Deputy Secretary of Defense Robert Work contributed to a discussion of near-term uses for AI. Imagine bringing the power of AI to optimizing safe vehicle routes, collision avoidance, and other areas of performance that bring risk to our service members. Setting an AI capability to search and sift through all of our safety reports could easily yield insights to make things safer.
CT: As the former Chief of Naval Operations for the United States Navy, you have seen our oceans transform and become increasingly digitized. In the context of stealthier mines, next-generation submarines, hypersonics at sea, more advanced sensors, and unmanned maritime and aerial capabilities, there are a host of innovations coming to our oceans. At a time when large increases in our defense budget may no longer be possible, how do the US and its allies continue to adapt to these shifts?
AJR: To my thinking, we’re on the cusp of a technological revolution. The future, including the future of warfare, will be quickly divided into two groups: those that incorporate, adopt and adapt to the potential that AI brings, and those that miss this opportunity. The gap between these two groups is already starting to open, and will rapidly open even more. One group will have advantages that may be definitive. The nature of the digital age is that once an advantage is established, it’s unclear if it will be possible to “catch up.”
So this is not an option if we want to retain primacy or even remain competitive. The other thing about this tech revolution is that it will require different structures and different approaches than the current system employs. If defense budgets stay relatively constant, then it’s a matter of prioritization. What’s the best way to reduce the financial commitment to systems that add less and less value, while increasing the commitment to those systems that will make a difference going forward?
My final thought in this is that our way ahead MUST be guided by rigorous experimentation, exercises, wargames, and prototyping. We will learn and reduce risk faster by DOING than we will by studying. And then reducing the cycle time of acquisition to allow the incorporation of lessons learned will enable staying ahead of the competition.
CT: You often comment on the potential of human-machine teams assisting our warfighters. Can you tell us more about this and the impact it can have?
AJR: Well, I think that just about every warfighter in the future will be participating in some form of human-machine team. People’s performance will be enabled and enhanced by machines. The question will be how to divide the work to be done. What things are best done by the machine part of the team, and what is best done by people? It’s an important question to get right because it will determine our recruiting, education, and training in the future.
My sense is that the machine contribution could be a relatively flat playing field – just look at technology today for a sense of how this could go. Technological advantages can be had, but the margin is slim and perishable. So we need to be the best at making continuous improvements as I describe above. We need to learn and develop quickly.
And our future competitive advantage will be determined by the creativity, collaboration, risk, learning, and decision-making of our people. Just as we’ve learned so much about how algorithms can be more powerful, we have also learned a tremendous amount about how humans make decisions. Our biases, influences, vulnerabilities, and strengths. How do we develop a training and education program to maximize all these human traits and skills? Where can technology help sharpen the human dimension? Big questions that need vigorous exploration through experimentation.
CT: Looking to the future… perhaps two, three decades out, as AI applications in the field of battle continue to scale, what will the future of air, land, sea and subsea applications look like?
AJR: I hope that I’ve already given a small glimpse of what I think that future will look like. In 2-3 decades, only the teams that have fully adopted the potential of AI will be relevant on the field of battle. The rest will have already been swept away. This future will be defined by rapidly learning teams. I think this future will make even more clear that warfare is, at its essence, a human endeavor – a competition between learning and adapting enemies. The team that achieves the best synthesis of creative people who are fully aware of their vulnerabilities, and technology to enhance that creativity and mitigate weaknesses – to learn and adapt faster – will be the team that gains advantage and stays on top. This is an existential question, and the winner may very well take all. Let’s get to it!