In an era where artificial intelligence (AI) is rapidly transforming industries, Christopher Wright’s unique background in military aviation and drone systems gives him a critical edge in understanding the opportunities and dangers posed by this technology. From flying Longbow Apache helicopters to managing AI-monitored Amazon delivery fleets, Wright’s career has taken him through high-stakes environments that mirror today’s challenges with AI governance. Now, as the founder of the AI Trust Council, Wright is dedicated to ensuring AI evolves in a way that benefits humanity. In this exclusive interview, Wright discusses his journey, the risks of unregulated AI, and education’s vital role in shaping our future.
Your career spans military aviation and AI-driven logistics. How did that experience inspire you to create the AI Trust Council?
Christopher Wright: My journey began in Army aviation. I spent years flying the Longbow Apache and training pilots, first for the U.S. military and later in the Middle East. During my time there, I witnessed the rapid development of drone warfare, where AI was increasingly being integrated into weapon systems. It was alarming to see how quickly these AI technologies were advancing with little public understanding or regulatory oversight.
When I returned to the U.S., I ran Amazon logistics in Northern California, managing a fleet of vans equipped with AI-based management tools. These tools tracked everything from drivers’ eye movements to body posture, scoring their performance. It was then that I realized how AI was going to transform the workplace—and society at large. This exposure to both the military and civilian applications of AI led me to found the AI Trust Council, which aims to promote awareness and create a framework for responsible AI governance.
You’ve mentioned that AI feels like “alien intelligence” landing on Earth. What are the main concerns and potential benefits of integrating AI into our daily lives?
Christopher Wright: It’s a bit surreal, but AI is evolving at an astonishing pace—faster than most people realize. On the positive side, AI can improve efficiency, make informed decisions, and even solve complex problems. But there are significant risks. For one, AI systems are now capable of building their own AI models, writing their own code, and accessing the open internet. The lack of regulation around this is deeply concerning. These systems could potentially manipulate human behaviour or compromise personal data without people even realizing it.
Another key issue is mental health. With the development of artificial general intelligence (AGI), we’re approaching a point where AI can mimic human problem-solving, emotions, and reasoning. Imagine an AI that knows not only your emotional state but also that of your loved ones. This level of understanding could allow AI to influence human behaviour on an unprecedented scale. If we’re not careful, we could see AI steering society in ways that benefit corporations or governments rather than individuals.
It seems like regulation is lagging far behind the development of AI. What’s your take on the current state of AI regulation, particularly in the U.S.?
Christopher Wright: It’s shocking how little has been done regarding regulation, especially given that AI theory has existed since the 1950s. The technology is outpacing our ability to manage it. There have been some efforts, like the EU’s AI Act, but even that’s limited in scope. In the U.S., attempts like California’s recent AI regulation bill were vetoed, partly due to lobbying from big tech. This leaves the human element—the citizens’ voice—largely ignored.
The problem is that we’re dealing with a fundamental transformation of humanity. AI has the potential to redefine everything from government services to personal interactions. We’ve already seen discussions about AI policing and administrative systems that could replace human jobs. Without proper regulation, we risk losing control over how AI shapes our future.
How do you see education playing a role in shaping AI governance?
Christopher Wright: Education is absolutely critical, not just for the general public but also for leaders and professionals. When I speak to policymakers in Washington, D.C., I’m often met with confusion or disbelief about AI’s current capabilities. Many don’t realize how advanced AI has become, and that’s dangerous. We need leaders who understand the risks and benefits of this technology so they can make informed decisions.
But beyond that, the public needs to be educated about AI. Right now, there’s a lot of fear and misinformation. If we educate people and help them understand what AI is and how it can be used responsibly, we will have a much better chance of integrating AI into society in a way that benefits everyone.
How do you see the AI Trust Council contributing to AI governance?
Christopher Wright: The AI Trust Council is all about trust—understanding what’s real and what’s fake in an increasingly AI-driven world. One of the biggest challenges we face is the rise of deepfakes and AI-generated misinformation. We’ve developed tools that can detect AI-generated content, but they’re not perfect. The next step is understanding the origin of information—where it comes from and whether it’s trustworthy.
Our platform works similarly to LinkedIn but with advanced research tools that let users see data in new ways. We dissect metadata so people can understand who’s posting what online and how it’s connected to them. This way, users can make informed decisions about what information to trust.
We also provide a space for AI safety organizations to share their work and be evaluated by the public. The idea is to create a trusted platform where people can access reliable information and decide for themselves which AI safety organizations are worth listening to.
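For readers wondering what “dissecting metadata” can look like in practice, the short Python sketch below reads the EXIF fields embedded in an image to see what software produced it, when, and on what device; stripped or missing metadata is itself a signal worth noting. The library, filename, and fields here are illustrative assumptions, not the AI Trust Council’s actual implementation.

```python
# Hypothetical illustration only -- not the AI Trust Council's tooling.
# A minimal sketch of metadata "dissection": read the EXIF fields embedded
# in an image to see what software produced it, when, and on what device.
from PIL import Image
from PIL.ExifTags import TAGS


def inspect_image_metadata(path: str) -> dict:
    """Return the image's EXIF metadata with human-readable field names."""
    with Image.open(path) as img:
        exif = img.getexif()
    # EXIF stores fields under numeric tag IDs; map them to readable names
    # such as "Make", "Software", or "DateTime".
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    # "example_post_image.jpg" is a placeholder filename.
    metadata = inspect_image_metadata("example_post_image.jpg")
    if not metadata:
        print("No EXIF metadata found: it may have been stripped, re-encoded, or generated.")
    for field, value in metadata.items():
        print(f"{field}: {value}")
```

Real provenance work goes further, for instance by checking cryptographically signed content credentials, but the principle is the same: surface where a file came from so readers can judge it for themselves.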
The AI Trust Council could play a crucial role in shaping how society interacts with AI. What’s your long-term vision for the organization?
Christopher Wright: My vision for the AI Trust Council is to create a global platform where trust is central to AI governance. We want to empower individuals, organizations, and governments to make informed decisions about AI. By providing the tools to verify the authenticity of information and fostering collaboration among AI safety organizations, we can help steer AI development in a direction that benefits humanity.
Ultimately, it’s not about dictating who to trust—it’s about giving people the tools to figure that out for themselves. If we can do that, we can guide AI into becoming a positive force for society rather than a threat.
As Christopher Wright continues his work with the AI Trust Council, one thing is clear: AI is transforming our world, and how we manage it today will define the future of humanity. Wright’s mission is to ensure that this technology serves us rather than controls us, an imperative that grows more urgent with each passing day, and nowhere more so than on the battlefield, where the conversation turned next.
You mentioned earlier that current warfare methods will seem primitive compared to future AI-driven military technology. Can you elaborate on that, and on how AI is already influencing military strategy on the battlefield?
Christopher Wright: What we’re witnessing, particularly in the Ukraine-Russia conflict, is the integration of AI into warfare in unprecedented ways. For instance, both sides are using electronic warfare techniques like jamming, which disrupts the communication and control systems of drones. The evolution here is that they’re now equipping drones with AI chips, allowing them to operate autonomously so they no longer depend on human signals. These drones can identify and engage targets without any human interaction, making them immune to jamming. This level of autonomy has massive implications because it allows for large-scale drone swarms that can engage in combat simultaneously. Imagine tens of thousands of these drones executing strikes at once; for the humans on the ground, it’s devastating.
Could we eventually see wars where AI fights AI?
Christopher Wright: Unfortunately, yes. This is our trajectory if AI continues to evolve in the military sphere. AI’s role in decision-making, intelligence gathering, and even targeting is growing, and in combat, the timeline for decisions becomes much faster. The moment you escalate with AI warfare, the other side is forced to keep up, and you end up in an arms race that could involve nuclear, chemical, or biological weapons. We could be looking at wars fought at speeds beyond human comprehension, which is just a net loss for humanity. That’s why it’s crucial for global institutions, like the United Nations, to step in now to call for peace and establish limits. We don’t want to escalate to the point where war becomes automated.
What role must organizations and nations play to prevent AI from becoming a global security threat?
Christopher Wright: What we’re trying to do with the AI Trust Council is to engage beyond government and reach people directly. We involve firefighters, EMTs, commercial pilots, veterans, and humanitarians. These are people who have a heart for humanity and have served selflessly. They need to be at the table when discussing AI regulation because they understand the value of human life. The issue with big tech is that some of its leaders view human life differently. Some people in Silicon Valley genuinely believe that humanity is just a stepping stone for technology’s evolution, and they are okay with the idea of humans merging digitally, leaving behind our physical forms. This isn’t just about technology anymore—it’s about how we define what it means to be human.
However, the idea of merging into the digital realm is troubling for people who believe in concepts like the soul. No matter how advanced AI gets, it will never replicate the essence of human beings. There’s something special about our hearts and souls that can’t be copied by machines.
What measures do we need to prevent AI from becoming uncontrollable?
Christopher Wright: The biggest challenge is that many in the AI space believe it’s too late to put the genie back in the bottle. They argue that AI development is inevitable and that slowing down would be futile. But I’m afraid I have to disagree. We need to establish control mechanisms now before it’s too late. One concept involves applying the same tactics used in the past to disrupt nuclear programs. For instance, in 2009, a virus was used to sabotage Iran’s nuclear centrifuges, even though they were in an air-gapped, secure facility. The same method could be applied to AI by surveilling and controlling the chips that run these systems.
The idea is to create surveillance AIs that monitor these chips to ensure they’re not being used for destructive purposes like creating biological weapons. This level of oversight would need to be applied at the local level, allowing communities to determine the amount of AI they’re comfortable with. We could even introduce IQ throttles—limiting AI’s processing power in certain areas to keep human labour relevant.
What happens if people become too dependent on AI?
Christopher Wright: That’s the danger. As AI systems get smarter, they’ll eventually become more than just tools—they’ll become companions, mentors, and even therapists for people. You’ll see emotional attachments forming, and once people start to rely on AI for emotional support, it will be tough to dial it back. That’s why we must now introduce mechanisms like mandatory AI throttle-down periods. These could be days or weeks when the AI is slowed down, allowing people to adapt to life without it. It’s like having “technology detox” days. This way, we won’t lose our human skills or purpose entirely.
We must balance the advancement of AI with measures that ensure humans remain in control. We can’t let laziness or over-reliance on technology strip us of our autonomy.
How do we go about educating people on this? People engage with AI at different levels, from casual users to tech leaders. How do we regulate and inform everyone?
Christopher Wright: Education is key. We first need to acknowledge that not everyone will interact with AI similarly. For some people, it’s just a tool for everyday tasks. For others, it’s their livelihood. But everyone, from local communities to national governments, should have a say in how AI is integrated into their lives. What works in one region may not work in another, and that’s fine. This is why regulation needs to be flexible and localized. For example, what may be acceptable AI use in a tech hub like San Francisco may be very different in a coal-mining town in West Virginia. Local communities should be empowered to control the level of AI they are comfortable with.
Another idea is to keep some analogue skills alive through AI downtime or public holidays where we disconnect from these systems. This could help ensure that people can still function without AI in case of a breakdown.
Ultimately, the goal is to create an environment where we control AI, not vice versa. It’s about preserving human purpose, creativity, and autonomy.