The Pathway Blog


1/27/2026

Sarah Kreps on AI, Drones, Guardrails, and the New National Security

Sarah Kreps has built a career around one persistent problem: how technology reshapes—sometimes outright disrupts—national security faster than institutions can adapt. Shaped by her Air Force background and training at MIT, she’s worked across drones, cyber, AI, and even nuclear security, and she approaches each new “scary” technology with the same instinct: pattern-match across history, cut through hype cycles, and stay empirically grounded rather than swept up by either techno-optimism or techno-doomerism.
In our conversation, Kreps explains why she thinks less in terms of “persuasion” and more in terms of delivering usable evidence to the audiences positioned to act—militaries, governments, and the public. She traces her pivot from environmental security into what she calls the “bombs and bullets” side of the field during the era of Kosovo, 9/11, and Iraq, and describes how the think tank ecosystem and interdisciplinary collaborations (with philosophers, engineers, and computer scientists) keep her work tethered to real-world problems that can’t be solved from a single discipline.
We also dig into the hardest practical question: what guardrails around AI can survive contact with the battlefield. Kreps argues that “human in the loop” matters—but may be more indeterminate than policymakers assume, because we still lack strong data on how different people actually interact with AI decision-support under pressure. She closes with advice for students trying to contribute in the next year: cultivate breadth, learn to ask better questions, and rebuild mental discipline through deep reading—because in a world optimized for short-form input, sustained focus is becoming a rare advantage.

—Sarah Kreps is a professor at Cornell University and a scholar of technology and national security.
The through-line: technology disrupting national security
Benjamin Wolf: I’d love to begin by asking: when you look across your work at the intersection of technology and national security, what’s the core question you keep coming back to—and why does it feel urgent right now?

Sarah Kreps: A lot of people ask what the through-line is for my work. Broadly, it’s the way technology is changing—sometimes disrupting—national security.
The motivation comes from my background in the military. I was in the Air Force. I did my training at MIT. So these ideas—technology and national security—are very much embedded in how I think about things. I’ve worked on everything from drones to cyber to AI to nuclear weapons.
I have a book coming out on these questions about how technology has disrupted national security. In it, I try to pattern-match—to think about hype cycles, the tech optimists and the tech pessimists—and position myself in ways that are historically and empirically grounded.
The reason it feels important now is that what’s “relevant” keeps changing. In each moment, a different technology seems scary and disruptive. I’m trying not to offer simple solutions, but to frame questions: what can we, as a society—and as a national security establishment—do to temper the excesses of technology while harnessing the opportunities?

Audiences and institutions: not persuasion, but usable evidence
BW: You’ve moved between institutions and audiences—academia, law and policy communities, and public-facing writing. How do you decide who you’re speaking to on a given project, and what changes when the audience changes?

SK: I’ve never thought of myself as trying to persuade anyone. I would frame it instead as bringing insights to audiences that are in a position to protect society and take advantage of opportunities.
Sometimes that’s members of society—how can they guard against disingenuous AI? Sometimes it’s militaries—how can they take advantage of drones but guard against others’ use of drones? Sometimes it’s government—how do we develop institutions that protect, for example, in a nuclear security context?
Too often, both sides engage in hyperbole—either tech solutionism or tech pessimism, tech doomerism. Often the answer is somewhere in between. What I’m trying to do is make sense of an appropriate equilibrium—not persuasion so much as providing evidence that helps clarify the problem.

From environmental security to “bombs and bullets”
BW: If I’m not mistaken, you studied environmental studies and public policy as an undergrad. What led you into the military and eventually into the work you do now?

SK: I grew up in the D.C. area, so I was always marinating in public policy questions and national security. My dad worked for the Department of Energy in the nuclear space. So I was always interested in some version of security.
As an undergrad and master’s student, that took the form of environmental security. I did a lot of work on environmental engagement. But as I did ROTC training—and especially once I was in the military—I pivoted to what people might call “hard” national security, or what folks in the business refer to as the “bombs and bullets” side of security.
Part of that was the era: Kosovo, 9/11, and then Iraq. These were big military engagements. My work in the military was developing new intelligence, surveillance, and reconnaissance systems.
It seemed to me there were big questions within the military that weren't getting the analysis they needed. You had practitioners without the analytics, and analysts without the military experience. My background allowed me to bridge those audiences in ways most people can't—either because they don't have the credibility or they don't have the experience.

Career arc and the value of the think tank ecosystem
BW: After the military, how did your career evolve? And with fellowships and affiliations—are those things you pursued, or did they come to you through the work?

SK: Some of both.
Think tanks let you stay engaged with real-world questions. It’s not that the “ivory tower” deserves all the derision it gets, but it’s certainly more insulated than the think tank community. Being involved in those conversations keeps ideas fresh.
If you look at the arc of my publications, I try to think hard about difficult national security problems. In 2009, before many people were paying attention to drones and U.S. counterterrorism, I started working on drones because I’d been in that space earlier. A friend from high school—he was a philosopher—came to me and said, “I’ve been watching what’s happening with drones. You were in the military; you’re a political scientist. Do you want to collaborate?” So I said yes.
That’s also true of my work more generally: it’s interdisciplinary. That drones work was with a philosopher. My work now on semiconductor supply chains is with mechanical engineers. My AI work is with computer scientists.
In a way, it comes full circle to being an undergrad working in labs. I took a lot of hard science classes—chemistry, physics, math—so I can be credible not just in national security, but also across disciplines.
The questions that are pressing today are inherently interdisciplinary. They need voices not just of engineers or computer scientists, but also philosophers, political scientists, and people who study national security. And especially in the last few years, these have become big societal questions—AI’s impact on employment, the battlefield, the classroom—so many of them require interdisciplinary answers.

A battlefield-proof guardrail (if one exists)
BW: Staying on AI: if you had to propose one realistic guardrail that could actually survive contact with modern conflict, what would it be?

SK: The important guardrail would be ensuring there is a human in the loop. But I’m not completely optimistic that it can survive contact with the battlefield.
Some of the work I’m doing right now is trying to figure out—data-driven—how individuals in battlefield settings interact with AI decision-support systems. People are developing these systems and putting them out in the field, but we don’t yet have great data about how people interact with them.
For example: do people respond to confidence thresholds in the same way? Are some more likely to override than others? We’re often assuming one size fits all in how these systems are used, but we don’t have good evidence for that.
So even saying “keep a human in the loop” is itself indeterminate, because we don’t really know—within a group of ten people—whether those ten will respond similarly to the same outputs.

Grants, rejections, and the “show up” principle
BW: You’ve certainly earned a lot of awards and grants for that kind of research. How do those processes start? How often do they work out? And how do you not get discouraged by the rejections?

SK: It’s definitely a numbers game. You have to apply to a lot of things, and some will work out. Like a lot of things in life, it’s about showing up over and over.

BW: When you win an award, do you already have a detailed plan for how you’ll implement it? Or does it adjust as you go?

SK: Part of the reason those processes are so long is that they require high-granularity thinking about what you’re actually going to do. Execution is often more straightforward than idea generation.

Regulation without delusion: the pragmatic middle path
BW: When governments try to govern emerging technology, they often default to either overconfidence in rules or fatalism that rules won’t matter. What’s your pragmatic middle path—how should institutions build adaptive governance without outsourcing responsibility to the technology?

SK: It’s a tricky question, and I grapple with it in the book. There’s no one-size-fits-all approach.
A lot depends on values. Europe is approaching this differently than the United States, which is approaching it differently than China. These regions have different values. Europe has long been more skeptical of new technologies, so the response tends to be more precautionary—even when technologies are still nascent.
We see that in the AI Act, which leans more aggressively into regulation than the U.S. The U.S. approach has been more: let evidence and data unfold so we can understand what the technology means before responding.
Policymakers face a conundrum. If you act too early, you may not understand the technology and could impede progress—for example, AI applications in medicine. But if you don’t act soon enough, you risk being caught flat-footed as new threats emerge.
In AI, we’ve seen a lot of existential language where the reality is more ambivalent. In the U.S., it’s also complicated because the U.S. has many of the tech firms. Aggressive regulation isn’t only about stifling technology and opportunity; it also has economic implications, because these firms are among the most thriving parts of the economy.
Certain states are taking regulation more seriously—California, New York—but what those steps can ignore is that capital and talent are mobile. They can move from state to state, country to country. In Europe, you don't have the same thriving AI tech sector in part because people come to the United States to do the work—because they can.

Why academia: the privilege of thinking (with students)
BW: Outside your research and writing, you’re also a professor at Cornell. Why did you decide to pursue academia alongside everything else, and what’s been most rewarding about teaching?

SK: I went into my PhD at a place known for cultivating practitioner types—Georgetown—so I thought I wanted to go back into the policy world.
But once I got into my studies, I realized what a privilege it is to wake up every day and think about questions that are important in the real world—or at least I hope they’re important. And also to educate the next generation on these issues. What better position than a university professor?
Someone from a think tank once said think tanks are great because they’re universities without students. And I thought: why would you want to be at a university without students? Students are one of the best parts of my job.
I teach law students, business school students, PhD students, undergrads—the whole range. Each group thinks about these topics differently, and my interactions with them enrich the way I think about the questions.

Two skills in 12 months: breadth and better questions
BW: As we wrap up, if a motivated undergrad wanted to contribute meaningfully to this field in the next 12 months, what are two concrete skills or habits—one analytical, one practical—that would make them more competent?

SK: I read this recently—and maybe it validates the approach I’ve taken—but the world today is a world suited for generalists.
Practically, I would recommend breadth. Some of the most interesting people can combine philosophy and computer science, or economics and political science. My recommendation is: don’t stovepipe yourself. Be well-versed across disciplines so you can look at problems not in silos, but as the real world presents them.
Analytically, it follows from that: learn how to ask the right questions. These aren’t falsifiable math questions. It’s about asking: how can societies, polities, and economies get the most out of new technologies without the negative externalities and risks? You try to get closer to the answer, even if there isn’t just one answer.

What she wishes she’d done earlier—and why reading still matters
BW: What’s a piece of career advice you wish you had taken earlier?

SK: Even though it contradicts what I said a little, I wish I had taken more math and more computer science earlier.
There’s a lot of debate about whether computer scientists will become obsolete because of AI, but I think that’s overblown. You need some understanding of coding to engage meaningfully with AI and to ask the right questions. I often feel like I’m outsourcing some of those parts to students.
On the other hand, that’s what teams are for. Not everyone can be good at everything. The best teams bring together people who are excellent at coding with people who are excellent at other things. But yes—I’d recommend that students load up on math and computer science while also engaging the bigger philosophical questions.

BW: One last one: if someone wants to follow a path like yours, is there any reading you’d recommend?

SK: I would just recommend more reading. I lament that people are engaging more and more online and on their phones. As someone who studies technology, I’m becoming more of a Luddite—wanting to put my phone aside and read something dense.
Read a classic. Sit down with a dense novel—Dostoyevsky—something that’s a slog, something that requires mental discipline. There’s a lot of awareness of physical discipline, but I think mental discipline is atrophying. Our ability to sustain focus is lower, and it takes a conscious decision to retrain that part of the mind.

BW: I've honestly never heard that response before, and it feels right. We live in a world built around short-form video—even Netflix and major news outlets are adding short clips to their platforms.

SK: The advice to read a dense book can sound out of step. But I do think that kind of mental discipline—something fewer people have—will set you apart.

BW: My computer is actually propped up on Walter Isaacson's Steve Jobs biography right now. Definitely different from Dostoyevsky, but certainly dense!

SK: Biographies are great. I love reading biographies because there’s so much we can learn from people who’ve been successful.
And weaving insights together across different figures helps you map them onto your own personality. There isn’t one-size-fits-all. You read about one person, then another, and you can see, “I like this quality,” or “I hadn’t thought about that.” It becomes like a menu of skills and attributes. But how would you know that without reading deeply about people?
I read a biography of Martin Luther and found it fascinating—this figure from 500 years ago. He walked from Germany to Rome in the early 1500s. Putting yourself in a completely different time period—no trains, no modern travel—forces a mental exercise you don’t get online.
What struck me was that he disrupted the status quo. We think of technology as disrupting the status quo, but that’s what he did as an individual—enabled in part by technology like the printing press. Bringing those insights to the present is valuable. And I think if I’d read that book two years earlier, I would have gotten something different out of it. That’s why reading is so useful—it stimulates thoughts that short-form content just won’t.

Closing
BW: Professor Kreps, I can’t thank you enough for your time. It’s been an honor speaking with you today.
SK: Thank you so much, Ben. 



The Pathway Blog is an independent interview platform focused on governance, public decision‑making, and career discovery.
