Politico: 5 Questions for Vilas Dhar
Hello, and welcome to today’s edition of the Future in Five Questions. Steven interviewed Vilas Dhar, president and trustee of the Patrick J. McGovern Foundation, a philanthropic organization spending millions of dollars to build artificial intelligence for the public good. Dhar argues there’s a power struggle between humans and machines, and that if we want humans to win, that means thinking beyond just regulations, and finding ways to strengthen human agency when it comes to technology. This conversation has been edited and condensed for clarity. Listen to an interview with Dhar on this week’s episode of the POLITICO Tech podcast, available on Apple, Spotify, Amazon or your preferred podcast player.
What’s one big, underrated idea?
If we invested as much in innovating how we built social structures around technology as we do in technology itself, we could live in a vastly different world. That means building new academic programs and institutions to help people learn. It’s also about restructuring how we think about work, how we think about access to healthcare, and areas where technology changes the foundational assumptions of our economic, political and social lives.
What if instead of having a few companies build technology that the rest of us consume, we let communities be architects and owners of those technologies themselves? They could build solutions to the problems they face every day. If we invested in building the compute capacity and the data architecture and, most importantly, the talent, you’d see this actual flowering of economic and social and moral virtue that people would feel like they built for themselves.
What technology right now do you think is overhyped?
I’ve been a computer scientist for 25 years working in AI. We’ve always known that these systems would be powerful when we could have them do many tasks. We didn’t need to coin the term “agentic AI” in order to explain that, but now it’s become the buzzword that describes something more than what the technology is capable of.
We build these agentic systems and we give them autonomy, but we also don’t really know yet how to control what they will be able to do and what they’ll become. We use “agentic” and “agency” sometimes as if they’re the same thing, but agency is a human virtue. It’s about my ability to express my interest in the world. An agentic system doesn’t do that. It maybe goes off and does a task, but it won’t do it with compassion or kindness or empathy. It won’t do it in ways that protect my view of what a just world looks like. Agentic isn’t the same as human agency, and we’ve forgotten that.
What do you think the government could be doing now about tech that it isn’t?
We’re so stuck in these big conversations about massive bills and regulations that need to be passed that we’ve forgotten there are things the government can do that can be really useful in the short term. The first is something really easy, like digital ID. We’ve just gone through, as a society, this whole painful process of getting to Real ID. It would have been so easy for us to say, now that you’ve gone through a process of verification, we’re going to give you a digital ID that has equivalent value to a physical ID.
For the first time, it would give us a sense of self-sovereign identity for all electronic commerce like online services, including our interactions with AI. There’s a bill that I think is coming together that will let this happen in the fall. It’d be low cost, it’d be high impact, and it would almost build for us the start of a muscle of technology regulation that I feel like we’ve lost.
What has surprised you most this year?
What surprised me this year is how quickly conversations have moved from some of the existential risks of AI, or security and governance and regulation, to what we are doing with these tools today that actually helps us. And we are accelerating into that in a really great way. People are saying, ‘Wait, I already have problems. I want the solutions.’
It’s not happening at the frontier companies, because they’re just building for the sake of building. The pragmatism is happening among people who are proximate to problems and now are proximate to answers as well. It’s a massively good thing, because it means a lot more smart, caring, dignified people are now saying, ‘This is within my realm of possibility.’ It means they feel agency and ownership. And throughout history, when that happens, it’s always been a good thing.
What book most shaped your conception of the future?
bell hooks wrote in “All About Love” that love is a choice to take responsibility for the well-being of others. It’s a choice to take action. That idea energizes me as we make decisions about the AI future we build. A conversation about responsible AI often uses words like performance, safety and fairness, but we rarely ask how our tools shape the way people relate to one another or whether they reflect any sense of care or responsibility. If we want a future where technology builds trust rather than undermines it, we need to be clear about the values behind the systems. I sincerely hope we can center them on love for each other and our common humanity.