The Real Cost of Building AI Without Guardrails
Enterprise leaders call for balancing rapid AI innovation with ethical safeguards.
Analytics India Magazine
By: Smruthi Nadig, Technology Journalist
As AI becomes more deeply integrated into our workflows and authorities leverage it to improve public infrastructure, the questions around who it serves, who it harms, and who is accountable when it fails have never been more urgent. The debate around the ethics of AI took centre stage at the India AI Impact Summit in New Delhi, with global institutions and Indian policymakers arguing that innovation without ethics is no longer viable.
At the summit, UNESCO presented the first global framework on ethical AI governance, stressing in its report that as AI systems scale across sectors, “it is imperative to ensure that human rights and dignity remain at the core of innovation.”
India also emphasised the development of inclusive, safe, and responsible artificial intelligence, with Prime Minister Narendra Modi, through the ‘MANAV’ vision, calling for AI to serve the global common good and foster welfare.
This signals a shift from merely discussing the economic benefits of AI to addressing safeguards, accountability, and long-term societal impacts.
“Price of Admission”
Vilas Dhar, President of AI-focused philanthropic organisation Patrick J. McGovern Foundation, describes a profound shift in how AI ethics is understood within enterprise and policy circles.
“For so long, when we’ve talked about ethics in business… building ethical AI has been a cost of doing business,” Dhar tells AIM. “I think that’s fundamentally changed. Now it’s the price of admission. If you don’t build with ethical and responsible AI principles at the core of your product, in five years, you won’t be able to sell it into any of the regulated markets around the world.”
His words align closely with UNESCO’s warning that AI must be aligned with human rights and global standards to remain legitimate. Ethics, in this framing, is not a reputational layer but a market access strategy.
Dhar also introduces the idea of “trust debt,” drawing a parallel to technical debt in software systems. Trust, once lost, is expensive to rebuild, whether in consumer markets or democratic institutions.
Governance vs Speed
Abhinav Singh, Founder and CEO of mobile app development company Techugo, acknowledges that while clients are often driven by urgency, speed cannot override safeguards. “We advocate for organisations to begin establishing responsible artificial intelligence protocols at their earliest stages… We will refuse to accept designs that compromise transparency and compliance requirements.”
This insistence on embedding ethics into implementation roadmaps mirrors the summit’s broader emphasis on responsible deployment. The tension between rapid deployment and ethical guardrails is becoming increasingly untenable as regulators and procurement teams demand proof of accountability.
Akshesh Shah, Co-founder and CEO of AI consulting firm Cogniify.ai, describes how boardrooms are recalibrating. “In 2026, we are seeing a definitive shift: boards are moving beyond the ‘tick-box’ approach toward integrated financial and operational assurance,” he says.
He notes that ethical AI is now factored into return-on-investment calculations. “A biased model doesn’t just present a social risk; it creates systemic financial harm,” Shah says.
Security and the Expanding Threat Surface
Meanwhile, AI-specific security risks continue to be a key challenge. In 2025, security researchers at ReversingLabs uncovered a new class of supply chain attacks that hide malware inside AI and ML models hosted on the AI community platform Hugging Face.
Pankaj Thapa, CEO and Co-founder of cybersecurity company Mirror Security, says that AI agents vastly expand the threat landscape. “The challenge with AI agents is that they do not just store data; they act on it. They make decisions, call APIs, and touch multiple systems autonomously. That changes the threat surface completely,” he explains.
Data poisoning, prompt injection, and autonomous decision-making amplify both technical and ethical vulnerabilities. As Thapa warns, “A lot of organisations are so focused on what their agents can do that they forget to secure the environment those agents are operating in.”
Public Accountability vs Private Innovation
A recurring concern at the India AI Impact Summit was whether private companies are assuming roles that should belong to democratic institutions.
Dhar offers a careful distinction. “One is the decision-making around public accountability, responsibility and what ethical guidelines should look like, and that should very much sit with democratic institutions,” he says.
At the same time, “you need the private sector’s innovation” to operationalise those frameworks at scale, he adds. “Bringing the private sector to an ethical frame that’s defined by government is not going to be easy, but it’s critical.”
Perhaps the most powerful takeaway from both the summit and enterprise voices is the rejection of a false binary.
“I think the biggest myth is that companies have to choose either to move quickly or responsibly,” Dhar says. “If organisations are going to make that false choice, then you’re not going to lead to a profitable or a purposeful future.”
That argument resonates with Techugo’s ethical AI philosophy, which integrates explainability, bias mitigation, and governance into delivery cycles rather than layering them afterwards. It also aligns with Cogniify’s observation that procurement teams increasingly demand auditability as a baseline requirement.
Ethics, in this view, accelerates sustainable growth rather than obstructing it.
Toward an Ethical AI Society
Dhar reframes the debate one step further. “I don’t often put the words [like] ‘ethical’ and ‘AI’ next to each other,” he says. “I often say what we need to do is build an ethical society that’s powered by AI.”
This formulation shifts responsibility from algorithms alone to the ecosystems that design, deploy, and regulate them. Enterprises must evaluate inputs, design choices, and potential disparate outcomes. Governments must define the moral and legal parameters. Civil society must hold both accountable.
As AI systems increasingly shape decisions that affect livelihoods, access to services, and democratic processes, ethical design becomes inseparable from economic strategy.