The Crossroads of AI Ethics — Is Anthropic the Heir of 'Don't Be Evil'?

In February 2026, the tensions surrounding AI ethics surfaced in their most explicit form yet.

Anthropic reportedly refused the Pentagon’s request to ease safety restrictions on its AI models, citing concerns about potential use in autonomous weapons and mass surveillance (CNBC, Bloomberg, 2026). This was not a routine contract negotiation; it raised a fundamental ethical question about the limits of AI in military applications.

In support of Anthropic’s position, more than 200 employees at Google and OpenAI signed an open letter calling for restrictions on military AI use (Axios, 2026). Within Google, an internal letter requesting ethical constraints on AI was also sent to senior leadership, including Chief Scientist Jeff Dean, revealing deep divisions within the company itself.

This conflict did not emerge overnight. Its origins trace back more than a decade.

The DeepMind Acquisition: Ethics as a Condition (2014)

In 2014, Google acquired the British AI company DeepMind. At the time, co-founder Demis Hassabis and his team reportedly negotiated the establishment of an AI ethics and safety board as a condition of the deal (WIRED, 2015). This board was designed as an ethical guardrail to prevent AI from being used for military or surveillance purposes.

Google at the time was known for its motto “Don’t Be Evil” and was widely regarded as a company that took ethical responsibility seriously. DeepMind’s founders believed that AI should serve humanity, not just generate profits. This conviction was one of the reasons researchers around the world placed their trust in Google and DeepMind.

Project Maven: When Ideals Met Reality (2018)

In 2018, that equilibrium shattered.

It was revealed that Google had been providing AI technology to the Pentagon’s Project Maven — a military initiative to analyze drone footage using AI (The Guardian, 2018).

The internal backlash was intense. More than 4,000 employees signed a protest letter, and at least 12 resigned. Google ultimately chose not to renew the contract and established its AI Principles.

Yet around the same time, the phrase “Don’t Be Evil” quietly disappeared from its prominent position in Google’s code of conduct (Business Insider, 2018). The motto had not been abandoned outright, but it clearly no longer stood as the company’s foremost commitment.

Project Nimbus: The Shift Toward Pragmatism (2021–)

From 2021 onward, Google has pursued government contracts, including Project Nimbus, a cloud-computing contract with the Israeli government.

This drew pushback from researchers, including those at DeepMind. In 2024, approximately 200 DeepMind employees signed a letter demanding that the company drop its military-related contracts (TIME, 2024). Google subsequently terminated employees who took part in protests over Project Nimbus (Reuters, 2024).

In the past, ethics constrained Google’s corporate behavior. Today, the company balances ethics against business interests.

This is not a condemnation. It is the reality of operating as a global enterprise. National interests, security, politics — no company at global scale can remain detached from these forces.

Anthropic: Principle-Driven Management

Against this backdrop, Anthropic has taken a different path. The company maintains restrictions on military use and prioritizes ethical guardrails.

This stance is not mere idealism. It is management grounded in principle.

I hold deep respect for this approach.

Jita-Kyoei and Sanpo-Yoshi: The Foundation of Sustainable Success

I have run a business myself. The philosophy I valued most drew on two traditions rooted in Japanese culture.

Jita-Kyoei (mutual prosperity) — a principle from judo holding that success should benefit not only oneself but also customers, employees, and society as a whole.

Sanpo-Yoshi (three-way satisfaction) — a merchant philosophy from Omi, Japan: good for the seller, good for the buyer, and good for society.

Both prioritize long-term trust over short-term profit. In my experience, management guided by these principles led to sustainable success.

Anthropic’s stance shares a deep affinity with this philosophy.

But Ethics Alone Cannot Move the World

At the same time, I understand the difficulty of reality. The larger a company grows, the more entangled it becomes with politics. Pure idealism alone cannot sustain global-scale influence.

There is a Japanese expression: “Seidaku awase nomu” — the ability to accept both the pure and the impure. It describes the wisdom of holding to one’s ideals while navigating an imperfect world.

Anthropic, too, will inevitably face this difficult balance in the years ahead.

Conclusion: The Future of AI Depends on Companies That Hold to Their Principles

Google transitioned from idealism to pragmatism. Anthropic is striving to maintain its principles.

I support Anthropic’s commitment to ethics — not because it is idealistic, but because it is a rational choice for sustainable growth.

At the same time, I recognize how difficult it is for a company with immense influence to maintain complete purity.

How long can Anthropic hold to these principles? And how will it navigate the balance with reality? These questions will serve as a critical litmus test for the future of AI.

The industry now stands at that very crossroads.
