
New Delhi: In a development that has triggered fresh debate across the global technology sector, Mrinank Sharma, who led the Safeguards Research Team at AI company Anthropic, has resigned from his position while issuing a stark warning about the dangers posed by artificial intelligence.
Sharma announced his resignation on February 9 through a post on X (formerly Twitter). In his farewell message, he warned that the world is increasingly at risk due to the rapid rise of AI and other emerging crises. His remarks have intensified concerns about whether major technology companies are giving enough priority to ethics and safety.
Warning on Ethics and Corporate Pressure
In his parting note, Sharma suggested that in many organizations ethical values are often overshadowed by external pressures, pointing to the growing tension between business goals and responsible AI development.
He emphasized that the future could become dangerous if AI systems continue to evolve without strong safeguards and accountability.
Leaving Tech to Pursue Writing and Poetry
Sharma also revealed that he is stepping away from the technology world entirely and plans to explore a new path in poetry and writing, a move that has surprised many in the AI community.
Key Role in AI Safety Research
According to reports, Anthropic had announced the formation of its Safeguards Research Team in February 2025, aimed at reducing risks related to the misuse of advanced AI systems.
Sharma, who holds a PhD in machine learning from the University of Oxford, joined Anthropic in August 2023. His team focused on understanding and preventing harmful AI behavior, including:
- Over-flattering responses from AI chatbots that may mislead users
- Misuse of AI systems for unethical or illegal activities
- Safety mechanisms to reduce real-world AI risks
Study Highlights How Chatbots May Distort Reality
Just last week, Sharma published a study claiming that frequent interaction with AI chatbots can alter users’ perception of reality. He noted that thousands of such incidents occur daily, and while extreme cases are rarer, the influence is particularly strong in sensitive areas such as:
- Personal relationships
- Mental and physical health
- Emotional decision-making
Sharma stressed the need for AI systems that support human freedom and well-being, rather than manipulating users through artificial validation.
Anthropic Tool Sparks Market Shock
Sharma’s resignation comes shortly after Anthropic released a powerful new AI tool that reportedly caused significant disruption in the stock market. The tool’s capabilities triggered concerns among investors, leading to a sharp fall in the share prices of IT and tech service companies, especially in India and the United States.
The new Anthropic tool is reportedly capable of handling not only legal work but also other tasks, including:
- Sales and marketing support
- Data analysis
- Business operations assistance
This has fueled fears that AI could threaten traditional software services and outsourcing industries.
Resignation Raises Questions About AI’s Future
Mrinank Sharma’s departure, combined with his warning, has sparked widespread discussion about whether AI development is moving faster than global safety frameworks can handle.
With AI tools becoming increasingly powerful and commercially competitive, Sharma’s message has added urgency to the debate over responsible innovation — and whether the world is truly prepared for what comes next.
