Recently, the tech world witnessed a controversial development at OpenAI, a leader in artificial intelligence (AI). In a quiet but significant move, the company modified its official charter, raising major questions about the future of AI ethics amid the company's profound economic and commercial transformation.
Removing the Word "Safety": A Minor Change or a Major Shift?
In November 2025, leaked internal documents revealed a drastic revision to OpenAI's core mission statement. Whereas the mission once emphasized building artificial general intelligence (AGI) that benefits humanity "safely," it was changed to "ensure that AGI benefits all of humanity." Linguistically, the modification may seem minor, but it marks a significant shift in the company's vision: it weakens the OpenAI board's ability to block the launch of products that could be financially profitable yet technically risky.
The New Structure of OpenAI: Focusing on Profit
It is no longer a secret that OpenAI has transitioned from a nonprofit organization into a commercial giant vying for dominance in the AI market. This transformation is reflected in several key developments:
- Private sector dominance: Investors now own about 74% of the company's shares, giving private interests a dominant role in shaping OpenAI's strategy.
- Conflict of interest: Overlapping board membership across the for-profit and nonprofit arms has left internal oversight minimal.
- Talent exodus: Many founders and senior researchers have resigned, warning that the company is prioritizing profit over its founding ethical principles.
Legal and Ethical Challenges Facing ChatGPT
Alongside these organizational shifts, ChatGPT, powered by the latest GPT-5.2 release, faces mounting legal and ethical challenges:
- Mental health concerns: Reports of ChatGPT's effects on user mental health are growing, prompting questions about the ethics of deploying AI in sensitive, high-stakes conversations.
- AI exploitation in cyberattacks: Fears are mounting that the technology could be misused for cyberattacks or other criminal activity.
- Erosion of safety standards: As competition intensifies, there is concern that OpenAI may yield to commercial pressure and compromise the safety standards of its products.
Conclusion: Has the Era of "Responsible" AI Ended?
Experts argue that OpenAI's shift from nonprofit to commercial entity is a real test for society: how should organizations that wield transformative technologies be held accountable? Under Sam Altman's leadership, OpenAI appears to have decisively chosen profitability over ethical responsibility, prompting the critical question: Has the era of responsible AI come to an end?