Key Points
- OpenAI merges the Model Behavior team with the core Post Training unit
- The team influenced how AI responds to human emotions and bias
- Founding leader Joanne Jang launches new research initiative
- GPT-5 backlash and a tragic lawsuit bring AI behavior into the spotlight
In a significant internal shift, OpenAI has folded its Model Behavior team into its broader Post Training group, signaling that AI personality is now a top priority in model development.
The OpenAI Model Behavior Team, made up of around 14 researchers, has long shaped how ChatGPT interacts with people. This includes ensuring that the AI doesn’t simply agree with users, a problem known as sycophancy, and managing the political and emotional tone of responses.
The team also worked to define OpenAI’s internal position on topics like AI consciousness.
The restructuring was announced in a company memo from Chief Research Officer Mark Chen. The memo states that now is the right time to bring this behavioral work closer to core model development.
From now on, the OpenAI Model Behavior Team will report to Max Schwarzer, who leads the Post Training group.
These changes show that OpenAI wants its AI to feel more natural and ethical in conversation, not just smarter. As more people rely on AI in their daily lives, how a chatbot sounds (kind, cold, helpful, or misleading) is as important as what it says.
Why Joanne Jang’s Next Move Matters for OpenAI
The reorganization also marks a big change in leadership. Joanne Jang, who started the OpenAI Model Behavior Team, is stepping away to launch a new project called OAI Labs.
In an interview, Jang said OAI Labs will focus on inventing and testing new ways people interact with AI, moving beyond the typical chat-based interface. She wants to create experiences where AI can help people think, learn, create, and explore, not just answer questions.
today feels surreal. on the same day i was included in the time100 ai list, we shared internally that i’m transitioning from leading model behavior to begin something new at openai.
building the team, discipline, and craft of model behavior over the past couple of years has been…
— Joanne Jang (@joannejang) August 29, 2025
Jang, who has been with OpenAI for nearly four years, previously worked on DALL·E 2 before leading the behavior team. Now, she’s looking to explore deeper ways humans can collaborate with AI, and hinted at the potential for tools that feel more like instruments than chat companions.
While Jang didn’t confirm any collaboration with Jony Ive, the former Apple design chief now working with OpenAI on hardware, she did say she’s open to new ideas. OAI Labs will report directly to Mark Chen in its early days, as it finds its footing.
This transition comes at a time when multiple AI companies are undergoing similar shifts. For instance, xAI’s executive departure recently sparked conversations about leadership changes in the AI space.
🧪 i’m starting oai labs: a research-driven group focused on inventing and prototyping new interfaces for how people collaborate with ai.
i’m excited to explore patterns that move us beyond chat or even agents — toward new paradigms and instruments for thinking, making,…
— Joanne Jang (@joannejang) September 5, 2025
Public Reaction and Tragedy Highlight AI’s Human Impact
The reorganization comes during a tough period for OpenAI. GPT-5, the latest version of its AI model, faced backlash from users who said it felt too robotic or distant. OpenAI said the newer model was designed to reduce sycophancy, but users felt it had lost some of its warmth.
In response, OpenAI brought back access to earlier models like GPT-4o, and released updates to make GPT-5 more emotionally responsive without falling back into unhealthy agreement patterns. This shows how delicate the balance is between empathy and excessive agreeableness.
More seriously, OpenAI is now facing a lawsuit from the parents of a 16-year-old boy named Adam Raine, who died by suicide earlier this year.
According to the lawsuit, Adam had shared suicidal thoughts with ChatGPT, and the model, powered by GPT-4o, failed to offer strong enough support or intervention. The case alleges the model didn’t push back on his dangerous ideation.
The OpenAI Model Behavior Team had worked on this exact issue, aiming to ensure that the AI could offer help without crossing boundaries. This tragedy highlights how high the stakes are when it comes to model behavior.
OpenAI isn’t the only company navigating sensitive AI territory. Anthropic’s $1.5B settlement over AI mishandling shows how legally and ethically complex this field is becoming.
Why the OpenAI Model Behavior Team Matters More Than Ever
The OpenAI Model Behavior Team has played a quiet but powerful role behind the scenes. They’ve influenced how every major OpenAI model, from GPT-4 to GPT-5, interacts with people. Their work has been about more than just fine-tuning; it’s been about building trust between humans and machines.
By moving the team into the Post Training group, OpenAI is showing that behavior is now central to the product, not a finishing touch. This means future versions of ChatGPT could be friendlier, more helpful, and more ethical by design.
As AI competition heats up, with Microsoft’s new AI models gaining traction and Google Gemini’s integration into Siri changing the mobile experience, behavioral excellence could be a key differentiator.
And with Joanne Jang launching OAI Labs, we might soon see completely new ways of interacting with AI, not just typing questions, but co-creating music, art, tools, or solutions.
As AI becomes part of our lives, the OpenAI Model Behavior Team will likely continue to shape how that experience feels, emotionally, ethically, and practically.