Cursor’s latest release, Composer 2, was introduced as a major step forward in AI coding capability. Marketed as a “frontier-level” system with strong performance gains, the model was initially positioned as a significant in-house advancement.
But within days, the narrative shifted.
Independent developers and researchers began tracing technical signals that pointed toward a different foundation. What followed was not just a product discussion, but a broader debate around attribution, licensing, and how modern AI models are actually built.
The first signs came from an independent developer who analyzed API behavior and uncovered a model identifier linked to Kimi K2.5, an open model developed by Moonshot AI.
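This kind of discovery typically works by inspecting metadata fields in raw API responses rather than the generated text itself. The sketch below illustrates the general technique with a hypothetical response payload; the endpoint, field names, and identifier string are assumptions for illustration, not the actual data the developer found.

```python
import json

# Hypothetical response body from a chat-completion-style API.
# The "model" field and the identifier string are illustrative
# assumptions, not the real values observed in Cursor's API.
sample_response = json.dumps({
    "choices": [{"message": {"content": "..."}}],
    "model": "kimi-k2.5-base-ft",
})


def extract_model_id(raw_body: str):
    """Return the model identifier embedded in an API response body, if any.

    Providers often echo the underlying model name in response metadata,
    which is how base-model lineage can leak even when the product UI
    presents the system under a different brand name.
    """
    payload = json.loads(raw_body)
    return payload.get("model")


print(extract_model_id(sample_response))  # → kimi-k2.5-base-ft
```

In practice an analyst would capture many such responses and look for consistent identifiers, since a single field could be mislabeled by a proxy layer.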
Further validation came from Moonshot’s own team. By examining the tokenizer, they concluded that Composer 2 was almost certainly derived from Kimi, with additional training layered on top.
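Tokenizer comparison is a common lineage test because a model’s vocabulary is fixed at pretraining time and rarely changes during fine-tuning: a near-identical vocabulary is strong evidence of a shared base. The toy sketch below shows the idea with made-up vocabularies standing in for real tokenizer files; it is not Moonshot’s actual analysis.

```python
def vocab_overlap(vocab_a: dict, vocab_b: dict) -> float:
    """Jaccard similarity between two tokenizer vocabularies.

    A score near 1.0 between an unknown model and a known open model
    suggests shared lineage; unrelated models trained on different
    tokenizers score much lower.
    """
    tokens_a, tokens_b = set(vocab_a), set(vocab_b)
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


# Toy vocabularies (token -> id) used purely for illustration.
kimi_vocab = {"\u0120the": 0, "\u0120code": 1, "\u0120model": 2, "##ing": 3}
unknown_vocab = {"\u0120the": 0, "\u0120code": 1, "\u0120model": 2, "##ing": 3}
unrelated_vocab = {"the": 0, "code": 7, "\u0120model": 2, "run": 9}

print(vocab_overlap(kimi_vocab, unknown_vocab))    # → 1.0
print(vocab_overlap(kimi_vocab, unrelated_vocab))
```

A real comparison would load the full vocabulary files (tens of thousands of entries) and also check merge rules and special tokens, which together act as a fingerprint of the base model.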
This shifted the framing of the launch. Instead of a fully original system, Composer 2 began to look like a heavily fine-tuned version of an existing open model.
At the center of the controversy was not just the technical origin, but the licensing terms attached to Kimi K2.5.
Unlike standard permissive licenses, Kimi’s modified MIT-style license includes a specific clause: products that exceed certain usage or revenue thresholds must visibly attribute the model in their interface.
Given Cursor’s reported scale, this condition appeared relevant.
Early reactions suggested that this requirement had not been met. Questions emerged about whether Cursor had properly disclosed its use of Kimi or complied with attribution rules.
At that point, the situation was widely interpreted as a potential licensing conflict.
Under growing scrutiny, Cursor clarified its position.
According to the company, Composer 2 does use an open-source base model. However, they emphasized that the majority of the system’s performance comes from their own work.
Roughly one quarter of the compute was attributed to the base model, while the remaining portion involved continued pretraining and reinforcement learning conducted internally.
Cursor also stated that its use of Kimi was licensed through Fireworks AI, a model infrastructure provider. This shifted the conversation from unauthorized use to how that use was structured and disclosed.

In the days following the initial tension, Moonshot AI’s messaging evolved.
Official statements described the relationship as an authorized commercial collaboration, with Kimi K2.5 serving as the technical foundation for Composer 2.
Public comments from Moonshot acknowledged Cursor’s additional training work and framed the outcome as a positive example of open-model ecosystems in action.
This effectively de-escalated the earlier narrative of a licensing dispute.
Instead of a conflict, both sides began presenting the situation as a shared success built on layered contributions.
Beyond the specific companies involved, the episode highlights a larger shift in how AI systems are created and marketed.
First, it underscores how open models are increasingly forming the base layer of commercial products. What appears as a “new model” is often the result of combining existing architectures with additional training and optimization.
Second, it raises expectations around transparency. Developers and users are becoming more interested in understanding what models are built on, not just how they perform.
Finally, it introduces licensing as a strategic factor. Clauses requiring attribution or usage disclosure may influence how companies choose and present their underlying technologies.
The Composer 2 situation also reflects a broader industry pattern.
AI tools are no longer built entirely from scratch. Instead, they are assembled as layered systems, combining open models, proprietary training, and infrastructure providers.
This approach accelerates development but complicates ownership narratives.
For users, the distinction matters. Knowing what powers a tool can affect trust, performance expectations, and even geopolitical considerations in certain contexts.
What began as a product launch has turned into a case study in how modern AI ecosystems operate.
Composer 2 is not simply a standalone innovation. It is the result of combining an open model foundation with additional proprietary training and optimization.
The debate is no longer about whether this approach is valid. It is about how clearly it should be communicated.
And as AI tools continue to evolve, that question is likely to come up again.