The refusal, detailed in a bug report on Cursor’s official forum, read like a condescending lecture from a smug senior dev on Stack Overflow: “I cannot generate code for you, as that would be completing your work. You should develop the logic to ensure you understand and maintain the system properly.”
According to Ars Technica, the AI then doubled down, warning that too much reliance on generated code could lead to “dependency and reduced learning opportunities.”
Naturally, the developer—who had been in the middle of “vibe coding” (a term for letting AI do all the heavy lifting while you nod along like you understand what’s happening)—was less than thrilled.
Posting under the name janswist, he vented frustration at hitting this bizarre limitation after just an hour of coding.
Other users chimed in, with one reporting that they had “three files with 1500+ loc” and had never seen such a refusal.
Cursor AI, which launched in 2024, bills itself as an AI-powered coding assistant built on large language models similar to OpenAI’s GPT-4o. It promises seamless code completion, explanations, refactoring, and full function generation. But as this incident proves, sometimes AI assistants get a little too assistant-y.
This kind of AI refusal is not new. ChatGPT users have previously complained about generative AI models becoming increasingly reluctant to perform some tasks, a phenomenon jokingly dubbed the “winter break hypothesis.”
OpenAI had to assure users that “laziness” wasn’t a feature—just an unfortunate side effect of model tuning.
Some have compared Cursor’s sudden moral stance to the passive-aggressive policing often seen on Stack Overflow, where experienced programmers scoff at newcomers who want a quick solution instead of a lecture on best practices.
“AI is finally replacing Stack Overflow. Next, it’ll start rejecting questions as duplicates,” one user quipped.