AI Keeps Writing Wrong Code? Master These 5 Prompt Techniques to Boost Efficiency by 50%

Friday afternoon, 4:30 PM. I stared at the code Cursor just generated, completely stunned.
I simply wanted it to help me write a user login endpoint. Instead, it gave me a 300-line monster—all parameter types wrong, database connection hardcoded, and three different error handling approaches mixed together. I took a deep breath and deleted everything. On the second try, it mangled my config files beyond recognition—files I never asked it to touch.
That moment, I realized: the problem wasn’t the AI. I never told it “what to do and what NOT to do.”
Ever feel this way? You installed the AI coding tool with enthusiasm, but after using it for a while, you find the generated code either completely misses the mark or looks right but breaks when you run it. After several rounds of revisions, it would’ve been faster to write it yourself.
Honestly, I used to think prompt engineering wasn't that important—it's just talking to AI, how hard could it be? Until I saw my colleague use the same Cursor to write the same feature, finishing in ten minutes what took me an hour. The difference? His prompts had clear structure, complete context, and well-defined boundaries.
This article will walk you through 5 immediately effective Cursor Prompt techniques. No deep theory—just practical formulas distilled from real experience. After reading, you’ll know how to make AI truly understand your requirements and generate accurate, usable code.
Why Your Prompts Keep “Failing”
AI Isn’t Mind Reading—It Can Only “See” What You Give It
Many people assume AI is so smart it should guess what they want. Wrong. No matter how powerful, AI is just a “sophisticated text completion engine”—it can only reason based on the information you provide. What you don’t say, it can only “guess.”
It’s like calling a friend to buy you coffee, only saying “get me coffee.” They arrive at the shop confused: Americano or latte? Large or medium? Sugar? They end up randomly picking one. AI works the same way.
Especially in large projects, AI’s “memory loss” is more obvious. Its context window is limited—it can’t load your entire codebase into its “brain.” You ask it to modify a function, but it has no idea this function is called by 20 other places. One change, everything crashes.
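Here's a tiny illustration of that failure mode (the getUser function and its callers are hypothetical):

```typescript
interface User { id: string; name: string; }

// The signature that 20 call sites across the codebase rely on:
export async function getUser(id: string): Promise<User> {
  return { id, name: 'demo' }; // stub body, for illustration only
}

// If the AI, seeing only this file, "improves" the signature to
//   getUser(options: { id: string; includeProfile?: boolean })
// every existing call like the one below stops compiling:
void getUser('42');
```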
Three Common Prompt Disasters
Disaster 1: Too Vague
❌ “Help me implement sorting”
This instruction is like entering “I want to go to Beijing” into GPS—where in Beijing? What transportation? When to arrive? AI receives this and can only improvise. It might write bubble sort or call a third-party library your project doesn’t have.
✅ “In utils/array.ts file, add a function to sort user list by registration time in descending order, using native Array.sort() method, no new dependencies”
See the difference? The latter specifies: which file, what data, what method, what constraints.
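For reference, here is roughly what a compliant result looks like: a minimal sketch, assuming a hypothetical User type with a registeredAt field.

```typescript
// utils/array.ts (sketch; the User shape is an assumption)
interface User {
  id: string;
  registeredAt: string; // ISO date string
}

// Newest registrations first, using only the native Array.sort().
// Sorting a copy avoids mutating the caller's array.
export function sortUsersByRegistrationDesc(users: User[]): User[] {
  return [...users].sort(
    (a, b) => Date.parse(b.registeredAt) - Date.parse(a.registeredAt)
  );
}
```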
Disaster 2: Lack of Context
❌ “Add error handling”
AI: Where? Handle what errors? Use try-catch or error codes? Return what format?
It’s like a doctor asking “what’s wrong” and you just reply “pain.” Can the doctor diagnose? No.
✅ “In api/login.ts handleLogin function, add error handling:
- Catch network request failures (use try-catch)
- Return unified error format { success: false, error: string }
- Reference existing error handling approach in api/register.ts”
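And a minimal sketch of what that prompt should produce, assuming a plain fetch-based implementation (the route and response shape are placeholders):

```typescript
// api/login.ts (sketch; route and response shape are assumptions)
type LoginResult =
  | { success: true; token: string }
  | { success: false; error: string };

export async function handleLogin(
  username: string,
  password: string
): Promise<LoginResult> {
  try {
    const res = await fetch('/api/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ username, password }),
    });
    if (!res.ok) {
      // Unified error format, as the prompt requires
      return { success: false, error: `Login failed (${res.status})` };
    }
    const { token } = await res.json();
    return { success: true, token };
  } catch {
    // The network-failure path the prompt explicitly asks to catch
    return { success: false, error: 'Network request failed' };
  }
}
```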
Disaster 3: No Constraints
❌ “Optimize this component’s performance”
AI receives this and might rewrite your code completely—introducing new libraries, modifying state management, even rewriting the entire component architecture. You just wanted a React.memo, but it gave you major surgery.
✅ “Optimize UserList.tsx component performance:
- Only use React.memo or useMemo
- Don’t modify component’s props interface
- Don’t introduce new third-party libraries”
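Here's what a compliant optimization might look like, sketched with an assumed props shape (whatever the real interface is, the constraint says it must not change):

```tsx
import React, { useMemo } from 'react';

interface User { id: string; name: string; }

// Assumed props shape; per the constraints it stays untouched.
interface UserListProps { users: User[]; filter: string; }

function UserListInner({ users, filter }: UserListProps) {
  // useMemo recomputes the filtered list only when its inputs change
  const visible = useMemo(
    () => users.filter((u) => u.name.includes(filter)),
    [users, filter]
  );
  return (
    <ul>
      {visible.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}

// React.memo skips re-renders when props are shallowly equal
export const UserList = React.memo(UserListInner);
```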
The Truth About AI “Hallucinations”
Ever experienced this: AI-generated code calls a function that doesn’t exist? Or references a variable your project doesn’t have? This is AI “hallucination.”
According to Cursor’s best practices research, providing clear context can reduce hallucination problems by about 70%. Why?
AI automatically “fills in the blanks” when information is incomplete. You don’t tell it what existing code looks like, so it guesses based on “common patterns.” For example, you ask it to write login functionality, and it might assume you have an authService.login() method—but your project doesn’t have this at all.
The solution is simple: explicitly show AI the relevant existing code.
Now every time I write a prompt, I ask myself three questions:
- Does AI know which file I’m working in?
- Does AI know the existing code structure?
- Does AI know what can and cannot be changed?
If I can’t answer these, I don’t rush to send the prompt.
The Golden Formula for Structured Prompts
The 5W1H Framework: Making AI Instantly Understand Your Needs
Journalists have a classic formula for writing news: 5W1H (Who, What, When, Where, Why, How). The same method works for writing prompts.
- What (Goal): What do you want AI to do?
- Why (Reason): Why do it this way? (Helps AI understand intent)
- Where (Location): Which file/module to operate on?
- When (Timing): When should it trigger?
- Who (Object): What data object to operate on?
- How (Method): What tech stack/library to use?
Sounds complex? You don’t need all of them every time. The core is: don’t make AI guess.
For example, you want AI to add data validation:
❌ Vague version:
Add user input validation

✅ 5W1H version:
**What**: Add input validation to user registration form
**Where**: src/components/RegisterForm.tsx
**Who**: Validate username, email, password fields
**How**: Use existing Yup library (already installed)
**When**: Trigger when user clicks submit button
**Why**: Prevent invalid data submission to backend

AI receives this instruction and won’t go off track.
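For context, here's a sketch of the validation that prompt points at. Yup's object/string/email/min API is real; the exact rules and the helper name are illustrative:

```typescript
import * as yup from 'yup';

// Schema for the three fields the prompt names (rules are illustrative)
const registerSchema = yup.object({
  username: yup.string().required('Username is required'),
  email: yup.string().email('Invalid email').required('Email is required'),
  password: yup.string().min(8, 'At least 8 characters').required('Password is required'),
});

// Run on submit; abortEarly: false reports every failing field at once
export async function validateRegisterForm(values: unknown) {
  return registerSchema.validate(values, { abortEarly: false });
}
```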
Cursor’s Official Structured Template
The Cursor team summarized a more practical template. I’ve used it for months—it really works:
## Goal
[One sentence describing what to implement]
## Context
[Current file, related code, tech stack]
## Current Behavior
[What the current state is]
## Desired Behavior
[What effect you want to achieve]
## Acceptance Criteria
[What conditions count as complete]
## Constraints
[What can't be modified, what rules must be followed]

The best part of this template is the Constraints section. Many people write prompts only saying “what to do,” not “what NOT to do,” resulting in AI going overboard with improvisation.
Real-World Example: React Login Functionality
Let me compare using a real scenario. Suppose you’re adding user login to a React project.
❌ Vague Prompt (Common for Beginners):
Help me implement user login functionality

AI receives this and might write a 200-line component, introduce libraries you don’t need, or even “invent” a non-existent API endpoint.
✅ Structured Prompt (Expert Approach):
## Goal
Implement user login functionality in React project
## Context
- Project uses React 18 + TypeScript
- State management: Redux Toolkit
- Existing API service: src/services/authService.ts (includes login method)
- UI library: Ant Design 5.x
## Current Behavior
Login page (src/pages/Login.tsx) has only static UI, no interaction logic
## Desired Behavior
1. User enters username and password, then clicks "Login"
2. Call authService.login() for authentication
3. Success: save token to Redux, navigate to /dashboard
4. Failure: show error message (use Ant Design's message.error)
## Acceptance Criteria
- [ ] Form validation: username and password cannot be empty
- [ ] Show loading state during login process
- [ ] Password input box uses type="password"
- [ ] Pass ESLint and TypeScript checks
## Constraints
- **Don't modify** authService.ts file
- **Don't introduce** new third-party libraries
- **Must use** existing Redux slice (src/store/authSlice.ts)

Compare the generation results:
First prompt, AI might give you code like this:
```typescript
// ❌ AI's improvised result
import { useState } from 'react';
import axios from 'axios'; // You didn't ask it to install new libraries, but it did

function Login() {
  const [username, setUsername] = useState('');
  const [password, setPassword] = useState('');

  const handleLogin = async () => {
    // Directly calls a non-existent API
    const res = await axios.post('/api/auth/login', { username, password });
    localStorage.setItem('token', res.data.token); // Doesn't use Redux
  };
  // ... 150 more lines of code you don't need
}
```

Second prompt, AI will strictly follow your requirements:
```typescript
// ✅ Structured Prompt result
import { useState } from 'react';
import { Form, Input, Button, message } from 'antd';
import { useDispatch } from 'react-redux';
import { login } from '@/services/authService';
import { setToken } from '@/store/authSlice';
import { useNavigate } from 'react-router-dom';

export default function Login() {
  const [loading, setLoading] = useState(false);
  const dispatch = useDispatch();
  const navigate = useNavigate();

  const handleSubmit = async (values: { username: string; password: string }) => {
    setLoading(true);
    try {
      const { token } = await login(values.username, values.password);
      dispatch(setToken(token));
      navigate('/dashboard');
    } catch (error) {
      message.error('Login failed, please check username and password');
    } finally {
      setLoading(false);
    }
  };

  return (
    <Form onFinish={handleSubmit}>
      <Form.Item name="username" rules={[{ required: true, message: 'Please enter username' }]}>
        <Input placeholder="Username" />
      </Form.Item>
      <Form.Item name="password" rules={[{ required: true, message: 'Please enter password' }]}>
        <Input.Password placeholder="Password" />
      </Form.Item>
      <Button type="primary" htmlType="submit" loading={loading}>
        Login
      </Button>
    </Form>
  );
}
```

See the difference? The second code:
- Uses your specified Ant Design components
- Calls your existing authService
- Uses Redux to store token
- Has form validation and loading state
- Doesn’t introduce any new dependencies
This is the power of structured prompts.
5 Immediately Effective Prompt Techniques
Technique 1: Plan First, Code Later (Plan Mode)
This is the feature that surprised me most after using it.
Before, I’d let AI start coding directly, which often went off the rails—wrong files modified, critical code deleted, messy logic. Then I discovered Cursor’s Plan Mode (press Shift+Tab to switch), which completely changed my workflow.
Plan Mode logic: AI thinks first, then acts.
Specific process:
- You describe requirements
- AI analyzes your codebase and asks questions (like “Do you want to keep existing error handling?”)
- AI creates an action plan (which files to modify, what features to add)
- You review and modify this plan
- After confirmation, AI starts writing code
Why is this so useful? It gives you a chance to reconsider before anything is touched.
For example, last week I asked AI to add internationalization to my project. In Plan Mode, AI first gave me a plan:
Plan:
1. Install i18next and react-i18next
2. Create src/i18n/locales/ directory
3. Modify App.tsx to add I18nProvider
4. Update hardcoded text in all components (about 30 files)

Seeing step 4, I immediately stopped: “Wait, don’t touch those 30 files, let me try 3 pages first.” Then I modified the plan before AI executed.
Without Plan Mode, AI might have already modified all 30 files, making recovery a nightmare.
Practical Value: Prevents AI from directly messing up code, especially suitable for large refactoring and new feature development.
Technique 2: Provide Specific Code Examples
AI has a trait: it’s excellent at “learning by example.” Show it a few examples, and it can mimic your style. In technical terms, this is called Few-shot Prompting.
Scenario: You want AI to help write API endpoints, but you have your own code style and error handling habits.
❌ Prompt without examples:
Help me write an API endpoint to get user list

AI might write it like this:

```typescript
app.get('/users', (req, res) => {
  // Style completely different from yours
  const users = db.getUsers();
  res.json(users);
});
```

✅ Prompt with examples:
Help me write an API endpoint to get user list, referencing the style of existing endpoints:

**Example 1**:
\`\`\`typescript
export const getProductById = async (req: Request, res: Response) => {
  try {
    const { id } = req.params;
    const product = await productService.findById(id);
    if (!product) {
      return res.status(404).json({
        success: false,
        error: 'Product not found'
      });
    }
    res.json({ success: true, data: product });
  } catch (error) {
    logger.error('Error fetching product:', error);
    res.status(500).json({
      success: false,
      error: 'Internal server error'
    });
  }
};
\`\`\`

Please implement getUserList endpoint following this style:
- Use async/await
- Unified error handling
- Return format: { success, data/error }
- Use logger to record errors

AI will generate code like this:
```typescript
export const getUserList = async (req: Request, res: Response) => {
  try {
    const users = await userService.findAll();
    res.json({ success: true, data: users });
  } catch (error) {
    logger.error('Error fetching users:', error);
    res.status(500).json({
      success: false,
      error: 'Internal server error'
    });
  }
};
```

Perfect match for your code style.
Pro Tip: Create a prompts/templates.md file in your project root, store common code examples there, and copy-paste into prompts when needed.
Technique 3: Explicitly State “What NOT to Do”
This technique has saved me countless times.
AI defaults to trying to “help” you—but sometimes its “help” is what you don’t need. For example, you ask it to fix a bug, and it might “optimize” surrounding code along the way, introducing new problems.
According to Cursor team research, explicit constraints can reduce unexpected modifications by about 60%.
Real-world case:
❌ Prompt without constraints:
Fix pagination bug in UserList component

AI might:
- Modify component’s props interface (causing parent component errors)
- Rewrite entire pagination logic (introducing new bugs)
- Introduce new third-party libraries (increasing dependencies)
✅ Prompt with clear constraints:
Fix pagination bug in UserList component
**Constraints**:
- **Don't modify** component's props interface (TypeScript types)
- **Don't introduce** new third-party libraries
- **Only modify** logic inside handlePageChange function
- **Don't change** styles and UI structureAI will strictly follow these rules, only changing what’s necessary.
There’s also an advanced technique: ask AI to list its assumptions first.
Fix pagination bug, before starting please list your assumptions:
1. What do you think is causing the bug?
2. What code do you plan to modify?
3. Will you introduce any dependencies or change any interfaces?

AI will answer these questions first. You can check if its understanding is correct before letting it proceed. This works especially well for complex bug fixes.
Technique 4: Use Acceptance Criteria as Anchors
This technique is borrowed from agile development.
Define acceptance criteria at the beginning (not the end) of your prompt. AI will use this as a goal for reasoning, like setting “must-achieve objectives.”
Compare:
❌ Without acceptance criteria:
Refactor DataTable component performance

AI: What counts as complete refactoring? What method? What measurement standard? (It can only guess)
✅ With acceptance criteria:
Refactor DataTable component performance
**Acceptance Criteria** (must satisfy all):
- [ ] When rendering 1000 rows of data, FPS not below 55
- [ ] Use React.memo and useMemo
- [ ] Don't modify component's API (props and callbacks)
- [ ] Pass all existing unit tests
- [ ] Code must pass ESLint checks
**Current Problem**:
When rendering 500 rows, FPS drops to 20, scrolling is laggy

AI will use these standards as the “North Star,” ensuring every modification moves toward this direction. If it finds an optimization would break tests or change the API, it will automatically avoid it.
Benefit: Prevents AI from making “seemingly useful but actually useless” changes.
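For a sense of what satisfies those criteria, one common shape of the fix is memoizing at the row level, so updating one row no longer re-renders the other 999 (the Row and DataTable components here are hypothetical):

```tsx
import React from 'react';

interface RowData { id: string; name: string; score: number; }

// Hypothetical row renderer; React.memo means a table re-render only
// re-renders rows whose data actually changed.
const Row = React.memo(function Row({ row }: { row: RowData }) {
  return (
    <tr>
      <td>{row.name}</td>
      <td>{row.score}</td>
    </tr>
  );
});

// The table's public API (props and callbacks) stays exactly as it was,
// which is what the acceptance criteria demand.
export function DataTable({ rows }: { rows: RowData[] }) {
  return (
    <table>
      <tbody>
        {rows.map((r) => (
          <Row key={r.id} row={r} />
        ))}
      </tbody>
    </table>
  );
}
```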
Technique 5: Distinguish Inline Edit and Agent Mode
Many people don’t know Cursor has two working modes. Using the right mode for the right scenario can double efficiency.
Inline Edit (Cmd+K):
- Suitable for: Single-file small changes, refactoring, bug fixes
- Characteristics: Fast, precise, won’t mess with cross-file changes
- Examples: Rename function, add parameter, fix bug
Agent Mode (Cmd+I or Chat):
- Suitable for: Multi-file reasoning, new feature development, architecture-level changes
- Characteristics: Can analyze entire codebase, cross-file operations, create plans
- Examples: Add new features, large-scale refactoring, tech stack migration
I used to use Agent Mode for everything, which was inefficient. Later I discovered: simple tasks use Inline Edit, complex tasks use Agent Mode.
Decision tree:
```
Need to modify multiple files?
├─ Yes → Agent Mode
└─ No → Need to understand complex context?
    ├─ Yes → Agent Mode
    └─ No → Inline Edit
```

For example:
Scenario 1: Rename function getUserData to fetchUserData
→ Use Inline Edit: Select function name, Cmd+K, type “rename to fetchUserData,” instant change.
Scenario 2: Add user permission management (involves Model, Controller, View, Config multiple files)
→ Use Agent Mode: It will analyze project structure, ask questions, create plan, then implement across files.
Consequences of choosing wrong mode:
- Inline Edit for complex tasks → Can’t see global picture, easily misses related changes
- Agent Mode for simple tasks → Overkill, longer wait time, might over-deliver
Pro Tip: If unsure which to use, first use Agent Mode’s Plan feature to see how many files it plans to change. If only 1-2 files, switch back to Inline Edit for speed.
Advanced Techniques: Building Your Prompt Toolkit
Create Reusable Prompt Templates
After writing prompts for a while, you’ll notice many tasks are repetitive—adding API endpoints, adding components, writing unit tests, database migrations…
Instead of writing from scratch each time, turn common prompts into templates.
My project root directories now all have a prompts/ folder, categorized by task type:
```
prompts/
├── api-endpoint.md       # API endpoint template
├── react-component.md    # React component template
├── unit-test.md          # Unit test template
├── migration.md          # Database migration template
└── refactor.md           # Refactoring template
```

Example: API Endpoint Template (prompts/api-endpoint.md)
# New API Endpoint Template
## Goal
Add [feature description] API endpoint in [module name]
## Context
- Route file: src/routes/[module].routes.ts
- Controller: src/controllers/[module].controller.ts
- Service layer: src/services/[module].service.ts
- Data model: src/models/[module].model.ts
## Desired Behavior
**Request**:
- Method: [GET/POST/PUT/DELETE]
- Path: /api/[path]
- Body/Query: [parameter description]
**Response**:
- Success: { success: true, data: [...] }
- Failure: { success: false, error: "..." }
## Acceptance Criteria
- [ ] Follow existing error handling patterns (reference other endpoints)
- [ ] Use async/await
- [ ] Add input validation (use Joi or Zod)
- [ ] Log errors (use logger)
- [ ] Pass TypeScript type checking
## Constraints
- **Don't modify** existing route configuration structure
- **Don't introduce** new third-party libraries (unless necessary and justified)
- **Must use** existing database connection (don't create new connection)

When needed, copy the template and fill in the blanks.
Team Collaboration Benefit: Team members share these templates, everyone writes code with consistent style, and onboarding new members is faster.
Use Agent Skills (2026 New Feature)
This is a major trend in 2026 for AI coding tools.
Simply put, Agent Skills are reusable “AI applications”. You can encapsulate best practices into a Skill and call it when needed, like calling a function.
Example:
When writing blog articles, I used to add images manually each time. Later I created an Agent Skill that automatically:
- Analyzes article content
- Identifies sections needing images
- Generates image prompts
- Marks insertion positions
Now I just run this Skill, and it completes the entire process automatically.
How to Create Agent Skills?
In Cursor, you can create custom Skills in the .cursor/ directory:
```
.cursor/
└── skills/
    ├── code-review.md       # Code review Skill
    ├── test-generator.md    # Test generation Skill
    └── api-doc.md           # API documentation Skill
```

Latest Update: In December 2025, Agent Skills became an open specification, now supported by mainstream tools like Claude Code, Cursor, VS Code, and GitHub Copilot. This means Skills you write can be used across tools.
Honestly, I initially thought Skills were “over-engineering” until I used team-shared Skills in a collaborative project—code quality and efficiency immediately jumped to another level.
Optimize Project Structure to Help AI Understand
AI isn’t human—it understands codebases differently than we do. A clearly structured project makes AI perform much better.
Practical Advice:
1. Create a /prompts folder
Store common prompt templates and code examples for easy copy-pasting.

2. Use the .cursor/ directory to store project context
Create a .cursor/context.md file documenting:
- Project tech stack
- Code style guidelines
- Commonly used third-party libraries
- Special conventions (like “we prefix private methods with _”)

AI will automatically read this file and understand your project characteristics.

3. Keep directory structure and naming clear
AI infers code purpose from file names and directory structure. If your files are named utils.ts, helpers.ts, common.ts, AI will be confused—what are these? Better approach:

```
utils/
├── date-formatter.ts
├── string-validator.ts
└── array-helpers.ts
```
My experience: the messier the project structure, the more AI hallucinations you’ll see. Organize your directory structure well—not only is it more comfortable for humans, AI understands you better too.
Pitfall Guide: Common Mistakes and Solutions
Mistake 1: Information Overload—Giving Too Much at Once
When I first started using AI coding tools, I made this mistake: pasting the entire requirements doc, three related files, and five reference articles to AI at once, thinking “more information is better.”
Result? AI got overwhelmed.
It either missed the key points, misunderstood them, or simply timed out. It’s like telling someone ten things at once: they barely remember any of them.
Correct Approach: Progressive Disclosure
Don’t dump all information to AI at the start. Instead:
- First state core requirements
- When AI has questions, provide detailed information
- Give context step by step
Example:
❌ Information overload version:
Help me refactor user authentication module.
[Paste 500 lines of code]
[Paste requirements document]
[Paste three technical articles]
Requirements: performance improvement, security hardening, elegant code...

✅ Progressive version:
Round 1:
I want to refactor the user authentication module to improve security.
Currently using JWT, the problem: no token refresh mechanism.
What information do you need?
[AI will ask: What's the current token expiration time? Where is it stored?]
Round 2:
Token expires in 1 hour, stored in localStorage.
Here's the current authentication logic (key code):
[Only paste core 50 lines of code]
What's your improvement plan?
[AI gives solution, you confirm then let it proceed]

Benefits: AI can focus on core issues, and you can correct its understanding anytime.
Mistake 2: Over-relying on AI Without Reviewing Code
This is the mistake that burned me the worst.
Once, AI wrote a data processing function for me. No errors when running, unit tests passed. The code looked neat, so I merged it directly. After going live, I discovered—this function returned wrong results in edge cases, causing user data corruption.
AI-generated code that “looks completely correct” is the most dangerous kind.
Warning: The faster AI generates, the more carefully you need to review.
I’ve now formed a habit. After AI writes code, I ask myself three questions (a sketch of the first follows this list):
- Edge cases: Empty array, null, undefined, max value, min value—handled?
- Side effects: Will this code affect other modules?
- Performance: Any infinite loops, memory leaks, unnecessary repeated calculations?
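For the first question, this is the kind of guard that AI-generated code often omits. The average function here is purely illustrative:

```typescript
// Illustrative only: the kind of edge-case handling worth checking for.
export function average(values: number[] | null | undefined): number {
  // Without this guard, empty, null, or undefined input would divide by
  // zero (producing NaN) or throw before reaching the reduce below.
  if (!values || values.length === 0) return 0;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```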
Practical Tip: Ask AI to review its own code.
Please review the code you just generated and answer:
1. What happens if the input is an empty array?
2. Does this code have performance issues?
3. What assumptions did you make? Do these assumptions hold in all cases?

AI will recheck the code and often finds its own problems.
Mistake 3: Expecting Perfect Answer in One Shot
Many people treat AI like a “magic button”—input requirements, instantly get perfect code. This isn’t realistic.
AI is more like a collaborator, not a “fully automatic code generator.” The most efficient way to work is iterative dialogue:
- You describe requirements
- AI gives initial solution or questions
- You confirm, correct, supplement
- AI optimizes based on feedback
- Repeat steps 3-4 until satisfied
Correct Example:
You: Help me implement search functionality for user list
AI: Sure, I'll add a search box to filter the user list in real-time.
Which fields do you want to search? Username, email, or others?
You: Search username and email, but don't need real-time filtering,
trigger after clicking "Search" button
AI: Got it. Frontend filtering or send request to backend?
You: Frontend filtering, data volume is small
AI: OK, I'll use Array.filter(),
let me show you the implementation plan first...

See? This is conversation, not “giving orders.”
Tip: Enable Cursor’s “Show Work” mode (in settings). AI will show its reasoning steps—you can see how it thinks, making it easier to spot problems.
Bonus: Don’t Waste Time “Fine-tuning” Prompts
Final counterintuitive advice: Don’t overly pursue perfect prompts.
I’ve seen people spend 30 minutes meticulously designing a prompt, only to have the generated code not as good as writing it themselves. This is putting the cart before the horse.
The purpose of prompt engineering is to improve efficiency, not to show off how beautiful your prompts are. If you can write a task yourself in 10 minutes, don’t spend 20 minutes writing a prompt.
When is it worth using AI?
- ✅ Repetitive tasks (batch modifications, template generation)
- ✅ Unfamiliar domains (new tech stacks, new libraries)
- ✅ Complex but patterned tasks (API endpoints, CRUD operations)
When is it not worth it?
- ❌ Code so simple you could write it with eyes closed
- ❌ Algorithm design requiring deep thinking
- ❌ Highly customized code with no reference patterns
Remember: AI is a tool, not the goal. Solving problems quickly is what matters.
Conclusion
Writing good prompts isn’t rocket science. The core is three things: clear goals, sufficient context, defined boundaries.
Recap of this article’s 5 key techniques:
- Use Plan Mode to plan first—give yourself a chance to reconsider
- Provide code examples—make AI follow your style
- Specify constraints—tell AI what not to touch
- Set acceptance criteria—give AI a clear goal
- Choose the right mode—Inline Edit for simple tasks, Agent Mode for complex ones
Master these, and you’ll find AI is no longer an “incompetent assistant” but a genuine programming partner that boosts efficiency. After using these methods myself, AI code generation accuracy improved by at least 50%, and many repetitive tasks now take just 10 minutes.
Open Cursor right now and try using the structured template. Starting with your next feature, write prompts in “Goal - Context - Desired Behavior - Constraints” format. You’ll immediately feel the difference.
Remember: The future of AI coding isn’t “AI replacing programmers,” but “programmers who use AI replacing those who don’t.” This skill gap starts with your first high-quality prompt.
Using Structured Prompts to Improve AI Code Quality
Master the complete practical workflow of Cursor Prompt engineering, from vague instructions to precise output
⏱️ Estimated time: 30 min
Step 1: Understand AI's Working Mechanism to Avoid Common Prompt Disasters
AI can only reason based on information you provide, not mind reading. Three common disasters:
**Too Vague**:
❌ "Help me implement sorting"
✅ "In utils/array.ts file, add function to sort user list by registration time descending, use native Array.sort(), no new dependencies"
**Lack of Context**:
❌ "Add error handling"
✅ "In api/login.ts handleLogin function add error handling: catch network request failures (try-catch), return unified format { success: false, error: string }, reference api/register.ts approach"
**No Constraints**:
❌ "Optimize this component's performance"
✅ "Optimize UserList.tsx component performance: only use React.memo or useMemo, don't modify props interface, don't introduce new third-party libraries"
Key points: clear context reduces AI hallucinations by about 70%; explicit constraints reduce unexpected modifications by about 60%.

Step 2: Use the Structured Prompt Golden Formula
Adopt Cursor's official 6-part template:
## Goal
One sentence describing what to implement
## Context
• Project tech stack (React 18 + TypeScript)
• Related file paths (src/pages/Login.tsx)
• Existing services (src/services/authService.ts)
• Libraries used (Ant Design 5.x)
## Current Behavior
Login page has only static UI, no interaction logic
## Desired Behavior
1. User enters username password then clicks login
2. Call authService.login() for validation
3. Success: save token to Redux, navigate to /dashboard
4. Failure: show error message (Ant Design message.error)
## Acceptance Criteria
• Form validation: username and password cannot be empty
• Show loading state during login
• Password input uses type="password"
• Pass ESLint and TypeScript checks
## Constraints
• Don't modify authService.ts file
• Don't introduce new third-party libraries
• Must use existing Redux slice (src/store/authSlice.ts)
This template ensures AI doesn't guess and generates code that conforms to your project standards.

Step 3: Master 5 Immediately Effective Techniques
**Technique 1: Plan First, Code Later (Plan Mode)**
• Press Shift+Tab to switch to Plan mode
• AI analyzes codebase and asks questions
• Creates action plan (which files to modify)
• You review and modify plan before execution
• Prevents AI from directly messing up code
**Technique 2: Provide Specific Code Examples (Few-shot Prompting)**
• Paste your existing code examples in prompt
• AI mimics your code style and error handling habits
• Recommend creating prompts/templates.md for common examples
**Technique 3: Explicitly State "What NOT to Do"**
• List prohibited items in Constraints section
• Cuts unexpected modifications by about 60%
• Advanced: ask AI to list assumptions first (bug cause, modification scope, dependency changes)
**Technique 4: Use Acceptance Criteria as Anchors**
• Define acceptance criteria at prompt beginning
• AI uses this as "North Star" for reasoning
• Prevents "seemingly useful but actually useless" changes
**Technique 5: Distinguish Inline Edit and Agent Mode**
• Inline Edit (Cmd+K): single-file small changes, fast and precise
• Agent Mode (Cmd+I): multi-file reasoning, new features, architecture changes
• Decision tree: need multiple files? → Yes: Agent Mode; No: Inline Edit

Step 4: Build a Reusable Prompt Toolkit
**Create Prompt Template Library**:
Create prompts/ folder in project root, categorized storage:
• api-endpoint.md (API endpoint template)
• react-component.md (React component template)
• unit-test.md (unit test template)
• migration.md (database migration template)
• refactor.md (refactoring template)
Each template includes Goal, Context, Desired Behavior, Acceptance Criteria, Constraints—copy and fill in when needed.
**Use Agent Skills (2026 New Feature)**:
Create reusable AI applications in .cursor/skills/ directory:
• code-review.md (code review Skill)
• test-generator.md (test generation Skill)
• api-doc.md (API doc generation Skill)
December 2025: Agent Skills became open specification, usable across Cursor, VS Code, GitHub Copilot.
**Optimize Project Structure to Help AI Understand**:
• Create .cursor/context.md documenting project tech stack, code style, special conventions
• Use clear directory structure and file naming (date-formatter.ts not utils.ts)
• Messier project structure = more AI hallucinations

Step 5: Avoid Three Common Mistakes
**Mistake 1: Information Overload**
❌ Paste 500 lines of code + requirements doc + technical articles at once
✅ Progressive disclosure: state core requirements → AI asks → provide details → step-by-step context
**Mistake 2: Over-relying on AI Without Code Review**
After AI writes code ask yourself:
• Edge cases (empty array, null, undefined, extreme values) handled?
• Will this code affect other modules?
• Any infinite loops, memory leaks, unnecessary repeated calculations?
Practical tip: ask AI to review its own code, answer "what if input is empty array" "any performance issues" "what assumptions"
**Mistake 3: Expecting Perfect Answer in One Shot**
AI is collaborator not magic button, correct approach is iterative dialogue:
1. You describe requirements
2. AI gives initial solution or questions
3. You confirm, correct, supplement
4. AI optimizes based on feedback
5. Repeat 3-4 until satisfied
Enable Cursor's "Show Work" mode to view AI's reasoning steps, easier to spot problems.
FAQ
Why does AI still generate wrong code even when my prompt is detailed?
Usually one of three things is missing:
• Did you specify the file path? (AI doesn't know which file to operate on)
• Did you provide existing code examples? (AI doesn't know your code style)
• Did you state constraints? (AI over-improvises)
Practical approach: answer three questions in your prompt: Does AI know which file I'm working in? Does AI know existing code structure? Does AI know what can and cannot be changed? If you can't answer, supplement relevant information before sending prompt.
Also, use Plan Mode to have AI create a plan first—you review before execution, avoiding many problems.
When to use Inline Edit vs Agent Mode?
**Need to modify multiple files?**
• Yes → Use Agent Mode (Cmd+I or Chat)
• No → Need to understand complex context?
- Yes → Agent Mode
- No → Inline Edit (Cmd+K)
Specific scenarios:
• Inline Edit: rename function, add parameter, fix single bug, refactor single function
• Agent Mode: add new feature (involves Model/Controller/View), large refactoring, tech stack migration, tasks requiring codebase analysis
Pro tip: unsure? First use Agent Mode's Plan feature to see how many files it plans to change—if only 1-2 files, switch back to Inline Edit for speed.
How to make AI-generated code match my project's code style?
**Method 1: Provide Specific Code Examples (Most Effective)**
Paste your existing code examples in prompt—AI will mimic your style, error handling, naming conventions. This is called Few-shot Prompting.
**Method 2: Create Project Context File**
In .cursor/context.md document:
• Project tech stack
• Code style guidelines (e.g., "we prefix private methods with _")
• Commonly used third-party libraries
• Special conventions
AI will automatically read this file.
**Method 3: Create Reusable Prompt Templates**
Create prompts/ folder in project root, store API endpoint, React component, unit test templates—copy and fill when needed. Team members share templates, code style naturally consistent.
What if AI-generated code looks right but has runtime bugs?
**Always review AI-generated code**, ask yourself three questions:
1. Edge cases handled? (empty array, null, undefined, max/min values)
2. Will this code affect other modules? (side effect check)
3. Any performance issues? (infinite loops, memory leaks, unnecessary repeated calculations)
**Practical Tip**: Ask AI to review its own code
"Please review the code you just generated, answer: 1) What happens if input is empty array? 2) Any performance issues? 3) What assumptions did you make? Do these assumptions hold in all cases?"
AI will recheck and often finds its own problems. Also enable Cursor's "Show Work" mode to view AI's reasoning steps, easier to spot potential issues.
When is it worth using AI to write code, and when not?
Worth using AI:
• Repetitive tasks (batch file modifications, template code generation, refactor repeated logic)
• Unfamiliar domains (new tech stacks, new libraries, rarely-used APIs)
• Complex but patterned tasks (API endpoints, CRUD operations, form validation, data transformation)
Not worth using AI:
• Code so simple you could write with eyes closed (rename variable, add comment)
• Algorithm design requiring deep thinking (core business logic, complex algorithm optimization)
• Highly customized code with no reference patterns (innovative features, special business scenarios)
Judgment standard: if you can write it yourself in 10 minutes, don't spend 20 minutes writing a prompt. Prompt engineering's purpose is improving efficiency, not showing how beautiful your prompts are. AI is a tool not the goal—solving problems quickly is what matters.
How exactly do I use Plan Mode? In what scenarios is it most effective?
**How to Use**:
1. Press Shift+Tab to switch to Plan mode
2. Describe your requirements
3. AI analyzes codebase and asks questions (like "do you want to keep existing error handling?")
4. AI creates action plan (lists which files to modify, what features to add)
5. You review and modify this plan
6. After confirmation AI starts writing code
**Most Effective Scenarios**:
• Large refactoring (prevents AI from modifying wrong files or deleting critical code)
• New feature development (see plan first when involving multiple files)
• When unsure if AI understands correctly (verify AI's understanding through plan)
• Complex tasks (like adding i18n, database migration)
Plan Mode's core value is giving you a "chance to regret," preventing AI from directly messing up code—especially suitable for irreversible major changes.
How to prevent AI-generated code from introducing unwanted third-party libraries?
**Explicitly Prohibit New Libraries**:
"**Don't introduce** new third-party libraries"
"**Don't install** any npm packages"
"**Only use** existing project dependencies (libraries in package.json)"
**Specify Required Libraries**:
"**Must use** existing Ant Design components (5.x installed)"
"**Must use** project's authService (src/services/authService.ts)"
**Provide Examples of Existing Libraries**:
List existing project dependencies and usage in Context section—AI will prioritize these libraries instead of "inventing" new ones.
**Use Plan Mode**:
When AI creates plan you can already see what libraries it plans to introduce—stop unwanted dependencies in time.
Remember: more explicit constraints = less AI improvisation.