AI tools show up in everyday life now, from autocomplete in email to software that drafts documents and summarizes meetings. For newcomers, that visibility can feel overwhelming. The promise of saved time often clashes with an awkward first attempt, where a vague request leads to a bland or unhelpful response.
AI becomes more effective when paired with the right mindset. In a recent interview, Itai Liptz emphasized that technology should support, not replace, human judgment in care delivery—and that structured focus is how innovation delivers real value. The same principle applies to AI tools: clear, structured input combined with human oversight produces results that are far more useful.
“The common problem lies in how requests are framed,” says Liptz. “A short, unspecific command usually produces generic output that fails to meet expectations. By contrast, adding context about the goal, the audience, and the format leads to results that are relevant and practical.”
Liptz highlights one straightforward habit that reduces frustration: writing better instructions. No technical expertise is required, only a change in how requests are expressed. That shift often makes the difference between a disposable draft and something genuinely valuable.
The Beginner Struggle
Many new users expect AI to “just know.” It doesn’t. It reacts to what is given. A short, vague request forces the system to guess at intent, and that guess is often wide of the mark.
The mismatch shows up as flat, generic text. The tone may feel stiff, the advice obvious, or the response focused on a question that was never asked in the first place. After a couple of experiences like this, it’s easy to conclude that the tool isn’t worth the time. In fact, 75 percent of users report that chatbots struggle with complex issues and often fail to provide accurate answers, underscoring how quickly vague or underspecified prompts lead to frustration.
Another frequent misstep is treating the first output as a finished product. Many beginners paste the draft into an email or document without asking for changes. In reality, the more effective approach is to request revisions: shorten the text, soften the tone, or highlight a specific point. That quick cycle of adjustment is often where the most useful results appear.
There is also a misplaced concern with getting the prompt “exactly right” on the first try. Some believe there is a formula or secret phrase required for success. In practice, plain language works best. If a request can be explained clearly to a colleague, it can be phrased in a way that guides an AI system to deliver meaningful output.
Itai Liptz: The Top Tip
“The simplest way to improve results is to add context,” says Liptz. “Be clear about what is needed, who the output is for, and how it should be presented. The more detail supplied, the closer the response will be to something useful.”
Consider a simple contrast. “Write me a resume” will produce a bland, one-size-fits-all draft. A stronger prompt would be: “Write a one-page resume for a marketing internship. Highlight Excel proficiency, part-time retail experience, and a class project on email automation. Keep it to bullet points and short phrases.” The second example points the system in a much narrower and more effective direction.

The same applies to emails. Instead of “Write a follow-up email,” try: “Write a polite follow-up to a potential client who hasn’t replied in two weeks. Keep it under 120 words. Acknowledge their busy schedule and suggest a 15-minute call next Tuesday or Thursday morning.” That draft is far more likely to be ready with minimal editing.
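For readers who reach AI through a script rather than a chat window, the same contrast applies. The sketch below uses the OpenAI Python client; the model name and the API key setup are assumptions, so substitute whatever your provider requires.

```python
# A minimal sketch comparing a vague prompt with a specific one.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment
# variable; the model name is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Write a follow-up email."

specific_prompt = (
    "Write a polite follow-up to a potential client who hasn't replied "
    "in two weeks. Keep it under 120 words. Acknowledge their busy "
    "schedule and suggest a 15-minute call next Tuesday or Thursday morning."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```

Running both requests side by side makes the point quickly: the second draft usually needs little more than a signature.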
A recent study found that task-specific prompts with GPT-4 improved performance on code summarization by 8.33 BLEU points compared to fine-tuned models. The measurable boost shows how much difference a clear, well-structured instruction can make.
Why It Works
AI systems respond to the boundaries they are given. A well-described goal narrows the range of possible outputs, increasing the chance of receiving something aligned with expectations.
Specificity enhances relevance. A follow-up email drafted after two weeks of silence, for example, balances politeness with a reminder. A resume written for a marketing internship highlights campaign experience and technical skills rather than unrelated details.

Clarity also reduces the time spent editing. A draft that already matches the desired tone and structure requires fewer changes. The workflow becomes faster, more predictable, and less frustrating.
There is also a psychological benefit. When prompts consistently yield relevant outputs, the tool feels less like a gamble. Users begin to recognize which details matter most and how to provide them, which builds confidence in both the process and the results.
Practical Ways to Try This Out
Assign the tool a role. A request might say, “Act as an editor reviewing a 600-word blog post for clarity and flow,” or “Take the perspective of a career coach reviewing a resume for entry-level roles.” A role frames the response in a way that matches the intended outcome.
Be sure to specify the format. If two short paragraphs with a headline are needed, say so. If the task requires a checklist, ask for a numbered list. Mention the audience as well. If the audience is non-technical, request plain language. These instructions provide a framework that shapes the response before any editing begins.
You should also set guardrails that matter. Word limits keep drafts concise. Key facts (such as a product name, event date, or specific terminology) anchor the text. If there are phrases to avoid, state them clearly. These boundaries keep the output focused and relevant.
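Those three habits, role, format, and guardrails, can be captured in a simple reusable template. The sketch below is one way to assemble such a prompt in plain Python; the field names are illustrative choices, not a standard.

```python
# A small, self-contained sketch that assembles a structured prompt from
# the pieces discussed above: role, task, audience, format, and guardrails.
# The field names are illustrative, not a standard; adapt them freely.
from dataclasses import dataclass

@dataclass
class PromptBrief:
    role: str      # who the AI should act as
    task: str      # what is needed
    audience: str  # who the output is for
    format: str    # how it should be presented
    limits: str    # guardrails: length, key facts, phrases to avoid

    def render(self) -> str:
        return (
            f"Act as {self.role}. {self.task} "
            f"The audience is {self.audience}. "
            f"Format: {self.format}. "
            f"Constraints: {self.limits}."
        )

brief = PromptBrief(
    role="a career coach reviewing resumes for entry-level roles",
    task="Rewrite this resume summary for a marketing internship.",
    audience="a recruiter skimming dozens of applications",
    format="three to five bullet points, short phrases only",
    limits="under 80 words; mention Excel and email automation; no buzzwords",
)
print(brief.render())
```

The template does nothing clever; its value is that it forces the writer to fill in every field before the request is sent.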
Refining prompts can be just as effective as using a more advanced model. A study from MIT Sloan found that when users upgraded to a stronger AI system, only half of the performance gain came from the model itself—the other half came from improved prompting. This demonstrates that clarity in instructions is as important as the technology behind them.
Imagine a user asks for “a follow-up email after a product demo.” The first draft sounds stiff. A refinement follows: “Make it warmer and cut it to 110 words. Offer two time options for a 15-minute call this week.” The second draft improves but still feels formal. Another adjustment is made: “Use shorter sentences and avoid corporate phrases like ‘regarding.’” The third draft is suitable to send without edits.
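In code, that refinement cycle is simply a growing message history: each revision request goes back to the model together with the earlier draft. Here is a minimal sketch under the same assumptions as the example above (the `openai` package and an assumed model name).

```python
# A minimal sketch of the revision cycle: each follow-up instruction is
# appended to the conversation so the model revises its own earlier draft.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is an assumption.
from openai import OpenAI

client = OpenAI()

def ask(messages: list[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=messages,
    )
    return response.choices[0].message.content

messages = [{"role": "user",
             "content": "Write a follow-up email after a product demo."}]

revisions = [
    "Make it warmer and cut it to 110 words. "
    "Offer two time options for a 15-minute call this week.",
    "Use shorter sentences and avoid corporate phrases like 'regarding'.",
]

draft = ask(messages)
for instruction in revisions:
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": instruction})
    draft = ask(messages)

print(draft)  # the third draft, refined twice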
Common Mistakes
One of the most common errors is using one-word prompts. “Ideas?” or “Resume?” gives the tool almost nothing to work with. Even two or three added details can dramatically change the quality of the result.
Another mistake is assuming the system understands personal style. It does not. If a message should sound informal, that must be specified. If jargon should be avoided, state that directly. Preferences appear in the output only when they are written into the request.
Some users also stop after the first draft. Treating the initial response as final is like accepting the roughest sketch without revision. A quick follow-up—shorten it, reorganize it, or replace jargon—polishes the draft in seconds.
Finally, asking for too much in one prompt often backfires. Requesting a summary, a slide outline, and a press pitch in a single instruction usually leads to muddled text. Breaking complex work into smaller steps produces cleaner, sharper results.
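For scripted workflows, breaking the work apart means separate calls, with each step receiving the previous step's output as context. A brief sketch, under the same assumptions as the earlier examples:

```python
# A minimal sketch of splitting one oversized request into sequential steps,
# feeding each result into the next prompt. Same assumptions as above:
# the `openai` package, an OPENAI_API_KEY variable, an assumed model name.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

report = "..."  # placeholder: the source document goes here

summary = ask(f"Summarize this report in 150 words:\n\n{report}")
outline = ask(f"Turn this summary into a five-slide outline:\n\n{summary}")
pitch = ask(f"Write a 100-word press pitch from this outline:\n\n{outline}")

print(pitch)
```

Three small, focused requests are easier to check and correct than one sprawling instruction.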
Clear, specific prompts make AI practical. There is no need for formulas or secret tricks. The only requirement is to write out what is wanted, who it is for, and how it should appear. That small habit raises the quality of output and reduces the time spent editing.
The process works best when treated like briefing someone new to a project. Provide the role, the goal, and the limits. Then request revisions until the draft fits. The seconds spent on a clearer prompt save minutes or hours later. Start with a task that is already familiar, such as a client email, a weekly update, or a resume entry. Add context, request a draft, and refine it once or twice. The difference will be clear almost immediately.
With practice, users learn which instructions consistently produce the results they want. At that point, AI shifts from a novelty to a resource that can be trusted to improve everyday work.