In today’s column, I have put together a comprehensive compilation of the topmost prompt engineering techniques that I’ve covered to date in my column postings. Those column postings have amassed over 500,000 views and showcase ongoing keen interest in prompt engineering best practices.
For those of you who might be new to my column, I broadly explore the latest advances in AI across the board, including topics such as embodied AI, AI reasoning, high-tech breakthroughs in AI, prompt engineering, training of AI, fielding of AI, AI regulations, architecture and new hardware for AI, governance of AI, and so on. It is an eclectic mix.
I typically include prompt engineering in my mix of AI topics on a round-robin basis and aim to bring to the fore the latest breakthroughs and exemplary prompting approaches. My coverage delves extensively into the best practices of prompting. I prefer to identify and recommend only prompt engineering practices that are solidly backed by practical research and that I leverage in my own daily use of generative AI.
Prompt engineering is a field of endeavor that continues to be of crucial value. I predict that prompt engineering has long legs, and doing it proficiently and effectively requires genuine skill and a well-honed set of techniques. It is for that reason that I keep on top of where prompt engineering is headed and bring the best new techniques to the reader’s attention.
For my ongoing overall coverage of the latest AI advances, discoveries, and trending AI innovations, see the link here.
Introduction To This Prompt Engineering Compilation
A few remarks before jumping into the compilation.
Each technique is briefly summarized, and a handy link is provided to my detailed coverage that explains how the technique is actively performed and carefully notes the necessary ins and outs. Examples are also shown in those in-depth postings.
I posted a similar compilation in May 2024 (see the link here) containing, at that time, about 50 vital prompting techniques. Since then, I’ve continued to regularly cover the latest in prompt engineering. I’ve therefore included the additional prompt engineering postings from the latter part of 2024 and up to now in 2025. The grand total amounts to 82 keystone prompting techniques. Please be aware that many of my other postings also mention ad hoc prompting practices and provide tips and insights here and there, while the ones I’m covering here each had an entire column devoted to their efficacy.
One of the most frequent questions I get at my presentations and talks is how someone can become a proficient prompt engineer.
Here’s my recommendation.
Go through every prompt engineering technique that I lay out here. Make sure to use the provided online links and fully read the detailed indications that underpin each technique (no skipping, no idle eyeballing). Try extensively using the technique in your favored generative AI app. Quiz yourself to double-check that you really know how to use each technique. Be honest. Be fair and square.
Upon completion of that quest, I would say that you are on your way to being a top-flight prompt engineer. The follow-up would be to practice using the techniques and feel fully comfortable that they are at your fingertips and readily contained in your prompt engineering mental toolkit.
My Recommended Prompt Engineering Techniques
I’ve listed the techniques in alphabetical order.
Regarding the naming of each technique, there isn’t a standardized industry-wide naming convention; thus, I have used the names or phrases that I believe are most often utilized. The aim is to give a generalized indication so that you’ll immediately land in the right ballpark of what each technique entails.
Here we go.
Add-On Prompting
You can use special add-ons that plug into generative AI and aid in either producing prompts or adjusting prompts. For various examples and further detailed indications about the nature and use of add-ons for prompting, see my coverage at the link here.
Agentic AI Prompting
Agentic AI is hot. The idea is to use generative AI to undertake tasks on an end-to-end basis. A popular example would be generative AI that not only advises about planning a trip but also proceeds to make all the flight and hotel bookings needed. The advent of agentic AI has also brought forth the need for additional prompt engineering techniques to suitably lean into AI agents. For various examples and further detailed indications about the nature and use of agentic AI prompting, see my coverage at the link here.
AI Hallucination Avoidance Prompting
One of the most pressing problems about generative AI is that the AI can computationally make up falsehoods that seem to be portrayed as truths, an issue known as AI hallucinations (I disfavor the catchphrase because it tends to anthropomorphize AI, but it has unfortunately caught on as a phrase and we seem to be stuck with it). For various examples and further detailed indications about the nature of AI hallucinations, see my extensive coverage at the link here, the link here, and the link here.
Atom-of-Thoughts (AoT) Prompting
This technique expands upon the famous chain-of-thought technique. It goes like this. You tell AI to perform stepwise reasoning and do so by breaking a problem down into its most atomic steps. The beauty is that not only does this tend to keep the AI honest in terms of figuring out the proper steps, but the AI can also process the execution of the steps in parallel, assuming the AI is set up for parallelism. For various examples and further detailed indications about the nature and use of atom-of-thoughts prompting, see my coverage at the link here.
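To give a flavor of the mechanics, here is a minimal Python sketch of an atom-of-thoughts flow; the ask_llm helper is a hypothetical stand-in for whichever generative AI app or API you prefer, and the sample problem and wording are merely illustrative.

```python
def ask_llm(prompt: str) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

problem = (
    "A store sold 120 items on Monday, 30% more on Tuesday, and half of "
    "Tuesday's total on Wednesday. How many items were sold in all?"
)

# Step 1: have the AI break the problem into its most atomic, independent sub-questions.
atoms = ask_llm(
    "Break the following problem into the smallest independent sub-questions, "
    f"one per line, such that each can be answered on its own:\n{problem}"
)

# Step 2: answer each atomic sub-question (their independence is what allows
# the answering to be done in parallel when the AI setup supports it).
sub_answers = [
    ask_llm(f"Answer only this sub-question: {q}")
    for q in atoms.splitlines() if q.strip()
]

# Step 3: combine the atomic answers into a final response.
print(ask_llm(
    "Using these sub-answers, give the final answer to the original problem.\n"
    f"Problem: {problem}\nSub-answers: {sub_answers}"
))
```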
Beat the “Reverse Curse” Prompting
Generative AI is known for having difficulties dealing with the reverse side of deductive logic; thus, make sure to be familiar with prompting approaches that can curtail or overcome the so-called “reverse curse”. For various examples and further detailed indications about the nature and use of beating the reverse curse prompting, see my coverage at the link here.
“Be On Your Toes” Prompting
The phrase “Be on your toes” can be used to stoke generative AI toward being more thorough when generating responses, though there are caveats and limitations that need to be kept in mind when using the prompting technique. For various examples and further detailed indications about “be on your toes” prompting, see my coverage at the link here.
Browbeating Prompts
A commonly suggested prompting technique consists of writing prompts that seek to browbeat or bully generative AI. You need to be cautious in using such prompts. For various examples and further detailed indications about browbeating prompting, see my coverage at the link here.
Catalogs Or Frameworks For Prompting
A prompt-oriented framework or catalog attempts to categorize and present to you the cornerstone ways to craft and utilize prompts. For various examples and further detailed indications about the nature and use of prompt engineering frameworks or catalogs, see my coverage at the link here.
Certainty And Uncertainty Prompting
You can explicitly indicate in your prompt that you want generative AI to emit a level of certainty or uncertainty when providing answers to your questions. For various examples and further detailed indications about the nature and use of the hidden role of certainty and uncertainty when prompting for generative AI, see my coverage at the link here.
Chain-of-Continuous-Thought (CCoT) Prompting
Suppose that instead of dealing with tokens flowing here and there, we took the chain-of-thought (CoT) prompting method and moved that further down the line. It goes like this. A component would receive a chain-of-thought from some other component. The component receiving the CoT uses that as the grist for doing whatever the component is supposed to do. The result from the component is yet another newly devised chain-of-thought that then flows further along the line. For various examples and further detailed indications about the nature and use of the chain-of-continuous-thought prompting, see my coverage at the link here.
Chain-of-Density (CoD) Prompting
A shrewd method of devising summaries involves a clever prompting strategy known as chain-of-density (CoD), which aims to bolster generative AI toward attaining especially superb or at least better-than-usual summaries. For various examples and further detailed indications about the nature and use of CoD or chain-of-density prompting, see my coverage at the link here.
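As a rough illustration, here is a sketch of a chain-of-density style prompt; the word count, number of iterations, and exact wording are illustrative assumptions rather than a fixed recipe.

```python
article_text = "(paste the source article here)"

cod_prompt = (
    "Summarize the article below in about 80 words. Then repeat the following "
    "four times: identify one to three informative entities from the article "
    "that are missing from your latest summary, and rewrite the summary at the "
    "same length so that it also covers those entities. Show all five "
    "summaries and label the final one as the densest.\n\nArticle:\n"
    + article_text
)
print(cod_prompt)  # send to your preferred generative AI app
```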
Chain-of-Feedback (CoF) Prompting
A variation on Chain-of-Thought (CoT) consists of the Chain-of-Feedback (CoF) prompting technique, which seems to reduce the degree of generative AI hallucinations. For various examples and further detailed indications about the nature and use of chain-of-feedback prompting, see my coverage at the link here.
Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting has been heralded as one of the most important prompting techniques. When you enter a prompt, you invoke CoT by simply telling generative AI to work in a stepwise fashion. For various examples and further detailed indications about the nature and use of Chain-of-Thought (CoT) prompting, see my coverage at the link here.
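A barebones illustration is shown below; the sample question and the exact stepwise wording are merely illustrative.

```python
question = "If a train leaves at 2:40 pm and the trip takes 95 minutes, when does it arrive?"

cot_prompt = (
    f"{question}\n"
    "Work through this step by step, showing each step of your reasoning "
    "before giving the final answer."
)
print(cot_prompt)  # paste into your preferred generative AI app
```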
Chain-of-Thought Factored Decomposition Prompting
You can supplement conventional Chain-of-Thought (CoT) prompting with an additional instruction that tells the generative AI to produce a series of questions and answers when doing the chain-of-thought generation. Your goal is to nudge or prod the generative AI to generate a series of sub-questions and sub-answers. For various examples and further detailed indications about the nature and use of chain-of-thought by leveraging factored decomposition, see my coverage at the link here.
Chain-of-Verification (CoV) Prompting
Chain-of-Verification (known as COVE or CoVe, though some also say CoV) is an advanced prompt engineering technique that in a series of checks-and-balances or double-checks tries to boost the validity of generative AI responses. For various examples and further detailed indications about the nature and use of CoV or chain-of-verification prompting, see my coverage at the link here.
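Here is a minimal sketch of the verification loop; the ask_llm helper is a hypothetical stand-in for your preferred generative AI, and the four-stage breakdown shown is one reasonable way to arrange the checks-and-balances.

```python
def ask_llm(prompt: str) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

question = "Which U.S. states border only one other state?"

# Stage 1: get a draft answer.
draft = ask_llm(question)

# Stage 2: plan verification questions that would fact-check the draft.
verification_qs = ask_llm(
    f"List short fact-checking questions that would verify each claim in this draft answer:\n{draft}"
)

# Stage 3: answer the verification questions independently of the draft.
checks = ask_llm(
    f"Answer each question below on its own, without assuming the earlier draft was correct:\n{verification_qs}"
)

# Stage 4: produce a revised, verified answer.
revised = ask_llm(
    "Revise the draft answer so it is consistent with the verification results.\n"
    f"Original question: {question}\nDraft: {draft}\nVerification results: {checks}"
)
print(revised)
```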
Checklist Prompting
When employing checklist prompting, you tell generative AI to produce a checklist for whatever question or problem you want the AI to solve. Doing so aids the AI in undertaking a more structured approach to the matter. A secondary plus is that you have the AI verify via the derived checklist that all the questions or parts of the problem were indeed solved or at least considered by the AI during the solving process. For various examples and further detailed indications about the nature and use of checklist prompting, see my coverage at the link here.
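A simple illustrative prompt might look like this (the task and wording are just placeholders):

```python
task = "Plan a three-day product launch event for 200 attendees."

checklist_prompt = (
    f"{task}\n"
    "First, produce a checklist of everything that must be addressed to handle "
    "this request. Then work through the checklist item by item. Finally, go "
    "back over the checklist and explicitly confirm that every item was either "
    "completed or consciously set aside."
)
print(checklist_prompt)  # paste into your preferred generative AI app
```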
Conversational Prompting
Be fluent and interactive when prompting while avoiding the myopic one-and-done mindset that many unfortunately seem to adopt when using generative AI. For various examples and further detailed indications about the nature and use of conversational prompting, see my coverage at the link here.
Conversational-Amplified Prompt Engineering (CAPE)
An advanced mode of prompting involves carrying on a back-and-forth conversation with generative AI to get your prompt intentions fully stipulated. Sometimes, it is better to use multiple prompts than just one prompt. For various examples and further detailed indications about the nature and use of CAPE, see my coverage at the link here.
DeepFakes To TrueFakes Prompting
You undoubtedly know about Deepfakes, while a different angle involves establishing via generative AI a Truefake, namely a fake version of yourself that is “true” in the sense that you genuinely want to have your fake digital twin devised. For various examples and further detailed indications about the nature and use of going from Deepfakes to Truefakes via prompting, see my coverage at the link here.
Directional Stimulus Prompting (DSP) And Hints
Using subtle or sometimes highly transparent hints in your prompts is formally known as Directional Stimulus Prompting (DSP) and can substantially boost generative AI responses. For various examples and further detailed indications about the nature and use of hints or directional stimulus prompting, see my coverage at the link here.
Disinformation Detection And Removal Prompting
The volume of disinformation and misinformation that society is confronting keeps growing and, lamentably, seems unstoppable. A notable means of coping consists of using generative AI as your preferred filter for detecting disinformation and misinformation. For various examples and further detailed indications about the nature and use of prompting to detect and mitigate the flow of misinformation and disinformation, see my coverage at the link here.
Doubling-Up Chain-of-Thought Prompting
Many of the latest generative AI and LLMs now have a built-in feature that automatically invokes chain-of-thought. This is fine. But for those who are used to directly invoking chain-of-thought via a prompt, a new dilemma exists. The issue is that if you ask for a chain-of-thought, and the AI is already automatically going to do a chain-of-thought, doubling-up can gum up the works. For various examples and further detailed indications about the nature and use of doubling-up chain-of-thought prompting, see the link here.
Echo Prompting
Echo prompting is just like the name says: you tell the AI to echo back to you the prompt query that you’ve entered. When used suitably, this makes a notable and positive difference in the responses that the AI will generate. It is an easy technique and can be used when the circumstances warrant it. For various examples and further detailed indications about the nature and use of echo prompting, see the link here.
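An illustrative echo prompt might be worded along these lines:

```python
question = "What are the tradeoffs between leasing and buying a car?"

echo_prompt = (
    f"{question}\n"
    "Before answering, echo my question back to me in your own words so I can "
    "confirm you interpreted it correctly, and then provide your answer."
)
print(echo_prompt)  # paste into your preferred generative AI app
```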
Emotionally Deceptive Prompting
Users have figured out that they can express their prompts emotionally to get their way when using generative AI that runs customer service chats online and the like. Building on my earlier coverage of emotionally expressed prompting, I go into added depth and reveal the latest tricks used by those trying to deceive or steer AI in a desired direction. For various examples and further detailed indications about the nature and use of emotionally deceptive prompting, see the link here.
Emotionally Expressed Prompting
Does it make a difference to use emotionally expressed wording in your prompts when conversing with generative AI? The answer is yes. And there is a logical and entirely computationally sound reason for why generative AI “reacts” to your use of emotional wording. For various examples and further detailed indications about the nature and use of emotionally worded prompting, see my coverage at the link here.
End-Goal Prompting
A highly recommended prompting strategy consists of identifying what your end goal is while working in generative AI and aiming to solve or delve into a particular topic or problem of specific interest. For various examples and further detailed indications about end-goal planning, see my coverage at the link here.
Essay-Compression Prompting
Sometimes, instead of getting a summary, you want to have an essay compressed, meaning that it contains the same words as the original source but tosses out words that aren’t necessarily needed. For various examples and further detailed indications about essay-compression prompting, see my coverage at the link here.
Expert Personas Prompting
A handy prompting technique involves telling generative AI to pretend to be an expert in a given domain. You then have the AI answer prompts as though it were such an expert. This can be further enhanced by having the AI pretend to be a multitude of expert personas, debating and challenging each other. For various examples and further detailed indications about expert personas prompting, see the link here.
Fair-Thinking Prompting
You can use clever prompts that will get generative AI to lean in directions other than the already-predisposed biases cooked into the AI, aiming to get a greater semblance of fairness in the generated responses. For various examples and further detailed indications about the nature and use of fair-thinking prompting, see my coverage at the link here.
Fallback Response Prompting
If you’ve used generative AI with any frequency, you know that occasionally, the AI will tell you that it can’t or won’t answer your question posed in a prompt. You will get a said-to-be fallback response that the AI has been tuned to provide. There are smart ways to respond to a fallback response. For various examples and further detailed indications about fallback response prompting, see my coverage at the link here.
Flipped Interaction Prompting
You can flip the script, as it were, getting generative AI to ask you questions rather than having you ask generative AI your questions. For various examples and further detailed indications about the nature and use of flipped interaction, see my coverage at the link here.
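Here is one illustrative way to word a flipped interaction prompt (the scenario is just a placeholder):

```python
flipped_prompt = (
    "I want to pick a programming language to learn for data analysis. "
    "Instead of answering right away, interview me: ask me one question at a "
    "time about my background and goals, wait for my reply to each, and only "
    "recommend a language once you have asked at least five questions."
)
print(flipped_prompt)  # paste into your preferred generative AI app
```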
Gamification Prompting
Gamification is a helpful and engaging way to improve your prompting skills. Whether you are a newbie or an expert at writing prompts, it turns out that gamification can notably enhance your prompt engineering capabilities. For various examples and further detailed indications about the nature and use of gamification prompting, see my coverage at the link here.
Generating Prompts Automatically
Rather than directly composing your prompts, you can ask generative AI to create your prompts for you. This requires knowing what kinds of prompting will get you the best AI-generated prompts. For various introductory examples and further detailed indications about having AI generate your prompts, see my coverage at the link here.
Generating Prompts Versus By-Hand
A prompt generator is essentially the use of generative AI to generate prompts for you. The straightforward idea is that you tell AI what aspect you want to ask about or indicate a problem that you want to have solved, and a prompt will be generated accordingly. Are prompt generators better than composing prompts by yourself? I compare when each approach is best. For various examples and further detailed indications about prompt generators versus by-hand composition, see my coverage at the link here.
Hackathons Prompt-A-Thon
A handy way to learn new prompts and exercise your prompt engineering acumen entails participating in a prompt-a-thon. A prompt-a-thon is akin to a programming hackathon except that instead of coding, you make use of prompts. For various examples and further detailed indications about prompt-a-thons, see my coverage at the link here.
Hard Prompts Prompting
A hard prompt is a prompt that presents an especially hard or arduous problem to generative AI. The AI might not be able to solve the problem at all, but it will at least try. The attempt might consume a lot of time and cost. Worse still, while trying to solve the question or problem posed in a hard prompt, there is a real possibility that a so-called AI hallucination will occur and produce a false result. Tradeoffs exist when composing hard prompts. For various examples and further detailed indications about hard prompts prompting, see my coverage at the link here.
Illicit Or Disallowed Prompting
Did you know that the licensing agreement of most generative AI apps says that you are only allowed to use the generative AI in various strictly stipulated ways? For various examples and further detailed indications about the nature of what is considered illicit prompts (i.e., that you aren’t supposed to use), see my coverage at the link here.
Imperfect Prompting
Imperfect prompts can be cleverly useful. For various examples and further detailed indications about the nature and use of imperfect prompts, see my coverage at the link here.
Importing Text As Prompting Skill
There are circumstances involving importing text into generative AI that require careful skill and necessitate the right types of prompts to get the text suitably brought in and properly infused. For various examples and further detailed indications about importing text prompting, see my coverage at the link here.
Interlaced Conversations Prompting
Most of the popular generative AI apps require that each conversation be distinct and separate from your other conversations with the AI. The latest trend entails allowing for the interlacing of conversations and requires rethinking how you compose your prompts. For various examples and further detailed indications about interlaced conversation prompting, see my coverage at the link here.
Kickstart Prompting
A wise move when prompting is to grease the skids or prime the pump, also known as kickstart prompting, which involves doing an initial prompt that gets generative AI into the groove of whatever topic or problem you want to have solved. For various examples and further detailed indications about the nature of kickstart prompting, see my coverage at the link here.
Knowledge Distillation Prompting
When you want to data-train AI, you can make use of another AI that essentially aids in transferring data or “knowledge” from that source AI to the target AI. This is known as knowledge distillation. There are numerous prompting intricacies involved. For various examples and further detailed indications about the nature of knowledge distillation prompting, see my coverage at the link here.
Large Concept Model (LCM) Prompting
Some believe that the conventional foundation of generative AI is going to have to change, such as embracing the use of “concepts” rather than focusing solely on words and tokens. These new approaches make use of a large concept model (LCM). We are still in the early days of whether LCMs will be a breakout hit or a dud. Knowing about LCMs will give you insights when prompting with such an approach. For various examples and further detailed indications about the nature of LCMs, see my coverage at the link here.
Least-to-Most Prompting
Least-to-Most prompting (LTM) is a technique that involves guiding generative AI to work on the least hard part first and then proceed to the harder part (an alternative approach is Most-to-Least or MTL prompting). For various examples and further detailed indications about the nature of LTM and MTL prompting, see my coverage at the link here.
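An illustrative least-to-most prompt might be worded along these lines (the sample problem is a placeholder):

```python
problem = "Write a function that parses a log file and reports the top ten most frequent error codes per hour."

ltm_prompt = (
    f"{problem}\n"
    "Solve this using a least-to-most approach: list the sub-problems from "
    "easiest to hardest, solve the easiest one first, and use each solved "
    "sub-problem as the foundation for the next harder one, until the full "
    "problem is solved."
)
print(ltm_prompt)  # paste into your preferred generative AI app
```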
Logic-of-Thought (LoT) Prompting
Logic-of-thought involves telling generative AI to work through a question or problem in a highly logical fashion and lean into logical reasoning as much as possible. When doing so, there are three crucial elements: (1) logic extraction, (2) deriving the solution using propositions, and (3) generating an explanation in plain language. For various examples and further detailed indications about the nature of logic-of-thought prompting, see my coverage at the link here.
Macros In Prompts
Similar to using macros in spreadsheets, you can use macros in your prompts while working in generative AI. For various examples and further detailed indications about the nature and use of prompt macros, see my coverage at the link here.
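As a rough illustration, a prompt macro might be set up and then invoked like this (the macro name REPORTIFY and the wording are purely hypothetical):

```python
macro_setup = (
    "For the rest of this conversation, treat REPORTIFY as a macro. Whenever I "
    "write REPORTIFY followed by some text, rewrite that text as a formal "
    "status report with sections for Summary, Risks, and Next Steps."
)

macro_use = "REPORTIFY we shipped the beta late, two bugs remain, demo is Friday."

print(macro_setup)  # sent once to define the macro
print(macro_use)    # sent in a later prompt within the same conversation
```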
Mega-Mega Personas Prompting
Mega-personas prompting has now been further expanded into mega-mega personas prompting. Whereas mega-personas conventionally were in the thousands of personas, the mega-mega entails invoking millions or even billions of personas. For various examples and further detailed indications about the nature and use of mega-mega personas prompting, see my coverage at the link here.
Mega-Personas Prompting
Mega-personas consist of the upsizing of multi-persona prompting. You ask the generative AI to take on a pretense of perhaps thousands of pretend personas. For various examples and further detailed indications about the nature and use of mega-personas prompting, see my coverage at the link here.
Meta-Prompts
A meta-prompt is a special kind of prompt that provides instructions or indications about the nature of prompts and prompting. In its simplest use, meta-prompts are especially handy if you aren’t already familiar with the ins and outs of advanced prompting techniques. The AI can readily do the heavy lifting for you and add notable wording that boosts your original prompt. For various examples and further detailed indications about the nature and use of meta-prompts, see my coverage at the link here.
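A simple illustrative meta-prompt might look like this:

```python
rough_prompt = "tell me about electric cars"

meta_prompt = (
    "Act as a prompt engineering assistant. Improve the following prompt so it "
    "is specific about audience, scope, and desired format, and then show me "
    "the improved prompt before answering it:\n"
    f"\"{rough_prompt}\""
)
print(meta_prompt)  # paste into your preferred generative AI app
```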
Multi-Persona Prompting
Via multi-persona prompting, you can get generative AI to simulate one or more personas. For various examples and further detailed indications about the nature and use of multi-persona prompting, see my coverage at the link here.
Near-Infinite Memory Prompting
There is ongoing speculation that we are heading toward near-infinite memory for generative AI and LLMs. The idea is that rather than being constrained by how much an LLM “knows” at any given moment while you are conversing with the AI, it would have ready access to as much memory as it needs. Current prompting requires you to be aware of the prevailing memory limitations, while if we do attain near-infinite memory, your prompting ought to adjust accordingly. For various examples and further detailed indications about the nature and use of near-infinite memory prompting, see my coverage at the link here.
Overcoming “Dumbing Down” Prompting
Knowing when to use succinct or terse wording (unfairly denoted as “dumbing down” prompting) versus more verbose or fluent wording is a skill that anyone versed in prompt engineering should have in their skillset. For various examples and further detailed indications about the nature and use of averting the dumbing down of prompts, see my coverage at the link here.
Persistent Context And Custom Instructions Prompting
You can readily establish a context that will be persistent and ensure that generative AI has a heads-up on what you believe to be important, often set up via custom instructions. For various examples and further detailed indications about the nature and use of persistent context and custom instructions, see my coverage at the link here.
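Here is a minimal sketch of setting persistent context as a custom instruction; the system-style message format is an assumption patterned after common chat interfaces, and the exact mechanism varies by AI app.

```python
custom_instructions = (
    "I am a pediatric nurse. Always explain medical topics at a professional "
    "clinical level, explain the reasoning behind dosage-related statements, "
    "and flag anything that requires physician sign-off."
)

conversation = [
    {"role": "system", "content": custom_instructions},   # persists across turns
    {"role": "user", "content": "Summarize the new RSV guidance for infants."},
]
print(conversation)
```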
Plagiarism Prompting
Your prompts can, by design or by happenstance, stoke generative AI toward producing responses that contain plagiarized content. Be very careful since you might be on the hook for any liability due to plagiarism. For various examples and further detailed indications about the nature and use of prompts that might stir plagiarism, see my coverage at the link here and the link here.
Politeness Prompting
A surprising insight from research on generative AI is that prompts making use of “please” and “thank you” can stir AI to produce better results. Make sure to use politeness while prompting, though do not go overboard and be judicious in such wording. For various examples and further detailed indications about politeness prompting, see my coverage at the link here.
Preemptive Detection Prompting
Research shows that you can often successfully tell generative AI to try to avoid getting entangled in generating an AI hallucination, and, notably, the AI will usually comply. Therefore, it pays to explicitly caution the AI in your prompts not to produce confabulations. This is known as preemptive detection prompting. For various examples and further detailed indications about preemptive detection prompting, see my coverage at the link here.
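An illustrative preemptive wording might look like this (the scenario and phrasing are placeholders):

```python
question = "Summarize the key rulings in the 2023 appellate case I pasted below."

preemptive_prompt = (
    f"{question}\n"
    "Important: do not fabricate case names, quotes, or citations. If any "
    "detail is not present in the pasted text or you are unsure of it, say so "
    "explicitly rather than guessing."
)
print(preemptive_prompt)  # paste into your preferred generative AI app
```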
Privacy Protection Prompting
Did you realize that when you enter prompts into generative AI, you are not usually guaranteed that your entered data or information will be kept private or confidential? For various examples and further detailed indications about the nature and use of prompts that might give away privacy or confidentiality, see my coverage at the link here.
Prompt Development Life Cycle (PDLC)
Akin to how programming or software engineering has a systems development life cycle (SDLC), the same can be said about prompt engineering. Usually referred to as a prompt development life cycle (PDLC), there are many variations available in the marketplace. Knowing what a PDLC contains and how it can improve your prompting skills is a must. For various examples and further detailed indications about the nature and use of PDLCs, see my coverage at the link here.
Prompt Shields and Spotlight Prompting
Prompt shields and spotlight prompting have emerged due to the various hacking efforts trying to get generative AI to go beyond its filters and usual protections. Here’s a useful rundown of what you need to know. For various examples and further detailed indications about the nature of prompt shields and spotlight prompting, see my coverage at the link here.
Prompt-To-Code Prompting
You can enter prompts that tell generative AI to produce conventional programming code and essentially write programs for you. For various examples and further detailed indications about the nature and use of prompting to produce programming code, see my coverage at the link here.
Purpose Prompting
An AI system that lacks an internally bound purpose is presumably going to wander in the analogous way that a human would wander without a purpose. We ought to ensure that AI systems always have a codified purpose. The AI would then be able to refer to the purpose when taking any action or performing whatever capacities it can muster. For various examples and further detailed indications about the nature and use of purpose prompting, see my coverage at the link here.
Reasoning Model Prompting
One of the biggest changes in the latest iteration of generative AI and LLMs is that AI makers have opted to include chain-of-thought (CoT) reasoning provisions directly in the inner mechanisms of the AI. This is a monumental change and worthy of close attention. The nature of your prompts needs to reflect that the AI is going to automatically perform stepwise reasoning. For various examples and further detailed indications about the nature and use of reasoning model prompting, see my coverage at the link here.
Rephrase-and-Respond Prompting
If you aren’t quite sure how to phrase a particular prompt, you can enter it “as is” and tell the AI to do a rephrase-and-respond. This informs the AI that rather than interpreting your prompt directly, it should first rephrase the entered prompt, which hopefully improves the prompt, and then proceed to respond to this better prompt. For various examples and further detailed indications about the nature and use of rephrase-and-respond prompting, see my coverage at the link here.
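An illustrative rephrase-and-respond prompt might be worded along these lines:

```python
vague_question = "how do i make my retirement money last"

rar_prompt = (
    f"{vague_question}\n"
    "First, rephrase my question into a clearer and more complete version, "
    "show me that rephrased question, and then answer the rephrased question."
)
print(rar_prompt)  # paste into your preferred generative AI app
```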
Re-Read Prompting
Most of today’s popular generative AI apps tend to do a single pass on an entered prompt. Running a second pass would potentially aid in doing a further detailed inspection of the question. A kind of clean-up of whatever might have been missed or miscalculated about the question. You can invoke a second pass by using the re-read prompting technique. For various examples and further detailed indications about the nature and use of re-read prompting, see my coverage at the link here.
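A barebones illustration of the re-read wording is shown below (the sample question is just a placeholder):

```python
question = "A farmer has 17 sheep and all but 9 run away. How many are left?"

reread_prompt = (
    f"{question}\n"
    f"Read the question again: {question}\n"
    "Now answer, being careful about any wording that is easy to misread."
)
print(reread_prompt)  # paste into your preferred generative AI app
```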
Response Time Speed-up Prompting
Several clever approaches can be used to speed up the response time to your prompts. The wording of your prompt makes a significant difference in the amount of processing time that will be consumed by the AI when generating a response. These techniques aim to reduce latency or delays in getting generated responses, thus essentially speeding up your response time. For various examples and further detailed indications about the nature and use of response time speed-up prompting, see my coverage at the link here.
Retrieval-Augmented Generation (RAG) Prompting
Retrieval-augmented generation (RAG) is hot and continues to gain steam. You provide external text that gets imported and, via in-context modeling, augments the data training of generative AI. For various examples and further detailed indications about the nature and use of retrieval-augmented generation (RAG), see my coverage at the link here.
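Here is a minimal sketch of the retrieve-then-prompt flow; the retrieve and ask_llm helpers are hypothetical stand-ins for your document store and your preferred generative AI.

```python
def ask_llm(prompt: str) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

def retrieve(query: str) -> list[str]:
    # Placeholder retrieval step; in practice this searches your document store
    # or vector database for passages relevant to the query.
    return ["(relevant passage 1)", "(relevant passage 2)"]

question = "What does our 2024 employee handbook say about remote work stipends?"

passages = retrieve(question)
rag_prompt = (
    "Answer the question using only the retrieved passages below. If the "
    "passages do not contain the answer, say so.\n"
    "Passages:\n- " + "\n- ".join(passages) +
    f"\n\nQuestion: {question}"
)
print(ask_llm(rag_prompt))
```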
Self-Ask Prompting
Self-ask prompting consists of you telling generative AI to solve problems by pursuing an internal question-and-answer divide-and-conquer approach that is to be made visible to you during the solving process. The AI is performing a stepwise self-ask that is an added-value version of chain-of-thought (CoT). For various examples and further detailed indications about the nature and use of self-ask prompting, see my coverage at the link here.
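An illustrative self-ask prompt might be worded along these lines:

```python
question = "Who was president of the United States when the first iPhone was released?"

self_ask_prompt = (
    f"{question}\n"
    "Use a self-ask approach: state whether follow-up questions are needed, "
    "then write each follow-up question and its intermediate answer visibly, "
    "and finish with a line that begins 'So the final answer is:'."
)
print(self_ask_prompt)  # paste into your preferred generative AI app
```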
Self-Reflection Prompting
You can enter a prompt into generative AI that tells the AI to essentially be (in a manner of speaking) self-reflective by having the AI double-check whatever generative result it has pending or that it has recently produced. For various examples and further detailed indications about the nature and use of AI self-reflection and AI self-improvement for prompting purposes, see my coverage at the link here.
Sensitivities Prompting
Research shows that there are three keystone sensitivities about generative AI that you need to be aware of when composing your prompts. The sensitivities have to do with the scale or size of the underlying AI model, whether topics-based data training of the AI has occurred, and whether you opt to use an example in your prompt (a so-called one-shot). For various examples and further detailed indications about the nature and use of AI sensitivities prompting, see my coverage at the link here.
Show-Me Versus Tell-Me Prompting
Show-me consists of devising a prompt that demonstrates to the generative AI an indication of what you want (show it), while tell-me entails devising a prompt that gives explicit instructions delineating what you want to have done (tell it). For various examples and further detailed indications about the nature and use of the show-me versus tell-me prompting strategy, see my coverage at the link here.
Sinister Prompting
People are using sinister prompts to get generative AI to do foul things, such as scams and the like. I don’t want you to do this, but I also think it is valuable for you to know what sinister prompts do and how they work, alerting you to avoid them and not inadvertently fall into the trap of one. For various examples and further detailed indications about the nature and use of sinister prompting, see my coverage at the link here.
Skeleton-of-Thought (SoT) Prompting
Via a prompt akin to Chain-of-Thought (CoT), you tell the generative AI to first produce an outline or skeleton for whatever topic or question you have at center stage, employing a skeleton-of-thought (SoT) method to do so. For various examples and further detailed indications about the nature and use of the skeleton-of-thought approach for prompt engineering, see my coverage at the link here.
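Here is a minimal sketch of the skeleton-then-expand flow; the ask_llm helper is a hypothetical stand-in, and the outline size is merely illustrative.

```python
def ask_llm(prompt: str) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

topic = "Explain the pros and cons of moving our analytics workload to the cloud."

# First pass: ask only for a short skeleton outline, with no elaboration.
skeleton = ask_llm(
    f"{topic}\nGive only a skeleton outline of three to six brief bullet points, with no elaboration."
)

# Second pass: expand each skeleton point (these expansions could run in parallel).
expansions = [
    ask_llm(f"Expand this outline point into a short paragraph: {point}")
    for point in skeleton.splitlines() if point.strip()
]
print("\n\n".join(expansions))
```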
Small Language Model (SLM) Prompting
If you’ve not heard about small language models (SLMs), that’s perfectly understandable since they are still not quite up to par. There are a wide variety of experimental SLMs, and some are very handy while others are clunky and less appealing. You need to adjust your prompting approach when using SLMs versus conventional LLMs. For various examples and further detailed indications about the nature and use of prompting for SLMs, see my coverage at the link here.
Star Trek Trekkie Lingo Prompting
An unusual discovery by researchers showcased that using Star Trek Trekkie lingo in your prompts can improve generative AI results. Downsides exist and can undercut your efforts by inadvertent misuse or overuse of this technique. For various examples and further detailed indications about Trekkie prompting, see my coverage at the link here.
Step-Around Prompting Technique
At times, the prompts that you seek to use in generative AI are blocked by the numerous filters that the AI maker has put in place. You can use the step-around prompting technique to get around those blockages. For various examples and further detailed indications about step-around prompting, see my coverage at the link here.
“Take A Deep Breath” Prompting
The prompting phrase “Take a deep breath” has become lore in prompt engineering, but it turns out that there are limitations and circumstances under which this wording fruitfully works. For various examples and further detailed indications about the nature and use of “take a deep breath” prompting, see my coverage at the link here.
Target-Your-Response (TAYOR) Prompting
Target-your-response (TAYOR) is a prompt engineering technique that entails telling generative AI the desired look and feel of to-be-generated responses. For various examples and further detailed indications about the nature and use of TAYOR or target-your-response prompting, see my coverage at the link here.
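An illustrative TAYOR prompt might look like this (the topic and formatting targets are placeholders):

```python
tayor_prompt = (
    "Describe the main causes of urban heat islands.\n"
    "Target your response as follows: a 150-word executive summary in plain "
    "language, followed by a bulleted list of the top five causes, each with a "
    "one-sentence mitigation, written for city council members."
)
print(tayor_prompt)  # paste into your preferred generative AI app
```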
Temperature Settings Prompting
The temperature setting for generative AI determines how varied the responses by generative AI will be. You can either have the AI produce relatively straightforward and somewhat predictable responses (that’s via the use of a low temperature), or you can heat things up and use high temperatures to prod AI toward producing seemingly more creative and less predictable responses. Some AI allows you to modify the temperature settings on your own via prompts. For various examples and further detailed indications about the nature and use of temperature settings, see my coverage at the link here.
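As a rough sketch, adjusting temperature via an API-style call might look like this; the parameter name, value ranges, and call signature are assumptions, so consult what your particular AI app or API actually exposes.

```python
# The temperature parameter and signature here are assumptions, not a specific API.
def ask_llm(prompt: str, temperature: float = 0.7) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

prompt = "Suggest a name for a neighborhood bakery."

predictable = ask_llm(prompt, temperature=0.2)  # low temperature: straightforward, repeatable output
creative = ask_llm(prompt, temperature=1.2)     # high temperature: more varied, less predictable output
print(predictable, creative)
```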
Thinking Time Prompting
The latest iterations of generative AI and LLMs now have logical reasoning built directly into the AI architecture, which has led to new options about how much so-called thinking time you want the AI to undertake when responding to a prompt. This processing time will determine the depth and likely aptness of the response. Various prompting avenues arise when specifying the thinking time aspects. For various examples and further detailed indications about the nature and use of thinking time prompting, see my coverage at the link here.
Tree-of-Thoughts (ToT) Prompting
Tree-of-thoughts (ToT) is an advanced prompting technique that involves telling generative AI to pursue multiple avenues or threads of a problem (so-called “thoughts”) and figure out which path will likely lead to the best answer. For various examples and further detailed indications about the nature and use of ToT or tree-of-thoughts prompting, see my coverage at the link here.
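Here is a highly simplified sketch of the branch-and-evaluate idea; the ask_llm helper is hypothetical, and a fuller tree-of-thoughts approach would repeat the cycle at multiple levels rather than stopping after one round.

```python
def ask_llm(prompt: str) -> str:
    return "(model response)"  # placeholder for an actual generative AI call

problem = "Devise a marketing plan to double newsletter signups in one quarter."

# Branch: ask for several distinctly different candidate approaches ("thoughts").
branches = [
    b for b in ask_llm(
        f"{problem}\nPropose three distinctly different approaches, one per line."
    ).splitlines() if b.strip()
]

# Evaluate: have the AI rate how promising each branch is.
ratings = [
    ask_llm(f"On a scale of 1 to 10, rate how promising this approach is and explain briefly: {b}")
    for b in branches
]

# Expand: develop the most promising branch (here, simplistically, the first one;
# in practice you would parse the ratings, pick the top-rated branch, and repeat
# the branch-and-evaluate cycle at deeper levels of the tree).
best = branches[0] if branches else problem
print(ask_llm(f"Develop this approach into a complete plan, step by step: {best}"))
```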
Trust Layers For Prompting
Additional components outside of generative AI are being set up to do pre-processing of prompts and post-processing of generated responses, ostensibly doing so to increase a sense of trust about what the AI is doing. For various examples and further detailed indications about the nature and use of trust layers for aiding prompting, see my coverage at the link here.
Vagueness Prompting
The use of purposefully vague prompts can be advantageous for spurring open-ended responses that might land on something new or especially interesting. For various examples and further detailed indications about the nature and use of vagueness while prompting, see my coverage at the link here.
Making A Checklist Of The Prompting Techniques
I mentioned earlier that you might consider trying out the prompt engineering techniques that I’ve listed, especially ones that you don’t already know.
To help you with that fruitful exercise, here’s my suggestion. Create a spreadsheet that contains the checklist shown below of the listed prompting techniques. Make a column in which you mark how familiar you are with each technique, using a score ranging from 0 to 5, wherein 0 means you don’t know the technique at all and 5 means you know it like the back of your hand. Be straightforward and don’t give a fake score. Put your real score. This list is solely for your own benefit.
Make another column with a score showing the proficiency level you want to reach in that technique. For example, suppose that right now you are a self-rated 1 on a particular technique and want to end up at a self-rated 4. Finally, include an additional column that will contain a target date for when you hope to attain the heightened score.
You can now use that spreadsheet as your career planning guide for prompt engineering purposes. Keep it updated as you proceed along in your adventure as a prompt engineer who wants to do the best that you can.
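As a concrete illustration of that spreadsheet, here is a small Python sketch that writes the suggested columns to a CSV file; the handful of rows and the scores shown are merely placeholders.

```python
import csv

# Illustrative rows only; in practice, list all 82 techniques and your own scores.
rows = [
    ("Technique", "Current score (0-5)", "Target score (0-5)", "Target date"),
    ("Add-On Prompting", 1, 4, "2025-09-30"),
    ("Agentic AI Prompting", 0, 3, "2025-12-15"),
    ("Chain-of-Thought (CoT) Prompting", 3, 5, "2025-08-01"),
]

with open("prompting_skills_checklist.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```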
Whether you undertake that treasured challenge or not, here’s the list with numbers shown for easy reference (the numbering doesn’t represent priority or ranking; it is just a handy reference), and the list is still in the same alphabetical order as shown above.
Here’s the numbered list:
1. Add-On Prompting
2. Agentic AI Prompting
3. AI Hallucination Avoidance Prompting
4. Atom-of-Thoughts (AoT) Prompting
5. Beat the “Reverse Curse” Prompting
6. “Be On Your Toes” Prompting
7. Browbeating Prompts
8. Catalogs Or Frameworks For Prompting
9. Certainty And Uncertainty Prompting
10. Chain-of-Continuous-Thought (CCoT) Prompting
11. Chain-of-Density (CoD) Prompting
12. Chain-of-Feedback (CoF) Prompting
13. Chain-of-Thought (CoT) Prompting
14. Chain-of-Thought Factored Decomposition Prompting
15. Chain-of-Verification (CoV) Prompting
16. Checklist Prompting
17. Conversational Prompting
18. Conversational-Amplified Prompt Engineering (CAPE)
19. DeepFakes To TrueFakes Prompting
20. Directional Stimulus Prompting (DSP) And Hints
21. Disinformation Detection And Removal Prompting
22. Doubling-Up Chain-of-Thought Prompting
23. Echo Prompting
24. Emotionally Deceptive Prompting
25. Emotionally Expressed Prompting
26. End-Goal Prompting
27. Essay-Compression Prompting
28. Expert Personas Prompting
29. Fair-Thinking Prompting
30. Fallback Response Prompting
31. Flipped Interaction Prompting
32. Gamification Prompting
33. Generating Prompts Automatically
34. Generating Prompts Versus By-Hand
35. Hackathons Prompt-A-Thon
36. Hard Prompts Prompting
37. Illicit Or Disallowed Prompting
38. Imperfect Prompting
39. Importing Text As Prompting Skill
40. Interlaced Conversations Prompting
41. Kickstart Prompting
42. Knowledge Distillation Prompting
43. Large Concept Model (LCM) Prompting
44. Least-to-Most Prompting
45. Logic-of-Thought (LoT) Prompting
46. Macros In Prompts
47. Mega-Mega Personas Prompting
48. Mega-Personas Prompting
49. Meta-Prompts
50. Multi-Persona Prompting
51. Near-Infinite Memory Prompting
52. Overcoming “Dumbing Down” Prompting
53. Persistent Context And Custom Instructions Prompting
54. Plagiarism Prompting
55. Politeness Prompting
56. Preemptive Detection Prompting
57. Privacy Protection Prompting
58. Prompt Development Life Cycle (PDLC)
59. Prompt Shields and Spotlight Prompting
60. Prompt-To-Code Prompting
61. Purpose Prompting
62. Reasoning Model Prompting
63. Rephrase-and-Respond Prompting
64. Re-Read Prompting
65. Response Time Speed-up Prompting
66. Retrieval-Augmented Generation (RAG) Prompting
67. Self-Ask Prompting
68. Self-Reflection Prompting
69. Sensitivities Prompting
70. Show-Me Versus Tell-Me Prompting
71. Sinister Prompting
72. Skeleton-of-Thought (SoT) Prompting
73. Small Language Model (SLM) Prompting
74. Star Trek Trekkie Lingo Prompting
75. Step-Around Prompting Technique
76. “Take A Deep Breath” Prompting
77. Target-Your-Response (TAYOR) Prompting
78. Temperature Settings Prompting
79. Thinking Time Prompting
80. Tree-of-Thoughts (ToT) Prompting
81. Trust Layers For Prompting
82. Vagueness Prompting
I realize this might seem like a daunting list.
I can hear the trolls commenting that this is way too much and that there is no possible way for someone to spend the time and energy needed to learn them all. We each have our daily jobs to deal with and a work-life balance to maintain.
Yes, I get that.
My suggestion is that you prioritize the ones that seem to best fit your likely needs as a prompt engineer and focus on those as your mainstay priority. The others you can get around to trying out in your spare time (well, if you ever can squeeze out any spare time).
Prompt Engineering And Ongoing Learning
Lifelong learning.
That’s what everyone is talking about these days. We are told time and again that we need to be lifelong learners. I wholeheartedly agree. This comes up here because the latest and greatest in prompt engineering is constantly changing. There are new ideas brewing. New AI research efforts are pending. It is a glorious time to be using generative AI.
I will keep covering prompt engineering and bringing you the newest prompts.
Keep your eyes and ears open, and I’ll do my best to make sure you can be a lifelong learner, authentically and profitably.