Building Prompts with AI

When I started working with large language models (LLMs), I approached prompting like most people: typing whatever came to mind and hoping for the best. But as I've used AI more each day, I've learned ways of interacting and crafting prompts that weren't obvious to me at first.

Here are three things I'm having fun playing with right now.

Let AI Be Your Prompt Engineer

We're using AI to help us do all sorts of things, so why not use it to help craft better prompts? I've found success in having the LLM analyze successful prompts and suggest improvements. It's like pair programming but for prompt engineering.

For example, instead of just asking ChatGPT to "write a marketing email," I can expand just a little and say something like:

"What's an effective prompt I could use to create a marketing email? Give me a couple of variations, analyze which elements make the version effective, and then combine these best parts into an optimized prompt."

This walks me through several versions of the original prompt, showing which elements are effective and why. In the end, it gives me an optimized combined prompt with sections describing the audience and objectives, listing the required elements, and leaving room for me to supply context and additional notes.
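
A prompt like the one that process produces can be captured as a reusable template. Here's a minimal sketch in Python; the section names and placeholder fields are my own illustration, not the exact output ChatGPT gave me.

```python
# A reusable marketing-email prompt with the sections described above:
# audience/objectives, required elements, and slots for context and notes.
# The field names are illustrative, not a fixed standard.
MARKETING_EMAIL_PROMPT = """\
Write a marketing email.

Audience and objectives:
{audience}

Required elements:
- A subject line under 60 characters
- A single clear call to action
- {tone} tone

Context:
{context}

Additional notes:
{notes}
"""

def build_prompt(audience: str, tone: str, context: str, notes: str = "None") -> str:
    """Fill the template so it can be pasted into any chat interface."""
    return MARKETING_EMAIL_PROMPT.format(
        audience=audience, tone=tone, context=context, notes=notes
    )
```

Keeping the template in code or a notes app means every improvement to the prompt is captured once and reused everywhere.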

Create an Interactive Prompt Development Loop

You don't have to get the prompt perfect on the first try. I've started treating prompt creation as an iterative process:

1. Start with a basic prompt
2. Have the AI suggest clarifying questions
3. Use those questions to refine the prompt
4. Test the refined version
5. Repeat until you get the output you need
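
The loop above can be sketched in code. This is a sketch only: `ask_llm` is a stand-in you'd replace with a real API client (e.g. the OpenAI or Anthropic SDK), `answer_questions` is where you'd actually respond, and the stopping condition is whatever "good enough" means for your task.

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; swap in your client of choice."""
    # Hypothetical canned behavior so the sketch is self-contained.
    if "clarifying questions" in prompt:
        return "Who is the audience? What tone do you want?"
    return "Draft output for: " + prompt

def answer_questions(questions: str) -> str:
    """Placeholder: in interactive use, this is where you respond."""
    return "Audience: developers. Tone: practical."

def refine_prompt(basic_prompt: str, rounds: int = 2) -> str:
    """Iteratively refine a prompt by folding in the AI's clarifying questions."""
    prompt = basic_prompt
    for _ in range(rounds):
        questions = ask_llm(f"Suggest clarifying questions for this prompt: {prompt}")
        # Bake your answers into the next version of the prompt.
        prompt = f"{prompt}\n\nClarifications:\n{answer_questions(questions)}"
        # Test the refined version and stop once the output looks right.
        if "Draft output" in ask_llm(prompt):
            break
    return prompt
```

Even this toy version shows the shape of the process: the prompt grows more specific each round instead of being guessed in one shot.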

This back-and-forth creates more nuanced and compelling prompts than trying to nail it in one shot. Perplexity AI has an intuitive interface for refining prompts this way: after you ask a question, it shows a "Related" section with possible refinements you can use to expand or narrow the original question.

Custom instructions allow you to recreate something like this in your favorite LLM. Here's an interesting one I found on Reddit:

End response with:

> _See also:_ [2-3 related searches]
> { varied emoji related to terms} [text to link](https://www.google.com/search?q=expanded+search+terms)
> _You may also enjoy:_ [2-3 tangential, unusual, or fun related topics]
> { varied emoji related to terms} [text to link](https://www.google.com/search?q=expanded+search+terms)

Build Your Prompt Library

Take what you're learning and treat prompts as reusable assets. While you can maintain a simple, ever-growing, searchable document of proven prompts, more sophisticated approaches are emerging:

Custom GPTs in ChatGPT Plus

ChatGPT's custom GPT feature lets you package your prompts into specialized AI assistants. I've created GPTs for specific workflows like code review, technical writing, and data analysis. Each one has carefully crafted instructions (that I used AI to help me create) to guide the AI toward consistent, high-quality outputs. Think of it as productizing your prompt engineering work.

Claude Project System

With Claude, you can create project-specific contexts that persist throughout your conversation. This lets you front-load all your carefully crafted prompting instructions, examples, and context, then have a more natural back-and-forth within that project space. This is especially powerful for complex tasks requiring consistent style or domain knowledge.

Traditional Prompt Libraries

Even with these tools, I still maintain a searchable library of proven prompts in a Notion database (started from this template), organized by:

  • Use case (writing, coding, analysis, etc.)
  • Industry/domain
  • Desired output type
  • Required context/inputs

This creates a virtuous cycle: as I develop and refine prompts for specific projects, I can feed the best ones back into my custom GPTs and project templates.

Looking Forward

As these tools evolve, I expect more built-in features for prompt refinement and management. But for now, these approaches help me get consistently better results from AI interactions.

As I shift my mindset from "asking AI questions" to "designing AI interactions" and treat prompting as a proper engineering discipline, I unlock much more of AI's potential.

What approaches have you found helpful for prompt engineering? I'd love to hear your experiences in the comments.