A Beginner’s Guide to Prompt Engineering

ODSC - Open Data Science
5 min read · Jul 31, 2023

With the explosive user growth of AI tools such as ChatGPT and Google’s Bard, the value of prompt engineering is fast becoming better understood. If you’re unfamiliar with the term, prompt engineering is a crucial technique for effectively using text-based large language models (LLMs) like ChatGPT and Bard.

The process involves crafting well-designed instructions or queries, known as prompts, to elicit desired responses from the model. By providing clear and specific prompts, you can guide the LLM’s output and improve the quality and relevance of the generated text. Simply put, it can be the difference between an actionable piece of information and something bland and uninteresting.

So let’s look at a few ways that you can build your prompt engineering skills.

Understanding the LLM’s Behavior

Before diving into prompt engineering, it’s important to understand the behavior of the AI you’re working with. Bing’s AI, ChatGPT, and Bard are similar, but they aren’t the same, and the same goes for other LLMs. Each model has its own strengths, weaknesses, and limitations.

So take the time to familiarize yourself with the LLM’s typical responses, tendencies, and potential biases so you can make better decisions when crafting prompts.

Defining Clear Objectives

When it comes to current AI models, it’s important to be clear: remember that they can’t infer unstated context the way a person can. So start your prompt by defining your objectives and desired outcomes. Whether it’s creative writing, answering questions, code generation, or any other task, a clear objective will guide your prompt design and ensure the LLM generates relevant responses. If you aren’t clear, you risk getting generated text that’s unhelpful.
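To make this concrete, here’s a minimal sketch of the same request with and without a stated objective. Both prompts are invented purely for illustration:

```python
# Without a stated objective, the model has to guess what "about" means.
no_objective = "Tell me about neural networks."

# Leading with the objective anchors scope, depth, and audience.
with_objective = (
    "Objective: help me prepare for a machine learning job interview.\n"
    "In about 150 words, explain how a feedforward neural network learns "
    "through backpropagation, at the level expected of a junior engineer."
)
```

The second prompt tells the model what the answer is for, which makes a relevant response far more likely.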

Use Specific Instructions

This is a big one. When working with LLMs, avoid vague or ambiguous prompts. To be effective, provide specific instructions that guide the LLM’s response: explicitly specify the format, context, or desired information to get accurate and relevant results.
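As a quick sketch, here’s a hypothetical vague prompt next to a specific one that pins down format, scope, and audience:

```python
# A vague prompt leaves format, scope, and audience up to the model.
vague_prompt = "Write about climate change."

# A specific prompt pins down all three.
specific_prompt = (
    "Write a 200-word summary of how climate change affects coastal cities. "
    "Structure it as three bullet points covering rising sea levels, storm "
    "intensity, and infrastructure costs. Use plain language for a general "
    "audience."
)
```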

Providing Context

As mentioned above, today’s LLMs can’t reliably infer unstated context on their own. So to get more context-aware responses, prime the LLM with relevant information. This includes, but isn’t limited to, adding introductory text or a starting sentence that sets the context for the generated text. Remember, you can add context, but make sure that whatever you provide isn’t superfluous.
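Here’s a minimal sketch of priming via a system message, assuming the OpenAI Python client (v1-style interface) with an OPENAI_API_KEY set in your environment; the model name and prompt contents are hypothetical, and other providers expose similar mechanisms:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

messages = [
    # Priming text: establishes who the model is speaking to and how,
    # before the actual question arrives.
    {"role": "system",
     "content": "You are a patient tutor explaining statistics to a "
                "high-school student. Use everyday examples, no jargon."},
    {"role": "user", "content": "What is a p-value?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # hypothetical model choice
    messages=messages,
)
print(response.choices[0].message.content)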

Length Control

Sometimes an AI gives you quite a bit of information, and it can be overwhelming. One way to address this is with length control. This works best when you need responses of a specific length, or when the information you’re after is best summarized. Also, if you access an LLM through an API, length control helps limit token usage (and therefore cost). You can set a maximum token limit, or instruct the model in the prompt itself how long its answer should be.
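A minimal sketch of both approaches, again assuming the OpenAI Python client; the max_tokens value and prompt are illustrative, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               # In-prompt length instruction...
               "content": "Summarize the causes of the French Revolution "
                          "in no more than three sentences."}],
    max_tokens=120,  # ...plus a hard cap on how many tokens can be generated
)
print(response.choices[0].message.content)
```

Note that max_tokens simply cuts generation off, so pair it with an in-prompt instruction if you want answers that end cleanly.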

Temperature and Sampling

Text-based LLMs use sampling to generate responses. Adjusting the “temperature” parameter controls the randomness of the output. Higher values make the text more diverse but risk incoherence, while lower values make it more focused but potentially repetitive. Experiment with temperature to find the right balance for your use case.
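One way to run that experiment, sketched with the same assumed OpenAI client and a made-up prompt, is to sweep a few temperature values and compare the outputs:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = "Suggest a name for a bakery that only sells sourdough."

for temperature in (0.2, 0.7, 1.2):  # low = focused, high = more varied
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```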

Iterative Refinement

You can iteratively refine your prompts based on the LLM’s responses: observe the model’s behavior and modify the prompt to get more accurate and desirable outputs. In ChatGPT, for example, you may notice that part of a prompt isn’t getting you the information you’re looking for. Open a new chat, refine the prompt a bit, remove the part that isn’t working, and try again. Each attempt gives you clues and direction for future prompts.
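A hypothetical refinement history might look like this, where each version reacts to a flaw observed in the previous output (the task and observations are invented for illustration):

```python
prompt_v1 = "Give me a dinner recipe."
# Observed: random cuisine, far too long.

prompt_v2 = "Give me a vegetarian dinner recipe that takes under 30 minutes."
# Observed: better, but ingredient quantities were missing.

prompt_v3 = (
    "Give me a vegetarian dinner recipe that takes under 30 minutes. "
    "List exact ingredient quantities first, then numbered steps."
)
```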

Experiment and Learn

Just like with any new skill, the more prompts you attempt to create, the better you’ll get. That’s why effective prompt engineering often involves experimentation and learning from the LLM’s behavior. This holds across models, too: a prompt fed into ChatGPT will likely produce a different result than the same prompt fed into Bard, and vice versa. So don’t be afraid to iterate on your prompts and observe how different instructions affect the outputs to achieve the desired results.

You may find that a prompt works well for a time, but as the models develop further, the quality of the generated text may change, for better or worse. So stay open to experimentation.

Conclusion

As you can see, by incorporating prompt engineering into your interactions with text-based LLMs, you unlock a powerful way to shape their responses effectively. Follow the principles outlined in this guide and be willing to continue your learner’s journey; as you do, you’ll enhance the quality and relevance of the generated text and achieve your objectives more efficiently.

Just like any skill, it’s important to learn from the experts in the field, and ODSC West will have amazing opportunities, workshops, talks, and events that will not only help you upskill your prompt engineering abilities but also help you chart a path forward in this growing field.

So get your pass today, and get ready to unlock your prompt’s potential.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
