The world of prompt engineering is fascinating on various levels and there’s no shortage of clever ways to nudge agents like ChatGPT into generating specific kinds of responses. Techniques like Chain-of-Thought (CoT), Instruction-Based, N-shot, Few-shot, and even tricks like Flattery/Role Assignment are the inspiration behind libraries full of prompts aiming to meet every need.

In this article, I will delve into a technique that, as far as my research shows, is potentially less explored. While I’ll tentatively label it as “new,” I’ll refrain from calling it “novel.” Given the blistering rate of innovation in prompt engineering and the ease with which new methods can be developed, it’s entirely possible that this technique might already exist in some form.

The essence of the technique is to make ChatGPT operate as if it were simulating a program. A program, as we know, comprises a sequence of instructions, typically bundled into functions, that perform specific tasks. In some ways, this technique is an amalgam of Instruction-Based and Role-Based prompting techniques. But unlike those approaches, it relies on a repeatable and static framework of instructions, allowing the output of one function to inform another and the entirety of the interaction to stay within the boundaries of the program. This modality should align well with the prompt-completion mechanics of agents like ChatGPT.

To illustrate the technique, let’s specify the parameters for a mini-app within ChatGPT-4 designed to function as an Innovator’s Interactive Workshop. Our mini-app will incorporate the following functions and features:

  1. Work on New Idea
  2. Expand on Idea
  3. Summarize Idea
  4. Retrieve Ideas
  5. Continue Working on Previous Idea
  6. Token/“Memory” Usage Statistics

To be clear, we will not be asking ChatGPT to code the mini-app in any specific programming language, and we will reflect this in our program parameters.

With this program outline in hand, let’s write the priming prompt that instantiates our Innovator’s Interactive Workshop mini-app in ChatGPT.

Program Simulation Priming Prompt


Innovator’s Interactive Workshop Program

I want you to simulate an Innovator’s Interactive Workshop application whose core features are defined as follows:


1. Work on New Idea: Prompt user to work on a new idea. At any point when a user is ready to work through a new idea, the program will suggest that a date or some time reference be provided. Here is additional detail on the options:

a. Start from Scratch: Asks the user for the idea they would like to work on.

b. Get Inspired: The program interactively assists the user in coming up with an idea to work on. The program will ask whether the user has a general sense of an area to focus on or whether the program should present options. At all times the user is given the option to go directly to working on an idea.

2. Expand on Idea: Program interactively helps user expand on an idea.

3. Summarize Idea: Program proposes a summary of the idea, whether or not it has been expanded upon, along with a title. The user may choose to rewrite or edit the summary. Once the user is satisfied with the summary, the program will “save” the idea summary.

4. Retrieve Ideas: Program retrieves the titles of the idea summaries that were generated during the session. User is given the option to show a summary of one of the ideas or Continue Working on a Previous Idea.

5. Continue Working on Previous Idea: Program retrieves the titles of the idea summaries that were generated during the session. User is asked to choose an idea to continue working on.

6. Token/Memory Usage: Program displays the current token count and its percentage relative to the token limit of 32,000 tokens.


Other program parameters and considerations:


1. All output should be presented as plain text; embedded code or markdown windows should not be used.

2. The user flow and user experience should emulate that of a real program while nevertheless remaining conversational, just as ChatGPT is.

3. The program should use emojis to help convey context around the output, but sparingly and without getting carried away. The menu, however, should always include emojis, and they should remain consistent throughout the conversation.


Once this prompt is received, the program will start with the Main Menu and a short inspirational welcome message of the program’s own devising. Functions are selected by typing the number corresponding to the function, or text that approximates the function in question. “Help” or “Menu” can be typed at any time to return to this menu.


Feel free to load the prompt into ChatGPT-4 if you want to follow along in a more interactive manner and test it for yourself.
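Alternatively, if you would rather drive the mini-app programmatically, the same technique maps directly onto chat-completion APIs: the priming prompt becomes the system message, and the running message history serves as the program’s state. Below is a minimal sketch using the OpenAI Python client; the PRIMING_PROMPT placeholder, the gpt-4 model name, and the quit/exit commands are my own assumptions rather than part of the prompt above.

```python
# A sketch of driving the Program Simulation technique through the OpenAI
# Python client (openai >= 1.0). The priming prompt becomes the system
# message; the growing message history is the program's entire "state".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: paste the full priming prompt from above here.
PRIMING_PROMPT = "Innovator’s Interactive Workshop Program I want you to simulate..."

messages = [{"role": "system", "content": PRIMING_PROMPT}]

while True:
    user_input = input("> ")
    if user_input.strip().lower() in {"quit", "exit"}:  # assumption: local exit commands
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any GPT-4-class chat model should work
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Appending each completion is what lets the output of one "function"
    # inform the next and keeps the interaction inside the program.
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```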

Conclusions and Observations

Frankly, this exercise, though limited in both scope and functionality, has surpassed my expectations. We could have asked ChatGPT to code the mini-app in a language like Python and then leveraged Code Interpreter (now known as Advanced Data Analysis) to run it in a persistent Python session. That approach, however, would have introduced a level of rigidity that would have made it difficult to achieve the conversational functionality natively present in our mini-app. Not to mention, we would immediately run the risk of non-functioning code, especially in a program with multiple overlapping functions.

ChatGPT’s performance was particularly impressive in that it simulated program behavior with high fidelity. The prompt completions stayed within the boundaries of the program definition, and even where function behavior was not explicitly defined, the completions made logical sense in the context of the mini-app’s purpose.
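One caveat: the Token/“Memory” Usage figures the program reports are the model’s own estimates rather than exact counts. If you drive the session through the API as sketched earlier, you can cross-check them with a tokenizer such as tiktoken. A rough sketch, assuming the cl100k_base encoding used by GPT-4 models and ignoring the small per-message formatting overhead the API adds:

```python
# A rough cross-check of the Token/"Memory" Usage feature for API-driven
# sessions, using the running `messages` list from the loop above.
import tiktoken

TOKEN_LIMIT = 32_000  # the limit assumed in the program definition

def context_usage(messages: list[dict]) -> tuple[int, float]:
    """Return (token_count, percent_of_limit) for the message history."""
    enc = tiktoken.get_encoding("cl100k_base")
    count = sum(len(enc.encode(m["content"])) for m in messages)
    return count, 100.0 * count / TOKEN_LIMIT

# Example usage: tokens, pct = context_usage(messages)
```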

This Program Simulation technique might work well with ChatGPT’s “Custom Instructions” feature, although it’s worth mentioning that doing so would apply the program’s behavior to all subsequent interactions.

My next steps include conducting a deeper examination of this technique to assess if a comprehensive testing framework might shed light on how this approach stacks up against other prompt engineering techniques. That type of exercise might also help pinpoint what specific tasks (or class of tasks) this technique is best suited for. Stay tuned for more to come.

In the meantime, I hope you find this technique and prompt helpful in your own interactions.
