Chaining Large Language Model (LLM) Prompts Via Visual Programming
While companies are trying to harness LLMs in a production setting, principles like chaining and templating are emerging…
In the image above I list current prompt engineering tools and LLM API build frameworks. These frameworks vary in complexity and usability, but most are premised on the same principles: chains, prompts, no-code to low-code interfaces, and task decomposition. Of these, I consider PromptChainer the most advanced and complete.
Considering the PromptChainer development interface…
Two terms have emerged with the adoption of LLMs in production and product settings: the first is prompt chaining, and the second is templating.
Some refer to templating as entity injection or prompt injection.
LLMs have led to rapid prototyping of natural language applications, but when applications involve complex conversations, decision making and more, a single prompt does not suffice.
Hence various frameworks have introduced the notion of chaining multiple LLM runs, or prompts, together. The objective is to accomplish more complex tasks by leveraging the flexibility of LLMs together with programmability and controlled complexity.
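The idea of chaining can be sketched in a few lines of Python: the output of one LLM call is fed into the prompt of the next. The `call_llm` function below is a stand-in for any real LLM API client, and the summarise-then-translate task is purely illustrative.

```python
def call_llm(prompt: str) -> str:
    # Stub: in practice this would call an LLM completion API.
    return f"<response to: {prompt}>"

def summarise_then_translate(text: str) -> str:
    # Step 1: a first prompt summarises the input text.
    summary = call_llm(f"Summarise the following text:\n{text}")
    # Step 2: the first result is injected into a second prompt,
    # forming a chain of two LLM runs.
    return call_llm(f"Translate this summary into French:\n{summary}")

result = summarise_then_translate("LLMs enable rapid prototyping of language apps.")
```

Each node in a visual tool like PromptChainer corresponds to one such call, with the graph edges defining how outputs flow into downstream prompts.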
For rapid scaling of production adoption in business applications, there is a need to have a no-code to low-code environment to popularise LLM & AI-infused applications.
The image above shows one of the most complete and comprehensive LLM application development GUIs I have seen to date…
A: The chain view, used to create chains and to add, remove and manage node connections.
B: The node view allows for testing nodes (prompts) in isolation.
C: The conversation can be run.
D: Chains can be added, edited and more.
As seen in the image below:
A conversation flow is decomposed into nodes which are chained together. The middle node is selected, and the LLM Task Editor is displayed on the right, where the input and output can be seen on the templating interface.
The Advantages Of Templating Prompts Are:
LLM prompts can be re-used, shared and programmed.
Templating generative prompts allows for the programmability, storage and re-use of prompts.
A template acts as a text file into which placeholders for variables and expressions are inserted.
The placeholders are replaced with values at run-time.
Prompts can be used within context, providing a measured way of controlling the generated content.
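As a minimal sketch of templating, Python's standard-library `string.Template` substitutes placeholder values at run-time; the role, language and question fields here are illustrative.

```python
from string import Template

# A reusable prompt template with placeholders for run-time values.
prompt_template = Template(
    "You are a $role. Answer the question below in $language.\n"
    "Question: $question"
)

# Placeholders are replaced with concrete values when the prompt is built.
prompt = prompt_template.substitute(
    role="travel assistant",
    language="Spanish",
    question="What should I pack for Iceland in winter?",
)
```

Because the template is plain text, it can be stored, versioned and shared independently of the values injected into it.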
As seen above, a JavaScript node (A) is created to concatenate two pairs of results, via a code window on the right (B).
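PromptChainer implements this node in JavaScript, but the node's logic is trivial and could be sketched in Python as a plain function that joins two upstream results:

```python
def concat_node(result_a: str, result_b: str) -> str:
    # A code node in the chain: merges two upstream LLM outputs
    # into one string for the next node downstream.
    return f"{result_a}\n{result_b}"
```

Code nodes like this let authors splice deterministic glue logic between LLM calls without leaving the visual editor.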
Final Thoughts
There is also an emerging abstraction between LLM Build Tools and Large Language Models.
This allows for Build Tools to use one or more LLMs, depending on the task at hand.
This also affords the possibility to migrate from one LLM to another, based on price, performance, functionality, etc.
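One way such an abstraction layer might look is a simple registry that maps provider names to callables, so the application can switch models by configuration. The backend names and lambda stand-ins below are purely illustrative.

```python
from typing import Callable, Dict

# Each LLM provider is registered as a callable behind a common interface.
LLMBackend = Callable[[str], str]

_backends: Dict[str, LLMBackend] = {}

def register_backend(name: str, fn: LLMBackend) -> None:
    _backends[name] = fn

def complete(prompt: str, backend: str = "default") -> str:
    # The build tool calls this; which model answers is a config choice.
    return _backends[backend](prompt)

# Stand-ins for two real providers, differing in price or performance.
register_backend("default", lambda p: f"[model-a] {p}")
register_backend("cheap", lambda p: f"[model-b] {p}")
```

Migrating from one LLM to another then becomes a one-line configuration change rather than a rewrite of every chain.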
There is an untapped opportunity in using a simple hybrid NLU & NLG approach that mixes traditional intent-based logic with dynamic LLM responses.
This approach serves as an enabler for existing chatbot frameworks by leveraging developed NLU models.
Whilst the generative power of LLMs is undeniable, their predictive power is comparable to NLU engines in many instances, considering cost, accessibility, training data, performance and more.
There seem to be untapped possibilities in combining LLM generative power with traditional NLU.
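A hybrid could be as simple as routing: a traditional intent classifier handles utterances it recognises with high confidence, and the LLM generates a response only for the long tail. Both engines below are stand-ins; the keyword matcher and canned replies are illustrative.

```python
def nlu_classify(utterance: str) -> tuple:
    # Stand-in for a trained NLU model: returns (intent, confidence).
    intents = {"refund": ["refund", "money back"], "hours": ["open", "hours"]}
    for intent, keywords in intents.items():
        if any(k in utterance.lower() for k in keywords):
            return intent, 0.9
    return "none", 0.0

def call_llm(prompt: str) -> str:
    # Stand-in for an LLM API call.
    return f"<generated reply to: {prompt}>"

CANNED = {"refund": "I can help with refunds.", "hours": "We are open 9-5."}

def respond(utterance: str, threshold: float = 0.7) -> str:
    intent, confidence = nlu_classify(utterance)
    if confidence >= threshold:
        return CANNED[intent]   # deterministic, intent-based reply
    return call_llm(utterance)  # LLM fallback for out-of-scope queries
```

This keeps existing intent-based chatbot investments intact while the LLM absorbs the queries the NLU model was never trained on.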
I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.