Sitemap - 2024 - Cobus Greyling on LLMs, NLU, NLP, chatbots & voicebots
Data Design For Fine-Tuning LLM Long Context Windows
The Importance Of Granular Data Design For Fine-Tuning
LangChain Structured Output Parser Using OpenAI
Three Considerations For Private Open-Source LLM Instances
Intents Are Not Going Away…RoNID Is A New Intent Discovery Framework
The Large Language Model Landscape — Version 5
LLMs Excel At In-Context Learning (ICL), But What About Long In-Context Learning?
Using LLMs For Autonomous Vehicles
Matching Retrieved Context With Question Context Using LogProbs With OpenAI for RAG
Rapid Development Of Intelligent Generative AI APIs
RAG, Hallucination & Structure: Research By ServiceNow
Data Design For Fine-Tuning To Improve Small Language Model Behaviour
No-Code Deployment & Orchestration Of Open-Sourced Foundation Models
LlamaIndex Agent Step-Wise Execution Framework With Agent Runners & Agent Workers
The Case For An AI Productivity Suite
Step-Wise Controllable Agents From LlamaIndex
Improve Conversational UIs Using Social Intelligence
RAG Implementations Are Becoming More Agent-Like
Agentic Search-Augmented Factuality Evaluator (SAFE) For LLMs
FaaF: Facts As A Function For Evaluating RAG
Disambiguation: Using Dynamic Context In Crafting Effective RAG Question Suggestions
FIT-RAG: Are RAG Architectures Settling On A Standardised Approach?
Challenges In Adopting Retrieval-Augmented Generation Solutions
Retrieval Augmented Fine-Tuning (RAFT)
Complete AI Productivity Suite
DRAGIN: Dynamic RAG Based On Real-Time Information Needs Of LLMs
A New Study Compares RAG & Fine-Tuning For Knowledge Base Use-Cases
Chain-of-Instructions (CoI) Fine-Tuning & Going Beyond Instruction Tuning
Performing Multiple LLM Calls & Voting On The Best Result Are Subject To Scaling Laws
Please Stop Saying Long Context Windows Will Replace RAG
TinyLlama Is An Open-Source Small Language Model
Agentic RAG: Context-Augmented OpenAI Agents
RAT — Retrieval Augmented Thoughts
Exploring the Purpose, Power & Potential of Small Language Models (SLMs)
Large Impact: The Rise of Small Language Models
Large Language Models Excel At In-Context Learning (ICL)
RAG, Data Privacy, Attack Methods & Safe-Prompts
Self-Reflective Retrieval-Augmented Generation (SELF-RAG)
Time-Aware Adaptive RAG (TA-ARE)
Develop Generative Apps Locally
How To Create A LangChain Application That Runs Locally & Offline
Language Model Quantization Explained
LLM Drift, Prompt Drift & Cascading
Catastrophic Forgetting In LLMs
Leveraging LLM In-Context Learning Abilities
Five Stages Of LLM Implementation [Updated]
Demonstrate, Search, Predict (DSP) for LLMs
T-RAG = RAG + Fine-Tuning + Entity Detection
Run A Small Language Model (SLM) Locally & Offline
The Case For Small Language Models
Beyond Chain-of-Thought LLM Reasoning
Comparing Human, LLM & LLM-RAG Responses
Craft Successful Conversational User Interfaces: Align User Intent With Developed Intent
A Benchmark for Verifying Chain-Of-Thought
Seven RAG Engineering Failure Points
OpenAI Agent Query Planning Using LlamaIndex
Adding Noise Improves RAG Performance
UniMS-RAG: Unified Multi-Source RAG for Personalised Dialogue
Chain-of-Symbol Prompting (CoS) For Large Language Models
Prompt-RAG: Vector Embedding Free Retrieval-Augmented Generation
Concise Chain-of-Thought (CCoT) Prompting
Retrieval-Augmented Generation (RAG) vs LLM Fine-Tuning
Understanding LLM User Experience & Expectation
Meta Taxonomy Of Large Language Model Correction & Refinement
Considering Large Language Model Reasoning Step Length
Large Language Model (LLM) SWOT Analysis (Updated)
Chain Of Natural Language Inference (CoNLI)
Validating Low-Confidence LLM Generation
Large Language Model Hallucination Mitigation Techniques
What Is LangChain Expression Language (LCEL)?
Random Chain-Of-Thought For LLMs & Distilling Self-Evaluation Capability
Active Prompting with Chain-of-Thought for Large Language Models
Teaching LLMs To Say, “I don’t know”