![[llmgine Design 2025-05-03 07.08.53.excalidraw.png]] %%[[llmgine Design 2025-05-03 07.08.53.excalidraw.md|🖋 Edit in Excalidraw]]%%

# llmgine: A Pattern-Based Approach to LLM Application Development

## Introduction

The emergence of Large Language Models (LLMs) has catalyzed a new era of application development, bringing with it unique challenges and opportunities. However, the current landscape of LLM frameworks, while useful for rapid prototyping and simple implementations, often imposes rigid structures that constrain innovation and limit customization. This essay outlines the vision, technical architecture, and underlying philosophy of llmgine: a pattern-based approach to LLM application development that prioritizes flexibility, scalability, and observability.

## The Limitations of Current Frameworks

Today's LLM frameworks excel at providing end-to-end solutions with minimal code. They offer large, pre-connected components that developers can configure without deep software engineering expertise. While this approach democratizes access to LLM capabilities, it creates a clear ceiling on what developers can build.

These frameworks are fundamentally restrictive, abstracting away critical elements that developers should be able to customize. When applications grow beyond simple use cases, developers often find themselves fighting against framework constraints rather than being empowered by them. The result is a landscape where innovation is bounded by framework limitations rather than developer imagination.

## A New Paradigm: LLM Programming as a Language

Rather than viewing LLM development through the lens of traditional frameworks, llmgine proposes that we treat it as a new programming language paradigm with its own patterns, idioms, and best practices. This paradigm recognizes that LLM applications have unique characteristics:

1. They are inherently asynchronous and event-driven
2. They require sophisticated context management
3. They benefit from flexible function calling mechanisms
4. They demand robust observability due to their probabilistic nature

Instead of hiding these complexities behind rigid abstractions, llmgine embraces them through composable patterns that developers can implement according to their specific needs.

## Technical Architecture

The llmgine architecture centers around a core engine component that orchestrates various systems:

### Core Components

- **Variables**: Flexible state containers for application-specific data
- **Custom Logic**: Application-specific processing that defines the unique behavior
- **State Management**: Custom functions and commands that interact with application state

### Peripheral Systems

- **Tools**: Interfaces for Function Calling and Code Calling
- **LLM**: Connection to language models via an LLM Router
- **Context**: Management of prompts, history, and memory, connected to persistent storage

### Communication Infrastructure

- **Event Bus**: Carries events from the engine to various subscribers
- **Command Bus**: Carries commands to the engine from various sources
- **Message Bus**: Encompasses both event and command infrastructure

### Observability

- **Logging**: Comprehensive recording of all system actions and responses
- **Telemetry**: Collection of performance and behavior metrics

### Interface

- **User Display**: Presentation of system outputs
- **User Input**: Collection of user commands and queries

This architecture provides a blueprint for LLM applications without prescribing implementation details. Developers can implement each component according to their specific requirements while maintaining a coherent overall structure.

## Small, Composable Patterns vs. Monolithic Frameworks

The key philosophical difference in the llmgine approach is the emphasis on small, isolated, reusable functions rather than large, pre-connected blocks.
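Before contrasting this with monolithic frameworks, the communication infrastructure described above can be made concrete with a small sketch. Everything here (the `Event` and `MessageBus` names, the string event keys) is a hypothetical illustration of the pattern, not llmgine's actual API:

```python
import asyncio
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Awaitable, Callable


@dataclass
class Event:
    """A named event with an arbitrary payload."""
    name: str
    payload: dict = field(default_factory=dict)


class MessageBus:
    """Toy message bus: handlers subscribe by event name; publish fans out asynchronously."""

    def __init__(self) -> None:
        self._handlers: defaultdict[str, list[Callable[[Event], Awaitable[None]]]] = defaultdict(list)

    def subscribe(self, name: str, handler: Callable[[Event], Awaitable[None]]) -> None:
        self._handlers[name].append(handler)

    async def publish(self, event: Event) -> None:
        # Run every subscriber concurrently; the publisher never waits on any one handler in sequence.
        await asyncio.gather(*(handler(event) for handler in self._handlers[event.name]))


async def demo() -> None:
    bus = MessageBus()
    log: list[str] = []

    async def on_response(event: Event) -> None:
        log.append(event.payload["text"])

    bus.subscribe("llm.response", on_response)
    await bus.publish(Event("llm.response", {"text": "hello"}))
    print(log)  # ['hello']


asyncio.run(demo())
```

The design choice being illustrated is that publishers and subscribers never reference each other directly; only event names couple them, which is what keeps the engine, tools, and observability components independently replaceable.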
While frameworks offer convenience for simple applications, they sacrifice flexibility and customization. llmgine's pattern-based approach requires more software engineering expertise but provides several advantages:

1. **Higher Ceiling**: Developers aren't constrained by framework limitations
2. **Precise Customization**: Applications can be tailored to exact requirements
3. **Future-Proof Design**: Components can evolve independently as the field advances
4. **Architectural Consistency**: Common patterns ensure maintainability without sacrificing flexibility

This approach treats developers as skilled craftspeople who need high-quality tools rather than pre-built solutions. It acknowledges that truly innovative LLM applications require actual software engineering expertise.

## Event-Driven Architecture: The Foundation of LLM Applications

LLM interactions are inherently asynchronous: you send a prompt, wait for a response, and then react to that response, potentially triggering new events. Traditional synchronous programming models struggle to represent this workflow elegantly.

llmgine embraces event-driven architecture as the natural paradigm for LLM applications. The event bus allows components to communicate without tight coupling, enabling:

1. Asynchronous processing that doesn't block user interactions
2. Clean separation of concerns between components
3. Scalable applications that can grow without architectural overhaul
4. Observable systems where every event can be logged and analyzed

This approach isn't optional; it's fundamental to how LLM applications should be structured.

## Observability: Critical for Probabilistic Systems

Unlike deterministic software, LLM applications produce probabilistic outputs that can vary even with identical inputs. This unpredictability makes observability not just useful but essential. llmgine places observability at the architectural level rather than treating it as an afterthought.
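Stepping back to the event-driven workflow described earlier (send a prompt, receive a response event, react to it), a minimal sketch using a plain `asyncio.Queue` in place of a real event bus might look like the following. `fake_llm`, `engine`, and the event tuples are all invented for illustration:

```python
import asyncio


async def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; the sleep simulates network latency."""
    await asyncio.sleep(0.05)
    return f"response to: {prompt}"


async def engine(events: asyncio.Queue, sink: list) -> None:
    """Consume events; an LLM response is itself published back as a new event."""
    while True:
        kind, data = await events.get()
        if kind == "user_prompt":
            await events.put(("llm_response", await fake_llm(data)))
        elif kind == "llm_response":
            sink.append(data)
        elif kind == "shutdown":
            return


async def demo() -> None:
    events: asyncio.Queue = asyncio.Queue()
    responses: list = []
    worker = asyncio.create_task(engine(events, responses))
    await events.put(("user_prompt", "hello"))
    await asyncio.sleep(0.2)  # the caller is free to do other work; nothing here blocks on the model
    await events.put(("shutdown", None))
    await worker
    print(responses)  # ['response to: hello']


asyncio.run(demo())
```

The point of the sketch is the loop shape: a response is not a return value handed back up a call stack but a new event that any component, including a logger or telemetry sink, can observe.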
The dedicated observability layer connects to logging and telemetry systems, ensuring that developers can:

1. Track every interaction with LLMs
2. Monitor performance metrics like token usage and latency
3. Debug unexpected outputs by examining the full context
4. Identify patterns in model behavior over time

This emphasis on observability acknowledges the unique challenges of developing with probabilistic systems and provides the tools necessary to address them.

## Conclusion: Enabling Rather Than Prescribing

The llmgine philosophy can be summarized as enabling solutions rather than prescribing them. While current frameworks prescribe specific approaches to LLM application development, llmgine provides patterns and tools that enable developers to create their own solutions.

This approach recognizes that the field of LLM application development is still in its infancy. By focusing on patterns rather than rigid frameworks, llmgine creates space for innovation and customization while still providing guidance on common challenges. As LLM capabilities continue to evolve, applications built with this pattern-based approach can adapt and grow without requiring complete rewrites. The result is a more sustainable, flexible approach to LLM application development that empowers developers to push the boundaries of what's possible.

The future of LLM application development doesn't lie in more comprehensive frameworks but in better patterns that enable developers to express their unique visions. llmgine aims to provide those patterns while respecting the creativity and expertise of the developers who use them.
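As a concrete coda to the observability discussion above, here is a minimal sketch of call-level instrumentation that records latency and token usage. `LLMCallRecord` and `traced_call` are hypothetical names invented for this example, not part of llmgine or any provider SDK:

```python
import logging
import time
from dataclasses import dataclass
from typing import Callable, Tuple

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("observability")


@dataclass
class LLMCallRecord:
    """One traced interaction: enough context to replay and debug it later."""
    prompt: str
    response: str
    latency_s: float
    total_tokens: int


def traced_call(prompt: str, call_fn: Callable[[str], Tuple[str, int]]) -> LLMCallRecord:
    """Invoke call_fn (anything returning (text, token_count)) and log latency and token usage."""
    start = time.perf_counter()
    text, tokens = call_fn(prompt)
    record = LLMCallRecord(prompt, text, time.perf_counter() - start, tokens)
    logger.info("llm_call latency=%.3fs tokens=%d", record.latency_s, record.total_tokens)
    return record


# A stubbed model lets the sketch run without any API key:
record = traced_call("summarise the notes", lambda p: (f"echo: {p}", 7))
print(record.total_tokens)  # 7
```

Because the record keeps the full prompt and response alongside the metrics, the same data serves both telemetry dashboards and after-the-fact debugging of probabilistic outputs.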
![[llmgine (Core) 2025-02-27 22.45.38.excalidraw.png]] %%[[llmgine (Core) 2025-02-27 22.45.38.excalidraw|🖋 Edit in Excalidraw]]%%

```mermaid
sequenceDiagram
    participant User as Discord User
    participant Darcy as Discord Bot (Darcy)
    participant LLM as LLM Server/Framework
    participant Notion as Notion API
    participant Dict as Python Dictionary Mapping

    %% User verification flow (hard-coded mapping)
    rect rgb(240, 240, 255)
        Note over User, Dict: User Verification (Hard-coded)
        Note right of Dict: Discord ID to Notion User mapping stored in Python dictionary
        User->>Darcy: Initial interaction
        Darcy->>LLM: Forward with Discord user ID
        LLM->>Dict: Look up Discord ID
        Dict->>LLM: Return associated Notion ID
        alt User not found
            LLM->>Darcy: Request verification with user
            Darcy->>User: "I don't recognize you. Please provide your Notion email"
            User->>Darcy: Provides Notion email
            Darcy->>LLM: Forward verification info
            LLM->>Dict: Add new mapping entry
        end
        LLM->>Darcy: User verified
    end

    %% View tasks flow
    rect rgb(240, 255, 240)
        Note over User, Notion: View Tasks Flow
        User->>Darcy: /darcy "I want to see [user]'s tasks in [project]"
        Darcy->>LLM: Forward raw request with Discord ID
        LLM->>Dict: Get Notion ID for Discord user
        Dict->>LLM: Return Notion ID
        LLM->>LLM: Parse request (user, project)
        LLM->>LLM: Generate function call
        LLM->>LLM: Verify function call is appropriate
        LLM->>Notion: Query tasks (user ID, project)
        Notion->>LLM: List of tasks with details
        LLM->>Darcy: Formatted task list response
        Darcy->>User: Display filtered task list
    end

    %% Assign task flow with confirmation
    rect rgb(255, 240, 240)
        Note over User, Notion: Assign Task Flow
        User->>Darcy: /darcy "assign [task] to [user] due by [date] in [project]"
        Darcy->>LLM: Forward raw request with Discord ID
        LLM->>Dict: Get Notion ID for Discord user
        Dict->>LLM: Return Notion ID
        LLM->>LLM: Parse request (task, user, date, project)
        %% Query active projects to get project ID
        LLM->>Notion: Query all active projects
        Notion->>LLM: List of active projects with IDs
        LLM->>LLM: Match project name to project ID
        LLM->>LLM: Generate function call with project ID
        LLM->>Darcy: Request confirmation with details
        Darcy->>User: Confirmation: "I'm going to assign [task] to [user] due [date] in [project]"
        User->>Darcy: Confirm (y)
        Darcy->>LLM: Forward confirmation
        LLM->>LLM: Verify function call with confirmation
        LLM->>Dict: Get Notion ID for target user
        Dict->>LLM: Return target Notion ID
        LLM->>Notion: Create task with details (including project ID)
        Notion->>LLM: Task created confirmation
        LLM->>Darcy: Task creation response
        Darcy->>User: Task successfully assigned
    end

    %% Update task flow with confirmation
    rect rgb(255, 255, 240)
        Note over User, Notion: Update Task Status Flow
        User->>Darcy: /darcy "I've just finished sending out the recruitment forms"
        Darcy->>LLM: Forward raw request with Discord ID
        LLM->>Dict: Get Notion ID for Discord user
        Dict->>LLM: Return Notion ID
        LLM->>LLM: Analyze statement for task reference
        LLM->>LLM: Generate potential function calls
        LLM->>Notion: Query user's active tasks
        Notion->>LLM: Active tasks list
        LLM->>LLM: Match statement to task options
        LLM->>Darcy: Send task options for confirmation
        Darcy->>User: "Do you mean task 1) X, 2) Y, or 3) Z?"
        User->>Darcy: Select option (1)
        Darcy->>LLM: Forward selected option
        LLM->>LLM: Verify function call with selection
        LLM->>Notion: Update task status to complete
        Notion->>LLM: Status updated confirmation
        LLM->>Darcy: Task update response
        Darcy->>User: Task marked as complete
    end

    %% Function Calling Verification Process
    rect rgb(240, 240, 240)
        Note over LLM, Notion: Function Call Verification Process
        LLM->>LLM: Process natural language
        LLM->>LLM: Identify intent
        LLM->>LLM: Generate function call
        Note over LLM: Verification Step
        LLM->>LLM: Validate function parameters
        LLM->>LLM: Check user permissions
        LLM->>LLM: Assess action impact
        alt Needs User Confirmation
            LLM->>Darcy: Request user confirmation
            Darcy->>User: Display confirmation request
            User->>Darcy: Provide confirmation
            Darcy->>LLM: Forward confirmation
            LLM->>LLM: Final verification
        end
        LLM->>LLM: Execute verified function call
    end
```

![[llmgine Design 2025-03-26 17.49.07.excalidraw.png]] %%[[llmgine Design 2025-03-26 17.49.07.excalidraw.md|🖋 Edit in Excalidraw]]%%

## Minimum Viable Framework Features

- Command and Message Bus
- LLM Class
- Function Manager
- LLM Routing
- Simple Context
- Simple UI Interface Design
    - Command Line UI Implementation
    - Discord UI Implementation
- Observability Interface
    - Logging, Metrics, etc.
- Event & Command Bus Infrastructure
- Dependencies and Application Architecture
- Function Calling Manager
- Code Calling Manager
- LLM Manager
    - LLM Router
    - LLM Parser
    - LiteLLM Implementation
    - OpenAI Implementation
- Context Compiler
    - Chat History Implementation
    - Request Context Implementation
- FastAPI Backend
- Testing Infrastructure
- CI/CD Deployment
- Function Studio

## A Quick Aside: I'm Not Technical

Concepts to Catch Up:

- LLM Basics
- LLM API & Response Format
- LLM Function Calling
- LLM Context & Memory

Software Engineering:

- What is a server?
- Async programming
- What is CI/CD?
- Event Bus
- Command Bus
- Observability

llmgine:

- What are AI Frameworks?
- What are we building?
- The llmgine architecture

Timeline

Design Decision 1: -