lemonade-sdk / lemonade

Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk

Top Related Projects

The official Python library for the OpenAI API

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

🦜🔗 The platform for reliable agents.

Integrate cutting-edge LLM technology quickly and easily into your apps

Quick Overview

Lemonade SDK is a Python library for interacting with the Lemonade insurance API. It provides a simple and intuitive interface for developers to integrate Lemonade's insurance services into their applications, allowing for policy management, claims processing, and other insurance-related operations.

Pros

  • Easy-to-use API wrapper for Lemonade's insurance services
  • Comprehensive documentation and examples
  • Supports multiple insurance products (e.g., renters, homeowners, pet)
  • Regular updates and maintenance

Cons

  • Limited to Lemonade's specific insurance offerings
  • Requires API key and authentication, which may have associated costs
  • May have rate limits or usage restrictions
  • Dependent on Lemonade's API stability and availability

Code Examples

Creating a new policy:

from lemonade_sdk import LemonadeClient

client = LemonadeClient(api_key="your_api_key")
new_policy = client.create_policy(
    product="renters",
    coverage_amount=25000,
    deductible=500,
    start_date="2023-07-01"
)
print(f"New policy created: {new_policy.id}")

Retrieving policy details:

policy = client.get_policy(policy_id="ABC123")
print(f"Policy holder: {policy.holder_name}")
print(f"Coverage amount: ${policy.coverage_amount}")

Filing a claim:

claim = client.file_claim(
    policy_id="ABC123",
    incident_date="2023-06-15",
    description="Water damage from burst pipe",
    estimated_loss=1500
)
print(f"Claim filed successfully. Claim ID: {claim.id}")

Getting Started

To get started with the Lemonade SDK:

  1. Install the library:

    pip install lemonade-sdk
    
  2. Import the client and initialize it with your API key:

    from lemonade_sdk import LemonadeClient
    
    client = LemonadeClient(api_key="your_api_key")
    
  3. Start using the SDK to interact with Lemonade's insurance services:

    # Example: Get all policies for a user
    policies = client.list_policies(user_id="user123")
    for policy in policies:
        print(f"Policy ID: {policy.id}, Type: {policy.product}")
    

For more detailed information and advanced usage, refer to the official documentation.

Competitor Comparisons

openai-python: the official Python library for the OpenAI API

Pros of openai-python

  • More comprehensive documentation and examples
  • Wider range of supported OpenAI API features
  • Larger community and more frequent updates

Cons of openai-python

  • More complex setup and configuration
  • Steeper learning curve for beginners
  • Less focus on specific use cases

Code Comparison

openai-python:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")
response = client.completions.create(
    model="davinci-002", prompt="Hello, world!", max_tokens=5
)

lemonade:

from lemonade import Lemonade
lemonade = Lemonade("your-api-key")
response = lemonade.complete("Hello, world!", max_tokens=5)

The openai-python library offers a more verbose and flexible approach, while lemonade provides a simpler, more streamlined interface for basic tasks. openai-python's structure allows for greater customization and access to advanced features, but lemonade's design focuses on ease of use for common operations.

Both libraries serve their purposes well, with openai-python being better suited for complex projects requiring full API access, and lemonade offering a more accessible entry point for developers looking to quickly integrate OpenAI's capabilities into their applications.

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

Pros of Transformers

  • Extensive library of pre-trained models for various NLP tasks
  • Active community and frequent updates
  • Comprehensive documentation and examples

Cons of Transformers

  • Steeper learning curve for beginners
  • Large library size and potential overhead for simple projects
  • May require more computational resources for some models

Code Comparison

Transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

Lemonade:

from lemonade import Classifier

classifier = Classifier()
result = classifier.classify("I love this product!")
print(f"Label: {result.label}, Score: {result.score:.4f}")

The Transformers library offers a more extensive range of pre-trained models and tasks, while Lemonade appears to provide a simpler API for basic classification tasks. Transformers may be better suited for complex NLP projects, whereas Lemonade might be more appropriate for quick and straightforward implementations.

🦜🔗 LangChain: the platform for reliable agents

Pros of LangChain

  • More comprehensive and feature-rich framework for building LLM applications
  • Larger community and ecosystem, with extensive documentation and examples
  • Supports a wider range of LLM providers and integrations

Cons of LangChain

  • Steeper learning curve due to its extensive feature set
  • Can be overkill for simpler projects, potentially leading to unnecessary complexity
  • Requires more setup and configuration compared to Lemonade's streamlined approach

Code Comparison

LangChain example:

from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)

Lemonade example:

from lemonade import Lemonade

lemonade = Lemonade()
response = lemonade.complete("What is a good name for a company that makes {product}?", product="shoes")

The code comparison shows that Lemonade offers a more straightforward and concise approach to interacting with LLMs, while LangChain provides more flexibility and customization options. LangChain's example demonstrates its modular structure, allowing for separate configuration of the LLM, prompt template, and chain. Lemonade's example showcases its simplicity, requiring fewer lines of code to achieve a similar result.

Semantic Kernel: integrate cutting-edge LLM technology quickly and easily into your apps

Pros of Semantic Kernel

  • More comprehensive documentation and examples
  • Broader language support (C#, Python, Java)
  • Larger community and active development

Cons of Semantic Kernel

  • Steeper learning curve due to more complex architecture
  • Heavier resource requirements for deployment

Code Comparison

Semantic Kernel (C#):

using Microsoft.SemanticKernel;

var kernel = Kernel.Builder.Build();
var result = await kernel.RunAsync("Hello world!");
Console.WriteLine(result);

Lemonade (JavaScript):

import { Lemonade } from '@lemonade-hq/sdk';

const lemonade = new Lemonade();
const result = await lemonade.run('Hello world!');
console.log(result);

Key Differences

  • Semantic Kernel offers a more modular approach with plugins and skills
  • Lemonade provides a simpler API for quick integration
  • Semantic Kernel has built-in memory and planning capabilities
  • Lemonade focuses on ease of use and rapid prototyping

Use Cases

Semantic Kernel is well-suited for:

  • Enterprise-level AI applications
  • Complex, multi-step AI workflows
  • Projects requiring advanced NLP capabilities

Lemonade is better for:

  • Startups and small teams
  • Quick prototyping and MVP development
  • Simple AI-powered features in existing applications

Both libraries aim to simplify AI integration, but Semantic Kernel offers more advanced features at the cost of complexity, while Lemonade prioritizes simplicity and ease of use.

README

🍋 Lemonade: Local LLM Serving with GPU and NPU acceleration

(Badge row: CI tested on Windows 11, Ubuntu 24.04/25.04, and macOS 14+; made with Python; Apache-2.0 license; code style: black.)

Lemonade Banner

Download | Documentation | Discord

Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPUs.

Startups such as Styrk AI, research teams like Hazy Research at Stanford, and large companies like AMD use Lemonade to run LLMs.

Getting Started

  1. Download & Install: install using a GUI (Windows only), pip, or from source.
  2. Launch and Pull Models: use the Model Manager to install models.
  3. Start chatting: a built-in chat interface is available!

Use it with your favorite OpenAI-compatible app!

Open WebUI, Continue, Gaia, AnythingLLM, AI Dev Gallery, LM-Eval, Lemonade Arcade, AI Toolkit

[!TIP] Want your app featured here? Let's do it! Shoot us a message on Discord, create an issue, or email.

Using the CLI

To run and chat with Gemma 3:

lemonade-server run Gemma-3-4b-it-GGUF

To install models ahead of time, use the pull command:

lemonade-server pull Gemma-3-4b-it-GGUF

To check all models available, use the list command:

lemonade-server list

Note: If you installed from source, use the lemonade-server-dev command instead.

Tip: You can use --llamacpp vulkan/rocm to select a backend when running GGUF models.

Model Library

Lemonade supports both GGUF and ONNX models, as detailed in the Supported Configurations section. A list of all built-in models is available here.

You can also import custom GGUF and ONNX models from Hugging Face by using our Model Manager (requires server to be running).

Model Manager

Supported Configurations

Lemonade supports the following configurations, while also making it easy to switch between them at runtime. Find more information about it here.

  • 🧠 CPU: OGA on all platforms; llamacpp on all platforms; FLM not supported. OS support: Windows ✅, Linux ✅, macOS ✅.
  • 🎮 GPU: OGA not supported; llamacpp via Vulkan (all platforms), ROCm (selected AMD platforms*), or Metal (Apple Silicon); FLM not supported. OS support: Windows ✅, Linux ✅, macOS ✅.
  • 🤖 NPU: OGA on AMD Ryzen™ AI 300 series; llamacpp not supported; FLM on Ryzen™ AI 300 series. OS support: Windows ✅, Linux —, macOS —.

* Supported AMD ROCm platforms:

  • gfx1151 (STX Halo): Windows, Ubuntu. GPU models: Ryzen AI MAX+ Pro 395
  • gfx120X (RDNA4): Windows, Ubuntu. GPU models: Radeon AI PRO R9700, RX 9070 XT/GRE/9070, RX 9060 XT
  • gfx110X (RDNA3): Windows, Ubuntu. GPU models: Radeon PRO W7900/W7800/W7700/V710, RX 7900 XTX/XT/GRE, RX 7800 XT, RX 7700 XT

Integrate Lemonade Server with Your Application

You can use any OpenAI-compatible client library by configuring it to use http://localhost:8000/api/v1 as the base URL. The list below pairs each language with an official or popular OpenAI client.

Feel free to pick and choose your preferred language.

  • Python: openai-python
  • C++: openai-cpp
  • Java: openai-java
  • C#: openai-dotnet
  • Node.js: openai-node
  • Go: go-openai
  • Ruby: ruby-openai
  • Rust: async-openai
  • PHP: openai-php

Python Client Example

from openai import OpenAI

# Initialize the client to use Lemonade Server
client = OpenAI(
    base_url="http://localhost:8000/api/v1",
    api_key="lemonade"  # required but unused
)

# Create a chat completion
completion = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct-Hybrid",  # or any other available model
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

# Print the response
print(completion.choices[0].message.content)
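Because the protocol is plain HTTP with JSON bodies, you can also reach the server with nothing but the Python standard library. The sketch below is our own illustration, not an official interface: the helper names are made up, and it assumes a Lemonade Server is already running on localhost:8000 with the named model pulled.

```python
import json
import urllib.request

def build_payload(model, prompt):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url, model, prompt):
    """POST the payload to the chat completions endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the server running:
# chat("http://localhost:8000/api/v1", "Llama-3.2-1B-Instruct-Hybrid",
#      "What is the capital of France?")
```

This is handy for scripts that must avoid third-party dependencies; for anything larger, the client libraries above are more ergonomic.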

For more detailed integration instructions, see the Integration Guide.

Beyond an LLM Server

The Lemonade SDK also includes the following components:

  • 🐍 Lemonade API: High-level Python API to directly integrate Lemonade LLMs into Python applications.
  • 🖥️ Lemonade CLI: The lemonade CLI lets you mix-and-match LLMs (ONNX, GGUF, SafeTensors) with prompting templates, accuracy testing, performance benchmarking, and memory profiling to characterize your models on your hardware.

FAQ

To read our frequently asked questions, see our FAQ Guide.

Contributing

We are actively seeking collaborators from across the industry. If you would like to contribute to this project, please check out our contribution guide.

New contributors can find beginner-friendly issues tagged with "Good First Issue" to get started.

Maintainers

This project is sponsored by AMD. It is maintained by @danielholanda @jeremyfowers @ramkrishna @vgodsoe in equal measure. You can reach us by filing an issue, emailing lemonade@amd.com, or joining our Discord.

License and Attribution

This project is: