
huggingface.js

Utilities to use the Hugging Face Hub API


Top Related Projects

  • tensorflow/tfjs: A WebGL accelerated JavaScript library for training and deploying ML models.
  • ml5js/ml5-library: Friendly machine learning for the web! 🤖
  • tensorflow/tfjs-models: Pretrained models for TensorFlow.js
  • facebookresearch/fastText: Library for fast text representation and classification.

Quick Overview

Huggingface.js is a collection of JavaScript libraries that provide clients for the Hugging Face Hub API, allowing developers to integrate machine learning models and NLP tasks into their JavaScript applications. It offers a simple interface to various Hugging Face services, including inference, repository and model management, and dataset operations.
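
Beyond inference, the hub side of the API covers repository and model management. The snippet below is a minimal sketch of listing models with the listModels helper from @huggingface/hub; the owner name is a placeholder, and the exact fields on each result may vary between versions.

import { listModels } from '@huggingface/hub';

// List public models for a given owner (placeholder name); listModels is an
// async generator, so results are streamed page by page. A token is only
// needed for private repos.
for await (const model of listModels({ search: { owner: 'my-user' } })) {
  console.log(model.name);
}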

Pros

  • Easy integration with JavaScript and TypeScript projects
  • Provides access to a wide range of pre-trained models and NLP tasks
  • Supports both browser and Node.js environments
  • Well-documented with clear examples and TypeScript typings

Cons

  • Limited to Hugging Face's API and models
  • Requires an API key for most functionalities
  • May have higher latency compared to running models locally
  • Dependency on external services might affect application reliability
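
Because every request goes over the network, some applications wrap calls in a small retry helper to soften transient failures. The sketch below is illustrative only; the helper name, attempt count, and delay are arbitrary assumptions, not part of the library.

import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);

// Hypothetical helper: retry a flaky remote call a few times with a fixed delay.
async function withRetry(fn, attempts = 3, delayMs = 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

const result = await withRetry(() =>
  hf.textClassification({
    model: 'distilbert-base-uncased-finetuned-sst-2-english',
    inputs: 'I love this movie!',
  })
);
console.log(result);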

Code Examples

  1. Performing text classification:
import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);
const result = await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'I love this movie!',
});
console.log(result);

  2. Generating text with a language model:
import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);
const result = await hf.textGeneration({
  model: 'gpt2',
  inputs: 'Once upon a time,',
  parameters: { max_new_tokens: 50 },
});
console.log(result);

  3. Performing named entity recognition:
import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);
const result = await hf.tokenClassification({
  model: 'dbmdz/bert-large-cased-finetuned-conll03-english',
  inputs: 'My name is Sarah and I live in London',
});
console.log(result);

Getting Started

To get started with huggingface.js, follow these steps:

  1. Install the package:
npm install @huggingface/inference

  2. Import the library and create an instance:
import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);

  3. Use the instance to perform various NLP tasks:
const result = await hf.summarization({
  model: 'facebook/bart-large-cnn',
  inputs: 'Your long text to summarize goes here...',
});
console.log(result);

Remember to set the HF_API_KEY environment variable (or pass your Hugging Face API key directly to HfInference) before running these examples.
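
Requests can also fail at runtime (invalid token, rate limits, a model that is still loading), so it is worth wrapping calls defensively. A minimal sketch, assuming the same summarization call as above:

import { HfInference } from '@huggingface/inference';

const hf = new HfInference(process.env.HF_API_KEY);

try {
  const result = await hf.summarization({
    model: 'facebook/bart-large-cnn',
    inputs: 'Your long text to summarize goes here...',
  });
  console.log(result.summary_text);
} catch (err) {
  // Network errors, auth failures and rate limits are thrown as exceptions
  console.error('Inference request failed:', err);
}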

Competitor Comparisons

tensorflow/tfjs: A WebGL accelerated JavaScript library for training and deploying ML models.

Pros of TensorFlow.js

  • More mature and established ecosystem with extensive documentation
  • Supports a wider range of machine learning tasks and models
  • Offers direct integration with TensorFlow's Python ecosystem

Cons of TensorFlow.js

  • Steeper learning curve for beginners
  • Larger bundle size, which may impact web application performance
  • Less focus on natural language processing tasks compared to Hugging Face

Code Comparison

TensorFlow.js:

import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);
model.fit(xs, ys, {epochs: 10}).then(() => {
  model.predict(tf.tensor2d([5], [1, 1])).print();
});

huggingface.js:

import { HfInference } from '@huggingface/inference';
const hf = new HfInference('YOUR_API_KEY');
hf.textGeneration({
  model: 'gpt2',
  inputs: 'The quick brown fox',
}).then(response => console.log(response));

This comparison highlights the different focus areas of each library, with TensorFlow.js providing more general-purpose machine learning capabilities and huggingface.js specializing in natural language processing tasks with a simpler API.

ml5js/ml5-library: Friendly machine learning for the web! 🤖

Pros of ml5-library

  • Designed specifically for creative coding and artists
  • Simpler API for beginners and non-technical users
  • Includes pre-trained models for common creative tasks

Cons of ml5-library

  • Limited to browser-based applications
  • Smaller community and fewer resources compared to Hugging Face
  • Less frequent updates and maintenance

Code Comparison

ml5-library:

// Image classification in the browser: ml5 is loaded via a <script> tag,
// and img refers to an <img> element on the page
const classifier = ml5.imageClassifier('MobileNet', modelLoaded);
classifier.classify(img, gotResults);

function modelLoaded() {
  console.log('Model loaded');
}

function gotResults(error, results) {
  if (error) return console.error(error);
  console.log(results);
}

huggingface.js:

import { HfInference } from '@huggingface/inference';

const hf = new HfInference('YOUR_API_KEY');
const result = await hf.imageClassification({
  model: 'google/vit-base-patch16-224',
  data: imageBlob, // a Blob or File containing the image
});
console.log(result);

ml5-library focuses on simplicity and ease of use for creative applications, making it accessible to beginners and artists. However, it has limitations in terms of scope and community support. huggingface.js offers a more comprehensive set of tools and models, with better documentation and regular updates, but may have a steeper learning curve for non-technical users. The code comparison shows that ml5-library uses a more straightforward API, while huggingface.js provides a more flexible and powerful approach.

tensorflow/tfjs-models: Pretrained models for TensorFlow.js

Pros of tfjs-models

  • Extensive collection of pre-trained models for various tasks
  • Optimized for browser and mobile environments
  • Seamless integration with TensorFlow.js ecosystem

Cons of tfjs-models

  • Limited to TensorFlow.js framework
  • May have larger model sizes compared to Hugging Face's optimized models
  • Less frequent updates compared to Hugging Face's rapidly evolving ecosystem

Code Comparison

tfjs-models:

// @tensorflow/tfjs must also be installed as a peer dependency
import * as mobilenet from '@tensorflow-models/mobilenet';

const img = document.getElementById('img');
const model = await mobilenet.load();
const predictions = await model.classify(img);
console.log(predictions);

huggingface.js:

import { HfInference } from '@huggingface/inference';

const hf = new HfInference('YOUR_API_KEY');
const result = await hf.imageClassification({
  model: 'google/mobilenet_v2_1.0_224',
  data: imageBlob
});
console.log(result);

Summary

While tfjs-models offers a comprehensive set of pre-trained models optimized for browser and mobile environments, huggingface.js provides access to a wider range of state-of-the-art models and is more frequently updated. tfjs-models is tightly integrated with the TensorFlow.js ecosystem, whereas huggingface.js offers more flexibility in model selection and is not limited to a single framework.

facebookresearch/fastText: Library for fast text representation and classification.

Pros of fastText

  • Efficient and lightweight text classification and word representation learning
  • Supports multiple languages and can handle large datasets
  • Provides pre-trained models and embeddings for various languages

Cons of fastText

  • Limited to specific NLP tasks (text classification and word embeddings)
  • Less versatile compared to huggingface.js for general-purpose NLP tasks
  • Requires more manual implementation for advanced NLP pipelines

Code Comparison

fastText (Python):

import fasttext
model = fasttext.train_supervised("train.txt")
result = model.predict("example text")

huggingface.js:

import { HfInference } from '@huggingface/inference';
const hf = new HfInference('YOUR_API_KEY');
const result = await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'example text',
});

Summary

fastText is a specialized library for efficient text classification and word embeddings, while huggingface.js provides a more comprehensive and user-friendly interface for various NLP tasks. fastText excels in performance and multilingual support, but huggingface.js offers greater flexibility and easier integration with state-of-the-art models. The choice between the two depends on the specific requirements of your NLP project and the level of customization needed.


README



// Programmatically interact with the Hub
import { createRepo, uploadFile } from "@huggingface/hub";
import { InferenceClient } from "@huggingface/inference";

const HF_TOKEN = "hf_...";

await createRepo({
  repo: { type: "model", name: "my-user/nlp-model" },
  accessToken: HF_TOKEN
});

await uploadFile({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN,
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

// Use all supported Inference Providers!
const inference = new InferenceClient(HF_TOKEN);

await inference.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  provider: "sambanova", // or together, fal-ai, replicate, cohere …
  messages: [
    {
      role: "user",
      content: "Hello, nice to meet you!",
    },
  ],
  max_tokens: 512,
  temperature: 0.5,
});

await inference.textToImage({
  model: "black-forest-labs/FLUX.1-dev",
  provider: "replicate",
  inputs: "a picture of a green bird",
});

// and much more…

Hugging Face JS libraries

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

  • @huggingface/inference: Use all supported (serverless) Inference Providers or switch to Inference Endpoints (dedicated) to make calls to 100,000+ Machine Learning models
  • @huggingface/hub: Interact with huggingface.co to create or delete repos and commit / download files
  • @huggingface/mcp-client: A Model Context Protocol (MCP) client, and a tiny Agent library, built on top of InferenceClient.
  • @huggingface/gguf: A GGUF parser that works on remotely hosted files (sketched below, after this list).
  • @huggingface/dduf: Similar package for DDUF (DDUF Diffusers Unified Format)
  • @huggingface/tasks: The definition files and source-of-truth for the Hub's main primitives like pipeline tasks, model libraries, etc.
  • @huggingface/jinja: A minimalistic JS implementation of the Jinja templating engine, to be used for ML chat templates (also sketched below).
  • @huggingface/space-header: Use the Space mini_header outside Hugging Face
  • @huggingface/ollama-utils: Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.
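
As a quick illustration of the parser packages, here is a minimal sketch of reading the header of a remotely hosted GGUF file with @huggingface/gguf; the URL is a placeholder and the available metadata keys depend on the file.

import { gguf } from "@huggingface/gguf";

// Fetches and parses only the GGUF header of the remote file (placeholder URL);
// metadata holds key/value pairs and tensorInfos describes each tensor.
const { metadata, tensorInfos } = await gguf("https://huggingface.co/.../model.gguf");
console.log(metadata, tensorInfos.length);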
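
Similarly, a minimal sketch of @huggingface/jinja rendering a toy template (the template string and variables here are made up for illustration):

import { Template } from "@huggingface/jinja";

// Compile a template once, then render it with a context object.
const template = new Template("Hello {{ name }}!");
console.log(template.render({ name: "world" })); // "Hello world!"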

We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.

The libraries are still very young; please help us by opening issues!

Installation

From NPM

To install via NPM, you can download the libraries as needed:

npm install @huggingface/inference
npm install @huggingface/hub
npm install @huggingface/mcp-client

Then import the libraries in your code:

import { InferenceClient } from "@huggingface/inference";
import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
import { McpClient } from "@huggingface/mcp-client";
import type { RepoId } from "@huggingface/hub";

From CDN or Static hosting

You can run our packages with vanilla JS, without any bundler, by using a CDN or static hosting. Using ES modules, i.e. <script type="module">, you can import the libraries in your code:

<script type="module">
    import { InferenceClient } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@4.0.3/+esm';
    import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/hub@2.2.0/+esm";
</script>

Deno

// esm.sh
import { InferenceClient } from "https://esm.sh/@huggingface/inference"

import { createRepo, commit, deleteRepo, listFiles } from "https://esm.sh/@huggingface/hub"
// or npm:
import { InferenceClient } from "npm:@huggingface/inference"

import { createRepo, commit, deleteRepo, listFiles } from "npm:@huggingface/hub"

Usage examples

Get your HF access token in your account settings.

@huggingface/inference examples

import { InferenceClient } from "@huggingface/inference";

const HF_TOKEN = "hf_...";

const client = new InferenceClient(HF_TOKEN);

// Chat completion API
const out = await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
});
console.log(out.choices[0].message);

// Streaming chat completion API
for await (const chunk of client.chatCompletionStream({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
})) {
  console.log(chunk.choices[0].delta.content);
}

// Using a third-party provider:
await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
  provider: "sambanova", // or together, fal-ai, replicate, cohere …
})

await client.textToImage({
  model: "black-forest-labs/FLUX.1-dev",
  inputs: "a picture of a green bird",
  provider: "fal-ai",
})



// You can also omit "model" to use the recommended model for the task
await client.translation({
  inputs: "My name is Wolfgang and I live in Amsterdam",
  parameters: {
    src_lang: "en",
    tgt_lang: "fr",
  },
});

// pass multimodal files or URLs as inputs
await client.imageToText({
  model: 'nlpconnect/vit-gpt2-image-captioning',
  data: await (await fetch('https://picsum.photos/300/300')).blob(),
})

// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2Client = client.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2Client.textGeneration({ inputs: 'The answer to the universe is' });

// Chat Completion
const llamaEndpoint = client.endpoint(
  "https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.1-8B-Instruct"
);
const out = await llamaEndpoint.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
});
console.log(out.choices[0].message);

@huggingface/hub examples

import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";

const HF_TOKEN = "hf_...";

await createRepo({
  repo: "my-user/nlp-model", // or { type: "model", name: "my-user/nlp-test" },
  accessToken: HF_TOKEN
});

await uploadFile({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN,
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

await deleteFiles({
  repo: { type: "space", name: "my-user/my-space" }, // or "spaces/my-user/my-space"
  accessToken: HF_TOKEN,
  paths: ["README.md", ".gitattributes"]
});

@huggingface/mcp-client example

import { Agent } from '@huggingface/mcp-client';

const HF_TOKEN = "hf_...";

const agent = new Agent({
  provider: "auto",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: HF_TOKEN,
  servers: [
    {
      // Playwright MCP
      command: "npx",
      args: ["@playwright/mcp@latest"],
    },
  ],
});

await agent.loadTools();

for await (const chunk of agent.run("What are the top 5 trending models on Hugging Face?")) {
    if ("choices" in chunk) {
        const delta = chunk.choices[0]?.delta;
        if (delta.content) {
            console.log(delta.content);
        }
    }
}

There are more features of course, check each library's README!

Formatting & testing

sudo corepack enable
pnpm install

pnpm -r format:check
pnpm -r lint:check
pnpm -r test

Building

pnpm -r build

This will generate ESM and CJS JavaScript files in packages/*/dist, e.g. packages/inference/dist/index.mjs.
