Top Related Projects
- transformers: 🤗 Transformers, the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
- DeepSpeed: a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- gpt-neox: an implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries.
- Llama: inference code for Llama models.
- BERT: TensorFlow code and pre-trained models for BERT.
Quick Overview
The eugeneyan/open-llms repository is a curated list of open Large Language Models (LLMs) available for commercial use. It provides an overview of various open-source LLMs, their characteristics, and licensing information, serving as a valuable resource for developers and researchers interested in working with these models.
Pros
- Comprehensive list of open LLMs with key details
- Regular updates to include new models and information
- Clear licensing information for each model
- Includes links to model repositories and relevant papers
Cons
- Limited technical details about model architecture and performance
- No direct code examples or implementation guidance
- May not include all available open LLMs
- Requires users to navigate to external sources for more in-depth information
Code Examples
This repository is not a code library, so code examples are not applicable.
Getting Started
This repository is a curated list and does not require installation or setup. To use the information:
- Visit the repository at https://github.com/eugeneyan/open-llms
- Browse the table of open LLMs (or parse it programmatically; see the sketch after this list)
- Click on the provided links for more information about specific models
- Check the licensing information before using any model in your project
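Since the list itself is the product, the table can also be consumed programmatically. Below is a minimal sketch that pulls the raw README and prints the first column of each table row; the raw URL and branch name are assumptions, and the table layout may change:

```python
import requests

# Raw README URL; assumes the default branch is named "main"
url = "https://raw.githubusercontent.com/eugeneyan/open-llms/main/README.md"
text = requests.get(url, timeout=10).text

# Keep markdown table rows, skipping the |---|---| separator lines
rows = [line for line in text.splitlines() if line.startswith("|") and "---" not in line]
for row in rows[:10]:
    cells = [c.strip() for c in row.strip("|").split("|")]
    print(cells[0])  # first column: the model or dataset name
```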
Competitor Comparisons
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Pros of transformers
- Comprehensive library with support for numerous models and tasks
- Extensive documentation and community support
- Regular updates and new model implementations
Cons of transformers
- Larger codebase, potentially more complex for beginners
- May include unnecessary features for users focused solely on LLMs
Code comparison
transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
open-llms:
```python
# No direct code implementation; primarily a curated list of open-source LLMs
```
Summary
transformers is a comprehensive library for various NLP tasks, including but not limited to LLMs. It offers extensive functionality, documentation, and community support. However, its broad scope may be overwhelming for users specifically interested in LLMs.
open-llms, on the other hand, is a curated list of open-source LLMs. It doesn't provide direct code implementation but serves as a valuable resource for discovering and comparing available open-source language models.
While transformers offers a complete toolkit for working with models, open-llms focuses on providing an overview of available open-source LLMs, making it easier for users to find and evaluate suitable models for their projects.
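To make the "complete toolkit" point concrete, here is a short generation sketch using the same gpt2 checkpoint as above; this is the standard transformers API, and the prompt is arbitrary:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize a prompt and sample a short continuation
inputs = tokenizer("Open LLMs are", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```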
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- Comprehensive optimization toolkit for deep learning
- Supports distributed training and inference
- Actively maintained with frequent updates
Cons of DeepSpeed
- Steeper learning curve due to complexity
- Requires more setup and configuration
- Primarily focused on performance optimization
Code Comparison
DeepSpeed:
```python
import deepspeed

# Wrap the model, optimizer, and distributed state in a single engine
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params,
)
```
open-llms:
```python
# No specific code implementation
# Primarily a curated list of open-source LLMs
```
Summary
DeepSpeed is a powerful toolkit for optimizing deep learning models, offering advanced features for distributed training and inference. It's actively maintained but requires more setup and expertise to use effectively. In contrast, open-llms is a curated list of open-source language models, serving as a reference rather than a tool. While DeepSpeed provides code for implementation, open-llms focuses on cataloging available models without specific code examples.
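For context on the "setup and configuration" point, deepspeed.initialize can also take its settings as a plain dict via the config argument. A minimal sketch follows; the toy model and the config values are illustrative, not tuned, and real jobs are usually launched with the deepspeed CLI:

```python
import torch
import deepspeed

# A toy model stands in for a real network
model = torch.nn.Linear(16, 16)

# Illustrative settings; real jobs tune batch size, precision, and ZeRO stage
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```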
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Pros of gpt-neox
- Comprehensive codebase for training large language models from scratch
- Includes advanced features like distributed training and model parallelism
- Actively maintained with regular updates and improvements
Cons of gpt-neox
- Steeper learning curve due to its complexity and advanced features
- Requires significant computational resources for training large models
- More focused on model training rather than providing a curated list of pre-trained models
Code comparison
gpt-neox:
```python
from megatron.neox_arguments import NeoXArgs
from megatron.training import pretrain

# Load training arguments from a YAML config (the path is a placeholder)
args_defaults = NeoXArgs.from_ymls(["configs/your_config.yml"])
pretrain(neox_args=args_defaults)
```
open-llms:
| Model | Parameters | Context | Architecture | License |
|-------|------------|---------|--------------|---------|
| GPT-J | 6B | 2048 | GPT-3 | Apache 2.0 |
| BLOOM | 176B | 2048 | BLOOM | BigScience RAIL 1.0 |
The gpt-neox repository provides a complete framework for training large language models, while open-llms serves as a curated list of open-source language models with their specifications. gpt-neox is more suitable for researchers and developers looking to train custom models, whereas open-llms is a valuable resource for those seeking information about existing open-source models.
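As a concrete pointer, gpt-neox is normally driven through its deepy.py launcher rather than imported as a library. Here is a hedged sketch of kicking off training from Python; it assumes you are inside a gpt-neox checkout, and the config filename is a placeholder:

```python
import subprocess

# gpt-neox's documented entry point; the YAML config path is a placeholder
subprocess.run(
    ["python", "./deepy.py", "train.py", "configs/your_config.yml"],
    check=True,
)
```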
Inference code for Llama models
Pros of Llama
- Official repository for Meta's Llama model, providing direct access to the latest updates and resources
- Includes comprehensive documentation and examples for using Llama in various applications
- Offers pre-trained models and fine-tuning scripts for specific tasks
Cons of Llama
- Limited to Llama models only, while open-llms covers a wider range of open-source language models
- Requires approval and licensing for access, unlike the open-llms repository which is freely accessible
- May have stricter usage restrictions compared to the models listed in open-llms
Code Comparison
Llama example (text completion via the repo's Llama.build API):
```python
from llama import Llama

# Llama.build loads checkpoint and tokenizer files (paths are placeholders)
generator = Llama.build(ckpt_dir="llama-2-7b/", tokenizer_path="tokenizer.model",
                        max_seq_len=512, max_batch_size=1)
print(generator.text_completion(["Hello, how are you?"])[0]["generation"])
```
open-llms doesn't provide direct code examples, as it's a curated list of open-source LLMs. Users would need to refer to the specific model repositories for implementation details.
Summary
Llama is the official repository for Meta's Llama model, offering direct access to resources and updates. However, it's limited to Llama models and requires approval for access. open-llms, on the other hand, provides a comprehensive list of various open-source language models, offering more flexibility but without direct implementation examples.
TensorFlow code and pre-trained models for BERT
Pros of BERT
- Developed by Google Research, offering high credibility and extensive documentation
- Focuses on a specific NLP model (BERT), providing in-depth implementation details
- Includes pre-trained models and fine-tuning scripts for various tasks
Cons of BERT
- Limited to BERT architecture, not covering other LLM types
- Less frequently updated compared to the open-llms repository
- Primarily research-oriented, which may be less accessible for practical applications
Code Comparison
BERT example (model initialization):
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
```
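For completeness, the model loaded above can be run to pull sentence embeddings; a minimal sketch using the standard transformers forward pass (torch assumed installed):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

inputs = tokenizer("Open LLMs are listed in one place.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state[:, 0].shape)  # [CLS] embedding, shape (1, 768)
```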
open-llms doesn't provide code examples as it's a curated list of open-source LLMs.
Summary
BERT is a focused repository for the BERT model, offering detailed implementation and pre-trained models. open-llms serves as a comprehensive list of various open-source LLMs, providing a broader overview of available models without specific implementations. BERT is ideal for those working specifically with BERT, while open-llms is better for exploring different LLM options.
README
Open LLMs
These LLMs (Large Language Models) are all licensed for commercial use (e.g., Apache 2.0, MIT, OpenRAIL-M). Contributions welcome!
Open LLMs for code
Open LLM datasets for pre-training
| Name | Release Date | Paper/Blog | Dataset | Tokens (T) | License |
|---|---|---|---|---|---|
| RedPajama | 2023/04 | RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens | RedPajama-Data | 1.2 | Apache 2.0 |
| starcoderdata | 2023/05 | StarCoder: A State-of-the-Art LLM for Code | starcoderdata | 0.25 | Apache 2.0 |
Open LLM datasets for instruction-tuning
| Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
|---|---|---|---|---|---|
| OIG (Open Instruction Generalist) | 2023/03 | THE OIG DATASET | OIG | 44,000 | Apache 2.0 |
| databricks-dolly-15k | 2023/04 | Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM | databricks-dolly-15k | 15 | CC BY-SA-3.0 |
| MPT-7B-Instruct | 2023/05 | Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs | dolly_hhrlhf | 59 | CC BY-SA-3.0 |
Open LLM datasets for alignment-tuning
| Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
|---|---|---|---|---|---|
| OpenAssistant Conversations Dataset | 2023/04 | OpenAssistant Conversations - Democratizing Large Language Model Alignment | oasst1 | 161 | Apache 2.0 |
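The datasets in the three tables above are mirrored on the Hugging Face Hub, so they can be pulled directly with the datasets library. A hedged sketch follows; the Hub IDs are assumptions based on the usual publisher orgs, so verify them before relying on this:

```python
from datasets import load_dataset

# Hub IDs assumed from the publisher orgs; verify before use
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
oasst = load_dataset("OpenAssistant/oasst1", split="train")

print(len(dolly), dolly[0]["instruction"][:60])
print(len(oasst), oasst[0]["text"][:60])
```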
Evals on open LLMs
- Leaderboard by lmsys.org
- Evals by MosaicML
- Holistic Evaluation of Language Models (HELM)
- LLM-Leaderboard
- TextSynth Server Benchmarks
- Open LLM Leaderboard by Hugging Face
What do the licences mean?
- Apache 2.0: Allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties.
- MIT: Similar to Apache 2.0 but shorter and simpler. Also, in contrast to Apache 2.0, does not require stating any significant changes to the original code.
- CC BY-SA-4.0: Allows (i) copying and redistributing the material and (ii) remixing, transforming, and building upon the material for any purpose, even commercially. But if you do the latter, you must distribute your contributions under the same license as the original. (Thus, may not be viable for internal teams.)
- OpenRAIL-M v1: Allows royalty-free access and flexible downstream use and sharing of the model and modifications of it, and comes with a set of use restrictions (see Attachment A)
- BSD-3-Clause: This version allows unlimited redistribution for any purpose as long as its copyright notices and the license's disclaimers of warranty are maintained.
Disclaimer: The information provided in this repo does not, and is not intended to, constitute legal advice. Maintainers of this repo are not responsible for the actions of third parties who use the models. Please consult an attorney before using models for commercial purposes.
Improvements
- Complete entries for context length, and check entries marked with ?
- Add number of tokens trained? (see considerations)
- Add (links to) training code?
- Add (links to) eval benchmarks?