
microsoft/data-formulator

🪄 Create rich visualizations with AI


Top Related Projects

  • transformers: 🤗 Transformers, the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
  • DeepSpeed: a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • FLAML: a fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
  • ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator.
  • BERT: TensorFlow code and pre-trained models for BERT.
  • PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration.

Quick Overview

Data Formulator is an open-source research prototype from Microsoft for exploring and visualizing data with AI. Analysts describe the charts they want through a blend of drag-and-drop UI interactions and natural language, and AI agents transform the data and generate the visualizations.

Pros

  • Combines UI interactions and natural language to express visualization intents
  • Loads data from many sources (CSV/TSV/XLSX files, screenshots, text blocks, databases)
  • AI agents can transform data, recommend charts, and generate reports
  • Generated results are inspectable: data, formulas, explanations, and code

Cons

  • Research prototype; the interface and APIs are still evolving
  • Requires access to an LLM (OpenAI, Azure, Anthropic, Ollama, and others via LiteLLM)
  • AI-generated transformations should be verified before relying on them
  • Smaller community than established visualization tools

Code Examples

Note: the snippets below sketch a hypothetical Schema/Transform API for illustration only; the actual data_formulator package ships a web application (see Get Started below), not this library interface.

# Define a simple schema
from data_formulator import Schema

user_schema = Schema({
    "name": str,
    "age": int,
    "email": str
})

# Validate data against the schema
valid_user = {"name": "John Doe", "age": 30, "email": "john@example.com"}
user_schema.validate(valid_user)  # Returns True

invalid_user = {"name": "Jane Doe", "age": "25", "email": "jane@example.com"}
user_schema.validate(invalid_user)  # Raises ValidationError

# Transform data using a schema
from data_formulator import Schema, Transform

transform_schema = Schema({
    "full_name": Transform(lambda x: x.upper()),
    "age": Transform(lambda x: x * 2)
})

data = {"full_name": "John Doe", "age": 30}
transformed_data = transform_schema.apply(data)
# Result: {"full_name": "JOHN DOE", "age": 60}

# Create a nested schema
nested_schema = Schema({
    "user": {
        "name": str,
        "address": {
            "street": str,
            "city": str,
            "country": str
        }
    },
    "orders": [int]
})

# Validate nested data
nested_data = {
    "user": {
        "name": "Alice",
        "address": {
            "street": "123 Main St",
            "city": "Anytown",
            "country": "USA"
        }
    },
    "orders": [1001, 1002, 1003]
}
nested_schema.validate(nested_data)  # Returns True
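For something runnable today, the hypothetical validate behavior sketched above can be approximated in a few lines of plain Python, with no data_formulator import required (the function name and schema layout here are our own, for illustration):

```python
def validate(schema: dict, data: dict) -> bool:
    """Recursively check that each value in `data` matches the type,
    nested schema, or list element type declared in `schema`."""
    for key, expected in schema.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        value = data[key]
        if isinstance(expected, dict):
            validate(expected, value)  # recurse into nested schema
        elif isinstance(expected, list):
            elem_type = expected[0]
            if not all(isinstance(v, elem_type) for v in value):
                raise TypeError(f"{key}: expected a list of {elem_type.__name__}")
        elif not isinstance(value, expected):
            raise TypeError(f"{key}: expected {expected.__name__}")
    return True

user_schema = {"name": str, "age": int, "email": str}
validate(user_schema, {"name": "John Doe", "age": 30, "email": "john@example.com"})  # True
```

Passing `{"age": "25"}` against `{"age": int}` raises a TypeError, mirroring the ValidationError behavior described above.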

Getting Started

To get started with Data Formulator, follow these steps (steps 2-4 continue the illustrative Schema API from above; the installed package itself launches a web app via python -m data_formulator):

  1. Install the library:

    pip install data-formulator
    
  2. Import the necessary modules:

    from data_formulator import Schema, Transform
    
  3. Define your schema:

    my_schema = Schema({
        "name": str,
        "age": int,
        "email": str
    })
    
  4. Use the schema to validate or transform data:

    data = {"name": "Alice", "age": 28, "email": "alice@example.com"}
    my_schema.validate(data)
    

For more advanced usage and features, refer to the project's documentation and examples in the GitHub repository.

Competitor Comparisons

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

Pros of transformers

  • Extensive library of pre-trained models for various NLP tasks
  • Active community and frequent updates
  • Comprehensive documentation and examples

Cons of transformers

  • Larger library size and potential overhead for simpler projects
  • Steeper learning curve for beginners

Code comparison

data-formulator (hypothetical API, shown for illustration):

from data_formulator import DataFormulator

df = DataFormulator()
result = df.generate_data("Create a list of 5 fruits")
print(result)

transformers:

from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')
result = generator("List 5 fruits:", max_length=50)
print(result[0]['generated_text'])

Summary

transformers is a comprehensive library for NLP tasks with a wide range of pre-trained models, while data-formulator focuses on data generation. transformers offers more flexibility and options for various NLP applications, but may be more complex for simple tasks. data-formulator provides a straightforward approach to data generation, which could be beneficial for specific use cases.


DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for distributed training of large models
  • Supports a wide range of hardware configurations and model architectures
  • Extensive documentation and active community support

Cons of DeepSpeed

  • Steeper learning curve for beginners
  • May be overkill for smaller projects or simpler models
  • Requires more setup and configuration compared to Data Formulator

Code Comparison

Data Formulator (hypothetical API, shown for illustration):

from data_formulator import DataFormulator

df = DataFormulator()
df.load_data("dataset.csv")
df.preprocess()
df.train_model()

DeepSpeed:

import deepspeed
import torch

model = MyModel()
# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
for batch in dataloader:
    loss = model_engine(batch)
    model_engine.backward(loss)
    model_engine.step()

Key Differences

  • Data Formulator focuses on simplifying data preprocessing and model training for tabular data
  • DeepSpeed is designed for large-scale distributed training of deep learning models
  • Data Formulator provides a higher-level API, while DeepSpeed offers more fine-grained control
  • DeepSpeed is better suited for advanced users and complex projects, while Data Formulator caters to beginners and simpler use cases

A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.

Pros of FLAML

  • More comprehensive AutoML toolkit with support for various tasks (classification, regression, time series forecasting, etc.)
  • Efficient hyperparameter tuning with cost-aware search algorithms
  • Active development and regular updates

Cons of FLAML

  • Steeper learning curve due to more advanced features
  • May be overkill for simpler data processing tasks
  • Requires more computational resources for complex optimizations

Code Comparison

FLAML example:

from flaml import AutoML
automl = AutoML()
automl.fit(X_train, y_train, task="classification")
predictions = automl.predict(X_test)

Data-Formulator example (hypothetical API, shown for illustration):

from data_formulator import DataFormulator
df = DataFormulator(data)
df.process()
result = df.get_result()

Summary

FLAML is a more comprehensive AutoML toolkit suitable for various machine learning tasks, while Data-Formulator focuses on data processing and transformation. FLAML offers advanced features like efficient hyperparameter tuning but may have a steeper learning curve. Data-Formulator is simpler to use for basic data manipulation tasks but lacks the advanced ML capabilities of FLAML. Choose based on your specific needs and project complexity.

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Pros of ONNX Runtime

  • Widely adopted and supported across multiple platforms and frameworks
  • Optimized for high-performance inference on various hardware
  • Extensive documentation and community support

Cons of ONNX Runtime

  • Larger codebase and more complex setup compared to Data Formulator
  • Primarily focused on inference, not data preprocessing or transformation

Code Comparison

Data Formulator (hypothetical API, shown for illustration):

from data_formulator import DataFormulator

df = DataFormulator()
df.add_column("new_column", lambda row: row["existing_column"] * 2)
transformed_data = df.transform(input_data)

ONNX Runtime:

import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
result = session.run([output_name], {input_name: input_data})[0]

Summary

ONNX Runtime is a powerful inference engine for machine learning models, while Data Formulator focuses on data preprocessing and transformation. ONNX Runtime offers broader platform support and optimization capabilities, but may be overkill for simpler data manipulation tasks. Data Formulator provides a more straightforward approach to data transformation but lacks the extensive inference capabilities of ONNX Runtime.


TensorFlow code and pre-trained models for BERT

Pros of BERT

  • Widely adopted and extensively researched in the NLP community
  • Pre-trained models available for various languages and tasks
  • Extensive documentation and community support

Cons of BERT

  • Requires significant computational resources for training and fine-tuning
  • May be overkill for simpler NLP tasks
  • Limited flexibility for customizing the model architecture

Code Comparison

BERT example:

import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')

Data Formulator example (hypothetical API, shown for illustration):

from data_formulator import DataFormulator

formulator = DataFormulator()
formulated_data = formulator.formulate(input_data)

While BERT focuses on natural language processing tasks, Data Formulator appears to be a tool for data manipulation and transformation. BERT provides pre-trained models for various NLP tasks, whereas Data Formulator seems to offer a more flexible approach to data formatting and preparation. The choice between the two would depend on the specific requirements of your project and the nature of the data you're working with.


Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • Widely adopted and supported by a large community
  • Extensive ecosystem of tools and libraries
  • Flexible and intuitive for dynamic neural networks

Cons of PyTorch

  • Steeper learning curve for beginners
  • Larger memory footprint compared to some alternatives
  • Can be slower for certain operations on CPU

Code Comparison

Data-Formulator (hypothetical API, shown for illustration):

from data_formulator import DataFormulator

df = DataFormulator()
df.load_data("dataset.csv")
df.preprocess()

PyTorch:

import torch
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

While Data-Formulator focuses on simplifying data preprocessing and formatting, PyTorch provides a comprehensive framework for building and training neural networks. Data-Formulator offers a more streamlined approach for data preparation, while PyTorch requires more setup but offers greater flexibility and power for complex machine learning tasks.


README

Data Formulator: AI-powered Data Visualization

🪄 Explore data with visualizations, powered by AI agents.

Try Online Demo   Install Locally

arXiv | License: MIT | YouTube | build | Discord

News 🔥🔥🔥

[01-25-2025] Data Formulator 0.6 — Real-time insights from live data

  • ⚡ Connect to live data: connect to URLs and databases with automatic refresh intervals. Visualizations update automatically as your data changes, giving you live insights. Demo: track the International Space Station's position and speed live
  • 🎨 UI Updates: Unified UI for data loading; direct drag-and-drop fields from the data table to update visualization designs.

[12-08-2025] Data Formulator 0.5.1 — Connect more, visualize more, move faster

  • 🔌 Community data loaders: Google BigQuery, MySQL, Postgres, MongoDB
  • 📊 New chart types: US Map & Pie Chart (more to be added soon)
  • ✏️ Editable reports: Refine generated reports with Chartifact in markdown style. demo
  • ⚡ Snappier UI: Noticeably faster interactions across the board

[11-07-2025] Data Formulator 0.5: Vibe with your data, in control

  • 📊 Load (almost) any data: load structured data, extract data from screenshots, from messy text blocks, or connect to databases.
  • 🤖 Explore data with AI agents: Use agent mode for hands-off exploration, or stay in control in interactive mode.
  • ✅ Verify AI generated results: interact with charts and inspect data, formulas, explanations, and code.
  • 📝 Create reports to share insights: choose charts you want to share, and ask agents to create reports grounded in data formulated throughout exploration.

Previous Updates

Here are the milestones that led to the current design:

  • v0.2.2 (Demo): Goal-driven exploration with agent recommendations and performance improvements
  • v0.2.1.3/4 (Readme | Demo): External data loaders (MySQL, PostgreSQL, MSSQL, Azure Data Explorer, S3, Azure Blob)
  • v0.2 (Demos): Large data support with DuckDB integration
  • v0.1.7 (Demos): Dataset anchoring for cleaner workflows
  • v0.1.6 (Demo): Multi-table support with automatic joins
  • Model Support: OpenAI, Azure, Ollama, Anthropic via LiteLLM (feedback)
  • Python Package: Easy local installation (try it)
  • Visualization Challenges: Test your skills (challenges)
  • Data Extraction: Parse data from images and text (demo)
  • Initial Release: Blog | Video
View detailed update history
  • [07-10-2025] Data Formulator 0.2.2: Start with an analysis goal

    • Some key frontend performance updates.
    • You can start your exploration with a goal, or hit tab to see if the agent can recommend some good exploration ideas for you. Demo
  • [05-13-2025] Data Formulator 0.2.1.3/4: External Data Loader

    • We introduced an external data loader class to make importing data easier. Readme and Demo
      • Current data loaders: MySQL, Azure Data Explorer (Kusto), Azure Blob and Amazon S3 (json, parquet, csv).
      • [07-01-2025] Updated with: PostgreSQL, MSSQL.
    • Call for action:
      • Users: let us know which data source you'd like to load data from.
      • Developers: let's build more data loaders.
  • [04-23-2025] Data Formulator 0.2: working with large data 📦📦📦

    • Explore large data by:
      1. Upload large data file to the local database (powered by DuckDB).
      2. Use drag-and-drop to specify charts, and Data Formulator dynamically fetches data from the database to create visualizations (with ⚡️⚡️⚡️ speeds).
      3. Work with AI agents: they generate SQL queries to transform the data to create rich visualizations!
      4. Anchor the result / follow up / create a new branch / join tables; let's dive deeper.
    • Check out the demos at [https://github.com/microsoft/data-formulator/releases/tag/0.2]
    • Improved overall system performance; enjoy the updated derive-concept functionality.
  • [03-20-2025] Data Formulator 0.1.7: Anchoring ⚓︎

    • Anchor an intermediate dataset, so that follow-up data analyses are built on top of the anchored data, not the original one.
    • Clean a dataset and work only with the cleaned data; create a subset of the original data or join multiple datasets, then go from there. AI agents will be less likely to get confused and will work faster. ⚡️⚡️
    • Check out the demos at [https://github.com/microsoft/data-formulator/releases/tag/0.1.7]
    • Don't forget to update Data Formulator to test it out!
  • [02-20-2025] Data Formulator 0.1.6 released!

    • Now supports working with multiple datasets at once! Tell Data Formulator which data tables you would like to use in the encoding shelf, and it will figure out how to join the tables to create a visualization to answer your question. 🪄
    • Check out the demo at [https://github.com/microsoft/data-formulator/releases/tag/0.1.6].
    • Update your Data Formulator to the latest version to play with the new features.
  • [02-12-2025] More models supported now!

    • Now supports OpenAI, Azure, Ollama, and Anthropic models (and more powered by LiteLLM);
    • Models with strong code generation and instruction following capabilities are recommended (gpt-4o, claude-3-5-sonnet etc.);
    • You can store API keys in api-keys.env to avoid typing them every time (see template api-keys.env.template).
    • Let us know which models you have good/bad experiences with, and what models you would like to see supported! [comment here]
  • [11-07-2024] Minor fun update: data visualization challenges!

    • We added a few visualization challenges with the sample datasets. Can you complete them all? [try them out!]
    • Comment in the issue when you did, or share your results/questions with others! [comment here]
  • [10-11-2024] Data Formulator python package released!

    • You can now install Data Formulator as a Python package and easily run it locally. [check it out].
    • Our Codespaces configuration is also updated for fast start up ⚡️. [try it now!]
    • New experimental feature: load an image or a messy text, and ask AI to parse and clean it for you(!). [demo]
  • [10-01-2024] Initial release of Data Formulator, check out our [blog] and [video]!

Overview

Data Formulator is a Microsoft Research prototype for data exploration with visualizations powered by AI agents.

Data Formulator enables analysts to iteratively explore and visualize data. Starting with data in any format (screenshot, text, CSV, or database), users work with AI agents through a novel blended interface that combines user interface (UI) interactions and natural language (NL) inputs to communicate their intents, control branching exploration directions, and create reports to share their insights.

Get Started

Play with Data Formulator with one of the following options:

  • Option 1: Install via Python PIP

    Use Python PIP for an easy setup experience, running locally (recommended: install it in a virtual environment).

    # install data_formulator
    pip install data_formulator
    
    # Run data formulator with this command
    python -m data_formulator
    

    Data Formulator will be automatically opened in the browser at http://localhost:5000.

    You can specify the port number (e.g., 8080) with python -m data_formulator --port 8080 if the default port is occupied.

  • Option 2: Codespaces (5 minutes)

    You can also run Data Formulator in Codespaces; we have everything pre-configured. For more details, see CODESPACES.md.

    Open in GitHub Codespaces

  • Option 3: Working in the developer mode

    You can build Data Formulator locally if you prefer full control over your development environment and develop your own version on top. For detailed instructions, refer to DEVELOPMENT.md.

Using Data Formulator

Load Data

Besides uploading CSV, TSV, or XLSX files that contain structured data, you can ask Data Formulator to extract data from screenshots, text blocks, or websites, or load data from databases using connectors. Then you are ready to explore.
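The database-backed workflow (Data Formulator uses DuckDB internally for large files) follows a common pattern: load rows into a local database once, then issue aggregate queries instead of pulling the whole table into memory. The sketch below illustrates that pattern with Python's standard-library sqlite3 rather than DuckDB; the table and column names are made up for illustration:

```python
import sqlite3

# Load rows into a local database once.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120.0), ("West", 80.0), ("East", 30.0)],
)

# A visualization front end would issue aggregate queries like this one,
# fetching only the summarized rows needed for the chart.
rows = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('East', 150.0), ('West', 80.0)]
```

The same shape of query is what Data Formulator's AI agents generate (as SQL) when transforming large datasets for a chart.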


Explore Data

There are four levels of data exploration, depending on whether you want more vibe or more control:

  • Level 1 (most control): Create charts with UI via drag-and-drop, if all fields to be visualized are already in the data.
  • Level 2: Specify chart designs with UI + natural language (NL). Describe how new fields should be visualized in your chart; AI will automatically transform the data to realize the design.
  • Level 3: Get recommendations: Ask AI agents to recommend charts directly from NL descriptions, or even directly ask for exploration ideas.
  • Level 4 (most vibe): In agent mode, provide a high-level goal and let AI agents automatically plan and explore data in multiple turns. Exploration threads will be created automatically.

https://github.com/user-attachments/assets/164aff58-9f93-4792-b8ed-9944578fbb72

  • Level 5: In practice, leverage all of them to keep up with both vibe and control!

Create Reports

Use the report builder to compose a report in the style you like, based on selected charts. Then share the reports with others!

Developers' Guide

Follow the developers' instructions to build your new data analysis tools on top of Data Formulator.


Research Papers

@article{wang2024dataformulator2iteratively,
  title={Data Formulator 2: Iteratively Creating Rich Visualizations with AI},
  author={Chenglong Wang and Bongshin Lee and Steven Drucker and Dan Marshall and Jianfeng Gao},
  journal={arXiv preprint arXiv:2408.16119},
  year={2024}
}
@article{wang2023data,
  title={Data Formulator: AI-powered Concept-driven Visualization Authoring},
  author={Wang, Chenglong and Thompson, John and Lee, Bongshin},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.