Top Related Projects
Synthetic data generation for tabular data
1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Low-code framework for building custom LLMs, neural networks, and other AI models
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
An Open Source Machine Learning Framework for Everyone
Quick Overview
YData Synthetic is an open-source library for generating synthetic data using various machine learning techniques, including GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). It aims to provide a solution for creating high-quality synthetic data that preserves the statistical properties of the original dataset while ensuring privacy and data augmentation.
Pros
- Offers multiple synthetic data generation techniques, including GANs and VAEs
- Supports both tabular and time series data
- Provides pre-processing and post-processing utilities for data handling
- Includes privacy preservation features to protect sensitive information
Cons
- Limited documentation and examples for some advanced features
- Requires a good understanding of machine learning concepts for optimal use
- May have a steeper learning curve for users new to synthetic data generation
- Performance can vary depending on the complexity and size of the input data
Code Examples
- Loading and preprocessing data:
from ydata_synthetic.preprocessing.regular.processor import RegularDataProcessor
# Load and preprocess data
data_prep = RegularDataProcessor(df)
processed_data = data_prep.process()
- Training a GAN model:
from ydata_synthetic.synthesizers import RegularSynthesizer
# Initialize and train a GAN model
synthesizer = RegularSynthesizer(model='GAN', n_epochs=300)
synthesizer.fit(processed_data)
- Generating synthetic data:
# Generate synthetic samples
synthetic_data = synthesizer.sample(n_samples=1000)
# Inverse transform the data to original format
synthetic_data = data_prep.inverse_transform(synthetic_data)
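The process / inverse_transform round trip above can be illustrated with a dependency-free toy sketch. The `MinMaxColumn` helper below is hypothetical and far simpler than the library's actual processor; it only shows why the inverse step is needed to get synthetic samples back into the original data's scale:

```python
# Toy fit/transform/inverse_transform round trip, mimicking the shape
# of a data processor. Hypothetical helper, not the library's code.
class MinMaxColumn:
    def fit(self, values):
        self.lo, self.hi = min(values), max(values)
        return self

    def transform(self, values):
        span = (self.hi - self.lo) or 1.0
        return [(v - self.lo) / span for v in values]

    def inverse_transform(self, scaled):
        span = (self.hi - self.lo) or 1.0
        return [s * span + self.lo for s in scaled]

ages = [22, 35, 58, 41]
scaler = MinMaxColumn().fit(ages)
scaled = scaler.transform(ages)              # values in [0, 1]
restored = scaler.inverse_transform(scaled)  # back to the original scale
```

A synthesizer trains and samples in the scaled space; `inverse_transform` maps those samples back to interpretable values.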
Getting Started
To get started with YData Synthetic, follow these steps:
- Install the library:
pip install ydata-synthetic
- Import necessary modules and load your data:
import pandas as pd
from ydata_synthetic.preprocessing.regular.processor import RegularDataProcessor
from ydata_synthetic.synthesizers import RegularSynthesizer
# Load your data
df = pd.read_csv('your_data.csv')
# Preprocess the data
data_prep = RegularDataProcessor(df)
processed_data = data_prep.process()
# Train a synthesizer
synthesizer = RegularSynthesizer(model='GAN', n_epochs=300)
synthesizer.fit(processed_data)
# Generate synthetic data
synthetic_data = synthesizer.sample(n_samples=1000)
synthetic_data = data_prep.inverse_transform(synthetic_data)
This quick start guide demonstrates how to load data, preprocess it, train a GAN model, and generate synthetic samples.
Competitor Comparisons
Synthetic data generation for tabular data
Pros of SDV
- More comprehensive suite of synthetic data generation tools
- Better documentation and tutorials for beginners
- Larger community and more frequent updates
Cons of SDV
- Can be slower for large datasets
- More complex setup and configuration
- Steeper learning curve for advanced features
Code Comparison
SDV:
from sdv import Metadata, SDV
metadata = Metadata()
metadata.add_table('table_name', data)
sdv = SDV()
sdv.fit(metadata)
synthetic_data = sdv.sample('table_name')
ydata-synthetic:
from ydata_synthetic.synthesizers import ModelParameters, RegularSynthesizer
synthesizer = RegularSynthesizer(ModelParameters())
synthesizer.fit(data)
synthetic_data = synthesizer.sample(num_samples)
Both libraries offer straightforward APIs for generating synthetic data, but SDV requires more setup with metadata definition. ydata-synthetic provides a more streamlined approach for single-table scenarios.
1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.
Pros of ydata-profiling
- Comprehensive data profiling and reporting capabilities
- Generates interactive HTML reports for easy data exploration
- Supports various data formats and integrates well with pandas DataFrames
Cons of ydata-profiling
- Focused solely on data profiling, lacking synthetic data generation features
- May require more computational resources for large datasets
- Limited customization options compared to ydata-synthetic
Code Comparison
ydata-profiling:
from ydata_profiling import ProfileReport
profile = ProfileReport(df, title="Profiling Report")
profile.to_file("report.html")
ydata-synthetic:
from ydata_synthetic.synthesizers import RegularSynthesizer
synthesizer = RegularSynthesizer(model_parameters, n_cpu=4)
synthetic_data = synthesizer.fit_sample(data)
ydata-profiling excels in data analysis and visualization, providing detailed insights into dataset characteristics. It generates comprehensive reports but lacks synthetic data generation capabilities. On the other hand, ydata-synthetic focuses on creating synthetic datasets, offering more flexibility in data generation but with fewer built-in profiling features. The choice between the two depends on whether the primary need is data analysis (ydata-profiling) or synthetic data generation (ydata-synthetic).
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Pros of fairseq
- Comprehensive toolkit for sequence modeling tasks
- Supports a wide range of architectures and pre-trained models
- Highly customizable and extensible for research purposes
Cons of fairseq
- Steeper learning curve due to its complexity
- Primarily focused on natural language processing tasks
- Requires more computational resources for training and inference
Code Comparison
fairseq:
from fairseq.models.transformer import TransformerModel
model = TransformerModel.from_pretrained('/path/to/model')
tokens = model.encode('Hello world')
output = model.decode(tokens)
ydata-synthetic:
from ydata_synthetic.synthesizers import ModelParameters, RegularSynthesizer
synthesizer = RegularSynthesizer(ModelParameters())
synthetic_data = synthesizer.fit_sample(real_data)
Key Differences
- fairseq is primarily designed for NLP tasks, while ydata-synthetic focuses on generating synthetic data
- fairseq offers more flexibility and customization options, but ydata-synthetic is more user-friendly for data generation tasks
- fairseq requires more setup and configuration, whereas ydata-synthetic provides a simpler API for quick implementation
Low-code framework for building custom LLMs, neural networks, and other AI models
Pros of ludwig
- More versatile, supporting a wide range of machine learning tasks beyond synthetic data generation
- Offers a user-friendly declarative machine learning tool that requires minimal coding
- Has a larger community and more frequent updates
Cons of ludwig
- Steeper learning curve due to its broader scope and capabilities
- May be overkill for projects focused solely on synthetic data generation
- Requires more computational resources for complex models
Code comparison
ludwig:
from ludwig.api import LudwigModel
model = LudwigModel(config)
results = model.train(dataset=train_data)
predictions = model.predict(dataset=test_data)
ydata-synthetic:
from ydata_synthetic.synthesizers import ModelParameters, RegularSynthesizer
synthesizer = RegularSynthesizer(ModelParameters())
synthetic_data = synthesizer.fit_sample(data)
Summary
ludwig is a more comprehensive machine learning framework that can handle various tasks, including synthetic data generation. It offers greater flexibility but may be more complex for users solely interested in generating synthetic data. ydata-synthetic, on the other hand, is specifically designed for synthetic data generation, making it more straightforward for this particular use case but less versatile overall.
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
Pros of CNTK
- More mature and established deep learning framework with extensive documentation
- Supports a wider range of neural network architectures and algorithms
- Offers high-performance distributed training across multiple GPUs and machines
Cons of CNTK
- Less active development and community support compared to newer frameworks
- Steeper learning curve for beginners in deep learning
- Limited focus on synthetic data generation compared to ydata-synthetic
Code Comparison
CNTK example (creating a simple neural network):
import cntk as C
input = C.input_variable(2)
output = C.layers.Dense(1)(input)
model = C.sigmoid(output)
ydata-synthetic example (generating synthetic data):
from ydata_synthetic.synthesizers import ModelParameters, RegularSynthesizer
synthesizer = RegularSynthesizer(ModelParameters())
synthetic_data = synthesizer.fit_sample(data)
While CNTK focuses on building and training neural networks, ydata-synthetic specializes in generating synthetic data for various use cases. CNTK provides a comprehensive toolkit for deep learning tasks, whereas ydata-synthetic offers a more targeted solution for creating artificial datasets to augment or replace real data in machine learning projects.
An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Extensive ecosystem with a wide range of tools and libraries
- Strong support for production deployment and scalability
- Comprehensive documentation and large community support
Cons of TensorFlow
- Steeper learning curve for beginners
- Can be more complex and verbose for simple tasks
- Slower development cycle compared to more lightweight frameworks
Code Comparison
TensorFlow:
import tensorflow as tf
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
ydata-synthetic:
from ydata_synthetic.synthesizers import ModelParameters, RegularSynthesizer
synthesizer = RegularSynthesizer(ModelParameters())
synthesizer.fit(data)
synthetic_data = synthesizer.sample(n_samples)
Summary
TensorFlow is a comprehensive deep learning framework with a vast ecosystem, while ydata-synthetic is a specialized library for generating synthetic data. TensorFlow offers more flexibility and scalability for general machine learning tasks, but ydata-synthetic provides a simpler, more focused approach for synthetic data generation. The choice between the two depends on the specific requirements of your project and your familiarity with each framework.
YData Synthetic
A package to generate synthetic tabular and time-series data leveraging state-of-the-art generative models.
The exciting features:
These are must try features when it comes to synthetic data generation:
- A new Streamlit app that delivers the synthetic data generation experience through a UI, providing a low-code way to quickly generate synthetic data
- A new fast synthetic data generation model based on Gaussian Mixture, so you can get started with synthetic data generation without the need for a GPU
- A conditional architecture for tabular data, CTGAN, which makes synthetic data generation easier and of higher quality!
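The Gaussian Mixture idea behind the fast model can be sketched without a GPU: pick a mixture component proportionally to its weight, then draw from that component's Gaussian. The weights, means, and standard deviations below are made up for illustration; in practice they are fitted to the real data (e.g. via expectation-maximization):

```python
import random

random.seed(7)

# A 1-D Gaussian Mixture: component weights, means, std-devs.
# Illustrative parameters only; real models fit these to the data.
weights = [0.7, 0.3]
means   = [20.0, 65.0]
stds    = [5.0, 8.0]

def sample_gmm(n):
    out = []
    for _ in range(n):
        # Pick a component proportionally to its weight...
        (k,) = random.choices(range(len(weights)), weights=weights)
        # ...then draw from that component's Gaussian.
        out.append(random.gauss(means[k], stds[k]))
    return out

synthetic_ages = sample_gmm(1000)
```

Because sampling is just a weighted draw plus a Gaussian draw, generation is cheap enough to run on a laptop CPU.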
Synthetic data
What is synthetic data?
Synthetic data is artificially generated data that is not collected from real world events. It replicates the statistical components of real data without containing any identifiable information, ensuring individuals' privacy.
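The idea of replicating statistical components without copying records can be shown with a minimal, library-free sketch: estimate a column's mean and standard deviation, then draw synthetic values from that fitted distribution. This is deliberately much simpler than what GAN-based synthesizers do, but the principle is the same:

```python
import random
import statistics

random.seed(0)

# "Real" column: incomes drawn from some unknown process.
real = [random.gauss(50_000, 8_000) for _ in range(5_000)]

# Fit a simple parametric model to the real data...
mu, sigma = statistics.mean(real), statistics.stdev(real)

# ...and sample synthetic records from the fitted model. No real
# record is copied, but the statistics are approximately preserved.
synthetic = [random.gauss(mu, sigma) for _ in range(5_000)]
```

Deep generative models extend this idea to many correlated columns and mixed data types, where a single Gaussian is no longer enough.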
Why Synthetic Data?
Synthetic data can be used for many applications:
- Privacy compliance for data-sharing and Machine Learning development
- Remove bias
- Balance datasets
- Augment datasets
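For example, the "balance datasets" use case amounts to adding minority-class rows until the classes are even. The sketch below balances a toy dataset by naive resampling with replacement; a GAN or Gaussian Mixture synthesizer would instead generate novel minority rows rather than copying existing ones:

```python
import random
from collections import Counter

random.seed(42)

# Imbalanced toy dataset: (features, label) pairs.
rows = [((i, i % 3), "majority") for i in range(90)] + \
       [((i, i % 3), "minority") for i in range(10)]

counts = Counter(label for _, label in rows)
deficit = counts["majority"] - counts["minority"]

# Naive balancing: resample minority rows with replacement.
minority_rows = [r for r in rows if r[1] == "minority"]
rows += random.choices(minority_rows, k=deficit)
```

After balancing, both classes have the same number of rows, which typically helps classifiers that would otherwise ignore the minority class.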
Looking for an end-to-end solution to Synthetic Data Generation?
YData Fabric enables the generation of high-quality datasets within a full UI experience, from data preparation to synthetic data generation and evaluation.
Check out the Community Version.
ydata-synthetic
This repository contains material related to architectures and models for synthetic data, from Generative Adversarial Networks (GANs) to Gaussian Mixtures. The repo includes a full ecosystem for synthetic data generation, with different models for generating structured synthetic data and time-series. All the deep learning models are implemented with TensorFlow 2. Several example Jupyter Notebooks and Python scripts are included to show how to use the different architectures.
Are you ready to learn more about synthetic data and the best practices for synthetic data generation?
Quickstart
The source code is currently hosted on GitHub at: https://github.com/ydataai/ydata-synthetic
Binary installers for the latest released version are available at the Python Package Index (PyPI).
pip install ydata-synthetic
The UI guide for synthetic data generation
YData Synthetic now has a UI to guide you through the steps and inputs to generate structured tabular data. The Streamlit app is available from v1.0.0 onwards and supports the following flows:
- Train a synthesizer model
- Generate & profile synthetic data samples
Installation
pip install ydata-synthetic[streamlit]
Quickstart
Use the code snippet below in a Python file (Jupyter Notebooks are not supported):
from ydata_synthetic import streamlit_app
streamlit_app.run()
Or use the file streamlit_app.py that can be found in the examples folder.
python -m streamlit_app
The below models are supported:
- CGAN
- WGAN
- WGANGP
- DRAGAN
- CRAMER
- CTGAN
Examples
Here you can find usage examples of the package and models to synthesize tabular data.
- Fast tabular data synthesis on adult census income dataset
- Tabular synthetic data generation with CTGAN on adult census income dataset
- Time Series synthetic data generation with TimeGAN on stock dataset
- Time Series synthetic data generation with DoppelGANger on FCC MBA dataset
- More examples are continuously added and can be found in the /examples directory.
Datasets for you to experiment
Here are some example datasets for you to try with the synthesizers:
Tabular datasets
Sequential datasets
Project Resources
In this repository you can find several GAN architectures that are used to create the synthesizers:
Tabular data
- GAN
- CGAN (Conditional GAN)
- WGAN (Wasserstein GAN)
- WGAN-GP (Wasserstein GAN with Gradient Penalty)
- DRAGAN (On Convergence and Stability of GANs)
- Cramer GAN (The Cramer Distance as a Solution to Biased Wasserstein Gradients)
- CWGAN-GP (Conditional Wasserstein GAN with Gradient Penalty)
- CTGAN (Conditional Tabular GAN)
- Gaussian Mixture
Sequential data
- TimeGAN
- DoppelGANger
Contributing
We are open to collaboration! If you want to start contributing you only need to:
- Search for an issue in which you would like to work. Issues for newcomers are labeled with good first issue.
- Create a PR solving the issue.
- We will review every PR and either accept it or ask for revisions.
Support
For support in using this library, please join our Discord server. Our Discord community is very friendly and great about quickly answering questions about the use and development of the library. Click here to join our Discord community!
FAQs
Have a question? Check out the Frequently Asked Questions about ydata-synthetic. If you feel something is missing, feel free to book a beary informal chat with us.
License