Convert Figma logo to code with AI

vchoutas/smplx

SMPL-X

Top Related Projects

  • pyrender: Easy-to-use glTF 2.0-compliant OpenGL renderer for visualization of 3D scenes
  • EasyMocap: Make human motion capture easier
  • FrankMocap: A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator
  • OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation

Quick Overview

SMPLX is a Python library for working with the SMPL-X (eXpressive) human body model. It provides tools for manipulating and rendering 3D human body meshes, including support for body shape, pose, and facial expressions. The library is designed to be used in various applications such as computer vision, graphics, and animation.

Pros

  • Comprehensive implementation of the SMPL-X model, including body, hands, and face
  • Supports various body model versions (SMPL, SMPL+H, SMPL-X)
  • Efficient GPU acceleration for faster computations
  • Well-documented and actively maintained

Cons

  • Requires specific dependencies and data files, which can be challenging to set up
  • Limited to the SMPL-X family of models, not suitable for other 3D human body representations
  • May have a steep learning curve for users unfamiliar with 3D human body modeling
  • Licensing restrictions for commercial use

Code Examples

  1. Creating a SMPL-X model:
import torch
import smplx

model = smplx.create(model_path='path/to/smplx/models',
                     model_type='smplx',
                     gender='neutral',
                     use_face_contour=True,
                     num_betas=10,
                     num_expression_coeffs=10,
                     ext='npz')
  2. Generating a body mesh:
betas = torch.randn([1, 10], dtype=torch.float32)
expression = torch.randn([1, 10], dtype=torch.float32)
body_pose = torch.zeros([1, 63], dtype=torch.float32)  # 21 body joints x 3 (axis-angle)
global_orient = torch.zeros([1, 3], dtype=torch.float32)

output = model(betas=betas, expression=expression,
               body_pose=body_pose, global_orient=global_orient,
               return_verts=True)

vertices = output.vertices
joints = output.joints
  3. Visualizing the generated mesh:
import numpy as np
import trimesh

mesh = trimesh.Trimesh(vertices=vertices[0].detach().cpu().numpy(),
                       faces=model.faces,
                       process=False)
mesh.show()

Getting Started

  1. Install the library and its dependencies:

    pip install smplx[all]
    
  2. Download the SMPL-X model files from the official website and place them in a known directory.

  3. Create a SMPL-X model and generate a body mesh:

    import torch
    import smplx
    
    model = smplx.create(model_path='path/to/smplx/models', model_type='smplx')
    betas = torch.randn([1, 10], dtype=torch.float32)
    output = model(betas=betas, return_verts=True)
    vertices = output.vertices
    
  4. Visualize or process the generated mesh as needed for your application.

Competitor Comparisons

pyrender: Easy-to-use glTF 2.0-compliant OpenGL renderer for visualization of 3D scenes

Pros of pyrender

  • Focused on 3D rendering and visualization
  • Simpler to use for basic 3D scene rendering
  • Supports both CPU and GPU rendering

Cons of pyrender

  • More limited in scope, primarily for rendering
  • Less suitable for advanced body modeling tasks
  • Smaller community and fewer updates

Code Comparison

pyrender:

import pyrender
import trimesh

# Build a simple sphere mesh and open an interactive viewer.
mesh = pyrender.Mesh.from_trimesh(trimesh.primitives.Sphere())
scene = pyrender.Scene()
scene.add(mesh)
pyrender.Viewer(scene, use_raymond_lighting=True)

smplx:

import smplx
import torch

model = smplx.create(model_path='path/to/model', model_type='smplx')
output = model(betas=torch.rand(1, 10), expression=torch.rand(1, 10))
vertices = output.vertices.detach().cpu().numpy().squeeze()

Summary

pyrender is a lightweight 3D rendering library, while smplx focuses on parametric human body modeling. pyrender excels in simple 3D visualizations, whereas smplx provides advanced capabilities for generating and manipulating human body models. The choice between them depends on the specific requirements of your project, with pyrender being more suitable for general 3D rendering tasks and smplx for detailed human body modeling and animation.

EasyMocap: Make human motion capture easier

Pros of EasyMocap

  • More comprehensive motion capture pipeline, including multi-view capture and 3D reconstruction
  • Easier to use for beginners, with a more user-friendly interface and documentation
  • Supports a wider range of input formats and output options

Cons of EasyMocap

  • Less focused on body model accuracy compared to SMPLX
  • May have lower performance in single-view scenarios
  • Less integration with deep learning frameworks

Code Comparison

EasyMocap example:

from easymocap.dataset import CONFIG
from easymocap.pipeline import Pipeline

pipeline = Pipeline(CONFIG)
pipeline.run('path/to/images')

SMPLX example:

import torch
from smplx import SMPLX

model = SMPLX(model_path='path/to/model')
output = model(betas=torch.zeros(1, 10), 
               expression=torch.zeros(1, 10),
               return_verts=True)

EasyMocap provides a higher-level interface for motion capture tasks, while SMPLX offers more fine-grained control over the body model parameters. EasyMocap is better suited for end-to-end motion capture applications, whereas SMPLX is more appropriate for detailed body modeling and animation tasks.

FrankMocap: A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator

Pros of FrankMocap

  • Provides a complete pipeline for 3D human pose and shape estimation
  • Includes hand and face keypoint detection
  • Offers real-time performance for full-body motion capture

Cons of FrankMocap

  • Less flexible for customization and research purposes
  • Primarily focused on motion capture applications
  • Limited documentation for advanced usage and modifications

Code Comparison

SMPLX example:

import smplx
model = smplx.create(model_path, model_type='smplx')
output = model(betas=betas, expression=expression, return_verts=True)
vertices = output.vertices

FrankMocap example:

from frankmocap.mocap import FrankMocap
mocap = FrankMocap()
pred_output = mocap.regress(img)
body_mesh = pred_output['body_mesh']

Summary

SMPLX is a more flexible and research-oriented framework for 3D human body modeling, while FrankMocap provides a ready-to-use solution for motion capture applications. SMPLX offers greater customization options and is better suited for in-depth research, whereas FrankMocap excels in real-time performance and ease of use for full-body motion capture tasks.

OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation

Pros of OpenPose

  • Real-time multi-person keypoint detection
  • Supports 2D pose estimation for body, face, hands, and foot
  • Well-documented with extensive tutorials and examples

Cons of OpenPose

  • Limited to 2D pose estimation, lacks 3D body modeling
  • Requires more computational resources for real-time performance
  • Less flexibility in terms of body shape and pose parameters

Code Comparison

OpenPose example:

import cv2
import pyopenpose as op

params = dict()
params["model_folder"] = "../models/"
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
imageToProcess = cv2.imread("image.jpg")
datum.cvInputData = imageToProcess
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

print(datum.poseKeypoints)

SMPLX example:

import torch
import smplx

model = smplx.create(model_path='path/to/models', model_type='smplx')
betas = torch.randn([1, 10], dtype=torch.float32)
expression = torch.randn([1, 10], dtype=torch.float32)
output = model(betas=betas, expression=expression)

print(output.vertices)


README

SMPL-X: A new joint 3D model of the human body, face and hands together

[Paper Page] [Paper] [Supp. Mat.]

SMPL-X Examples

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SMPL-X/SMPLify-X model, data and software, (the "Model & Software"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Disclaimer

The original images used for the figures 1 and 2 of the paper can be found in this link. The images in the paper are used under license from gettyimages.com. We have acquired the right to use them in the publication, but redistribution is not allowed. Please follow the instructions on the given link to acquire right of usage. Our results are obtained on the 483 × 724 pixels resolution of the original images.

Description

SMPL-X (SMPL eXpressive) is a unified body model with shape parameters trained jointly for the face, hands and body. SMPL-X uses standard vertex-based linear blend skinning with learned corrective blend shapes, has N = 10,475 vertices and K = 54 joints, which include joints for the neck, jaw, eyeballs and fingers. SMPL-X is defined by a function M(θ, β, ψ), where θ is the pose parameters, β the shape parameters and ψ the facial expression parameters.
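
The skinning formulation can be illustrated with a toy example. This is a sketch of generic vertex-based linear blend skinning, not the actual smplx implementation, and the dimensions and weights below are made up:

```python
import numpy as np

# Toy linear blend skinning: V vertices, K joints
# (SMPL-X itself has V = 10475 and K = 54).
V, K = 5, 2
rest_verts = np.random.rand(V, 3)            # template vertices
skin_weights = np.random.rand(V, K)          # per-vertex joint weights
skin_weights /= skin_weights.sum(axis=1, keepdims=True)

# One 4x4 rigid transform per joint; identity here, so the mesh stays put.
joint_transforms = np.tile(np.eye(4), (K, 1, 1))

# Blend the joint transforms per vertex, then apply in homogeneous coords.
blended = np.einsum('vk,kij->vij', skin_weights, joint_transforms)  # (V, 4, 4)
homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)        # (V, 4)
posed = np.einsum('vij,vj->vi', blended, homo)[:, :3]

assert np.allclose(posed, rest_verts)  # identity transforms: vertices unchanged
```

In the full model, the joint transforms are driven by θ, and the rest vertices are first offset by the shape (β), expression (ψ), and pose-corrective blend shapes before skinning.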

News

  • 3 November 2020: We release the code to transfer between the models in the SMPL family. For more details on the code, go to this readme file. A detailed explanation on how the mappings were extracted can be found here.
  • 23 September 2020: A UV map is now available for SMPL-X, please check the Downloads section of the website.
  • 20 August 2020: The full shape and expression space of SMPL-X are now available.

Installation

To install the package, use either of the following options:

  1. Install from PyPI by simply running:
pip install smplx[all]
  2. Clone this repository and install it using the setup.py script:
git clone https://github.com/vchoutas/smplx
python setup.py install

Downloading the model

To download the SMPL-X model go to this project website and register to get access to the downloads section.

To download the SMPL+H model go to this project website and register to get access to the downloads section.

To download the SMPL model go to this (male and female models) and this (gender neutral model) project website and register to get access to the downloads section.

Loading SMPL-X, SMPL+H and SMPL

SMPL and SMPL+H setup

The loader gives the option to use any of the SMPL-X, SMPL+H, SMPL, and MANO models. Depending on the model you want to use, please follow the respective download instructions. To switch between MANO, SMPL, SMPL+H and SMPL-X just change the model_path or model_type parameters. For more details please check the docs of the model classes. Before using SMPL and SMPL+H you should follow the instructions in tools/README.md to remove the Chumpy objects from both model pkls, as well as merge the MANO parameters with SMPL+H.

Model loading

You can either use the create function from body_models or directly call the constructor for the SMPL, SMPL+H and SMPL-X model. The path to the model can either be the path to the file with the parameters or a directory with the following structure:

models
├── smpl
│   ├── SMPL_FEMALE.pkl
│   ├── SMPL_MALE.pkl
│   └── SMPL_NEUTRAL.pkl
├── smplh
│   ├── SMPLH_FEMALE.pkl
│   └── SMPLH_MALE.pkl
├── mano
│   ├── MANO_RIGHT.pkl
│   └── MANO_LEFT.pkl
└── smplx
    ├── SMPLX_FEMALE.npz
    ├── SMPLX_FEMALE.pkl
    ├── SMPLX_MALE.npz
    ├── SMPLX_MALE.pkl
    ├── SMPLX_NEUTRAL.npz
    └── SMPLX_NEUTRAL.pkl
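
Given the layout above, the path to a specific model file is predictable from the model type, gender, and extension. A hypothetical helper sketching that resolution (smplx does this internally; this function is not part of its API):

```python
import os

def resolve_model_file(model_dir, model_type, gender, ext='npz'):
    """Build the expected model-file path for the directory layout above.

    Hypothetical helper for illustration only -- not part of the smplx API.
    """
    fname = f'{model_type.upper()}_{gender.upper()}.{ext}'
    return os.path.join(model_dir, model_type, fname)

print(resolve_model_file('models', 'smplx', 'neutral'))
# models/smplx/SMPLX_NEUTRAL.npz (on POSIX paths)
```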

MANO and FLAME correspondences

The vertex correspondences between SMPL-X and MANO, FLAME can be downloaded from the project website. If you have extracted the correspondence data in the folder correspondences, then use the following scripts to visualize them:

  1. To view MANO correspondences run the following command:
python examples/vis_mano_vertices.py --model-folder $SMPLX_FOLDER --corr-fname correspondences/MANO_SMPLX_vertex_ids.pkl
  2. To view FLAME correspondences run the following command:
python examples/vis_flame_vertices.py --model-folder $SMPLX_FOLDER --corr-fname correspondences/SMPL-X__FLAME_vertex_ids.npy
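
The correspondence files store arrays of vertex indices into the SMPL-X mesh. A minimal sketch of how such an index array selects the matching vertices, using toy data in place of the real files:

```python
import numpy as np

# Toy stand-ins: the real files (e.g. MANO_SMPLX_vertex_ids.pkl) hold index
# arrays into the 10475 SMPL-X vertices; here we fake a small mesh instead.
smplx_vertices = np.random.rand(100, 3)       # pretend SMPL-X model output
mano_vertex_ids = np.array([3, 17, 42, 99])   # pretend correspondence indices

# Fancy indexing picks out the subset of SMPL-X vertices that correspond
# to the MANO hand mesh.
hand_vertices = smplx_vertices[mano_vertex_ids]
assert hand_vertices.shape == (4, 3)
```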

Example

After installing the smplx package and downloading the model parameters you should be able to run the demo.py script to visualize the results. For this step you have to install the pyrender and trimesh packages.

python examples/demo.py --model-folder $SMPLX_FOLDER --plot-joints=True --gender="neutral"

SMPL-X Examples

Modifying the global pose of the model

If you want to modify the global pose of the model, i.e. the root rotation and translation, to a new coordinate system for example, you need to take into account that the model rotation uses the pelvis as the center of rotation. A more detailed description can be found in the following link. If something is not clear, please let me know so that I can update the description.
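
Because the root rotation pivots about the pelvis rather than the world origin, re-rooting the model needs the pelvis offset folded into the translation. A minimal numpy sketch of the idea, on toy data (in the real model the pelvis is the first entry of the returned joints):

```python
import numpy as np

def rotate_about_pelvis(vertices, pelvis, R):
    """Rotate a mesh about the pelvis joint instead of the world origin."""
    return (vertices - pelvis) @ R.T + pelvis

# Toy mesh and pelvis location (in the real model, pelvis = joints[0]).
verts = np.random.rand(10, 3)
pelvis = np.array([0.0, 0.3, 0.0])

# 90-degree rotation about the y axis.
c, s = 0.0, 1.0
R = np.array([[c, 0.0, s],
              [0.0, 1.0, 0.0],
              [-s, 0.0, c]])

rotated = rotate_about_pelvis(verts, pelvis, R)
# The pelvis itself is a fixed point of this transform.
assert np.allclose(rotate_about_pelvis(pelvis[None], pelvis, R), pelvis[None])
```

Rotating the same mesh about the origin instead would additionally translate every vertex by `pelvis - R @ pelvis`, which is exactly the correction the linked description accounts for.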

Citation

Depending on which model is loaded for your project, i.e. SMPL-X or SMPL+H or SMPL, please cite the most relevant work below, listed in the same order:

@inproceedings{SMPL-X:2019,
    title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
    author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}
@article{MANO:SIGGRAPHASIA:2017,
    title = {Embodied Hands: Modeling and Capturing Hands and Bodies Together},
    author = {Romero, Javier and Tzionas, Dimitrios and Black, Michael J.},
    journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
    volume = {36},
    number = {6},
    pages = {245:1--245:17},
    month = nov,
    year = {2017},
    month_numeric = {11}
}
@article{SMPL:2015,
    author = {Loper, Matthew and Mahmood, Naureen and Romero, Javier and Pons-Moll, Gerard and Black, Michael J.},
    title = {{SMPL}: A Skinned Multi-Person Linear Model},
    journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
    month = oct,
    number = {6},
    pages = {248:1--248:16},
    publisher = {ACM},
    volume = {34},
    year = {2015}
}

This repository was originally developed for SMPL-X / SMPLify-X (CVPR 2019), you might be interested in having a look: https://smpl-x.is.tue.mpg.de.

Acknowledgments

Facial Contour

Special thanks to Soubhik Sanyal for sharing the Tensorflow code used for the facial landmarks.

Contact

The code of this repository was implemented by Vassilis Choutas.

For questions, please contact smplx@tue.mpg.de.

For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.