Move AI


Move AI uses AI vision technology to build tools for markerless motion capture, which can be widely applied in 3D animation, game development, virtual reality, and more, lowering the barrier to creation through high-quality data capture and flexible integration.

Language: en
Collection time: 2025-03-30

What is Move AI?

Move AI is a London-based startup working to democratize 3D animation. The company uses artificial intelligence and computer vision to develop markerless motion capture: a tool that extracts motion data from video and transfers live action onto digital models for high-fidelity motion tracking. Its core capabilities include markerless motion capture, capture in arbitrary environments, and high-quality motion data extraction for 3D animators, game developers, and other professionals. By lowering the barrier and cost of motion capture, Move AI's technology brings new possibilities to the industry.

The core goal of Move AI is to automate motion capture by detecting and understanding movement in video through computer vision and deep learning techniques.

How Move AI works

  1. Video Input and Preprocessing

    • Video input: the Move AI system receives video data in a variety of formats (e.g. MP4, AVI, MOV) with differing frame rates (FPS) and resolutions. The video input module must support multiple formats and handle data from different sources (cameras, files, network streams, etc.).
    • Preprocessing: includes frame extraction (splitting the video into a sequence of consecutive frames), frame-rate adjustment, and resolution adjustment to suit subsequent processing (a minimal frame-extraction sketch follows this list).
  2. Multi-camera capture

    • Multiple cameras capture motion data synchronously from different angles, improving the accuracy and completeness of the capture.
    • Intrinsic calibration: calibrates each camera's internal parameters, such as focal length, principal point coordinates, and distortion coefficients (a calibration sketch follows this list).
    • Extrinsic calibration: calibrates the cameras' external parameters, including the relative positions and rotation angles between cameras.
    • Hardware or software synchronization mechanisms ensure that all cameras capture images at the same moment.
  3. Deep learning models (a keypoint-extraction sketch follows this list)

    • OpenPose: a convolutional neural network (CNN) extracts human body keypoints (e.g., shoulders, elbows, knees) from an image to produce 2D keypoint coordinates.
    • DensePose: goes beyond sparse keypoints, mapping pixels to a dense set of points on the body surface to support 3D pose estimation.
    • MediaPipe Hands: uses deep learning models to extract hand keypoints (e.g., fingertips, knuckles) from images, producing 2D or 3D gesture data.
    • FaceNet: extracts facial keypoints (e.g., eyes, nose, mouth) from images to produce facial expression and movement data.
  4. Motion Data Analysis

    • Initial analysis of the motion data uses statistical methods (e.g., mean, variance, standard deviation) to generate basic motion metrics.
    • Movement patterns are categorized and reduced in dimensionality with techniques such as cluster analysis and principal component analysis (PCA).
    • Supervised learning models (e.g., random forests, support vector machines) and unsupervised learning models (e.g., k-means, DBSCAN) are trained to predict and classify motion data.
    • Deep learning models (e.g., CNNs, RNNs) perform advanced analysis of complex motion data, producing higher-level metrics such as motion trajectory, velocity, and acceleration (a small analysis sketch follows this list).
  5. Animation Generation

    • Forward Kinematics (FK): computes bone end positions directly from joint angles, suitable for simple motion control (an FK sketch follows this list).
    • Inverse Kinematics (IK): solves for joint angles from a target end position, suitable for complex motion control such as arm reaching and grasping.
    • The captured keypoint data is mapped onto the skeletal structure of the virtual character to generate the corresponding skeletal animation.
    • Accurate skeletal posture and motion are achieved through joint rotation matrices and quaternion calculations.
    • Multiple captured motion clips are fused to generate continuous, smooth animation transitions.
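
The sketches below illustrate, one step at a time, how a pipeline like the one described above might be implemented. They are minimal, hedged examples, not Move AI's actual code; file names, model choices, and array shapes are placeholders. First, the video input and preprocessing step: splitting a video into frames and resizing them with OpenCV.

```python
import cv2

# Minimal frame-extraction sketch (not Move AI's implementation):
# split a video into frames and resize them for downstream processing.
video_path = "sample.mp4"        # placeholder input file
target_size = (1280, 720)        # placeholder working resolution

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS)  # original frame rate, useful for resampling
frames = []

while True:
    ok, frame = cap.read()
    if not ok:                   # end of stream
        break
    frames.append(cv2.resize(frame, target_size))

cap.release()
print(f"Extracted {len(frames)} frames at {fps:.1f} FPS")
```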
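
Next, the multi-camera step. Intrinsic calibration is commonly done with a checkerboard target; the sketch below uses OpenCV's standard calibration routine, with the board size and image folder as assumptions.

```python
import glob
import cv2
import numpy as np

# Hypothetical intrinsic calibration sketch using a 9x6 checkerboard.
board_cols, board_rows = 9, 6
# 3D corner coordinates in the board's own coordinate system (z = 0 plane).
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None

for path in glob.glob("calib_images/*.png"):   # placeholder image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the camera matrix (focal length, principal point) and distortion coefficients.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("Reprojection error:", rms)
```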
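
For the keypoint-extraction step, MediaPipe Pose (an open-source relative of the MediaPipe Hands module mentioned above) stands in for whatever models Move AI actually uses, pulling per-frame landmarks from the video.

```python
import cv2
import mediapipe as mp

# Keypoint-extraction sketch using MediaPipe Pose as a stand-in model.
pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("sample.mp4")           # placeholder input video
keypoints_per_frame = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Each landmark carries normalized x, y, a relative z, and a visibility score.
        keypoints_per_frame.append(
            [(lm.x, lm.y, lm.z, lm.visibility)
             for lm in results.pose_landmarks.landmark])

cap.release()
pose.close()
print(f"Frames with a detected pose: {len(keypoints_per_frame)}")
```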
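
The motion-data analysis step can be sketched by deriving velocity and acceleration from pose trajectories, reducing per-frame pose vectors with PCA, and clustering them with k-means. The array shapes and random data are placeholders for real capture output.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Assume poses has shape (n_frames, n_keypoints * 3): one flattened 3D pose per frame.
rng = np.random.default_rng(0)
poses = rng.normal(size=(300, 33 * 3))       # placeholder data, 300 frames
fps = 30.0

# Basic metrics: per-frame velocity and acceleration of the pose vector.
velocity = np.gradient(poses, 1.0 / fps, axis=0)
acceleration = np.gradient(velocity, 1.0 / fps, axis=0)

# Dimensionality reduction of the movement patterns.
pca = PCA(n_components=8)
reduced = pca.fit_transform(poses)

# Unsupervised grouping of similar poses / movement phases.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)
print("Explained variance:", pca.explained_variance_ratio_.sum())
print("Cluster sizes:", np.bincount(labels))
```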
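
Finally, the forward-kinematics idea behind animation generation: given joint angles, each bone's end position follows from accumulated rotations along the chain. This is a simplified planar two-bone example, not Move AI's solver.

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def forward_kinematics(joint_angles, bone_lengths):
    """Compute joint positions of a planar bone chain from joint angles (FK)."""
    position = np.zeros(2)
    orientation = np.eye(2)
    positions = [position.copy()]
    for theta, length in zip(joint_angles, bone_lengths):
        orientation = orientation @ rot2d(theta)                  # accumulate rotation down the chain
        position = position + orientation @ np.array([length, 0.0])
        positions.append(position.copy())
    return np.array(positions)

# Example: shoulder rotated 45 degrees, elbow bent a further 30 degrees.
print(forward_kinematics([np.pi / 4, np.pi / 6], [0.3, 0.25]))
```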

Move AI Key Features

  1. Markerless Motion Capture: Supports single and multi-camera configurations, capturing using cell phones and standard cameras without the need to wear any complicated motion capture suits or markers.
  2. Arbitrary environment capture: Capture up to 22 people at a time, in any environment.
  3. High-quality motion data: Capture high-quality 3D human movement data, including finger tracking, through AI, computer vision and physical modeling.
  4. Large capture space: Move One supports capture in a 5 m x 5 m space, while Move Multi-Cam supports capture in a 20 m x 20 m space.
  5. Real-time motion tracking: Provides real-time markerless motion capture as well as post-processing capabilities.
  6. Easy retargeting: motion data can be exported in FBX and USD formats, compatible with users' preferred 3D animation software (a hypothetical import sketch follows this list).
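
As an illustration of the retargeting workflow, an exported FBX clip could be pulled into Blender through its Python API. This is a hypothetical sketch (the file path is a placeholder) and must be run inside Blender, where the bpy module is available.

```python
import bpy

# Hypothetical sketch: import a Move AI FBX export into Blender for retargeting.
# Must be run from Blender's scripting environment.
bpy.ops.import_scene.fbx(filepath="/path/to/moveai_capture.fbx")

# List the armatures brought in by the import, ready to be retargeted onto a character rig.
for obj in bpy.context.scene.objects:
    if obj.type == 'ARMATURE':
        print("Imported armature:", obj.name)
```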

Move AI Application Areas

Move AI provides powerful tools and solutions for sports training, animation, game development, and medical rehabilitation, helping users achieve high-quality motion capture and put it to work in their applications.

Move AI History

  • Move AI is an AI tool developed by the company of the same name to provide markerless motion capture technology to 3D animators and studios.
  • On October 4, 2023, Move AI announced that it had raised a $10 million seed round from Play Ventures, Warner Music Group, RKKVC, Level2 Ventures, and Animoca Brands.
  • In the summer of 2023, Move AI revealed plans to launch a single-camera app, "Move One," in September; at the time it was accepting applications for an invitation-only beta and was scheduled for public release later that year.
  • In March 2025, Move AI introduced its second-generation AI motion capture technology (Gen 2 spatial motion), which aims to handle capture under occlusion and in complex capture environments, further improving capture stability and accuracy.
