FeelingAI Research

We are committed to research in generative interactive AI, bringing a new experience of infinite real-time interaction to users worldwide. Our original technologies include performance agents for 3D characters, interactive 3D space generation, physical human-scene interaction, and more.

OneMotion-OneModel

Motion Generation Model from a Single Motion

The dynamic performance of characters is an important component of infinite real-time interactive experiences. We care about bringing virtual 3D characters to life: they should exhibit their personalities, tell their stories, express their feelings, and, most importantly, respond to our actions naturally and freely.

SceneGenAgent

Agent for Procedural 3D City Scene Generation

To enable infinite interactive experiences, it is essential to have 3D scenes with vivid details. However, creating a high-quality 3D scene is often resource-intensive and time-consuming; in some games, this process can take several years. To address this, Procedural Content Generation (PCG) is widely adopted in game development as one solution to ease the workload of game developers.
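To make the PCG idea concrete, here is a minimal illustrative sketch, not the actual SceneGenAgent pipeline: a seeded random process assigns tile types to a city grid, so the same "world" can be regenerated on demand from a seed instead of being authored or stored by hand. The function name and tile set are hypothetical.

```python
import random

def generate_city_grid(rows, cols, seed=0):
    """Procedurally assign a tile type to each cell of a city grid.

    Deterministic for a given seed: identical inputs always
    reproduce the identical layout, a core property of PCG.
    """
    rng = random.Random(seed)
    tiles = ["road", "house", "shop", "park"]
    weights = [0.3, 0.4, 0.2, 0.1]  # rough frequency of each tile type
    return [
        [rng.choices(tiles, weights)[0] for _ in range(cols)]
        for _ in range(rows)
    ]

# Regenerating with the same seed yields the same city layout.
grid = generate_city_grid(4, 4, seed=42)
for row in grid:
    print(row)
```

Real game pipelines layer many such generators (terrain, road networks, building interiors) and constrain them with design rules; this sketch only shows the seeded, rule-weighted core.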

TokenHSI

TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization

Synthesizing diverse and physically plausible Human-Scene Interactions (HSI) is pivotal for both computer animation and embodied AI. Despite encouraging progress, current methods mainly focus on developing separate controllers, each specialized for a specific interaction task. 

GaussianAnything

GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation

While 3D content generation has advanced significantly, existing methods still face challenges with input formats, latent space design, and output representations. This paper introduces a novel 3D generation framework that addresses these challenges, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. 

GausSim

GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects

We introduce GausSim, a novel neural network-based simulator designed to capture the dynamic behaviors of real-world elastic objects represented by Gaussian kernels. We leverage continuum mechanics and treat each kernel as a Center of Mass System (CMS) that describes a continuous piece of matter, accounting for realistic deformations without idealized assumptions.
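The Center of Mass System idea can be sketched as aggregating a group of mass points into a single state. This is a hedged, simplified illustration of the concept only, not GausSim's actual kernel representation or dynamics; the function below is hypothetical.

```python
import numpy as np

def center_of_mass_state(positions, velocities, masses):
    """Aggregate N mass points into one Center of Mass System.

    positions, velocities: (N, 3) arrays; masses: (N,) array.
    Returns total mass, CoM position (mass-weighted mean of
    positions), and CoM velocity (total momentum / total mass).
    """
    m_total = masses.sum()
    x_com = (masses[:, None] * positions).sum(axis=0) / m_total
    v_com = (masses[:, None] * velocities).sum(axis=0) / m_total
    return m_total, x_com, v_com
```

In a simulator, each such aggregate can be advanced as one unit while internal deformation is modeled separately, which is the intuition behind treating a kernel as a continuous piece of matter rather than a single idealized particle.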

EdgeTAM

EdgeTAM: On-Device Track Anything Model

Building on the Segment Anything Model (SAM), SAM 2 extends its capability from image to video inputs through a memory bank mechanism and achieves remarkable performance compared with previous methods, making it a foundation model for the video segmentation task.
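A memory bank for video can be pictured as a fixed-capacity buffer of past frame features that conditions segmentation of the current frame. The sketch below is a generic illustration of that buffering pattern under our own simplified assumptions, not SAM 2's or EdgeTAM's actual memory design; the class and method names are hypothetical.

```python
from collections import deque

import numpy as np

class MemoryBank:
    """Fixed-capacity FIFO of past frame features.

    When the bank is full, appending a new frame evicts the
    oldest one (handled by deque's maxlen), so the model always
    conditions on the most recent history.
    """

    def __init__(self, capacity=7):
        self.frames = deque(maxlen=capacity)

    def add(self, feature):
        # Store one frame's feature map (e.g. an embedding array).
        self.frames.append(feature)

    def context(self):
        # Stack stored features into one conditioning tensor,
        # or return None before any frame has been seen.
        return np.stack(list(self.frames)) if self.frames else None
```

In practice the stored features would be attended to by the segmentation network; the key property shown here is simply the bounded, rolling window of temporal context.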