PAPER PLAINE

Fresh research, simply explained. Updates twice daily.

PhyCo: Learning Controllable Physical Priors for Generative Motion

Teaching AI to generate videos where objects move and collide realistically

Video generation models can now create realistic motion and physical interactions: objects bounce properly, materials deform correctly, and friction behaves as expected. The key is training on 100,000+ simulated videos in which physical properties are systematically varied. The system lets users control these physical attributes directly, without reconstructing 3D geometry or running simulations after generation.

Current video AI produces visually plausible but physically nonsensical motion: objects pass through each other, gravity works inconsistently, and materials respond incorrectly to forces. PhyCo addresses this at generation time, which matters for visual effects in film and games, robot training simulations, and any application where physical accuracy affects downstream decisions. Users can specify exact friction or material properties and get videos that respect them automatically.
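As a rough illustration of what "systematically varying physical properties" in a simulated training corpus might look like, the sketch below sweeps a grid of physical attributes and attaches each combination to a clip as a conditioning label. All names, parameter choices, and the record layout here are hypothetical, not PhyCo's actual interface:

```python
import itertools
import random

def build_physics_conditioned_dataset(n_per_combo=2, seed=0):
    """Sketch: enumerate physical attributes as conditioning labels
    for simulated training clips (illustrative, not PhyCo's API)."""
    rng = random.Random(seed)
    materials = ["rubber", "steel", "wood"]
    frictions = [0.1, 0.4, 0.8]        # coefficient of friction
    restitutions = [0.2, 0.5, 0.9]     # bounciness on collision
    records = []
    for material, mu, e in itertools.product(materials, frictions, restitutions):
        for i in range(n_per_combo):
            records.append({
                "clip_id": f"{material}_mu{mu}_e{e}_{i}",
                # the generator is conditioned on these labels at training time
                "condition": {"material": material, "friction": mu, "restitution": e},
                "seed": rng.randrange(2**32),  # per-clip simulation seed
            })
    return records

dataset = build_physics_conditioned_dataset()
print(len(dataset))  # 3 materials x 3 frictions x 3 restitutions x 2 clips = 54
```

Because every clip carries its physical attributes as explicit labels, the same labels can later serve as user-facing controls at generation time, which is the kind of direct controllability the article describes.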