InvSculpt: Inverse Sculpting Modeling via Controlled 3D Generation and a Vector Displacement Field
Abstract
Inverse sculpting modeling aims to decompose a sculpted mesh into an underlying base shape and reusable geometric details, enabling non-expert users to inherit professional sculpting effort. We present InvSculpt, a framework that performs this decomposition, recovering a high-fidelity underlying shape and representing the details as a vector displacement field (VDF). Our approach combines semantic priors from text-guided 2D image editing with a 3D rectified flow model to perform inversion-based, mask-free detail removal, recovering an underlying shape that preserves the identity of the source mesh. To represent sculpted details in a lossless and transferable manner, we extract a VDF defined on the surface of the recovered underlying shape and learn a continuous neural representation for geometry-aware transfer. We observe that standard conditional sampling after inversion often suffers from trajectory drift, leading to identity shift and low-frequency distortion. To address this issue, we introduce a trajectory correction strategy that constrains early sampling steps to follow the inversion path, stabilizing subsequent conditional guidance. This design enables robust detail removal and precise extraction of the VDF. Extensive experiments demonstrate that InvSculpt achieves significantly higher-quality mesh decomposition than prior methods and supports a wide range of applications, including geometry redesign and high-fidelity geometric detail transfer.
