LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields
SIGGRAPH Asia 2023

  • 1 University of Toronto
  • 2 MIT CSAIL
  • 3 Meta Reality Labs Research
  • + Corresponding authors

Abstract

[Overview figure]

Linear reduced-order modeling (ROM) simplifies complex simulations by approximating the behavior of a system using a simplified kinematic representation. Typically, ROM is trained on input simulations created with a specific spatial discretization, and then serves to accelerate simulations with the same discretization. This discretization-dependence is restrictive. Becoming independent of a specific discretization would provide flexibility to mix and match mesh resolutions, connectivity, and type (tetrahedral, hexahedral) in training data; to accelerate simulations with novel discretizations unseen during training; and to accelerate adaptive simulations that temporally or parametrically change the discretization. We present a flexible, discretization-independent approach to reduced-order modeling. Like traditional ROM, we represent the configuration as a linear combination of displacement fields. Unlike traditional ROM, our displacement fields are continuous maps from every point on the reference domain to a corresponding displacement vector; these maps are represented as implicit neural fields. With linear continuous ROM (LiCROM), our training set can include multiple geometries undergoing multiple loading conditions, independent of their discretization. This opens the door to novel applications of reduced order modeling. We can now accelerate simulations that modify the geometry at runtime, for instance via cutting, hole punching, and even swapping the entire mesh. We can also accelerate simulations of geometries unseen during training. We demonstrate one-shot generalization, training on a single geometry and subsequently simulating various unseen geometries.
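To make the core idea above concrete, here is a minimal sketch in PyTorch (which the acknowledgments below cite) of a linear continuous ROM: a neural field maps any reference point X to a small basis of displacement vectors, and reduced coordinates q combine them linearly, u(X) = Σᵢ qᵢ φᵢ(X). All names, layer sizes, and shapes here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LinearContinuousROM(nn.Module):
    """Hypothetical LiCROM-style model: continuous displacement basis + linear coefficients."""

    def __init__(self, dim=3, latent_dim=8, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        # Assumed MLP neural field: reference point -> latent_dim stacked
        # displacement basis vectors (each of size dim).
        self.field = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, latent_dim * dim),
        )

    def forward(self, X, q):
        # X: (N, dim) reference points sampled anywhere on the domain,
        #    so no fixed mesh or connectivity is assumed.
        # q: (latent_dim,) reduced coordinates evolved by the simulator.
        basis = self.field(X).view(X.shape[0], self.latent_dim, X.shape[1])
        u = torch.einsum("l,nld->nd", q, basis)  # linear combination of fields
        return X + u                             # deformed positions

model = LinearContinuousROM()
X = torch.rand(1024, 3)            # arbitrary sample points, any resolution
q = torch.randn(model.latent_dim)  # reduced state
x = model(X, q)
print(x.shape)  # torch.Size([1024, 3])
```

Because the field is queried pointwise, the same trained model can be evaluated on any discretization of the reference domain, which is what enables the mixed-mesh training and runtime geometry changes described above.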


Example Results

BibTeX

Acknowledgments

We would like to thank Otman Benchekroun, Jonathan Panuelos, Kateryna Starovoit, and Mengfei Liu for their feedback on Fig 1. We would also like to thank our lab system administrator, John Hancock, and our financial officer, Xuan Dam, for their invaluable administrative support in making this research possible. This project is funded in part by Meta and the Natural Sciences and Engineering Research Council of Canada (Discovery RGPIN-2021-03733). We thank the developers and community behind PyTorch, the Taichi programming language, and NVIDIA Warp for empowering this research. The meshes in Fig 3 are derived from entries 133568, 133078 and 170179 of the Thingi10k dataset.

The website template was borrowed from NeRV and Michaël Gharbi.