AI x Science Seminar: Steve Sun
Generative constitutive laws as graphs and trees
Amy Gutman Hall, Room 414
Capturing path- and rate-dependent behaviors of solids, such as creep, plastic deformation, damage, and fracture, often requires interpreting and quantifying relationships among the histories of variables, such as dislocation density and porosity. This relational information is then represented by mathematical models composed of differentiable functions. In this talk, we explore (1) how these relationships can be represented by directed graphs and deduced through deep reinforcement learning, (2) how they can be trained on a library of existing material models represented as multidimensional meshes, and (3) how they can be re-interpreted as classical material models through symbolic pruning.

To understand the mechanisms of constitutive behaviors, we represent physical quantities as vertices and determine their relations using Monte Carlo tree search. To facilitate learning from limited data, we leverage existing models to construct priors that enhance the likelihood of plausible predictions. By representing material models as meshes, we train a latent diffusion model that uses previous material models and experimental data to guide the reverse generation of new models. For applications where fast inference and interpretability are essential, we introduce a projected neural additive method for learning symbolic constitutive models as expression trees. This approach expresses the material response as a linear combination of univariate functions of projected variables. This technique enables us to search for hyperelasticity in high-dimensional spaces without sacrificing neural network expressivity. We show that the proposed model can reproduce polynomials of arbitrary order and dimension and thus achieve universal approximation by the Stone-Weierstrass theorem. Through a series of 1D post-hoc symbolic regressions, we obtain symbolic material models that significantly reduce the inference time for hydrocodes.
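The projected-additive idea above — a material response written as a sum of univariate functions of projected variables — can be illustrated with a toy sketch. This is only an assumption-laden stand-in, not the speaker's method: here the projection directions are fixed random vectors and the univariate functions are low-degree polynomials fit by least squares, whereas the actual approach learns both with neural networks. The target function and all names (`design_matrix`, `W`, `K`) are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (illustrative only): model y(x) ~ sum_k g_k(w_k . x),
# where each g_k is a univariate polynomial in the projected variable z_k.

def design_matrix(X, W, degree):
    """Stack powers 1..degree of each projection z_k = X @ w_k as features."""
    Z = X @ W                                   # (n_samples, K) projected variables
    cols = [Z**d for d in range(1, degree + 1)]
    return np.hstack(cols)                      # (n_samples, K * degree)

# Hypothetical target: a low-order 2D polynomial, standing in for a
# constitutive response surface.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = X[:, 0]**2 + 0.5 * X[:, 0] * X[:, 1] + X[:, 1]**3

K, degree = 8, 3
W = rng.standard_normal((2, K))                 # fixed random projection directions
A = design_matrix(X, W, degree)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # fit the additive combination

y_hat = A @ coef
rel_err = np.linalg.norm(y - y_hat) / np.linalg.norm(y)
print(f"relative fit error: {rel_err:.2e}")
```

Because powers of generic linear projections span all monomials up to the same degree, the least-squares fit recovers this polynomial target essentially exactly — a small-scale echo of the polynomial-reproduction property the abstract attributes to the full method.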
The pros and cons of these techniques for various practical applications will be discussed. Together, these examples highlight how the efficient representation of material models is critical for balancing accuracy, interpretability, and computational efficiency in the learning of constitutive laws.
Presented by
Penn AI, Innovation in Data Engineering and Science (IDEAS),
and the Data Driven Discovery Initiative (DDDI)