Mini-symposium

Partial differential equations (PDEs) are ubiquitous in nearly all fields of applied physics and engineering, encompassing a wide range of physical and phenomenological models and conservation laws. In particular, many physical phenomena of interest can be described as a system of parameterized, time-dependent, nonlinear PDEs. With the advent of spatial discretization schemes such as finite-difference, finite-volume, or spectral methods, many such problems can now be solved routinely, utilizing the high-performance computing (HPC) paradigm of distributed parallelism on large clusters of networked CPUs or GPUs. However, important challenges remain regarding the solution accuracy, convergence, and computational cost of these methods for problems with significant model complexity. In recent years, machine learning techniques have been proposed as a promising alternative to conventional numerical methods. These techniques have been shown to offer several advantages over conventional numerical methods, including:

  • The resulting solution is meshless, analytical, and continuously differentiable.
  • Using neural networks provides a solution with very good generalization properties.
  • The computational cost of training is only weakly dependent on the problem dimension.
  • Relatively few parameters may be required to model complex solutions.
  • Temporal and spatial derivatives can be treated in the same way.
  • Training is highly parallelizable on GPUs using open-source deep learning libraries.

During this symposium, we will provide an overview of the state-of-the-art techniques being used today to solve PDEs with deep learning and try to provide a sense of what will be possible in the future.
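
As a rough schematic of the approach running through these talks (generic notation, not drawn from any single presentation), the unknown solution is represented by a neural network and the PDE itself supplies the training objective:

```latex
% Generic parameterized, time-dependent, nonlinear PDE
u_t + \mathcal{N}[u; \lambda] = 0, \qquad x \in \Omega, \; t \in [0, T].

% A network u_\theta(t, x) is trained by minimizing the PDE residual at
% collocation points, plus penalties for initial/boundary conditions:
\mathcal{L}(\theta) = \frac{1}{N_r} \sum_{i=1}^{N_r}
  \big| \partial_t u_\theta(t_i, x_i) + \mathcal{N}[u_\theta; \lambda](t_i, x_i) \big|^2
  + \mathcal{L}_{\mathrm{IC/BC}}(\theta).
```

Because the loss involves only pointwise evaluations of the network and its derivatives, no mesh or spatial discretization is needed, which is the source of the advantages listed above.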

Programme

Monday 13/05

17:00 - 17:30 Download the slides

Solving Partial Differential Equations with Deep Learning

James Scoggins

Partial differential equations (PDEs) are ubiquitous in nearly all fields of applied physics and engineering, encompassing a wide range of physical and phenomenological models and conservation laws for problems related to reaction-diffusion-convection systems, electromagnetism, quantum mechanical systems, and kinetics, to name a few. In particular, many physical phenomena of interest can be described as a system of parameterized, time-dependent, nonlinear PDEs. With the advent of spatial discretization schemes such as finite-difference, finite-volume, finite-element, or spectral methods, many such problems are now solved routinely, utilizing the high-performance computing paradigm of distributed parallelism on large clusters of networked CPUs or GPUs. While a large body of work in the literature is dedicated to these numerical methods, important challenges remain regarding solution accuracy, convergence, and computational cost for problems with significant model complexity. Recently, machine learning techniques have been proposed as a promising alternative to conventional numerical methods. These techniques make use of the universal approximation capacity of artificial neural networks to represent the solution of PDEs, with no spatial discretization required. In this presentation, we will demonstrate the state of the art in solving PDEs with deep learning using TensorFlow and comment on some of the challenges and the outlook for this field in the near future.
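
As a concrete illustration of this line of work, here is a minimal physics-informed training loop in TensorFlow 2. It is a sketch under assumptions, not the presentation's code: the toy Poisson problem u''(x) = -π² sin(πx) with u(0) = u(1) = 0 (exact solution sin(πx)), the network size, and the optimizer settings are all illustrative choices. The key point is that derivatives of the network enter the loss through automatic differentiation, so no mesh is needed.

```python
import numpy as np
import tensorflow as tf

# Small fully connected network representing the candidate solution u(x)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])

x_interior = tf.random.uniform((128, 1))     # collocation points in (0, 1)
x_boundary = tf.constant([[0.0], [1.0]])     # boundary points
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step():
    with tf.GradientTape() as outer:
        # Nested tapes give u'(x) and u''(x) by automatic differentiation
        with tf.GradientTape() as t2:
            t2.watch(x_interior)
            with tf.GradientTape() as t1:
                t1.watch(x_interior)
                u = model(x_interior)
            du = t1.gradient(u, x_interior)
        d2u = t2.gradient(du, x_interior)
        pi = np.pi
        # PDE residual of u'' = -pi^2 sin(pi x), plus boundary penalty u = 0
        residual = d2u + pi**2 * tf.sin(pi * x_interior)
        loss = tf.reduce_mean(residual**2) + tf.reduce_mean(model(x_boundary)**2)
    grads = outer.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    loss = train_step()
print(float(loss))  # residual loss; the trained model(x) approximates sin(pi x)
```

The same pattern extends to time-dependent and higher-dimensional problems by adding inputs and residual terms; only the loss changes, not the training machinery.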
17:30 - 18:00 Download the slides

Overcoming the curse of dimensionality with DNNs: theoretical approximation results for PDEs

Philippe Von Wurstemberger

Artificial neural networks (ANNs) have been used very successfully in numerical simulations for a range of computational problems, from image classification to the numerical approximation of partial differential equations (PDEs). Such simulations suggest that ANNs can approximate high-dimensional functions very efficiently and, in particular, indicate that ANNs seem to possess the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions appearing in the computational problems named above. Although there are numerous results on the approximation capacities of ANNs, such as the universal approximation theorem, most of them cannot explain the empirical success of ANNs at approximating high-dimensional functions. In this talk I will explain recent theoretical developments which demonstrate that ANNs can efficiently approximate solutions of high-dimensional PDEs. More precisely, I will present results revealing that the minimal number of parameters an ANN requires to approximate solutions of certain PDEs grows at most polynomially in both the reciprocal 1 ⁄ ϵ of the prescribed approximation accuracy ϵ > 0 and the PDE dimension d ∈ ℕ. These statements prove that ANNs do indeed have the capacity to overcome the curse of dimensionality in the numerical approximation of PDEs.
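
Schematically, and with the constants and exponents left as placeholders rather than the precise values from the talk, results of this type take the following form:

```latex
% Placeholder form of a "no curse of dimensionality" approximation result:
\exists\, c, p, q > 0 \;\; \forall d \in \mathbb{N},\ \epsilon \in (0, 1]:
\quad \exists\ \text{ANN } \Phi_{d,\epsilon} \ \text{with}\ \ 
\#\mathrm{params}(\Phi_{d,\epsilon}) \le c\, d^{\,p}\, \epsilon^{-q}
\ \ \text{and}\ \ \lVert u_d - \Phi_{d,\epsilon} \rVert \le \epsilon,
```

where u_d denotes the solution of the d-dimensional PDE. Polynomial rather than exponential growth in d is precisely what "overcoming the curse of dimensionality" means here.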
18:00 - 18:30 Download the slides

Approximation spaces of deep neural networks

Rémi Gribonval

We study the expressivity of sparsely connected deep networks. Measuring a network's complexity by its number of connections or its number of neurons, we consider the class of functions whose error of best approximation by networks of a given complexity decays at a certain rate. Using classical approximation theory, we show that this class can be endowed with a norm that makes it a nice function space, called an approximation space. We establish that the presence of certain skip connections has no impact on the approximation space, and discuss the role of the network's nonlinearity (also known as the activation function) on the resulting spaces, as well as the benefits of depth. For the popular ReLU nonlinearity (as well as its powers), we relate the newly identified spaces to classical Besov spaces, which have a long history associated with sparse wavelet decompositions. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, provided these networks are sufficiently deep.
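
For readers unfamiliar with approximation spaces, the classical construction alluded to here runs roughly as follows (generic notation, sketched from standard approximation theory rather than taken from the talk):

```latex
% Best approximation error with networks of complexity at most n,
% measured in a normed function space X:
E_n(f) = \inf_{\Phi \,:\, \mathrm{complexity}(\Phi) \le n} \lVert f - \Phi \rVert_X .

% The approximation space A^\alpha_q(X) collects those f whose error decays
% at rate n^{-\alpha}, equipped with the (quasi-)norm
\lVert f \rVert_{A^\alpha_q}
  = \Big( \sum_{n \ge 1} \big( n^{\alpha} E_n(f) \big)^{q} \, \frac{1}{n} \Big)^{1/q} .
```

The talk's results then compare these network-based spaces with Besov spaces via embeddings in both directions.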
18:30 - 19:00 Download the slides

LS-SVM based solutions to differential equations

Siamak Mehrkanoon

From a kernel-based modeling point of view, one can consider the given differential equations together with their initial or boundary conditions as prior knowledge and seek the solution by means of Least Squares Support Vector Machines (LS-SVMs) whose parameters are adjusted to minimize an appropriate error function. In particular, here we introduce an LS-SVM based framework for learning the solutions of dynamical systems governed by Ordinary Differential Equations (ODEs), Differential Algebraic Equations (DAEs), and Partial Differential Equations (PDEs). The problem is formulated as an optimization problem in the primal-dual setting. The approximate solution in the primal is expressed in terms of the feature map and is forced to satisfy the system dynamics and initial/boundary conditions via a constrained optimization problem. The optimal representation of the solution is then obtained in the dual. For the linear and nonlinear cases, the model parameters are obtained by solving a system of linear and nonlinear equations, respectively. The proposed model uses only a few training points to learn a closed-form approximate solution. Furthermore, it does not require index reduction techniques and can be applied directly to learn the state trajectories of the underlying implicit systems. Experimental results on different ODE systems, DAE systems with index from 0 to 3, and PDEs are presented and compared with analytic solutions to confirm the validity and applicability of the proposed method.

Analogous approaches based on the introduced LS-SVM framework can be used to address inverse problems where, given observational data, one aims in particular to estimate the unknown parameters of the system. Here we present an extended formulation using the LS-SVM core model to estimate the unknown constant or time-varying parameters of a given dynamical system described by ordinary (delay) differential equations. Finally, some possibilities and future directions for deeper architectures will be discussed.
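
To make the core idea concrete, below is a minimal kernel-collocation sketch in the spirit of this approach, in plain NumPy. It is not the talk's primal-dual LS-SVM derivation: the test problem y' = -y with y(0) = 1, the RBF kernel bandwidth, and the ridge parameter are all illustrative assumptions. The solution is represented as a kernel expansion whose coefficients are fixed by enforcing the ODE at collocation points, yielding a closed-form approximate solution evaluable anywhere in the domain.

```python
import numpy as np

# Toy problem: y'(x) = -y(x), y(0) = 1 on [0, 2]; exact solution is exp(-x).
sigma = 0.5    # RBF kernel bandwidth (assumed value)
gamma = 1e-8   # ridge term standing in for the LS-SVM regularization

def rbf(x, z):
    """Gaussian (RBF) kernel matrix k(x_i, z_j)."""
    return np.exp(-(x[:, None] - z[None, :])**2 / (2 * sigma**2))

def rbf_dx(x, z):
    """Derivative of the kernel with respect to its first argument."""
    return -(x[:, None] - z[None, :]) / sigma**2 * rbf(x, z)

# Collocation points where the ODE residual is enforced
x = np.linspace(0.0, 2.0, 20)
K, dK = rbf(x, x), rbf_dx(x, x)

# Model: y_hat(x) = sum_j alpha_j k(x, x_j) + b
# ODE rows: y_hat'(x_i) + y_hat(x_i) = 0  ->  (dK + K) alpha + b * 0 + ...
#           (b drops out of the derivative, so only the K part carries b)
A_ode = np.hstack([dK + K, np.ones((len(x), 1))])
# IC row:  y_hat(0) = 1
A_ic = np.hstack([rbf(np.array([0.0]), x), np.ones((1, 1))])
A = np.vstack([A_ode, A_ic])
rhs = np.concatenate([np.zeros(len(x)), [1.0]])

# Regularized least squares for (alpha, b)
theta = np.linalg.solve(A.T @ A + gamma * np.eye(A.shape[1]), A.T @ rhs)
alpha, b = theta[:-1], theta[-1]

# Evaluate the closed-form approximation and compare with the analytic solution
x_test = np.linspace(0.0, 2.0, 7)
y_hat = rbf(x_test, x) @ alpha + b
print(np.max(np.abs(y_hat - np.exp(-x_test))))  # max pointwise error
```

Because the linear ODE leads to a linear system in (alpha, b), a single solve suffices; nonlinear dynamics would instead require solving a nonlinear system, as the abstract notes.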

Organizing committee

  • Loïc Gouarin (CMAP)
  • James B. Scoggins (CMAP)