  • Thomas Saigre

    Reduced-order modeling and sensitivity analysis for heat transfer simulations inside the human eyeball

    16 January 2024 - 14:00 - IRMA conference room

    Heat transfer in the human eyeball is strongly influenced by various physiological and external parameters. In particular, it critically affects fluid behavior in the eye and drug delivery processes. However, modeling it requires knowledge of various parameters, some of which may play an essential role in the development of pathologies. Although some medical data have recently been acquired, only a few parameters and their variability are known, while others cannot be measured directly. To this end, a 3D model simulating heat transfer in the human eye is developed. In order to identify the main factors influencing heat-transfer behavior, the influence of these parameters must be studied through an uncertainty-quantification process that involves many model evaluations. This process is, however, expensive. Consequently, a model-reduction approach proves essential to lower the computational cost. In this talk, we will present the reduced basis method with error bounds as a way to reduce the model without compromising accuracy. This reduced model will enable the use of Sobol indices, a statistical approach, to assess the influence of the model parameters on the results.
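
    As an illustration of that last step, here is a minimal sketch of first-order Sobol index estimation with a pick-freeze Monte Carlo estimator; the cheap surrogate function, parameter ranges and sample size are placeholders, not the reduced heat-transfer model of the talk:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy surrogate standing in for the reduced-basis evaluation of the model
    def surrogate(theta):
        return theta[:, 0] + 2.0 * theta[:, 1] ** 2 + 0.1 * theta[:, 2]

    n, d = 100_000, 3
    A = rng.uniform(0.0, 1.0, size=(n, d))          # two independent sample blocks
    B = rng.uniform(0.0, 1.0, size=(n, d))
    yA = surrogate(A)
    var_y = yA.var()

    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]                            # "pick-freeze": share only parameter i with A
        S_i = np.cov(yA, surrogate(C))[0, 1] / var_y  # first-order Sobol index
        print(f"S_{i} ≈ {S_i:.3f}")
    ```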
  • Alena Shilova

    Learning HJB Viscosity Solutions with PINNs

    17 January 2024 - 10:30 - IRMA conference room

    Despite recent advances in Reinforcement Learning (RL), Markov Decision Processes are not always the best choice for modeling complex dynamical systems requiring interactions at high frequency. Being able to work with arbitrary time intervals, Continuous-Time Reinforcement Learning (CTRL) is more suitable for such problems. Instead of the Bellman equation operating in discrete time, it is the Hamilton-Jacobi-Bellman (HJB) equation that describes the evolution of the value function in CTRL. Even though the value function is a solution of the HJB equation, it may not be its unique solution. To distinguish the value function from other solutions, it is important to look for viscosity solutions of the HJB equation, a special class of solutions that possess uniqueness and stability properties. This work proposes a novel approach to approximate the value function by training a Physics-Informed Neural Network (PINN) through a specific ε-scheduling iterative process that constrains the PINN to converge towards the viscosity solution, and shows experimental results on classical control tasks.
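
    For reference, a schematic form of the equations at play, written here for a discounted infinite-horizon problem with dynamics f, reward r and discount rate ρ; the notation is generic and not necessarily that of the talk. The value function solves the HJB equation, and the ε-regularized (vanishing-viscosity) problem whose solutions converge to the viscosity solution as ε → 0 reads:

    ```latex
    \rho V(x) = \max_{a}\big\{ r(x,a) + f(x,a)\cdot\nabla V(x) \big\}, \qquad
    \rho V^{\varepsilon}(x) = \max_{a}\big\{ r(x,a) + f(x,a)\cdot\nabla V^{\varepsilon}(x) \big\}
      + \varepsilon\,\Delta V^{\varepsilon}(x).
    ```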
  • Marie Billaud-Friess

    A probabilistic reduced basis method for parametrized problems

    22 January 2024 - 15:00 - Seminar room 309


    Probabilistic variants of model-reduction methods have recently emerged to improve the performance of existing approaches, both in terms of stability and efficiency. In this talk, we present a probabilistic reduced basis method for approximating a family of parametrized functions. This type of method relies on a probabilistic greedy algorithm using an error estimator expressed as the expectation of a parametrized random variable. In practice, Monte Carlo or bandit-type algorithms can be considered. These algorithms have been tested for the approximation of families of parametrized functions to which we only have access through (noisy) pointwise evaluations. In particular, we considered the approximation of the solution "manifold" of parametrized PDEs admitting a probabilistic representation via the Feynman-Kac formula.
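
    A minimal sketch of such a probabilistic greedy loop on a toy family u(x; μ), with an error estimator built from noisy pointwise evaluations; the family, noise level and Monte Carlo estimator below are illustrative assumptions, not the setting of the talk:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 200)               # spatial grid
    mus = np.linspace(1.0, 10.0, 50)             # training parameters

    def noisy_eval(mu, pts, sigma=1e-3):
        """Noisy pointwise evaluations of the parametrized family u(x; mu)."""
        return np.sin(mu * pts) + sigma * rng.standard_normal(pts.shape)

    basis = []                                    # reduced basis (snapshots)
    for it in range(5):
        errors = []
        for mu in mus:                            # Monte Carlo error estimate per candidate
            u_bar = np.mean([noisy_eval(mu, x) for _ in range(10)], axis=0)
            if basis:
                Q = np.linalg.qr(np.column_stack(basis))[0]
                u_bar = u_bar - Q @ (Q.T @ u_bar)  # residual after projection on the basis
            errors.append(np.linalg.norm(u_bar))
        k = int(np.argmax(errors))                # greedy pick: worst-approximated parameter
        basis.append(np.mean([noisy_eval(mus[k], x) for _ in range(10)], axis=0))
        print(f"iter {it}: picked mu = {mus[k]:.2f}, estimated error = {errors[k]:.3e}")
    ```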
  • Emmanuel De Bézenac

    Representation Equivalent Neural Operators

    30 January 2024 - 14:00 - IRMA conference room

    Recently, operator learning, or learning mappings between infinite-dimensional function spaces, has garnered significant attention, notably in relation to learning partial differential equations from data. Conceptually clear when outlined on paper, neural operators necessitate discretization in the transition to computer implementations. This step can compromise their integrity, often causing them to deviate from the underlying operators, with practical consequences.

    This talk introduces a new take on neural operators, with a novel framework, Representation Equivalent Neural Operators, designed to deal with the aforementioned issue. At its core is the concept of operator aliasing, which measures the inconsistency between neural operators and their discrete representations. These concepts will be introduced and their practical applications will be discussed, introducing a novel convolution-based neural operator.
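
    A small numerical illustration of the kind of discretization inconsistency at stake: a generic aliasing example with a spectral differentiation operator, not the framework of the talk. When the sampled signal contains frequencies above the grid's Nyquist limit, the discrete operator no longer matches the continuous one:

    ```python
    import numpy as np

    def spectral_derivative(u, L=2 * np.pi):
        """Discrete representation of d/dx on a uniform periodic grid (FFT-based)."""
        n = u.size
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers for L = 2*pi
        return np.fft.ifft(1j * k * np.fft.fft(u)).real

    f = lambda x: np.sin(12 * x)           # "continuous" input
    df = lambda x: 12 * np.cos(12 * x)     # exact continuous derivative

    for n in (16, 64):                     # n = 16: frequency 12 is above Nyquist (n/2 = 8)
        x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        err = np.max(np.abs(spectral_derivative(f(x)) - df(x)))
        print(f"n = {n:3d}: max |discrete op - continuous op| = {err:.2e}")
    ```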
  • Nicolas Boullé

    Elliptic PDE learning is provably data-efficient

    13 February 2024 - 14:00 - IRMA conference room

    PDE learning is an emerging field at the intersection of machine learning, physics, and mathematics that aims to discover properties of unknown physical systems from experimental data. Popular techniques exploit the approximation power of deep learning to learn solution operators, which map source terms to solutions of the underlying PDE. Solution operators can then produce surrogate data for data-intensive machine learning approaches such as learning reduced order models for design optimization in engineering and PDE recovery. In most deep learning applications, a large amount of training data is needed, which is often unrealistic in engineering and biology. However, PDE learning is shockingly data-efficient in practice. We provide a theoretical explanation for this behavior by constructing an algorithm that recovers solution operators associated with elliptic PDEs and achieves an exponential convergence rate with respect to the size of the training dataset. The proof technique combines prior knowledge of PDE theory and randomized numerical linear algebra techniques and may lead to practical benefits such as improving dataset and neural network architecture designs.
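
    To convey the flavor of the randomized numerical linear algebra ingredient, a hedged sketch: recovering a low-rank "solution operator" (here just a random low-rank matrix standing in for a discretized Green's function) from a few random input-output pairs via a randomized range finder. It assumes access to both the operator and its adjoint through matrix-vector products, which is a simplification of the actual data-access model; matrix size and ranks are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 500, 10

    # Hidden low-rank "solution operator" G (stand-in for a discretized Green's function)
    G = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

    def apply_G(f):      # forward solve:  u = G f
        return G @ f

    def apply_Gt(u):     # adjoint solve:  v = G^T u  (assumed available here)
        return G.T @ u

    # Randomized low-rank recovery from matrix-vector products (Halko-Martinsson-Tropp)
    k = r + 5                                   # slight oversampling
    Omega = rng.standard_normal((n, k))         # random source terms
    Y = apply_G(Omega)                          # k forward solves ("experiments")
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis for range(G)
    B = apply_Gt(Q).T                           # k adjoint solves give B = Q^T G
    G_hat = Q @ B                               # reconstructed operator

    print("relative error:", np.linalg.norm(G_hat - G) / np.linalg.norm(G))
    ```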
  • Bruno Lévy

    Fluids and galaxies: some applications of Brenier's theorems in computational physics

    20 February 2024 - 14:00 - IRMA conference room

    In this talk, I will discuss Brenier's theorems in optimal transport.

    These theorems have the particularity of translating remarkably well into numerical algorithms, thanks (among others) to the work of Benamou, Mérigot and Gallouet.

    I will show applications in fluid dynamics and cosmology.
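
    As a one-dimensional illustration (unrelated to the semi-discrete solvers mentioned above), Brenier's optimal map between two measures on the line is the monotone rearrangement, which on samples amounts to matching sorted points; a minimal sketch with arbitrary toy distributions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two empirical measures on the line: samples of mu (source) and nu (target)
    x = rng.normal(0.0, 1.0, size=5000)           # mu ~ N(0, 1)
    y = rng.exponential(1.0, size=5000)           # nu ~ Exp(1)

    # In 1D, Brenier's map is T = F_nu^{-1} o F_mu (monotone rearrangement)
    xs, ys = np.sort(x), np.sort(y)

    def T(q):
        """Approximate Brenier map built from the empirical samples."""
        u = np.searchsorted(xs, q) / xs.size      # empirical CDF of mu at q
        idx = np.clip((u * ys.size).astype(int), 0, ys.size - 1)
        return ys[idx]                            # empirical quantile of nu

    print(T(np.array([-1.0, 0.0, 1.0])))          # image of a few source points
    ```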

  • Benjamin Mélinand

    Deep water

    12 March 2024 - 14:00 - IRMA conference room

    I will explain how one can derive and justify asymptotic models for the water waves equations under the so-called deep-water assumption.
  • Juliette Chabassier and Augustin Ernoult

    Understanding and predicting the acoustic properties of heritage musical instruments: the case of a Besson trumpet from the Musée de la musique in Paris

    19 March 2024 - 14:00 - IRMA conference room

    In this talk, we will use tools from acoustics, modeling and numerical analysis to better understand the behavior of a trumpet currently kept at the Musée de la Musique in Paris. We will show how direct simulation coupled with an inversion method makes it possible to reconstruct, in a non-destructive way, the internal shape of the instrument, the parameter that most strongly determines the emitted sound. From tomographic data, a first bore (the internal radius of the instrument) is reconstructed and allows the computation of the linear response of the instrument. The latter is compared with experimental data of the same nature, and an inverse problem is used to refine the reconstruction. These linear acoustics computations, in mixed pressure-flow form, rely on a spatial discretization by a non-standard finite element method whose convergence rests on original proof ingredients. Starting from the reconstructed bore, whose linear behavior is validated experimentally, a sound comparison is desirable. The time discretization is based on guaranteeing a discrete power balance and relies on a Störmer-Verlet scheme in the linear part of the pipe. The latter is proved stable for an impulsive source, including when the time step approaches its largest admissible value, again thanks to original proof ingredients. Finally, trumpet sounds are compared between those of a musician playing a copy of the trumpet built from the plan obtained by the bore reconstruction and those of a sound simulation of the trumpet coupled with a rudimentary nonlinear mouthpiece model. This work is a collaboration between the Cité de la Musique - Philharmonie de Paris, the Centre de Recherche et de Restauration des Musées de France, the Institut Technique Européen des Métiers de la Musique, the trumpet maker Jérôme Wiss, and the MAKUTU team at Inria Bordeaux.
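
    As a hedged illustration of the time-integration ingredient, a minimal Störmer-Verlet (leapfrog) scheme for the 1D linear wave equation in a uniform pipe, with the usual CFL restriction on the time step; constant coefficients and simplistic closed-end boundary conditions, nothing like the full mixed pressure-flow model of the talk:

    ```python
    import numpy as np

    c, L, n = 340.0, 1.0, 200                  # sound speed (m/s), pipe length (m), cells
    dx = L / n
    dt = 0.9 * dx / c                          # time step below the CFL limit dx / c
    x = np.linspace(0.0, L, n + 1)

    p_old = np.exp(-((x - 0.2 * L) / 0.05) ** 2)   # impulsive initial pressure profile
    p = p_old.copy()                                # zero initial velocity: p^1 = p^0

    for _ in range(500):
        p_new = np.empty_like(p)
        # Störmer-Verlet / leapfrog update of the second-order wave equation
        p_new[1:-1] = (2 * p[1:-1] - p_old[1:-1]
                       + (c * dt / dx) ** 2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
        p_new[0] = p_new[-1] = 0.0             # closed ends (homogeneous Dirichlet)
        p_old, p = p, p_new

    print("max |p| after 500 steps:", np.abs(p).max())   # stays bounded (stable)
    ```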
  • Louis Garenaux

    Stability of monostable fronts for scalar balance laws

    2 April 2024 - 14:00 - IRMA conference room

    Scalar balance laws are reaction-advection equations that arise naturally in physical and biological contexts. They are obtained by balancing, between two nearby instants, the variation of the quantity of interest. In this talk, I will focus on certain solutions propagating at constant speed, which connect two distinct equilibrium states. In particular, I will discuss the stability of these solutions, called fronts.
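
    For concreteness, a generic form of such an equation together with the traveling-front ansatz (the notation is assumed, not taken from the talk):

    ```latex
    \partial_t u + \partial_x f(u) = g(u), \qquad
    u(t,x) = U(x - ct), \quad U(-\infty) = u_-,\; U(+\infty) = u_+,
    ```

    where c is the front speed and u_- and u_+ are two distinct equilibria of the reaction term g.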
  • Fanny Lehmann

    3D wave propagation with Neural Operators

    16 April 2024 - 14:00 - To be confirmed

    Wave propagation simulations are at the core of numerous applications, and they have reached a high level of fidelity thanks to continuous improvements in numerical modelling and computational resources. When simulating wave propagation in the Earth’s crust, the properties of the propagation domain are subject to large epistemic uncertainties due to the difficulty of conducting geophysical measurements. However, the computational cost of physics-based simulations in three-dimensional (3D) heterogeneous domains prevents uncertainty analyses via a Monte Carlo-like approach. I introduce a surrogate model based on a Multiple Input Fourier Neural Operator (MIFNO), an extension of the popular Fourier Neural Operator [Li et al., 2021]. Fourier Neural Operators rely on the Fast Fourier Transform to learn the frequency-domain representation of Partial Differential Equations (PDEs). Our MIFNO predicts the solution of the 3D elastic wave equation from the properties of the propagation domain and the initial condition. Its main specificities are: (i) a factorized architecture that limits the number of parameters and improves scalability; (ii) a depth-to-time conversion that predicts 3D time-dependent variables without a 4D surrogate; (iii) an implementation depending on the input representation (structured grids and vectors). I will describe the theoretical foundations of the MIFNO architecture, illustrate its prediction ability, and quantify the prediction error. I will also show the benefits of transfer learning to fine-tune the MIFNO on a real earthquake and improve its accuracy. This allows us to quantify uncertainties on the solution of the elastic wave equation.
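
    A minimal sketch of the core Fourier layer underlying Fourier Neural Operators, in 1D with random weights; purely illustrative, since the MIFNO itself is a 3D, multi-input, factorized architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, modes = 256, 16                        # grid points, retained Fourier modes

    def fourier_layer(u, weights):
        """One FNO-style spectral convolution: FFT -> truncate & multiply -> iFFT."""
        u_hat = np.fft.rfft(u)                # go to frequency space
        out_hat = np.zeros_like(u_hat)
        out_hat[:modes] = weights * u_hat[:modes]   # act only on the low modes
        return np.fft.irfft(out_hat, n=n)     # back to physical space

    u = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)
    W = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)  # learned in practice
    v = np.maximum(fourier_layer(u, W), 0.0)  # spectral convolution + ReLU nonlinearity
    print(v.shape)
    ```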
  • Antoine Rousseau

    Large CFL schemes and porosity models for fast flood numerical models

    7 May 2024 - 14:00 - IRMA conference room

    In this talk, I will present two ways to reduce the computational cost of flood numerical models. The first is subscale parametrization through porosity modeling. It was first introduced by V. Guinot in the 2000s and is still at the core of LEMON’s scientific production. It reduces the cost of simulations through coarser meshes, while the subscale information is captured with pre-computations on the domain topography. The second part of the talk will be dedicated to large-CFL models to handle heterogeneous meshes with explicit time schemes. The main idea is to consider hydraulic information not only on the neighbouring cells of any flux interface, but to gather it from a pre-defined dependence domain through a convolution process. This allows much larger time steps (hence a reduced computational cost) without any mesh modification or accuracy loss. These two features are part of SW2D-LEMON, developed at Inria and Université Montpellier.
  • Christian Klein

    Numerical study of dispersive equations appearing in the theory of water waves

    14 May 2024 - 14:00 - Room 301

    We present a numerical study of solutions to equations appearing in the theory of surface waves, namely Boussinesq systems (integrable and non-integrable examples) and the Serre-Green-Naghdi (SGN) equations. Solitary waves in 1D are constructed, and their stability is studied numerically. The time evolution of localised initial data is explored. Of special interest is the role of the non-cavitation condition. The appearance of dispersive shock waves, zones of rapidly modulated oscillations, in the vicinity of shocks of the corresponding dispersionless systems is studied. In the context of the SGN equations, these questions are also addressed in 2D.

    This is work in collaboration with V. Duchene, S. Gavrilyuk and J.-C. Saut.
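
    As a reference point, one classical family of such Boussinesq systems is the abcd family of Bona, Chen and Saut, written here in normalized 1D form; the specific integrable and non-integrable examples of the talk may correspond to particular choices of the parameters a, b, c, d:

    ```latex
    \partial_t \eta + \partial_x u + \partial_x(\eta u) + a\,\partial_x^3 u - b\,\partial_x^2\partial_t \eta = 0,
    \qquad
    \partial_t u + \partial_x \eta + u\,\partial_x u + c\,\partial_x^3 \eta - d\,\partial_x^2\partial_t u = 0.
    ```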
  • François Vilar

    Monolithic convex property preserving scheme on unstructured grids and entropy consideration

    21 May 2024 - 14:00 - IRMA conference room

    This talk presents a subcell monolithic DG/FV convex property preserving scheme for solving systems of conservation laws on 2D unstructured grids. It is known that the discontinuous Galerkin (DG) method needs some sort of nonlinear limiting to avoid spurious oscillations or nonlinear instabilities which may lead to the crash of the code. The main idea motivating the present work is to improve the robustness of DG schemes, while preserving as much as possible their high accuracy and very precise subcell resolution. To do so, a convex blending of a high-order DG and a first-order finite volume (FV) scheme is performed locally, at the subcell scale, where it is needed. To this end, we first prove that it is possible to rewrite the DG scheme as a subcell FV scheme on a subgrid provided with specific numerical fluxes referred to as DG reconstructed fluxes. The monolithic DG/FV scheme is then defined as follows: to each face of each subcell are assigned two fluxes, a first-order FV one and a high-order reconstructed one, which are in the end blended in a convex way. The goal is then to determine, through analysis, optimal blending coefficients to achieve the desired properties (for instance positivity, absence of spurious oscillations, entropy inequalities) while preserving the high accuracy of the scheme. Numerical results on various types of problems will be presented to assess the very good performance of the designed method. A particular emphasis will be put on entropy considerations. By means of this subcell monolithic framework, we will attempt to address the following questions: what do we mean by entropy stability? What is the cost of such constraints? Are they absolutely needed?
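
    In formula form, the face-based blending amounts to the following (the notation is assumed, not taken from the talk):

    ```latex
    \widehat{F} = \theta\,\widehat{F}^{\mathrm{DG}} + (1 - \theta)\,\widehat{F}^{\mathrm{FV}},
    \qquad \theta \in [0, 1],
    ```

    where θ = 1 recovers the high-order DG reconstructed flux, θ = 0 falls back to the robust first-order FV flux, and the local blending coefficient is chosen so that the resulting subcell update satisfies the targeted convex properties.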
  • Thomas Bellotti

    Boundary conditions analysis for a two-unknowns lattice Boltzmann scheme

    17 September 2024 - 14:00 - IRMA conference room

    In this talk, I will first present key results from our recent works on the numerical analysis of lattice Boltzmann schemes, in which boundary conditions are ignored. These schemes can be used to simulate systems of conservation laws. Next, I will theoretically examine boundary conditions in lattice Boltzmann methods, focusing on a simplified two-unknowns model. By mapping lattice Boltzmann schemes to finite difference schemes, we enable rigorous consistency and stability analyses. We propose kinetic boundary conditions for inflow and outflow scenarios and address the trade-off between accuracy and stability, which we successfully mitigate. Consistency is analyzed using modified equations, while stability is assessed via GKS (Gustafsson, Kreiss, and Sundström) theory. For coarse meshes, where GKS theory may falter, we employ spectral and pseudo-spectral analyses of the scheme's matrix to explain low-resolution effects. Lastly, I will discuss potential research directions related to boundary conditions in kinetic schemes.
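
    A minimal sketch of the kind of scheme involved: a D1Q2 lattice Boltzmann discretization of the advection equation, with illustrative kinetic inflow/outflow closures for the populations that are missing after streaming. The relaxation parameter, speeds and boundary choices below are assumptions, not the ones analyzed in the talk:

    ```python
    import numpy as np

    # D1Q2 lattice Boltzmann scheme for the advection equation u_t + c u_x = 0
    nx, dx, dt = 200, 1.0 / 200, 1.0 / 400
    lam = dx / dt                  # lattice velocity
    c = 0.5 * lam                  # transport speed (|c| < lam)
    omega = 1.7                    # BGK relaxation parameter in (0, 2]

    x = (np.arange(nx) + 0.5) * dx
    u0 = np.exp(-200 * (x - 0.3) ** 2)
    feq = lambda u: (0.5 * u + 0.5 * c * u / lam, 0.5 * u - 0.5 * c * u / lam)
    fp, fm = feq(u0)               # start at equilibrium

    for _ in range(100):
        u = fp + fm
        ep, em = feq(u)
        fp += omega * (ep - fp)    # collision (relaxation towards equilibrium)
        fm += omega * (em - fm)
        fp = np.roll(fp, 1)        # streaming: f+ moves one cell to the right
        fm = np.roll(fm, -1)       # streaming: f- moves one cell to the left
        fp[0] = feq(0.0)[0]        # kinetic inflow: prescribe the missing population
        fm[-1] = fm[-2]            # kinetic outflow: copy the missing population

    u = fp + fm
    print("pulse centre ≈", x[np.argmax(u)], "(expected ≈ 0.3 + c*t = 0.55)")
    ```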
  • Michel Duprez

    A finite difference scheme with an optimal convergence for elliptic PDEs on domains defined by a level-set function

    24 September 2024 - 14:00 - IRMA seminar room

    In this talk, we will present a new finite difference method, on a regular grid, well suited for elliptic problems posed in a domain given by a level-set function. It is inspired by the phi-FEM paradigm, a fictitious-domain finite element method that imposes the boundary conditions through a level-set function describing the domain. We will consider here the Poisson equation with Dirichlet boundary conditions. We will prove the optimal convergence of our finite difference method in some Sobolev norms. Moreover, the discrete problem is proven to be well conditioned, i.e. the condition number of the associated matrix is of the same order as that of a standard method on a comparable mesh. We will then give some numerical results that confirm the optimal convergence in the considered Sobolev norms. Another advantage of our approach is that it uses standard libraries such as Numpy and Scipy in Python, and the implementation is very short (less than 100 lines of Python), making it a very low-cost numerical scheme in terms of computation time.
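
    To fix ideas about the setting (regular grid, level-set domain, Numpy/Scipy assembly and solve), here is a deliberately naive fictitious-domain finite-difference sketch for the Poisson-Dirichlet problem on a disk. It is not the method of the talk: the boundary treatment below is only first-order and has none of the announced conditioning guarantees, and the manufactured solution is an assumption used purely for testing:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Poisson problem -Δu = f on the disk {phi < 0}, u = g on its boundary,
    # discretized on a regular grid covering the square [-1, 1]^2.
    n = 101
    h = 2.0 / (n - 1)
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    phi = X**2 + Y**2 - 0.6**2                  # level-set of a disk of radius 0.6

    u_exact = np.sin(X) * np.exp(Y)             # manufactured (harmonic) solution
    f = np.zeros_like(X)                        # -Δ u_exact = 0
    g = u_exact                                 # Dirichlet data (natural extension)

    inside = phi < 0
    idx = -np.ones((n, n), dtype=int)
    idx[inside] = np.arange(inside.sum())       # unknown numbering of interior nodes

    rows, cols, vals, rhs = [], [], [], np.zeros(inside.sum())
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if not inside[i, j]:
                continue
            k = idx[i, j]
            rows.append(k); cols.append(k); vals.append(4.0 / h**2)
            rhs[k] += f[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if inside[i + di, j + dj]:
                    rows.append(k); cols.append(idx[i + di, j + dj]); vals.append(-1.0 / h**2)
                else:
                    # neighbour outside: move the extended Dirichlet data to the RHS
                    rhs[k] += g[i + di, j + dj] / h**2

    A = sp.csr_matrix((vals, (rows, cols)), shape=(inside.sum(), inside.sum()))
    u = spla.spsolve(A, rhs)
    print("max error on interior nodes:", np.abs(u - u_exact[inside]).max())
    ```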
  • Théophile Dolmaire

    Inelastic collapse of three particles in dimension d ≥ 2

    1 October 2024 - 14:00 - IRMA conference room

    The Boltzmann equation can be derived rigorously from a system of elastic hard spheres (Lanford’s theorem). In the case of large systems of particles that interact inelastically (sand, snow, interstellar dust), the derivation of the inelastic Boltzmann equation is still open. One major difficulty, already at the microscopic level, comes from the phenomenon of inelastic collapse, when infinitely many collisions take place in finite time.

    Assuming that the restitution coefficient r is constant, we obtain general results of convergence and asymptotics concerning the variables of the dynamical system describing a collapsing system of particles. We prove a complete classification of the singularities when a collapse of three particles takes place, obtaining only two possible orders of collisions between the particles: either the particles arrange themselves in a nearly-linear chain (studied in [3]), or they form a triangle, and we show that, after sufficiently many collisions, the particles collide according to a unique order of collisions, which is periodic. Finally, we construct explicit initial configurations leading to a nearly-linear collapse in a stable way, such that the angle between the particles at the time of collapse can be chosen a priori, with arbitrary precision.
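
    For reference, the standard inelastic hard-sphere collision law with constant restitution coefficient r (notation assumed): if ν is the unit vector joining the centers of the two colliding particles with pre-collisional velocities v and w, then

    ```latex
    v' = v - \tfrac{1+r}{2}\,\big[(v - w)\cdot\nu\big]\,\nu, \qquad
    w' = w + \tfrac{1+r}{2}\,\big[(v - w)\cdot\nu\big]\,\nu,
    ```

    so that the normal relative velocity is reversed and damped by the factor r, while the tangential component is unchanged.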

    The results are taken from [1] and [2], obtained in collaboration with Juan J. L. Velázquez (Universität Bonn).

    [1] Théophile Dolmaire, Juan J. L. Velázquez, “Collapse of inelastic hard spheres in dimension d ≥ 2”, to appear in Journal of Nonlinear Science, preprint arXiv:2402.13803v2 (02/2024).
    [2] Théophile Dolmaire, Juan J. L. Velázquez, “Properties of some dynamical systems for three collapsing inelastic particles”, preprint arXiv:2403.16905 (03/2024).
    [3] Tong Zhou, Leo P. Kadanoff, “Inelastic collapse of three particles”, Physical Review E, 54:1, 623–628 (07/1996).
  • Yanfei Xiang

    Neural operator preconditioning and neural network solvers for the solution of the parametric Helmholtz equations

    8 October 2024 - 14:00 - IRMA conference room

    In recent decades, scientific machine learning, building on deep learning methodologies, has found widespread application in scientific computing and computational engineering. This includes learning neural networks as solvers and learning functions through neural operators. Neural network solvers can be quite promising after a stroke of luck and proper training. However, they generally yield solutions with limited accuracy and exhibit potential issues in network generalizability. Besides, unlike classical numerical linear algebra methods, purely data-driven neural network solvers lack theoretical convergence guarantees. In this talk, we focus on learning different neural operators to obtain a preconditioner that accelerates the solution of the parametric Helmholtz equations by classical Krylov subspace methods. Since the goal is to learn a preconditioner rather than a solver, the required accuracy is not high, and neither is the training cost. In order to learn an effective neural network preconditioner without re-training, we train various neural operators with different neural network architectures. Furthermore, we investigate the influence of the training-dataset settings and of the hyper-parameter tuning of the different neural operators. Finally, we successfully test the generalizability of the trained operator from different physical aspects, and we apply it to accelerate the solution of a challenging practical human head CT scan dataset on a domain 64 times larger. In short, this operator-learning part illustrates that the performance of the trained inference depends on the choice of network architecture, the setting of the training datasets, and the tuning of the hyper-parameters of the neural networks. For the neural network solver part, we focus on improving its accuracy by incorporating an idea from classical subspace methods. Furthermore, we illustrate the trade-off between higher accuracy and network generalizability, and the difference between learning a neural operator and a neural network solver. In conclusion, this presentation demonstrates the efficiency of neural operator learning and neural network solvers for improving the simulation of the parametric Helmholtz equations.
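
    To fix ideas about how a learned operator plugs into a Krylov method, a hedged sketch in which the "learned" preconditioner is replaced by a simple stand-in (an incomplete LU factorization of a 1D Helmholtz-like matrix); in practice the network's inference would be wrapped in the same LinearOperator interface. The matrix, wavenumber and tolerances are placeholders:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 1D Helmholtz-like test matrix: discretization of -u'' - k^2 u with Dirichlet BCs
    n, k = 500, 8.0
    h = 1.0 / (n + 1)
    A = sp.diags([(2.0 / h**2 - k**2) * np.ones(n),
                  -np.ones(n - 1) / h**2, -np.ones(n - 1) / h**2],
                 [0, -1, 1], format="csc")
    b = np.ones(n)

    # Stand-in "learned" preconditioner: an ILU factorization. A neural operator
    # would instead return an approximate solve r -> u ≈ A^{-1} r via its inference.
    ilu = spla.spilu(A, drop_tol=1e-4)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    counts = {"plain": 0, "precond": 0}
    def counter(key):
        def cb(_):
            counts[key] += 1
        return cb

    x_plain, _ = spla.gmres(A, b, restart=30, maxiter=2000, callback=counter("plain"))
    x_prec, _ = spla.gmres(A, b, restart=30, maxiter=2000, M=M, callback=counter("precond"))
    print(counts)   # the preconditioned solve needs far fewer iterations
    ```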
  • Geneviève Dusson

    A nonlinear reduced model based on optimal transport for electronic structure calculations

    15 October 2024 - 14:00 - IRMA conference room

    Electronic structure calculations are widely used to predict the physical properties of molecules and materials. They require solving nonlinear partial differential equations and eigenvalue problems. These equations are generally numerically very demanding, especially since they are parameterized by the positions of the nuclei in the molecule and must be solved a large number of times when these positions vary. This is the case, for example, when simulating the dynamics of a molecule. In this talk, I will present a recent work aimed at efficiently calculating approximate solutions of such parameterized PDEs, with the objective of reducing the overall computational time. For this, I will present a nonlinear interpolation method between several solutions, based on optimal transport and using in particular Wasserstein barycenters. I will illustrate this method with simulations carried out on a 1D toy model.
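
    A minimal sketch of the Wasserstein-barycenter ingredient on a 1D toy example, where the W2 barycenter of densities is obtained by averaging their quantile functions; purely illustrative, with placeholder densities, grid and weight rather than electronic densities:

    ```python
    import numpy as np

    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]

    def density(center, width):
        """Normalized Gaussian bump, standing in for a 1D electron density."""
        rho = np.exp(-((x - center) / width) ** 2)
        return rho / (rho.sum() * dx)

    def quantile(rho, q):
        """Inverse CDF of a density sampled on the grid x."""
        cdf = np.cumsum(rho) * dx
        return np.interp(q, cdf, x)

    rho_a, rho_b = density(-2.0, 0.5), density(2.0, 1.0)   # two "snapshots"
    q = np.linspace(1e-4, 1 - 1e-4, 2001)
    w = 0.5                                                 # barycentric weight

    # W2 barycenter in 1D: average the two quantile functions
    bary_quantiles = (1 - w) * quantile(rho_a, q) + w * quantile(rho_b, q)
    print("barycenter mean position ≈", bary_quantiles.mean())   # ≈ 0 for w = 0.5
    ```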
  • Stephan Simonis

    Computing Statistical Solutions of Fluid Flows

    18 October 2024 - 14:00 - IRMA conference room

    Despite the supreme importance of fluid flow models, the well-posedness of the three-dimensional viscous and inviscid flow equations remains unsolved. Promising efforts have recently evolved around the concept of statistical solutions. In this talk, we present stochastic lattice Boltzmann methods for efficiently approximating statistical solutions to the incompressible Navier–Stokes equations in three spatial dimensions. Space-time adaptive kinetic relaxation frequencies are used to find stable and consistent numerical solutions along the inviscid limit toward the Euler equations. With single-level Monte Carlo and stochastic Galerkin methods, we approximate responses, e.g., to initial random perturbations of the flow field. The novel combinations of schemes are implemented in the parallel C++ data structure OpenLB and executed on heterogeneous high-performance computing machinery. Based on exploratory computations, we search for scaling of the energy spectra and structure functions in terms of Kolmogorov’s K41 theory. For the first time, we numerically approximate the limit of statistical solutions of the incompressible Navier–Stokes equations toward weak-strong unique statistical solutions of the incompressible Euler equations in three dimensions. Applications to wall-bounded turbulence and the potential to generate training data for novel generative artificial intelligence algorithms are discussed.
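
    For reference, the K41 scalings mentioned above, in standard notation with ε the mean dissipation rate and δu(r) a velocity increment over separation r:

    ```latex
    E(k) \sim C\,\varepsilon^{2/3}\,k^{-5/3}, \qquad
    S_p(r) = \langle |\delta u(r)|^p \rangle \sim C_p\,(\varepsilon r)^{p/3}.
    ```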
  • Tom Sprunck

    Two approaches to determining the geometry of a room from acoustics

    12 November 2024 - 14:00 - IRMA conference room

    Many recent works in audio signal processing address the following question: "Can one hear the shape of a room?". Although this question is inspired by Kac's famous article "Can one hear the shape of a drum?", the two problems differ considerably. Indeed, Kac examines the uniqueness of the shape of a drum given the eigenfrequencies of the Laplace-Dirichlet operator. In practice, it is impossible to access the eigenfrequencies or eigenfunctions directly from real measurements. We therefore consider the more realistic inverse problem of reconstructing the shape of a room from impulse-response measurements taken at a finite number of microphone positions in the room. Two very distinct approaches will be introduced in this talk. The first is based on the image-source model, which only accounts for specular reflections of the sound impulse on the walls (Neumann conditions). The reconstruction method developed uses tools from super-resolution to estimate the positions of the image sources from the time signals, and then deduces the geometric parameters of the room. The second approach relies on shape-optimization methods, incorporating more realistic boundary conditions in the frequency domain.
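
    For reference, a standard form of the image-source model: the room impulse response measured at a microphone position x_m is a sum of delayed and attenuated free-field impulses emitted by the image sources x_s, with c the speed of sound and a_s attenuation coefficients depending on the wall reflections (notation assumed):

    ```latex
    h(t) = \sum_{s} \frac{a_s}{4\pi\,\lVert x_m - x_s \rVert}\;
           \delta\!\Big(t - \frac{\lVert x_m - x_s \rVert}{c}\Big).
    ```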
  • Lucas Ertzbischoff

    On the hydrostatic limit of the Euler-Boussinesq equations

    19 November 2024 - 14:00 - IRMA conference room

    I will talk about the hydrostatic approximation of the Euler-Boussinesq equations, describing the evolution of an inviscid stratified fluid in which the vertical length scale is much smaller than the horizontal one. Even though it is of importance in oceanography, the justification of the hydrostatic limit in this context has remained an open problem. I will discuss some recent results showing that certain instability mechanisms may prevent this limit from holding. This is joint work with R. Bianchini (CNR Rome) and M. Coti Zelati (Imperial College London).
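
    Schematically, in the hydrostatic approximation the vertical momentum equation of the (here 2D, inviscid) Euler-Boussinesq system degenerates into hydrostatic balance; in generic notation, with ρ the buoyancy/density perturbation (the symbols are assumed, not taken from the talk):

    ```latex
    \partial_t u + u\,\partial_x u + w\,\partial_z u + \partial_x p = 0, \qquad
    \partial_z p = -\rho, \qquad
    \partial_x u + \partial_z w = 0, \qquad
    \partial_t \rho + u\,\partial_x \rho + w\,\partial_z \rho = 0.
    ```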
  • Rahul Barthwal

    On the Riemann problem for a reduced hyperbolic model governing two-phase thin film flow

    26 November 2024 - 14:00 - IRMA seminar room

    In this talk, we discuss the Riemann problem for a new hyperbolic model governing two-phase thin film flow. In the first part of the talk, we discuss the modelling of surface-tension-driven two-phase thin film flow under the influence of an anti-surfactant, and obtain a reduced 4×4 triangular hyperbolic model in one dimension under the assumption that the solute is perfectly soluble and that capillarity and diffusivity effects are negligible. In the second part of the talk, we discuss the Riemann problem and an exact solver and a Godunov solver for this reduced model. We end with some future directions for higher-order stable schemes based on generalized Riemann problems, also referred to as GRP solvers. This is joint work with Christian Rohde (University of Stuttgart) and Yue Wang (IAPCM, Beijing, China).
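
    As a reminder of the building block involved, a minimal Godunov step for a scalar conservation law (Burgers' equation with its exact Riemann solver); the talk's solver targets a 4×4 triangular system, so this is only the scalar prototype, with placeholder initial data and CFL number:

    ```python
    import numpy as np

    def godunov_flux_burgers(uL, uR):
        """Exact-Riemann-solver (Godunov) flux for u_t + (u^2/2)_x = 0."""
        f = lambda u: 0.5 * u**2
        flux = np.where(uL <= uR,
                        np.minimum(f(uL), f(uR)),      # rarefaction side
                        np.maximum(f(uL), f(uR)))      # shock side
        # transonic rarefaction: the sonic point u = 0 lies inside the fan
        return np.where((uL < 0.0) & (uR > 0.0), 0.0, flux)

    nx, dx = 400, 1.0 / 400
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.5, 1.0, 0.0)                    # Riemann initial data (shock)
    dt = 0.4 * dx / np.max(np.abs(u) + 1e-12)          # CFL-limited time step

    for _ in range(200):
        F = godunov_flux_burgers(u[:-1], u[1:])        # fluxes at interior interfaces
        u[1:-1] -= dt / dx * (F[1:] - F[:-1])          # conservative update

    # the shock moves at speed 1/2: expect a position near 0.6 at t = 0.2
    print("shock position ≈", x[np.argmin(np.abs(u - 0.5))])
    ```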
  • Ivan Dokmanić

    A spring-block theory of feature learning in deep neural networks

    3 December 2024 - 14:00 - IRMA conference room

    A central question in deep learning is how deep neural networks (DNNs) learn features. DNN layers progressively collapse data into a regular low-dimensional geometry. This collective effect of nonlinearity, noise, learning rate, width, depth, and numerous other parameters has eluded first-principles theories built from microscopic neuronal dynamics. We discovered a noise–nonlinearity phase diagram that highlights where shallow or deep layers learn features more effectively. I will describe a macroscopic mechanical theory of feature learning that accurately reproduces this phase diagram, offering a clear intuition for why and how some DNNs are "lazy" and some are "active", and relating the distribution of feature learning over layers to test accuracy. Joint work with Cheng Shi and Liming Pan.