PRESENTATION TITLE:
Polydisperse multiphase flows are characterized by the presence of a population of particles, droplets, or bubbles whose size, density, and other physical properties may vary and evolve in space and time. This population, which constitutes the so-called disperse phase, typically interacts with another phase (the carrier phase), exchanging mass, momentum, and energy. Describing the behavior of these flows entails studying the coupled spatio-temporal evolution of the population of entities forming the disperse phase and of the carrier phase. The former requires tracking the changes in space and time of a joint number density function (NDF) of the disperse-phase properties (e.g., size, velocity, composition, charge), whose dimensionality, and related computational cost, rapidly increases with the number and complexity of the physical phenomena under consideration (e.g., accounting for particle size and velocity alone leads to a seven-dimensional problem). To tackle this dimensionality challenge and maintain an acceptable computational cost for applications, it is possible to directly study the evolution of statistical quantities of the NDF, called moments, by solving a set of partial differential equations with fluxes and source terms that depend on integrals of the NDF and, consequently, require closures. Quadrature-based moment methods are a robust approach to obtain such closures in the context of Euler-Euler multiphase flow models and offer a systematic procedure to numerically reconstruct the NDF from a vector of its moments. In this lecture, the challenges of describing polydisperse multiphase flows will be introduced by considering gas-liquid and gas-particle flows as examples. Multidimensional quadrature-based moment methods will be discussed in the context of polydisperse multiphase flows. Closure approaches for the moment spatial fluxes and for the source terms in the moment equations that guarantee the preservation of moment realizability will be discussed, as well as coupling strategies to robustly solve the moment equations for the disperse phase together with the equations for the carrier phase. Finally, example results demonstrating the predictive capabilities of the approach using an open-source implementation of quadrature-based moment methods, OpenQBMM, will be shown.
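As a concrete illustration of the moment-inversion step at the heart of quadrature-based moment methods, the sketch below implements the classical Wheeler algorithm in NumPy: it recovers n quadrature nodes and weights from the first 2n moments of a univariate NDF. This is a minimal, generic example for illustration only; it does not reproduce OpenQBMM's actual C++/OpenFOAM implementation.

```python
import numpy as np

def wheeler(moments):
    """Recover n quadrature nodes/weights from the first 2n moments
    (classical Wheeler algorithm for a univariate NDF)."""
    m = np.asarray(moments, dtype=float)
    n = m.size // 2
    a, b = np.zeros(n), np.zeros(n)
    # sigma[0] is the auxiliary "-1" row (zeros); sigma[1] holds the raw moments
    sigma = np.zeros((n + 1, 2 * n))
    sigma[1] = m
    a[0] = m[1] / m[0]
    for k in range(2, n + 1):
        for l in range(k - 1, 2 * n - k + 1):
            sigma[k, l] = (sigma[k - 1, l + 1] - a[k - 2] * sigma[k - 1, l]
                           - b[k - 2] * sigma[k - 2, l])
        a[k - 1] = (sigma[k, k] / sigma[k, k - 1]
                    - sigma[k - 1, k - 1] / sigma[k - 1, k - 2])
        b[k - 1] = sigma[k, k - 1] / sigma[k - 1, k - 2]
    # Jacobi matrix: its eigenvalues are the quadrature nodes; the weights
    # follow from the first component of each eigenvector
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, m[0] * vecs[0] ** 2

# Moments of the two-node distribution 0.4*delta(x-1) + 0.6*delta(x-3)
m = [0.4 * 1**k + 0.6 * 3**k for k in range(4)]
print(wheeler(m))  # recovers nodes [1, 3] and weights [0.4, 0.6]
```

The closing test illustrates the realizability connection mentioned above: the moment vector of a valid (non-negative) NDF is inverted exactly into a non-negative-weight quadrature.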
PRESENTATION TITLE:
The virtual element method (VEM) is a recent extension of the finite element method that permits arbitrary polygonal element geometry in two dimensions. This mesh flexibility means that the VEM is well-suited to problems involving adaptive remeshing. In this work an energy error estimator has been implemented using a super-convergent patch recovery procedure. Using this error estimator, elements are flagged for refinement or coarsening. The refinement (D van Huyssteen et al., CMAME, 393(1):114849 (2022)) and coarsening (D van Huyssteen et al., CMAME, 418(1):116507 (2024)) of the elements are performed using novel remeshing procedures that are suitable for the arbitrary polygonal element geometries permitted by the VEM. The combined remeshing procedure has been implemented for the case of two-dimensional linear elastic problems and represents the first example of a fully adaptive VEM (D van Huyssteen et al., arXiv, 2407.13665 (2024)). Of further significance is the novel notion of quasi-optimal meshes. A quasi-optimal mesh is one that meets a specified energy error target and exhibits a quasi-even error distribution; that is, all element-level errors fall within a satisfactory range defined in terms of the specified target. Through the novel fully adaptive remeshing procedure, elements are refined and coarsened accordingly until a quasi-even error distribution is achieved. The remeshing procedure is capable of generating a quasi-optimal mesh from any initial mesh and for any specified error target (D van Huyssteen et al., arXiv, 2407.13665 (2024)).
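To make the marking step concrete, the sketch below shows one plausible way to flag elements for refinement or coarsening toward a quasi-even error distribution. The quadrature split of the global target across elements and the width of the admissible band are illustrative assumptions, not the criteria defined in the cited papers.

```python
import numpy as np

def mark_elements(eta, eta_target, band=0.5):
    """Flag elements whose estimated error is too large (refine) or
    unnecessarily small (coarsen) relative to a per-element error target.
    Assumes element errors add in quadrature; band width is illustrative."""
    eta = np.asarray(eta, dtype=float)
    eta_bar = eta_target / np.sqrt(eta.size)  # per-element share of the target
    refine = eta > (1.0 + band) * eta_bar     # error above the admissible range
    coarsen = eta < (1.0 - band) * eta_bar    # error below the admissible range
    return refine, coarsen

# Example: element-level error estimates and a global target of 0.05
eta = np.array([0.040, 0.004, 0.020, 0.001])
refine, coarsen = mark_elements(eta, eta_target=0.05)
print(refine)   # [ True False False False]
print(coarsen)  # [False  True False  True]
```

In a full adaptive loop, this marking would be repeated after each solve-estimate-remesh cycle until neither flag array contains a True entry, at which point the quasi-even error distribution described above has been reached.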
PRESENTATION TITLE:
Most available multiphase flow solvers resort to numerical methods based on the one-fluid approach, which consists in introducing averaged quantities in one or several cells around the position of the interface. Understanding the consequences of this regularization procedure is important both to develop numerical methods that reduce discretization errors and to design novel adaptive mesh refinement methods that produce grids minimizing those errors. In this talk we will theoretically discuss the influence of an arbitrary regularization procedure in the continuum limit in problems where both the solution of the sharp-interface problem and its corresponding regularized problem can be analytically computed. In general, we show that the error introduced by any regularization can be decomposed into an outer problem and an inner problem that imposes jump conditions for the error and the error flux in the outer region. Although the harmonic mean is shown to be exact in the outer regions for one-dimensional problems, the optimal choice of the averaging rule is shown to be problem dependent for multidimensional flows. Interestingly, the proposed model is shown to reproduce well the numerical errors observed in a variety of problems related to the solution of the Poisson equation and also the Navier-Stokes equations, where the introduction of an artificial regularization length modifies the growth rate of classical instabilities.
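The statement about the harmonic mean can be checked in one dimension with a few lines of code. The sketch below solves -(beta u')' = 0 with a coefficient jump located halfway between two grid nodes and compares arithmetic and harmonic averaging of the face coefficients; it is a toy illustration of the claim, not the analysis presented in the talk.

```python
import numpy as np

def max_error(face_avg, beta1=1.0, beta2=100.0, n=5):
    """Solve -(beta u')' = 0 on [0,1] with u(0)=0, u(1)=1 on n+1 nodes.
    beta jumps from beta1 to beta2 at x = 0.5, midway between two nodes
    (n odd); face_avg sets the averaging rule for the face coefficients.
    The 1/h^2 factor cancels because the source term is zero."""
    x = np.linspace(0.0, 1.0, n + 1)
    beta = np.where(x < 0.5, beta1, beta2)
    bf = face_avg(beta[:-1], beta[1:])        # one coefficient per face
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):                     # interior nodes
        A[i - 1, i - 1] = -(bf[i - 1] + bf[i])
        if i > 1:
            A[i - 1, i - 2] = bf[i - 1]
        if i < n - 1:
            A[i - 1, i] = bf[i]
    rhs[-1] = -bf[n - 1]                      # contribution of u(1) = 1
    u = np.concatenate(([0.0], np.linalg.solve(A, rhs), [1.0]))
    q = 1.0 / (0.5 / beta1 + 0.5 / beta2)     # exact (constant) flux
    u_exact = np.where(x <= 0.5, q * x / beta1,
                       0.5 * q / beta1 + q * (x - 0.5) / beta2)
    return np.max(np.abs(u - u_exact))

arithmetic = lambda a, b: 0.5 * (a + b)
harmonic = lambda a, b: 2.0 * a * b / (a + b)
print("arithmetic mean:", max_error(arithmetic))  # large error near the jump
print("harmonic mean:  ", max_error(harmonic))    # exact to round-off
```

The harmonic mean is exact here because it equals the series conductance of the two half-intervals on either side of the jump, so the nodal values away from the interface carry no error, in line with the one-dimensional result quoted above.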
PRESENTATION TITLE:
Over the last five decades computational mechanics has matured rapidly. In each of the core disciplines (fluid dynamics, structural dynamics, combustion, heat transfer, acoustics, electromagnetics, mass transfer, control, etc.) robust and efficient numerical techniques have been developed, and a large code base of academic, open-source and commercial codes is available. The acquisition of many of these commercial codes by the leading CAD vendors attests to the desire to streamline the typical computational mechanics workflow (CAD, boundary conditions, loads, physical parameters, solution with possible mesh adaptation, post-processing) by integrating all parts into a single application. The ability to obtain accurate and timely results in each of the core disciplines or métiers has prompted the desire to reach the same degree of simplicity in computing multi-physics problems. A large class of coupled problems exhibits a large disparity of timescales. Examples include evaporative cooling (where the flowfield may be established in seconds while the temperature field requires minutes), sedimentation of rivers and estuaries (where the flowfield is established in seconds while the filling up of a channel takes weeks), deposition of cholesterol in arteries (where the flowfield is established after two heartbeats while the deposition can take years), the wear of semi-autogenous grinding (SAG) mills (where the movement of steel balls and mineral-rich rocks and mud is established in minutes while the wear of the liners can take hours), and many others. We denote this class of problems as 'barely coupled'. In each of these cases a coupling is clearly present. However, due to physics and nonlinear effects one cannot simply run a fully coupled time discretization with very large timesteps; this would lead to incorrect results. It then becomes very costly to run in a strictly time-accurate manner. The recourse advocated here is to run each problem to a quasi steady-state, and to couple the different disciplines in a loose manner. The talk will describe in detail the techniques used, some fundamental results regarding stability and convergence, and show several examples.
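A minimal sketch of the quasi-steady loose coupling advocated here is given below, with deliberately artificial stand-in physics: a scalar "flow" variable relaxed to steady state inside each large timestep of a slowly accumulating "wear" variable. All names and models are hypothetical and serve only to show the structure of the outer loop, not the techniques of the talk.

```python
import numpy as np

def fast_steady_state(wear, tol=1e-10, max_iter=10_000):
    """Stand-in for the fast discipline (e.g. a flow field): relax to a
    quasi steady-state while the slow variables are held frozen.
    The fixed-point model u -> cos(wear) is purely illustrative."""
    u = 0.0
    for _ in range(max_iter):
        u_new = u + 0.5 * (np.cos(wear) - u)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return u_new

# Outer loop: advance the slow discipline (e.g. liner wear) with large
# timesteps, holding the converged fast solution frozen across each step.
wear, dT = 0.0, 10.0                      # large slow-scale timestep
for step in range(5):
    u = fast_steady_state(wear)           # converge fast physics at frozen wear
    wear += dT * 1e-3 * abs(u)            # advance slow physics at frozen u
    print(f"step {step}: u = {u:.6f}, wear = {wear:.6f}")
```

The key design choice this structure reflects is that time accuracy is sacrificed only on the fast scale, where the solution is quasi-steady anyway, while the slow discipline can take timesteps sized to its own physics.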