Sea ice regulates heat, moisture and salinity in the polar oceans. In winter, sea ice insulates relatively warm ocean water from the colder air, except where fractures allow heat and water vapor to escape from the ocean to the atmosphere. This exchange affects cloud cover and precipitation. Freezing of sea water exposed in fractures also ejects brine into the ocean. These factors influence worldwide ocean currents, weather patterns and ecosystems. An elastic-decohesive constitutive model for pack ice has been developed that explicitly accounts for fractures. The constitutive model is based on elasticity combined with a cohesive crack law that predicts the initiation, orientation and opening of fractures, and also includes a simple closing/refreezing model. The model is constructed to transition from brittle failure under tension, to brittle failure under moderate compression, and to plastic-like faulting under large confinement, as observed in laboratory experiments. Which of these failure modes occurs in the model depends on the stress state in the ice. Where the transitions occur in stress space depends on the material parameters and can be adjusted based on empirical data. The model is implemented in the material-point method (MPM), which is based on a Lagrangian set of material points carrying mass, position, velocity, stress and other material parameters, together with a background mesh on which the momentum equation is solved. This method avoids the convection errors associated with fully Eulerian methods as well as the mesh entanglement that can occur with fully Lagrangian methods under large deformations. Example calculations using the elastic-decohesive constitutive model are performed for the Arctic, where predictions can be validated against satellite observations.
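The particle-grid cycle that MPM is built on can be sketched in one dimension as follows; this is a generic illustration with linear shape functions and hypothetical variable names, not the authors' sea-ice implementation:

```python
import numpy as np

def mpm_step(x_p, v_p, m_p, stress_p, dx, n_nodes, dt, rho0=1.0):
    """One explicit 1D MPM step with linear (tent) shape functions.

    Particles carry mass, position, velocity and stress; the background
    grid is used only to solve the momentum balance, then discarded."""
    m_g = np.zeros(n_nodes)   # grid mass
    p_g = np.zeros(n_nodes)   # grid momentum
    f_g = np.zeros(n_nodes)   # grid internal force

    # Particle-to-grid transfer of mass, momentum and internal force.
    for p in range(len(x_p)):
        i = int(x_p[p] / dx)                  # left node of the particle's cell
        xi = x_p[p] / dx - i                  # local coordinate in [0, 1)
        for node, w, dw in ((i, 1.0 - xi, -1.0 / dx), (i + 1, xi, 1.0 / dx)):
            m_g[node] += w * m_p[p]
            p_g[node] += w * m_p[p] * v_p[p]
            f_g[node] -= dw * stress_p[p] * m_p[p] / rho0  # particle volume ~ m/rho0

    # Solve the momentum equation on the grid (no external forces here).
    p_g += dt * f_g

    # Grid-to-particle: update particle velocity and position.
    for p in range(len(x_p)):
        i = int(x_p[p] / dx)
        xi = x_p[p] / dx - i
        v_new = 0.0
        for node, w in ((i, 1.0 - xi), (i + 1, xi)):
            if m_g[node] > 0.0:
                v_new += w * p_g[node] / m_g[node]
        v_p[p] = v_new
        x_p[p] += dt * v_p[p]
    return x_p, v_p
```

A stress-free particle advected this way keeps its velocity exactly, since the shape functions form a partition of unity.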
The time-dependent mechanical behavior of composite media is the result of intricate interactions between elastic and inelastic deformation processes operating within the different constitutive phases. A key consequence of these interactions is that microscopic constitutive descriptions based on finite sets of internal variables give rise to macroscopic constitutive descriptions with an infinity of internal variables. This fact has motivated several attempts to generate approximate macroscopic descriptions based on reduced sets of effective internal variables that provide a partial but hopefully accurate characterization of the evolving microscopic state of the composite. In this talk we will present recent advances in the development of a class of reduced-order descriptions for viscoelastic composite media based on mean-field homogenization. The focus here is placed on the structure of the effective viscoelastic constitutive relations, and on accurate approximations of those relations, provided the underlying microstructure is known with sufficient precision that purely elastic properties can be accurately determined. Sample results for rigidly reinforced viscoelastic solids subject to complex deformation histories are reported to highlight the capabilities and limitations of the scheme and to motivate possible improvements.
Traditionally, lattice materials comprise a micro-architectured lattice and intervening porosity. The mechanical properties are sensitive to the topology, relative density and length scale, but usually much less sensitive to the degree of imperfection. But what if we fill the porosity with an inviscid, incompressible fluid? The resulting mechanical properties are sensitive to the degree to which fluid can leak from one cell into the next. The macroscopic in-plane yield surface of a hexagonal honeycomb, filled with such an inviscid, incompressible fluid, has been calculated and analytical models have been obtained for the collapse modes. Numerical simulations reveal that the finite-strain response can comprise a snap-back, softening behaviour under uniaxial compression. This has been explored in some detail, and the transverse propagation of a shear band, and its subsequent band broadening, is reminiscent of microbuckle propagation in a fibre composite. A Maxwell-line construction can be used on the unit-cell response in order to determine the steady-state propagation stress. Other competing collapse modes exist that exhibit strong softening but do not admit the existence of a localisation (shear) band. If time permits, some remarks will be made on the actuation of a lattice due to swelling induced by the presence of a liquid phase.
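The Maxwell-line construction can be illustrated numerically: for a non-monotonic unit-cell stress-strain curve, the steady-state propagation stress is the horizontal line cutting off equal areas above and below the curve. The sketch below uses a synthetic cubic curve and bisection; both the function names and the sample curve are illustrative assumptions, not the authors' honeycomb model:

```python
import numpy as np

def maxwell_stress(f, eps_range, sigma_lo, sigma_hi, tol=1e-8):
    """Maxwell-line (equal-area) construction: bisect for the stress
    level whose horizontal line encloses zero net area with the
    non-monotonic stress-strain curve f between its outermost
    intersections."""
    def net_area(sigma):
        eps = np.linspace(eps_range[0], eps_range[1], 20001)
        s = f(eps) - sigma
        cross = np.where(np.diff(np.sign(s)) != 0)[0]
        e1, e3 = eps[cross[0]], eps[cross[-1] + 1]  # outermost intersections
        seg = np.linspace(e1, e3, 20001)
        vals = f(seg) - sigma
        return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(seg))
    while sigma_hi - sigma_lo > tol:
        mid = 0.5 * (sigma_lo + sigma_hi)
        if net_area(mid) > 0.0:   # line too low: raise it
            sigma_lo = mid
        else:
            sigma_hi = mid
    return 0.5 * (sigma_lo + sigma_hi)

# Synthetic up-down-up curve, symmetric about its inflection at eps = 0,
# so the exact Maxwell stress is f(0) = 10.
curve = lambda e: e**3 - 3.0 * e + 10.0
sigma_m = maxwell_stress(curve, (-2.5, 2.5), 8.5, 11.5)
```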
The current pandemic has renewed interest in pathogen propagation, transmission and mitigation. In particular, the relative impact of transmission via large droplets versus small droplets or aerosols, as well as possible changes to existing heating, ventilation and air-conditioning (HVAC) systems, has been debated throughout 2020. This in turn has led to a vigorous effort to model all of the phenomena associated with pathogen propagation, transmission and mitigation via advanced computational techniques, from the molecular scale to the scale of the built environment. In order to derive the required physical and numerical models, the talk will start by summarizing the current understanding of pathogen (and in particular virus) transmission and mitigation. The ordinary and partial differential equations that describe the flow, the particles and possibly the UV radiation loads in rooms or HVAC ducts will be presented, as well as proper numerical methods to solve them in an expedient way. Thereafter, the motion of pedestrians, as well as proper ways to couple computational fluid dynamics (CFD) and computational crowd dynamics (CCD) to enable high-fidelity pathogen transmission and infection simulations, is treated. Numerous examples (classrooms, offices, hospitals, subway cars, airplanes) of studies carried out over the last year indicate that high-fidelity simulations of pathogen propagation, transmission and mitigation in the built environment have reached a high degree of sophistication, offering a quantum leap in accuracy over simpler probabilistic models. This is particularly the case when considering the propagation of pathogens via aerosols in the presence of moving pedestrians.
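For context, the simpler probabilistic models referred to above are typified by the classical Wells-Riley formulation for a well-mixed room; a minimal sketch, with illustrative parameter values:

```python
import math

def wells_riley(n_infectors, quanta_rate, breathing_rate,
                exposure_time, ventilation_rate):
    """Wells-Riley estimate of airborne infection probability in a
    well-mixed room: P = 1 - exp(-I q p t / Q), with I infectors,
    quanta generation rate q [quanta/h], breathing rate p [m^3/h],
    exposure time t [h] and outdoor-air ventilation rate Q [m^3/h]."""
    dose = (n_infectors * quanta_rate * breathing_rate
            * exposure_time / ventilation_rate)
    return 1.0 - math.exp(-dose)
```

The well-mixed assumption is exactly what the high-fidelity CFD/CCD simulations discussed in the talk relax.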
An introduction to some basic aspects of reliability analysis in structural mechanics is presented. A first part dealing with reliability analysis of reinforced concrete structures considering stochastic fields is developed. Classical methods, such as FORM and Monte Carlo methods with importance sampling, are employed. Viscoelastic (with aging) as well as corrosion (due to carbonation) effects on the structural behavior are also analyzed. Reliability analysis and Reliability-Based Design Optimization (RBDO) of laminated composite material structures are presented in a second part. As in the first part, FORM and Monte Carlo methods with importance sampling are employed. The Finite Element Method (FEM), Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs) are used. Two types of artificial neural networks with supervised training and feed-forward architecture are employed (multilayer perceptron networks and radial-basis-function networks). Artificial neural networks are useful to substantially reduce the computational processing time. Genetic algorithms are a suitable technique to optimize laminated composite material structures taking into account problem characteristics such as discrete variables and non-smooth responses, avoiding stagnation at local minima (genetic algorithms do not use variable gradients, unlike deterministic optimization methods). Finally, some conclusions and remarks are summarized.
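The Monte Carlo with importance sampling approach can be sketched for a scalar limit-state function; this generic example centers the sampling density near the design point, as FORM would locate it, and is not the specific reliability code used in the talk:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def failure_prob_is(g, center, n=100_000, seed=0):
    """Estimate P[g(X) < 0] for X ~ N(0, 1) by importance sampling:
    draw from N(center, 1) near the design point and reweight each
    sample by the density ratio phi(y) / h(y)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(center, 1.0, n)
    w = np.exp(-0.5 * y**2 + 0.5 * (y - center) ** 2)  # phi(y) / h(y)
    return float(np.mean((g(y) < 0.0) * w))

beta = 3.0                       # reliability index of this toy limit state
g = lambda x: beta - x           # failure when x exceeds beta
p_is = failure_prob_is(g, center=beta)
p_exact = Phi(-beta)             # exact answer for comparison
```

Crude Monte Carlo would need orders of magnitude more samples to resolve a failure probability of this size; shifting the sampling density towards the failure region is what makes the estimate affordable.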
Over the last years, we have been studying methods for fluid-structure interaction at small scales. By small we mean systems that are microscopic, such as microbes or single cells. Such systems must overcome huge viscous effects, which force their self-propulsion to involve large displacements of appendages and/or large deformations of the body. Our objective is to build a mathematically sound discrete approximation of microswimmers, incorporating several interactions with the environment. Further, we would like to achieve a fully parameterizable model built upon state-of-the-art open finite element libraries, so that people (or algorithms) can play with it and build upon it. We have started from the simpler cases of rigid bodies and articulated sets of rigid bodies embedded in Newtonian or quasi-Newtonian fluids, and recently incorporated Cosserat one-dimensional solids (flexible rods). These implementations are being carried out on the FEniCS and Firedrake platforms. In this talk we will describe the mathematical problem and the adopted (ALE-based) formulation in the context of other popular alternatives, such as the immersed boundary and boundary element methods. We will also show some examples of microswimmers in action, and some incipient attempts at making them learn to swim and feed by themselves.
Studies are currently being carried out on the assessment of the remaining life of the post-tensioning system of nuclear power plants. Specifically, a structural analysis of the containment building is carried out to identify the moment when the prestressing system reaches the minimum design stress, even beyond the 40-year life contemplated by existing analyses. The results of this structural analysis make it possible to characterize the aging of the post-tensioning system of nuclear containments and then incorporate it into the aging management plan. Usually, a structural analysis is required to (i) evaluate the structural behavior over the newly requested period of life and verify that the post-tensioning states already achieved are maintained, and (ii) demonstrate that the effects of aging can be adequately managed during the new periods of operation. This type of study allows us to examine the possibility of extending the operational horizon of the containment buildings of nuclear power plants up to 60 years of life. To meet these requirements, a non-linear finite element study of the containment behavior is carried out, taking into account damage in the concrete, plasticity in the passive steel and the decay of stress in the tendons of the prestressing system, with the objective of assessing the extension of its useful life. The structural analysis starts from the state of the containment immediately after its construction in reinforced concrete, including the liner in place, and then applies the initial post-tensioning state. Later on, the stresses in the tendons of the structure change due to maintenance operations and rheological and stress-relaxation effects. All these effects are simulated by means of a non-linear numerical calculation, which makes it possible to obtain the remaining stresses in the tendons at a given instant of time. In addition, the calculated stresses are usually compared with those measured during the experimental control campaigns.
The Sun is a typical magnetic star in our galaxy, with a surface temperature of about 5000 K. The Sun's atmosphere is extremely hot (10^6 to 10^7 K) and is composed of a thin, highly magnetized and stratified plasma. A great variety of transient phenomena take place there; among them are solar flares. When flares occur, the coronal plasma is heated locally to temperatures of the order of 10^7 K in extremely short times. In 1988 Parker suggested that magnetic reconnection is the mechanism that dominates the release of energy. The coronal magnetic fields, anchored in the turbulent photospheric plasma, suffer strong deformations and form complex current sheets. When the intensity of the current increases beyond a certain threshold, magnetic reconnection dominates (locally) the coronal dynamics and the magnetic energy is released in the form of kinetic and thermal energy (Parker, 1983; Parker, 1988). Two alternative approaches have been developed to evaluate the predictive capacity of Parker's model: the first assumes that the coronal plasma is a magnetic fluid that can be described by the equations of magnetohydrodynamics, and the second hypothesizes that the solar corona is in a state of self-organized criticality, so that the dynamics of the coronal magnetic field can be modeled by a cellular automaton. In this work we present both approaches and discuss their strengths and weaknesses, with particular interest in the prediction of solar flares and their impact on space weather.
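The cellular-automaton approach is typified by Bak-Tang-Wiesenfeld-style sandpile models, in which avalanche sizes play the role of flare energies. A minimal sketch follows (illustrative only, not the specific coronal automaton discussed here):

```python
import numpy as np

def drive_and_relax(grid, i, j, threshold=4):
    """Add one unit of 'magnetic stress' at (i, j), then topple any site
    at or above the threshold, passing one unit to each neighbour (open
    boundaries lose units). Returns the avalanche size: the number of
    toppling events, the analogue of the energy released in a flare."""
    grid[i, j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        a, b = unstable.pop()
        if grid[a, b] < threshold:
            continue
        grid[a, b] -= threshold
        size += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nb = a + da, b + db
            if 0 <= na < grid.shape[0] and 0 <= nb < grid.shape[1]:
                grid[na, nb] += 1
                if grid[na, nb] >= threshold:
                    unstable.append((na, nb))
    return size
```

Driven slowly for long times, such an automaton settles into a critical state with power-law-distributed avalanche sizes, the signature of self-organized criticality.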
Inundation events such as tsunamis and storm surges pose a significant threat to coastal communities and infrastructure around the world. The damage from such events is often not only the result of the flowing water itself, but also of transported debris. Such debris is composed of collections of whatever objects the inundation flows mobilize as they move through a stricken region, and can range from vehicles, to watercraft, to houses and small buildings. The purpose of our work is to address this challenge directly by experimentally and numerically studying flow-driven ensembles of debris impacting structures, with a focus on quantifying impact forces and damming effects in a manner valid for this kind of highly nonlinear, chaotic system. Among many possible numerical methods, the Material Point Method (MPM) emerges as the most effective for modeling interacting deformable bodies. However, the method is limited in the practical levels of resolution that can be achieved at scale due to computational costs. In this context, GPU-based MPM implementations offer an alternative approach. Graphics Processing Units (GPUs) accelerate Material Point Method (MPM) programs on the order of 100x, but limited memory and bandwidth restrict simulation size. Recent software advances in computer graphics now permit multi-GPU MPM for engineering projects with many material points (10,000,000+) and grid cells (1,000,000,000+). Hardware trends suggest rising GPU viability, with an expected doubling of (i) video memory, (ii) bandwidth, and (iii) computational cores, along with (iv) increased accessibility, in the next four years. We present our expansion of an optimized, open-source multi-GPU MPM code (Claymore, https://github.com/penn-graphics-research/claymore) from computer graphics to engineering, where certain values (e.g., stress, strain, state variables, forces) must be held to high standards.
Building on open-source codes leverages prior development, requiring only basic I/O changes and vetting of physical accuracy (e.g., material models, shape functions, transfer schemes). This code is used to model real-world flume experiments performed in 12 m x 0.9 m x 1.2 m (UW Flume) and 120 m x 4.5 m x 5.4 m (OSU Flume) facilities. Stochastic debris-field transport, jammed debris formations, and precise loadings are captured and extrapolated for probabilistic structural design against tsunamis.
Next-generation numerical algorithms for accurately solving advanced flow problems with complex geometries will undoubtedly rely on robust and efficient high-order accurate formulations for unstructured grids. Adaptive high-order schemes have the potential to drastically improve the computational efficiency required to achieve a desired error level, but almost all of them are less robust than their low-order counterparts when the numerical solution contains discontinuities or even under-resolved physical features (e.g., under-resolved turbulent flows). Although there is much work to be done to develop the theory behind the stability, consistency, and accuracy of algorithms as we increase asynchrony, reduce communication, and decompose problems in search of more concurrency, we do not want expensive and time-consuming simulations to fail because the underlying spatial and temporal discretizations are unstable. Therefore, because the scale of many numerical models and physical problems targeted by computational science exacerbates these stability difficulties (and the exascale era is not far off), the use of fully controllable, non-linearly stable algorithms for complex simulations is necessary. While incremental improvements to existing and widely used algorithms will continue to improve overall capabilities, the development of new numerical techniques and their extension to complex multi-scale and multi-physics problems offers the possibility of radically advancing computational fluid dynamics and beyond. We are currently at a critical development stage with high-order methods, where we can mimic important continuous stability estimates in time and space by carefully constructing discrete operators. However, the advancement of numerical schemes and their properties goes hand-in-hand with analytical knowledge about the physical models. It is almost impossible to push the numerical algorithms further than our analytical knowledge allows. Thus, which properties are important?
Or, more fundamentally, how do we prove the convergence of the numerical solution to a physical solution for a given PDE model? We can only answer this question if researchers from several disciplines, including physics, mathematics, and computer science, collaborate.
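The idea of mimicking continuous stability estimates by carefully constructing discrete operators is exemplified by summation-by-parts (SBP) operators, which obey a discrete integration-by-parts rule. A minimal sketch of the classical second-order SBP first-derivative operator follows (illustrative, not tied to any particular solver discussed above):

```python
import numpy as np

def sbp_2nd_order(n, h):
    """Classical second-order SBP first-derivative operator D = P^{-1} Q
    on n nodes with spacing h. P is a diagonal quadrature (norm) matrix
    and Q + Q^T = diag(-1, 0, ..., 0, 1), so D mimics integration by
    parts discretely: u^T P (D v) + (D u)^T P v = u_n v_n - u_0 v_0."""
    P = h * np.eye(n)
    P[0, 0] = P[-1, -1] = 0.5 * h            # trapezoidal end weights
    Q = np.zeros((n, n))
    for i in range(n - 1):                   # skew-symmetric interior stencil
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                           # boundary closure
    Q[-1, -1] = 0.5
    return P, Q, np.linalg.solve(P, Q)
```

The discrete identity holds to machine precision for arbitrary grid functions, which is what allows energy-type stability estimates to transfer from the continuous problem to the discretization.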
In recent years, additive manufacturing has made it possible to build complex geometries with a small minimum length scale (lattice structures). This manufacturing revolution has directly impacted the field of topology optimization, enabling classical homogenization techniques to be revived. This is the case of the recent work (G. Allaire et al., Computers and Mathematics with Applications, 78(7):2197–2229, (2019)), where optimal lattice structures are obtained when minimizing compliance. In this work, we follow the techniques developed in (G. Allaire et al., Computers and Mathematics with Applications, 78(7):2197–2229, (2019)) but focus on controlling the stress norm of the structure, one of the key aspects in structural design. In this case, the main idea is to account in the homogenization theory for the stresses arising in the micro-structure. For that purpose, the macroscopic stresses must be corrected via the amplificator tensors. Thus, we extend the results obtained in (G. Allaire et al., Struct Multidisc Optim, 28(2-3):87–98 (2004)) (rank-q laminates) to the case of lattice structures. As in (Allaire et al. (2019)), the micro-structure is described by two geometrical parameters and the orientation of the cell. However, in this case, we propose a smooth geometry of the micro-structure to reduce stress concentration. The optimization process is divided into three steps. Step 1. For a wide range of parameters, we compute and save the homogenized material properties of the micro-structure (amplificators, homogenized constitutive tensor and volume fraction). We call this pre-computed data, once linearly interpolated, a computational Vademecum. Step 2. We take the stress norm and the volume as the cost function and constraint of the optimization problem, and we use a standard projected gradient method to update the micro-structural parameters.
For the orientation of the cell, we propose to align it with the principal stress direction, following the results obtained in (Allaire et al. (2019)) for the case of compliance. This choice stems from a careful analysis of the amplificator components. Step 3. Finally, we proceed with the de-homogenization process (Allaire et al. (2019)). Basically, it consists of describing the geometry via a level-set function which depends on the coordinates and the microscopic parameters: the former through the microscopic coordinate y = x/ε, the latter through the macroscopic coordinate x. Subtle geometrical techniques are used to avoid singularities in the orientation field. The work ends by showing several numerical results. First, the values of the amplificators for the proposed smooth microscopic geometry are compared with those for a non-smooth one. Finally, the optimization process for both cases and the optimal de-homogenized structures are shown (A. Ferrer et al., Phil Trans R Soc A (2021)).
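The projected-gradient update of Step 2 can be sketched generically; here the design variables stand in for the two microstructural parameters, with simple box bounds as the projection (the names, bounds and step size are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=200):
    """Projected gradient descent: take a gradient step on the cost
    (the stress norm in the setting above), then project back onto the
    admissible box for the design variables."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x
```

For example, minimizing (x - 2)^2 over [0, 1] drives the variable to the active bound x = 1.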
We will report on progress in our recently proposed geometrically exact continuum-kinematics-inspired peridynamics (CPD) formulation. The novel CPD respects well-established classical continuum kinematics, accounts for large deformations and is variationally consistent. We distinguish between one-, two- and three-neighbour interactions. One-neighbour interactions recover the original (bond-based) PD formalism. Two- and three-neighbour interactions are fundamentally different from state-based PD. We account for material frame indifference and provide a set of appropriate arguments for objective interaction potentials accordingly. We will present CPD in a manner that is immediately suitable for computational implementation. From a computational perspective, the proposed strategy is fully implicit, and the quadratic convergence associated with the Newton–Raphson scheme is observed. Finally, we demonstrate the capability of our proposed framework via a series of applications and numerical examples at large deformations.
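The quadratic convergence of the Newton-Raphson scheme noted above can be illustrated on a generic residual; this is a sketch of the iteration itself, not the CPD implementation:

```python
import numpy as np

def newton(residual, jacobian, x0, tol=1e-12, max_iter=20):
    """Newton-Raphson iteration, recording the residual norms so the
    quadratic convergence (errors roughly squaring at each step near
    the solution) can be observed."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    history = []
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        history.append(float(np.linalg.norm(r)))
        if history[-1] < tol:
            break
        x = x - np.linalg.solve(np.atleast_2d(jacobian(x)), r)
    return x, history
```

Solving x^2 - 2 = 0 from x0 = 1.5, the recorded residual norms shrink roughly quadratically until machine precision is reached.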
Wildfires are major hazardous events with devastating socio-economic and environmental impacts, including the loss of human lives and biodiversity, health deterioration, destruction of infrastructure and economic activity, soil degradation, and climate change. They are estimated to contribute 20% of global greenhouse gas emissions per year. By 2030, wildfires could destroy 55% of the Amazon rainforest in addition to the 20% that has already been destroyed, causing the Earth to largely lose its ability to absorb greenhouse gases. The development of more effective and safer means to fight wildfires is therefore one of the most pressing challenges of our time. Given the increasing severity and frequency of wildfires, the use of robots in place of humans is of special interest. Drones have been widely used for wildfire monitoring and detection, though scarcely for firefighting. Since only a few drones can be remotely controlled and coordinated to operate simultaneously, the achievable fire-suppression capabilities are limited. We therefore propose the use of swarms of autonomous collaborative firefighting drones: a safe, robust, resilient and scalable system able to cope with uncertainty, errors, perturbations, and the failure or loss of a few units. A key enabling technology is the development of self-organisation algorithms that allow the drones to communicate and allocate tasks amongst themselves, plan their trajectories, and self-coordinate their flights to achieve the swarm's objectives. Swarm Robotics studies how to design simple robots so that a desired collective behaviour emerges from the local interactions among a large number of them and with the environment. The design of the local interaction rules leading to the emergence of the desired system behaviour is not trivial. Since decisions are autonomous, the emergent behaviour is hard to predict.
Therefore, one of the main challenges is the design and formal verification of local interaction rules at the individual level so that the correct behaviour emerges at the system level with some degree of trustworthiness. At this stage, the self-organisation algorithm is a reactive rule-based approach based on the particle swarm algorithm, which in turn was inspired by social behaviour observed in nature. In the future, evolutionary robotics and reinforcement learning will be investigated to generate firefighting swarm behaviour automatically. Other enabling technologies developed in this project include wildfire propagation modelling, multi-agent collision-avoidance algorithms, and the design of more efficient and lighter fire suppressants.
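The particle swarm algorithm underlying the reactive rule-based approach can be sketched in its classical optimisation form; each agent blends inertia, attraction to its personal best, and attraction to the swarm's best. The parameters below are illustrative defaults, and the firefighting swarm uses these interaction rules for coordination rather than function minimisation:

```python
import numpy as np

def pso(f, lower, upper, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation of f over a box."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    x = rng.uniform(lower, upper, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(f(gbest))
```

On a simple 2D sphere function the swarm converges rapidly to the minimum at the origin, illustrating how a useful collective behaviour emerges from purely local update rules.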