
Forum III: Validating numerical vibroacoustic models - How much is enough?

Numerical models for the prediction of vibration and noise are ubiquitous in engineering design. The problem of calibrating or ‘updating’ such models on the basis of experimental observation has been studied for more than two decades, with considerable success in industrial-scale applications. Questions now arise as to how a test should best be designed to replicate in-service conditions sufficiently closely, how good an updated model is, and how its fitness for purpose can be assessed. This is a problem partly of engineering judgement and partly of formal uncertainty quantification techniques based upon probabilistic or non-probabilistic (interval or fuzzy) methods. This forum seeks to explore both the practical application of techniques used in modern industry and new mathematical approaches emerging from university and government laboratories at the forefront of the science of modelling, testing and validation.
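As a concrete illustration of the two families of methods mentioned above (this sketch and its numbers are not from the forum itself), consider propagating an uncertain stiffness k through the natural frequency f = sqrt(k/m)/(2π) of a single-degree-of-freedom oscillator. An interval (non-probabilistic) analysis assumes only bounds on k, while a probabilistic analysis assumes a distribution and samples it:

```python
import math
import random

# Illustrative sketch with assumed numbers: uncertainty in a spring
# stiffness k is propagated through the natural frequency of a
# single-degree-of-freedom spring-mass system.

def natural_frequency(k, m):
    """Natural frequency in Hz of a 1-DOF spring-mass system."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m = 2.0                      # mass in kg (assumed known)
k_lo, k_hi = 0.9e4, 1.1e4    # stiffness bounds in N/m (+/- 10%, assumed)

# Non-probabilistic (interval) approach: only bounds on k are claimed.
# f is monotonic in k, so the output interval follows from the endpoints.
f_interval = (natural_frequency(k_lo, m), natural_frequency(k_hi, m))

# Probabilistic approach: assume a distribution for k and sample it.
random.seed(0)
samples = [natural_frequency(random.uniform(k_lo, k_hi), m)
           for _ in range(10_000)]
f_mean = sum(samples) / len(samples)

print(f"interval bounds on f: [{f_interval[0]:.2f}, {f_interval[1]:.2f}] Hz")
print(f"Monte Carlo mean of f: {f_mean:.2f} Hz")
```

The interval result is deliberately weaker: it commits to no distribution inside the bounds, which is exactly the trade-off between the probabilistic and non-probabilistic methods the forum contrasts.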


Chair: Prof. John Mottershead,

 Alexander Elder Professor of Applied Mechanics, Department of Mechanical, Materials and Aerospace Engineering, School of Engineering, University of Liverpool, UK


John Mottershead has BSc and PhD degrees in Mechanical Engineering and was awarded the DEng degree by the University of Liverpool, where he is the Alexander Elder Professor of Applied Mechanics. His research interests include FE model updating, image processing of full-field vibration and strain data, active vibration control and servoaeroelasticity. He has published several hundred papers in international journals and conference proceedings, and his industrial collaborators include BMW, Fiat, Ford and Peugeot-Citroën from the motor industry, and Leonardo (Helicopter Division), Airbus UK and Rolls-Royce from the aerospace industry. He is the Editor-in-Chief of Mechanical Systems and Signal Processing and was Director of the Liverpool Institute for Risk and Uncertainty until November 2017.


Confirmed keynote speakers:

Dr. François M. Hemez,

Los Alamos National Laboratory, New Mexico, USA


François Hemez has been a member of the technical staff at Los Alamos National Laboratory (LANL) since 1997 and adjunct professor at the University of California San Diego (UCSD) since 2011. He earned a Ph.D. in aerospace engineering from the University of Colorado in 1993 and graduated from École Centrale Paris, France, in 1989. At Los Alamos, François spent seven years in the Weapons Engineering Division, including one year as team leader for engineering validation methods. In 2005 he joined LANL’s “X” Division for nuclear weapon design and analysis, where he managed several projects of the Advanced Scientific Computing program. He currently leads a team that contributes to the quantification of simulation uncertainty for the annual assessment and certification of the U.S. nuclear deterrent, and he also supports non-proliferation programs of the U.S. Department of Energy and Intelligence Community. At UCSD, François teaches graduate-level courses on Verification and Validation (V&V), uncertainty quantification, and decision-making. He has been a member of the MathWorks (MATLAB® software) Advisory Board since 2014. He has authored 424 technical publications and reports, including 47 peer-reviewed book chapters and manuscripts; given 162 invited lectures, including 9 international keynotes; and taught V&V to more than 650 students and practicing engineers around the world.


Mr. Joan Sapena,  

Acoustics Core Competence Leader, Alstom Transport, France


Mr. Joan Sapena graduated from La Salle Engineering School in 1995 in Technical Engineering Telecommunications (Sound & Image) and joined ALSTOM Transport in 2000. Previously he had worked for five years on NVH testing in the Noise, Vibration and Harshness (NVH) department of an automotive engineering company. He has led site activities in acoustics and was promoted to Senior Expert in the field of Acoustics in 2004. Since 2008 he has been leading the Acoustics Core Competence Network in ALSTOM, with responsibility for defining R&D methods and processes related to the acoustic performance of ALSTOM railway products. He has participated in the elaboration of European programmes such as ACOUTRAIN and FINE1. He has been the French representative on the CEN/TC256/WG3 Acoustics standardisation group since 2008 and is a member of the railway industry UNIFE Noise Group. His main expertise is in rolling stock acoustics across all product types and in the development of transfer path methods for interior noise.

Title: Railway Rolling Stock: Validation is not a must, it is a support to reduce risks

Abstract: Drawing on practical examples from industrial experience, Mr Sapena’s presentation will cover the following points, with the aim of generating debate and developing an understanding of the complexity of industrial processes, the role of correlation and validation, and the point at which validation should stop.

  • Introduction to the simulations performed in the rolling stock railway industry (interior noise, including airborne (AB) and structure-borne (SB) paths; exterior noise).
  • Current status of validation of the different methods at component level and at train level.
  • Models simplify the problem… reality is much more complicated.
  • Non-validated models sometimes give good results… luck, or is simple good enough?
  • Why demand 100% validated models if we are not able to model and understand everything?
  • Design details arising during the industrial process are not present in the models…
  • Input uncertainties can be far more significant than the model uncertainty itself.
  • In virtual certification, the global uncertainty of the simulation will matter more than the validation of the model.
  • Validating a model certainly reduces the risk in design decisions, but budgets and “just enough” are priorities in design today.
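The point that input uncertainty can dominate model uncertainty is easy to make quantitative. The sketch below uses entirely hypothetical numbers (the sensitivity and standard deviations are assumptions, not from the presentation): a sound-pressure-level prediction with an uncertain input and a residual model-form error, where the input term contributes most of the output variance:

```python
import random
import statistics

# Hypothetical numbers, for illustration only: a sound-pressure-level
# (SPL) prediction with two independent uncertainty sources.
# SPL = baseline + sensitivity * input_deviation + model_form_error
random.seed(1)

SENSITIVITY = 2.0    # dB change per unit of input deviation (assumed)
SIGMA_INPUT = 1.5    # std. dev. of the uncertain input, e.g. source level
SIGMA_MODEL = 0.5    # std. dev. of the model-form error of an updated model

def predicted_spl(baseline=70.0):
    input_term = SENSITIVITY * random.gauss(0.0, SIGMA_INPUT)
    model_term = random.gauss(0.0, SIGMA_MODEL)
    return baseline + input_term + model_term

samples = [predicted_spl() for _ in range(50_000)]
total_var = statistics.pvariance(samples)

# Analytical variance contributions of independent Gaussian sources:
var_input = (SENSITIVITY * SIGMA_INPUT) ** 2   # 9.0  dB^2
var_model = SIGMA_MODEL ** 2                   # 0.25 dB^2

print(f"input-driven variance:  {var_input:.2f} dB^2")
print(f"model-form variance:    {var_model:.2f} dB^2")
print(f"sampled total variance: {total_var:.2f} dB^2")
```

With these assumed numbers the input uncertainty accounts for over 97% of the prediction variance, so further validation of the model would barely tighten the prediction — which is precisely the kind of trade-off the bullet raises.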


Prof. Sankaran Mahadevan

School of Engineering, Vanderbilt University, Nashville, Tennessee, USA.


Professor Sankaran Mahadevan has thirty years of research and teaching experience in mechanical systems reliability analysis, uncertainty quantification, model verification and validation, structural health monitoring, and design optimization. His research has been extensively funded by NSF, NASA, FAA, DOE, DOD, DOT, NIST, General Motors, Chrysler, Union Pacific, the American Railroad Association, and the Sandia, Idaho, Los Alamos and Oak Ridge National Laboratories. His research contributions are documented in more than 600 publications, including two textbooks on reliability methods and 270 journal articles. He has directed 40 Ph.D. dissertations and 24 M.S. theses, and has taught many industry short courses on reliability and uncertainty analysis methods. He is currently serving as Managing Editor for the ASCE-ASME Journal of Risk and Uncertainty (Part B: Mechanical Engineering), and as Associate Editor for three other journals. His awards include the NASA Next Generation Design Tools award, the SAE Distinguished Probabilistic Methods Educator Award, and best paper awards in the MORS Journal and the SDM and IMAC conferences. Professor Mahadevan obtained his B.S. from the Indian Institute of Technology, Kanpur, his M.S. from Rensselaer Polytechnic Institute, Troy, NY, and his Ph.D. from the Georgia Institute of Technology, Atlanta, GA.

Title: Multi-level and Multi-fidelity Approaches to Probabilistic Validation and Prediction

Abstract: The presentation will discuss a Bayesian probabilistic framework for model validation and uncertainty quantification across multiple levels of the testing hierarchy, in order to quantify the uncertainty in system-level prediction. Probabilistic metrics for model validation are introduced first, and the suitability of different metrics to different validation scenarios is discussed. Then the integration of results from different uncertainty quantification activities is discussed. The information available (models, test data, and expert opinion) is often heterogeneous and at different levels of fidelity. A Bayesian network approach is used for multi-fidelity information fusion at multiple levels of the system hierarchy. The results of calibration, verification, and validation with respect to each individual model are aggregated, and propagated through the Bayesian network in order to quantify the overall prediction uncertainty. The relevance of test data to the prediction quantity of interest is also included in the framework. Based on the uncertainty aggregation, an inverse problem formulation and solution for test selection is developed, in order to directly answer the question “How much validation is enough?”. Example problems from structural dynamics and acoustics are used to illustrate the proposed framework.
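The Bayesian calibration step at the heart of such a framework can be sketched in a few lines. The model, data, and numbers below are hypothetical and chosen purely for illustration: an unknown parameter theta is observed through noisy tests, a uniform prior is discretised on a grid, and the posterior quantifies both the calibrated value and its remaining uncertainty:

```python
import math
import random

# Minimal illustrative sketch of Bayesian model calibration
# (hypothetical model and data, not the speaker's framework).
# Observation model: y = theta + Gaussian noise.
random.seed(2)

TRUE_THETA = 100.0   # "unknown" parameter used to synthesise test data
NOISE_STD = 2.0      # measurement noise standard deviation (assumed known)
data = [TRUE_THETA + random.gauss(0.0, NOISE_STD) for _ in range(5)]

# Discretised prior: uniform over a plausible range of theta.
grid = [90.0 + 0.1 * i for i in range(201)]   # 90.0 .. 110.0

def likelihood(theta):
    """Gaussian likelihood of the observed data given theta."""
    log_like = sum(-0.5 * ((y - theta) / NOISE_STD) ** 2 for y in data)
    return math.exp(log_like)

weights = [likelihood(t) for t in grid]
total = sum(weights)
posterior = [w / total for w in weights]

# Posterior mean and spread: the calibrated parameter and its uncertainty.
theta_mean = sum(t * p for t, p in zip(grid, posterior))
theta_var = sum((t - theta_mean) ** 2 * p for t, p in zip(grid, posterior))

print(f"posterior mean of theta: {theta_mean:.2f}")
print(f"posterior std. dev.:     {math.sqrt(theta_var):.2f}")
```

Each additional test shrinks the posterior standard deviation (roughly as 1/sqrt(n) here), which is what makes “how much validation is enough” answerable as an inverse problem: one can ask how many tests, and of which kind, are needed to bring the prediction uncertainty below a target.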
