Plenary Talks

Monday, September 23, 2019, 9:00-10:40
Sokolniki Hall

Conference Opening [PDF]
Vladimir Voevodin, Lomonosov Moscow State University, Russia

Valentin Vasilyevich Voevodin: Creator of the Mathematical Theory of Parallel Computing
E.E. Tyrtyshnikov, Lomonosov Moscow State University, RAS, Russia

An Overview of High Performance Computing and Future Requirements [PDF]
Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, and University of Manchester, USA

In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

HLRS - A National Supercomputing Center for Research and Industry [PDF]
Michael Resch, University of Stuttgart, Germany

Established in 1996, HLRS was the first German national supercomputing center. Over more than 20 years, its focus and activities have been shaped by a changing landscape of research and industry. In this talk we will present a number of questions relevant for any HPC center. What are the driving technological forces for HPC? What kinds of research questions arise in HPC? How does industry change the operation of HLRS? What are the upcoming new challenges, such as machine learning and artificial intelligence? What impact will they have on HPC?


Monday, September 23, 2019, 11:10-13:00
Sokolniki Hall

A Virtual Materials Lab, Material by Design
Reza Sadeghi, BIOVIA, Dassault Systemes, USA

Researchers around the globe are working on a broad range of new materials to meet the needs of specific functions. From thermally functional, highly energy-efficient materials to bio-inspired ones, first-principles-based models are used to gain a deeper understanding of these materials.

In order to continue to build more powerful microprocessors, quantum materials are being considered to continue where CMOS may stop. Scalable Energy-efficient Magnetoelectric Spin-Orbit logic, or MESO, has shown potential, requiring only 30% of the voltage that CMOS chips demand today and roughly 20 times less energy in the sleep state. Miniaturization remains a necessity, and it is getting increasingly difficult to pack more transistors onto chips and boost their computing power unless new materials are made available to this industry.

Using ARM-Architecture CPUs to Solve Resource-Intensive Problems on Supercomputers
A.V. Murashov, T-Platforms

The OpenPOWER Consortium Opens New Doors [PDF]
K. Mozgovoy, A. Solukovtsev, IBM

The talk covers the next step forward for the OpenPOWER consortium, which became even more open, accessible, and comprehensible in 2019: what this step gives the technology world and what prospects it opens up.

Applied Artificial Intelligence with NVIDIA [PDF]
A.R. Dzhoraev, NVIDIA

Lenovo HPC [PDF]
A.V. Sysoev, Lenovo

The world's #1 supercomputer vendor: LeSI hardware-software solutions for HPC and AI workloads, DSS-G software-defined storage systems with parallel I/O, and LiCO, a unified portal for managing compute clusters.

Adaptable accelerators for HPC, Storage and Networking [PDF]
Jens Stapelfeldt, Xilinx

Advances in artificial intelligence, increasingly complex workloads, and an explosion of unstructured data are forcing rapid evolution of the data center. The Xilinx platform is powering this revolution through adaptable acceleration of compute, storage, and networking.

Computing, Big Data Processing, and Machine Learning as Parts of a Unified HPC Paradigm
N. Mester, Intel


Tuesday, September 24, 2019, 9:00-10:40
Sokolniki Hall

High performance computing and machine learning in hydrocarbon exploration and recovery problems
Sergey Tikhotskiy, RAS, Russia

Geophysics has traditionally been one of the main consumers of computing resources throughout the world. In recent decades, the tasks of processing geophysical information have been supplemented by problems of comprehensive field modeling: the processes of hydrocarbon generation and migration, multiphase flow, changes in the stress-strain state, and oil recovery intensification. These problems are of particular relevance because of the need to recover unconventional hydrocarbon reserves. None of these tasks can be solved without high-performance computing on supercomputers and modern data analysis methods.

In hydrocarbon exploration, it is necessary to develop high-performance algorithms and software to refine the structure and evaluate the properties of reservoir rocks. In particular, this includes methods for processing modern wide-azimuth and multicomponent seismic data, including migration and seismic inversion for anisotropic media, as well as full-waveform inversion. These are based on repeated calculation of the seismic wavefield in three-dimensional inhomogeneous anisotropic media, as well as nonlinear multiparameter optimization. To carry out such calculations in a typical field model, for just one position of the seismic source, it is necessary to have 3 petabytes of RAM and perform 10^21 floating-point operations (i.e., 1000 exaflops). It is also valuable to be able to estimate the effective physical properties of micro-inhomogeneous porous-fractured media at different scales, which is necessary for a correct estimation of transport properties from logs and field geophysical data. Correct assessment of fluid saturation also requires the development of methods for modeling and inversion of the electromagnetic field for such media. The ultimate goal is to develop algorithms for the combined inversion of various physical fields with detail adequate to the needs of the exploration industry. Because the evaluation of reservoir and transport properties and other parameters of the geological environment from geophysical survey data is a nonlinear and ill-posed inverse problem, methods of big data analysis and machine learning are effective for its solution.
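
To put the quoted figure in perspective, a hedged back-of-the-envelope estimate (the sustained rate of 10 Pflop/s is our assumption for illustration, not a number from the abstract):

    \[ t = \frac{10^{21}\ \text{FLOP}}{10^{16}\ \text{FLOP/s}} = 10^{5}\ \text{s} \approx 28\ \text{hours} \]

per source position; a production survey involves many thousands of source positions, and inversion wraps all of this in an outer optimization loop.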

An important task in recovery-process modeling is the transition from the traditional approach, in which hydrodynamic and geomechanical modeling of reservoirs are performed independently, to their coupled modeling, since a change in the stress-strain state leads to a change in the transport properties of rocks. This problem leads to complex numerical schemes that require special grids and can be unstable. Overcoming these difficulties and creating practically applicable software products is likewise possible only on the basis of high-performance computing.
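
As a minimal sketch of what such coupling involves, the toy fixed-point loop below alternates a geomechanics update (effective stress from pore pressure) with a flow update (Darcy drawdown through a stress-dependent permeability). The scalar model and all constants are our illustrative assumptions, not material from the talk:

    #include <stdio.h>
    #include <math.h>

    /* Toy sequentially coupled flow-geomechanics iteration.
       All parameter values are illustrative assumptions. */
    int main(void) {
        const double sigma_total = 60.0e6;  /* total overburden stress, Pa */
        const double biot        = 0.8;     /* Biot coefficient */
        const double k0          = 1.0e-13; /* permeability at zero effective stress, m^2 */
        const double c           = 2.0e-8;  /* stress sensitivity of permeability, 1/Pa */
        const double p_res       = 40.0e6;  /* far-field reservoir pressure, Pa */
        const double F           = 0.5e-6;  /* lumped q*mu*L/A drawdown factor, Pa*m^2 */

        double p = p_res;  /* initial guess for bottom-hole pressure */
        for (int it = 0; it < 100; it++) {
            double sigma_eff = sigma_total - biot * p;        /* geomechanics step */
            double k = k0 * exp(-c * sigma_eff);              /* stress-dependent permeability */
            double p_new = p_res - F / k;                     /* flow step: Darcy drawdown */
            if (fabs(p_new - p) < 1.0) { p = p_new; break; }  /* converged to 1 Pa */
            p = p_new;
        }
        printf("coupled bottom-hole pressure: %.2f MPa\n", p / 1.0e6);
        return 0;
    }

In a real simulator each "step" is itself a large PDE solve on its own grid, and it is exactly this outer iteration, repeated at every time step over millions of cells, that drives the demand for supercomputing.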

A special class of problems that requires high-performance computing is the design and optimization of enhanced oil recovery methods: hydraulic fracturing, steam injection, in-situ combustion, etc.

The main goal of the simulation is a digital model of the field, including information on the current state of the reservoir, its stress-strain state, and fluid flow. Such a model should be updated and refined in real time, taking into account newly acquired exploration, well, and monitoring data. Real-time analysis of the simulation results using artificial intelligence methods should lead to optimization of the recovery scheme, drilling parameters, and the use of enhanced oil recovery methods. This functionality leads to the concept of a "smart field". It is advisable to develop a standard supercomputer platform (including hardware and software) that could then be scaled and deployed at any hydrocarbon field as a standard means of operating the described digital model.

Quantum information technologies: current status and prospects of their applications [PDF]
Vladimir Gerdt, JINR, Russia

The talk is based on materials of the workshop «Quantum Computing for High Energy Physics» (CERN, November 5-6, 2018) and other open sources, and contains a brief state-of-the-art review of quantum computing and quantum information. Among the especially promising applications of quantum computing, we place emphasis on quadratic unconstrained binary optimization (QUBO) problems (quadratic programming), which form the basis of the rapidly developing area of quantum machine learning, promising for the analysis of big data.
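
For reference, a QUBO instance asks for a binary vector minimizing a quadratic form (a standard textbook definition, not material specific to the talk):

    \[ \min_{x \in \{0,1\}^n} E(x) = x^{\top} Q x = \sum_i Q_{ii} x_i + \sum_{i<j} (Q_{ij} + Q_{ji}) x_i x_j \]

The diagonal of Q carries the linear terms, since x_i^2 = x_i for binary variables. A standard example: max-cut on a graph G = (V, E) becomes a QUBO by setting Q_ii = -deg(i) and Q_ij = Q_ji = 1 for every edge {i, j}, so that E(x) equals minus the cut value and minimizing E maximizes the cut.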

R&D of a Quantum-Annealing Assisted Next-Generation HPC Infrastructure for computer science, data science and their fusion applications development [PDF]
Hiroaki Kobayashi, Tohoku University, Japan

As the silicon technology driven by Moore's law is facing its physical limits, we are now moving to the post-Moore era in the design of high-performance computing architectures. Quantum annealing is one of the emerging information processing technologies of the post-Moore era and is expected to work well for combinatorial optimization problems. In my talk, I will present our ongoing project, entitled Quantum-Annealing Assisted Next-Generation HPC Infrastructure, which aims to integrate quantum-annealing information processing into a conventional HPC system as an accelerator for combinatorial optimization problems. I will also discuss the design of several applications that integrate computational science and data science approaches.
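
To make the accelerator idea concrete, here is a minimal classical stand-in: simulated annealing on the tiny max-cut QUBO from the previous abstract (a 4-cycle graph). In the infrastructure the talk describes, it is this inner combinatorial search that would be offloaded to the quantum annealer; the instance, cooling schedule, and parameters below are our illustrative assumptions:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N 4  /* number of binary variables */

    /* QUBO energy E(x) = x^T Q x for a binary vector x. */
    static double energy(const double Q[N][N], const int x[N]) {
        double e = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                e += Q[i][j] * x[i] * x[j];
        return e;
    }

    int main(void) {
        /* Max-cut of a 4-cycle as a QUBO: Q_ii = -deg(i), Q_ij = 1 per edge. */
        double Q[N][N] = {
            {-2,  1,  0,  1},
            { 1, -2,  1,  0},
            { 0,  1, -2,  1},
            { 1,  0,  1, -2},
        };
        int x[N] = {0, 0, 0, 0};
        double e = energy(Q, x);
        srand(42);
        for (double T = 2.0; T > 0.01; T *= 0.99) {   /* geometric cooling */
            int i = rand() % N;
            x[i] ^= 1;                                /* propose one bit flip */
            double e_new = energy(Q, x);
            if (e_new <= e || exp((e - e_new) / T) > (double)rand() / RAND_MAX)
                e = e_new;                            /* accept the move */
            else
                x[i] ^= 1;                            /* reject: undo the flip */
        }
        printf("x = (%d,%d,%d,%d), E = %.0f\n", x[0], x[1], x[2], x[3], e);
        return 0;
    }

On this instance the optimum is an alternating assignment such as x = (1,0,1,0) (or its complement), with E = -4, i.e., all four edges cut.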

Experience of Introducing Computer and Supercomputer Modeling into the Design Practice of Rocket and Space Hardware
E.V. Khakhulina, RSC Energia


Tuesday, September 24, 2019, 11:10-13:00
Sokolniki Hall

Mathematical Modeling of a High-Power Plasma Space Thruster: Foreign Experience and a Proposal for Organizing Supercomputer Computations in Russia [PDF]
A.V. Murashov, T-Platforms; V.I. Chitaykin, Dukhov VNIIA

An Overview of Fugaku Supercomputer [PDF]
Yutaka Ishikawa, RIKEN, Japan

Fugaku (the new name for the Post-K), Japan's flagship supercomputer, is being developed by RIKEN and Fujitsu. It will be the first supercomputer with the Armv8-A architecture with SVE (Scalable Vector Extension). It will consist of more than 150k nodes connected by the TofuD network, an enhanced version of the Tofu interconnect used in the K computer. General operation will start in 2021. In this presentation, an overview of the Fugaku hardware and its software stack will be presented.
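
As a hedged illustration of what the SVE programming model means for software, the sketch below uses the Arm C Language Extensions to write a vector-length-agnostic daxpy kernel: the same binary adapts to whatever SVE vector width (128 to 2048 bits) the hardware provides. The example is ours, not from the talk:

    #include <arm_sve.h>
    #include <stddef.h>

    /* Vector-length-agnostic daxpy: y[i] += a * x[i].
       svcntd() yields the number of doubles per SVE vector at run time,
       and the svwhilelt predicate masks off the loop tail. */
    void daxpy_sve(size_t n, double a, const double *x, double *y) {
        svfloat64_t va = svdup_f64(a);
        for (size_t i = 0; i < n; i += svcntd()) {
            svbool_t pg = svwhilelt_b64(i, n);
            svfloat64_t vx = svld1_f64(pg, &x[i]);
            svfloat64_t vy = svld1_f64(pg, &y[i]);
            vy = svmla_f64_m(pg, vy, vx, va);  /* vy += vx * a on active lanes */
            svst1_f64(pg, &y[i], vy);
        }
    }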

The Right Network for Modern HPC and Deep Learning Systems [PDF]
B.M. Neiman, Mellanox

The Latest News and Achievements of Dell Technologies in Building HPC Systems in Russia and Worldwide
D. Tropin, Dell Technologies

AMD EPYC: Core Power in the Service of High-Performance Computing
P. Stanavov, AMD

Experience of Applying RSC Solutions to High-Energy Physics Problems
A.A. Moskovsky, RSC Group

Announcements of session talks by NEC, «Системы инженерного анализа», and DDN