The Data Science Research Unit of the Centre of Excellence for Data Science and Advanced Cooperative Systems, within the project "DATACROSS – Advanced Methods and Technologies in Data Science and Cooperative Systems", is organizing a research seminar lecture
HPX: a C++ Standard Library for Concurrency and Parallelism
to be given by Dr. Xinzhe Wu of Forschungszentrum Jülich, Germany. The lecture will take place on Wednesday, 5 February 2020, at 2:00 PM at the Ruđer Bošković Institute, in the lecture hall of Wing I (Ivan Supek wing).
More about the speaker and the lecture can be found in the extended announcement.
Abstract: HPX is a C++ Standard Library for Concurrency and Parallelism, which implements all of the corresponding facilities as defined by the C++ standard. HPX exposes a uniform, standards-oriented API that eases the programming of parallel and distributed applications. It enables programmers to write fully asynchronous code using hundreds of millions of threads, and it provides unified syntax and semantics for local and remote operations. HPX makes concurrency manageable through dataflow and future-based synchronization. In this presentation, I will give a brief introduction to HPX, covering the programming model, the features for both node-level and distributed-level parallel programming, and some examples that illustrate these features.
Biography: Xinzhe Wu received a B.S. degree in mathematics and applied mathematics and an M.S. degree in control engineering from Beihang University in China, and a Ph.D. degree in computer science from Maison de la Simulation & Université de Lille in France in 2019. He joined the Jülich Supercomputing Centre in Germany as a postdoctoral researcher in June 2019. Currently, he is involved in the PRACE-6IP WP8 project on portable linear algebra and the DAAD-HoMSa project on the optimization of materials science algorithms on hybrid systems. His research interests are in numerical linear algebra, including iterative methods for linear systems and eigenvalue problems, polynomial preconditioners, parallel matrix operation optimization, and high-performance computing, especially task-based parallel programming.