Portable and Efficient Julia Code for Heterogeneous Hardware Systems
Description
As we enter the exascale era of supercomputing, heterogeneous hardware architectures have become the norm for new systems. At the same time, vendors provide competing APIs for their accelerators, making it harder to write and maintain portable code. The Julia programming language tackles this challenge by providing useful and usable abstractions that allow a near-seamless transition between different systems with minimal or no code changes. In this talk, we will present the AMDGPU.jl package and discuss how it makes the power of AMD GPUs as easily accessible within Julia as other accelerators. We will compare it to its sister package CUDA.jl, and show how differences in design lead to uniquely useful features for each platform while retaining compatible APIs. We will also look at how AMDGPU.jl integrates with the rest of Julia's GPU ecosystem, including KernelAbstractions.jl, ImplicitGlobalGrid.jl, and ParallelStencil.jl. Finally, we will talk about where we see AMDGPU.jl's development heading, and what is in store for users over the next few years.
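To illustrate the kind of portability the abstract refers to, here is a minimal sketch of a backend-agnostic kernel written with KernelAbstractions.jl (v0.9-style API assumed). The same kernel runs unchanged on a CPU or, by swapping the backend object for `AMDGPU.ROCBackend()` or `CUDA.CUDABackend()`, on an AMD or NVIDIA GPU; the backend choice here is illustrative, not taken from the talk itself.

```julia
using KernelAbstractions

# Element-wise vector addition; @index(Global) gives this
# work-item's global linear index into the arrays.
@kernel function vadd!(c, @Const(a), @Const(b))
    i = @index(Global)
    @inbounds c[i] = a[i] + b[i]
end

backend = CPU()  # portable: replace with AMDGPU.ROCBackend() on AMD hardware
a = rand(Float32, 1024)
b = rand(Float32, 1024)
c = similar(a)

# Instantiate the kernel for the chosen backend and launch it
# over the full index range, then wait for completion.
vadd!(backend)(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
```

On a GPU backend the arrays would be device arrays (e.g. `ROCArray`), but the kernel body itself stays identical, which is the portability story the talk describes.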
Time
Wednesday, June 28, 11:00 - 11:30 CEST
Computer Science, Machine Learning, and Applied Mathematics