Held in Conjunction With 33rd IEEE International Parallel & Distributed Processing Symposium
May 20-24, 2019
Rio de Janeiro, Brazil
The 24th HIPS workshop, to be held as a full-day meeting on May 20th at the IEEE IPDPS 2019 conference in Rio de Janeiro, Brazil, focuses on high-level programming of multiprocessors, compute clusters, and massively parallel machines. Like previous workshops in the series, which was established in 1996, this event serves as a forum for research in the areas of parallel applications, language design, compilers, runtime systems, and programming tools. It provides a timely and lightweight venue for scientists and engineers to present the latest ideas and findings in these rapidly changing fields. In our call for papers, we especially encouraged innovative approaches in the areas of emerging programming models for large-scale parallel systems and many-core architectures.
Topics of interest to the HIPS workshop include but are not limited to:
Time | Event |
---|---|
8:50 -- 9:00am | Opening Remarks |
Keynote |
9:00 -- 10:00am | Target-independent Runtime System for Heterogeneous Accelerators |
Dr. Jaejin Lee, IEEE Fellow (Seoul National University)
Jaejin Lee is the director of the Center for Manycore Programming and a professor in the Department of Computer Science and Engineering at Seoul National University (SNU), Korea. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 1999, an M.S. in Computer Science from Stanford University in 1995, and a B.S. in Physics from SNU in 1991. After obtaining his Ph.D., he spent half a year at UIUC as a visiting lecturer and postdoctoral research associate, and was an assistant professor in the Department of Computer Science and Engineering at Michigan State University from January 2000 to August 2002 before joining SNU. His research group at SNU developed the SnuCL OpenCL framework for heterogeneous clusters, which aims at ease of programming and high performance. He has served on numerous program committees of premier international conferences in the compiler and architecture areas. He is an IEEE Fellow and a member of the ACM. His current research interests include programming systems for heterogeneous machines and the parallelization and optimization of deep learning frameworks.
Abstract: Heterogeneous computing is widening its user base because of its high performance and high power efficiency. In particular, heterogeneous computing with GPUs has become the de facto standard for running deep learning applications today. There are two widely used heterogeneous programming models: CUDA and OpenCL. OpenCL provides a common abstraction layer across different accelerator architectures, such as multicore CPUs, GPUs, and FPGAs. While the use of CUDA is restricted to NVIDIA GPUs, it is the most popular programming model for implementing DNN frameworks because of the cuDNN library. Although OpenCL inherited many features from CUDA, and the two have almost the same platform model, they are not compatible with each other.
In addition, both are restricted to a heterogeneous system running a single operating system instance. To target a heterogeneous cluster running multiple operating system instances, programmers must combine an OpenCL or CUDA framework with a communication library such as MPI. In this talk, we introduce a target-independent runtime system for both CUDA and OpenCL, which is ongoing work at Seoul National University. With this runtime system, programs written in either CUDA or OpenCL can run on a system with different types of accelerators, such as AMD GPUs, NVIDIA GPUs, multicore CPUs, Intel FPGAs, and Xilinx FPGAs. Moreover, programs written solely in either OpenCL or CUDA can also run on a cluster equipped with such heterogeneous accelerators without any communication library. Since CUDA and OpenCL each have a wide user base and a large code base, our runtime system helps extend the code base of each programming model and unifies the efforts to develop applications in heterogeneous computing. |
10:00 -- 10:30am | IPDPS Coffee Break |
Session 1: Full Papers |
10:30 -- 11:00am | Toward an Analytical Performance Model to Select between GPU and CPU Execution. Artem Chikin, Jose Nelson Amaral, Karim Ali and Ettore Tiotto |
11:00 -- 11:30am | Software-defined Events through PAPI. Anthony Danalis, Heike Jagode, Thomas Herault, Piotr Luszczek and Jack Dongarra |
11:30 -- 12:00pm | A Container-Based Framework to Facilitate Reproducibility in Employing Stochastic Process Algebra for Modeling Parallel Computing Systems. William Sanders, Srishti Srivastava and Ioana Banicescu |
12:00 -- 1:20pm | Lunch |
Session 2: Short Papers |
1:20 -- 1:40pm | Opera: Data Access Pattern Similarity Analysis To Optimize OpenMP Task Affinity. Jie Ren, Chunhua Liao and Dong Li |
1:40 -- 2:00pm | OpenMP to FPGA Offloading Prototype using OpenCL SDK. Marius Knaust, Florian Mayer and Thomas Steinke |
Session 3: Invited Talks |
2:00 -- 2:30pm | Invited Talk 1: Jee Choi (University of Oregon), Optimizing Tensor Decomposition on HPC Systems - Challenges and Approaches |
2:30 -- 3:00pm | Invited Talk 2: Pedro Fonseca (Purdue University), Towards correct concurrent systems |
3:00 -- 3:30pm | IPDPS Coffee Break |
3:30 -- 4:00pm | Invited Talk 3: Torsten Hoefler (ETH Zürich), Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis |
Submission deadline: Feb 18, 2019 (extended; originally Jan 22, 2019, then Jan 29, 2019)
Notification of acceptance: Mar 1, 2019
Camera-ready papers due: Mar 8, 2019
Please submit papers through the EasyChair conference system.
The HIPS paper style is identical to the IPDPS paper style. Submitted manuscripts may not exceed 10 pages (full papers) or 4 pages (short papers) in single-spaced, double-column IEEE conference style, using a 10-point font on 8.5x11-inch pages, including figures, tables, and references.
Conference | Date | Location |
---|---|---|
23rd HIPS 2018 | May 21, 2018 | Vancouver, British Columbia, Canada |
22nd HIPS 2017 | May 29, 2017 | Orlando, FL, USA |
21st HIPS 2016 | May 23, 2016 | Chicago, IL, USA |
20th HIPS 2015 | May 25, 2015 | Hyderabad, India |
19th HIPS 2014 | May 19, 2014 | Phoenix, AZ, USA |
18th HIPS 2013 | May 20, 2013 | Boston, MA, USA |
17th HIPS 2012 | May 21, 2012 | Shanghai, China |
16th HIPS 2011 | May 20, 2011 | Anchorage, Alaska, USA |
15th HIPS 2010 | April 19-23, 2010 | Atlanta, GA, USA |
14th HIPS 2009 | May 25, 2009 | Rome, Italy |
13th HIPS 2008 | April 14, 2008 | Miami, FL, USA |
12th HIPS 2007 | March 26, 2007 | Long Beach, California, USA |
11th HIPS 2006 | April 25, 2006 | Rhodes Island, Greece |
10th HIPS 2005 | April 4, 2005 | Denver, Colorado, USA |
9th HIPS 2004 | April 26, 2004 | Santa Fe, New Mexico, USA |
8th HIPS 2003 | April 22, 2003 | Nice, France |
7th HIPS 2002 | April 15, 2002 | Fort Lauderdale, FL, USA |
6th HIPS 2001 | April 23, 2001 | San Francisco, CA, USA |
5th HIPS 2000 | May 1, 2000 | Cancun, Mexico |
4th HIPS 1999 | April 12, 1999 | San Juan, Puerto Rico, USA |
3rd HIPS 1998 | March 30, 1998 | Orlando, FL, USA |
2nd HIPS 1997 | April 1, 1997 | Geneva, Switzerland |
1st HIPS 1996 | April 16, 1996 | Honolulu, HI, USA |