Posters


HPDC'16 will offer conference attendees the opportunity to participate in a poster session. The accepted posters fall into two categories: accepted posters, which were selected through the regular reviewing process and are included in the HPDC 2016 proceedings, and open call posters, which were selected from submissions received in response to the open Call for Posters but are not included in the proceedings.

Accepted Posters (based on full paper submissions)
A1. Feng Liang and Francis C.M. Lau. BAShuffler: Maximizing Network Bandwidth Utilization in the Shuffle of YARN
A2. Seokyong Hong, Sangkeun Lee, Seung-hwan Lim, Sreenivas R. Sukumar, and Ranga R. Vatsavai. Comprehensive Evaluation of Graph Pattern Matching in Graph Analysis Systems
A3. YongLi Cheng, Fang Wang, Hong Jiang, Yu Hua, Dan Feng, and XiuNeng Wang. DD-Graph: A Highly Cost-Effective Distributed Disk-based Graph-Processing Framework
A4. Stephan Schlagkamp, Rafael Ferreira da Silva, William Allcock, Ewa Deelman, and Uwe Schwiegelshohn. Consecutive Job Submission Behavior at Mira Supercomputer
A5. Zachary Benavides, Rajiv Gupta, and Xiangyu Zhang. Parallel Execution Profiles
A6. Kyle Hale, Conor Hetland, and Peter Dinda. Automatic Hybridization of Runtime Systems
A7. Abhishek Kulkarni, Luke Dalessandro, Ezra Kissel, Andrew Lumsdaine, and Martin Swany. Network-Managed Virtual Global Address Space for Message-driven Runtimes
A8. Dmitriy Morozov and Zarija Lukic. Master of Puppets: Cooperative Multitasking for In Situ Processing

Open Call Posters (based on the open Call for Posters)
O1. Shinichiro Takizawa, Motohiko Matsuda, and Naoya Maruyama. A Locality-aware Task Scheduling of Message Passing and MapReduce Hybrid Models
O2. Yusuke Tanimura. Towards Efficient Data Staging for Multi-Tenant Big Data Analytics
O3. Saman Biookaghazadeh, Shujia Zhou, and Ming Zhao. Kaleido: Enabling Scientific Data Storage and Processing on Big-Data Systems
O4. Wubin Li. Towards the Design and Implementation of Benchmarking for Workload Aware Storage Platform
O5. Kohei Toshimitsu and Kenjiro Taura. Instant Cloud FS: A Distributed File System for Instant Deployment across Multiple Environments
O6. Chunhung Huang, Hsi-En Yu, and Weicheng Huang. Dockerize Distributed Computing Architecture in Medical Imaging and Radiotherapy
O7. Keiichiro Fukazawa, Ryusuke Egawa, Yuko Isobe, and Ikuo Miyoshi. Performance Evaluation of MHD Simulation Code on SX-ACE and FX100
O8. Yosuke Oyama, Akihiro Nomura, Ikuro Sato, Hiroki Nishimura, Yukimasa Tamatsu, and Satoshi Matsuoka. Training Condition Conscious Performance Modeling of an Asynchronous Data-Parallel Deep Learning System
O9. Ayae Ichinose, Atsuko Takefusa, Hidemoto Nakada, and Masato Oguchi. Evaluation of Distributed Processing of the Deep Learning Framework Caffe
O10. Hiroko Midorikawa. Blk-Tune: Blocking Parameter Auto-Tuning for Flash-based Out-of-Core Stencil Computations
O11. Mohamed Wahib, Naoya Maruyama, and Takayuki Aoki. A High-level Framework for Efficient AMR on GPUs
O12. Moon Gi Seok, Tag Gon Kim, and Daejin Park. Agent-based On-Chip Glitch Filter Placement for Safe Microcontroller in Noisy Environment
O13. Leyuan Wang, Yangzihao Wang, and John Owens. Fast Parallel Subgraph Matching on the GPU
O14. Stephan Schlagkamp and Florian Schmickmann. A Dynamic Simulation Framework for Parallel Job Scheduling Performance Evaluation
O15. Keisuke Fukuda, Motohiko Matsuda, Naoya Maruyama, Rio Yokota, Kenjiro Taura, and Satoshi Matsuoka. Tapas: An Implicitly Parallel Programming Framework for Hierarchical N-body Algorithms
O16. Wataru Endo and Kenjiro Taura. MGAS-2: Global Address Space Library with Dynamic Data Migration
O17. Jieun Choi and Yoonhee Kim. Data-Locality Aware Scientific Workflow Scheduling Method in HPC Cloud Environments
O18. Shintaro Iwasaki and Kenjiro Taura. An Automatic Cut-off for Task-Parallel Programs
O19. An Huynh and Kenjiro Taura. Critical Path Analysis for Characterizing Parallel Runtime Systems
O20. Yuya Kobayashi, Hideyuki Jitsumoto, Akihiro Nomura, and Satoshi Matsuoka. Evaluating Tolerance of Applications against Realistic DRAM Faults