Keynote Speakers


The Federated Computing Research Conference (FCRC) provides a Turing Award lecture and five distinguished keynote lectures (one each day) that are open to attendees of all co-located events. Please note that the Turing Award lecture takes place on the evening of Sunday, June 14, and is accessible to HPDC registrants. In addition, HPDC'15 features two keynotes from distinguished researchers in the field of high performance distributed computing.

  • Allen D. Malony (University of Oregon)
  • Title: Through the Looking-Glass: From Performance Observation to Dynamic Adaptation [Slides]

    Abstract: Since the beginning of "high-performance" parallel computing, observing and analyzing performance for purposes of finding bottlenecks and identifying opportunities for improvement has been at the heart of delivering the performance potential of next-generation scalable systems. Interestingly, it is the ever-changing parallel computing landscape that is the main driver of requirements for parallel performance technology and the improvements necessary beyond the current state-of-the-art. Indeed, the development and application of our TAU Performance System over many years largely follows an evolutionary path of addressing measurement and analysis problems in new parallel machines and programming environments.

    However, the outlook to future parallel systems with high degrees of concurrency, heterogeneous components, dynamic runtime environments, asynchronous execution, and power constraints suggests a new perspective will be needed on the role of performance observation and analysis with respect to tool technology integration and performance optimization methods. The reliance on post-mortem analysis of application-level ("1st person") performance measurements is prohibitive for exascale-class machines because of the performance data volume, the primitive basis for performance data attribution, and the fundamental problem of performance variation that will exist. Instead, it will be important to provide introspection support across the exascale software stack to understand how system ("3rd person") resources are used during execution. Furthermore, the opportunity to couple a global performance introspection capability (a "performance backplane") with online performance decision analytics inspires the concept of an autonomic performance system that can feed back policy-based decisions to guide the computation to better states of execution.

    The talk will explore these issues by giving a brief retrospective on performance tool evolution, setting the stage for current research projects where a new performance perspective is being pursued. It will also speculate on what might be included in next-generation parallel system hardware, specifically to make exascale machines more performance-aware and dynamically adaptive.

    Bio: Allen D. Malony is a Professor in the Department of Computer and Information Science at the University of Oregon (UO), where he directs parallel computing research projects, notably the TAU parallel performance system project. He has extensive experience in performance benchmarking and characterization of high-performance computing systems, and has developed performance evaluation tools for a range of parallel machines over the last 25 years. His research interests also include computational science and neuroinformatics. Malony was awarded the NSF National Young Investigator award, was a Fulbright Research Scholar to The Netherlands and Austria, and received the prestigious Alexander von Humboldt Research Award for Senior U.S. Scientists from the Alexander von Humboldt Foundation. He is funded by the Department of Energy, the National Science Foundation, and the Department of Defense. Malony is the Director of the UO Neuroinformatics Center and the CEO of ParaTools, Inc., which he founded with Dr. Sameer Shende in 2004.

  • Ewa Deelman (University of Southern California) - Achievement Award talk
  • Title: High Impact Computing: Computing for Science and the Science of Computing

    Abstract: Modern science often requires the processing and analysis of vast amounts of data in search of postulated phenomena, and the validation of core principles through the simulation of complex system behaviors and interactions. This is the case in fields such as astronomy, bioinformatics, physics, and climate and ocean modeling. In order to support the computational and data needs of today’s science, new knowledge must be gained on how to deliver the growing high-performance and distributed computing resources to the scientist’s desktop in an accessible, reliable, and scalable way. In over a decade of working with domain scientists, the Pegasus project has developed tools and techniques that automate the computational processes used in data- and compute-intensive research. Among them is the Pegasus scientific workflow management system, which is being used by researchers to model seismic wave propagation, to discover new celestial objects, to study RNA critical to human brain development, and to investigate other important research questions.

    This talk will review the conception and evolution of the Pegasus research program. It will touch upon the role of scientific workflow systems in advancing science, and will give specific examples of how the Pegasus Workflow Management System has done so. It will describe how the Pegasus project has adapted to changes in application needs and to advances in high performance and distributed computing systems. It will discuss the interleaving of Computer Science research and software development and how each benefits from the other while providing value to other science domains. The talk will also stress the importance of forming collaborations, both within Computer Science and with other disciplines, to help solve real-world problems and have fun along the way.

    Bio: Ewa Deelman is a Research Associate Professor in the USC Computer Science Department and the Assistant Director of Science Automation Technologies at the USC Information Sciences Institute. Dr. Deelman's research interests include the design and exploration of collaborative, distributed scientific environments, with particular emphasis on workflow management as well as the management of large amounts of data and metadata. In 2007, Dr. Deelman edited the book “Workflows in e-Science: Scientific Workflows for Grids”, published by Springer. She is also the founder of the annual Workshop on Workflows in Support of Large-Scale Science, which is held in conjunction with the Supercomputing (SC) conference. In 1997, Dr. Deelman received her PhD in Computer Science from Rensselaer Polytechnic Institute.