HiPC 2023

KEYNOTE SPEAKERS

Lizy Kurian John

University of Texas at Austin, USA

For ML and With ML: The New Normal in System Design  

ABSTRACT

Emerging machine learning (ML) applications place exploding demands on hardware systems; delivering high throughput, low latency, and low energy consumption is essential to sustain the thriving development of cognitive systems and applications. Designing efficient circuits and systems to enable, support, and harness the power of machine intelligence is key to maintaining the present momentum of intelligent systems. In this talk, I will describe some of our research on providing efficient hardware infrastructure for ML.


In addition to designing systems for ML, we conduct research on using ML to design and evaluate systems. In this talk, I will describe a few examples of using ML for pre-silicon performance evaluation during the design of computer systems.

SPEAKER BIO

Lizy Kurian John is the Truchard Foundation Chair in Engineering at the University of Texas at Austin. She received her Ph.D. in Computer Engineering from the Pennsylvania State University. Her research interests include workload characterization, performance evaluation, memory systems, reconfigurable architectures, and high-performance architectures for emerging workloads. She is the recipient of many awards, including the Joe J. King Professional Engineering Achievement Award (2023), the Pennsylvania State University Outstanding Engineering Alumnus Award (2011), the NSF CAREER Award, the UT Austin Engineering Foundation Faculty Award, the Halliburton, Brown and Root Engineering Foundation Young Faculty Award, and the University of Texas Alumni Association (Texas Exes) Teaching Award. She has coauthored books on Digital Systems Design using VHDL (Cengage Publishers, 2007, 2017) and Digital Systems Design using Verilog (Cengage Publishers, 2014), and has edited four books, including one on Computer Performance Evaluation and Benchmarking. She is currently the Editor-in-Chief of IEEE Micro. She holds 16 US patents and is an IEEE Fellow (Class of 2009), an ACM Fellow, and a Fellow of the National Academy of Inventors (NAI).

Manish Parashar

University of Utah, USA

Computing Everywhere, All at Once: Harnessing the Computing Continuum for Science 

ABSTRACT

Emerging data-driven scientific workflows are increasingly leveraging distributed data sources to understand end-to-end phenomena, drive experimentation, and facilitate important decision making. Despite the exponential growth of digital data sources at the edge and the ubiquity of non-trivial computational power for processing this data across the edge-HPC continuum, realizing such science workflows remains challenging. In this talk, I will explore how the computing continuum, spanning resources at the edge, in the core, and in between, can be harnessed to support science. I will also describe recent research on programming abstractions that can express what data should be processed and when and where it should be processed, as well as middleware services that automate the discovery of resources and the orchestration of computations across these resources.

SPEAKER BIO

Manish Parashar is Director of the Scientific Computing and Imaging (SCI) Institute, Chair in Computational Science and Engineering, and Presidential Professor in the Kahlert School of Computing at the University of Utah. He recently completed an IPA appointment at the National Science Foundation, where he served as Office Director of the NSF Office of Advanced Cyberinfrastructure, as well as co-chair of the National Science and Technology Council's Subcommittee on the Future Advanced Computing Ecosystem and of the National Artificial Intelligence Research Resource (NAIRR) Task Force. Manish is the founding chair of the IEEE Technical Consortium on High Performance Computing (TCHPC) and is a Fellow of AAAS, ACM, and the IEEE/IEEE Computer Society. For more information, please visit http://manishparashar.org.

Sunita Sarawagi

Indian Institute of Technology Bombay, India

Modern AI for Analyzing Large Structured Databases: Opportunities and Challenges 

ABSTRACT

Modern AI is revolutionizing the way we interact with and analyze large structured databases. Natural language interfaces to structured data are now a reality, and core tasks like forecasting are becoming more accurate via large-scale modeling of the interactions among related variables. With the dizzying pace of progress in integrating LLMs with structured data, data analysis can be contextualized with real-world knowledge and events.


In this talk, I will discuss the latest ML research that is enabling these capabilities. I will also discuss the challenges of reliability and efficiency in existing solutions, and present directions for future research.

SPEAKER BIO

Sunita Sarawagi conducts research in the fields of databases and machine learning. She received her PhD in databases from the University of California, Berkeley, and her bachelor's degree from IIT Kharagpur. She has also worked at Google Research (2014-2016), CMU (2004), and the IBM Almaden Research Center (1996-1999). She was awarded the Infosys Prize in Engineering and Computer Science in 2019 and the Distinguished Alumnus Award from IIT Kharagpur. She is a Fellow of the ACM, INAE, and IAS. She has numerous publications, including notable-paper awards at the ACM SIGMOD, ICDM, and NeurIPS conferences. She has served as a member of the boards of directors of ACM SIGKDD and the VLDB Endowment, as program chair for the ACM SIGKDD conference, and as research track co-chair for the VLDB conference.