The emergence of conversational AI has revolutionized the way we interact with technology, yet its impact on High-Performance Computing (HPC) has yet to be fully explored. Over the past decades, mathematicians, linguists, and computer scientists have worked to enable human-machine communication through natural language and automatic speech recognition. While the recent emergence of virtual personal assistants such as Siri, Alexa, and Google Assistant has pushed the field forward, developing such conversational agents remains difficult, with numerous unanswered questions, and combining conversational AI with HPC presents its own set of challenges. This workshop aims to bring together researchers and software/hardware designers from academia, industry, and national laboratories who are involved in designing conversational AI for HPC and in exploring how it can be leveraged to improve efficiency, accuracy, and accessibility for end users.
The objectives of this workshop are to share the experiences of the members of this community and to explore the opportunities and challenges in design trends for conversational AI for HPC. Through presentations and discussions, participants will gain a comprehensive understanding of the potential for conversational AI to revolutionize HPC and of the challenges that must be overcome. The workshop will give attendees the opportunity to learn from experts in the field and to explore how conversational AI can be applied to their specific areas of interest within HPC. It is designed for HPC researchers, practitioners, and developers who are interested in exploring the benefits of conversational AI and its potential applications.
Arvind Ramanathan, Argonne National Laboratory
Title: The Decade Ahead: Building Frontier AI Systems for Science and the Path to Zettascale
The successful development of transformative applications of AI for science, medicine, and energy research will have a profound impact on the world. The rate of development of AI capabilities continues to accelerate, and the scientific community is becoming increasingly agile in using AI, leading us to anticipate significant changes in how science and engineering goals will be pursued in the future. Frontier AI (the leading edge of AI systems) enables small teams to conduct increasingly complex investigations, accelerating tasks such as generating hypotheses, writing code, and automating entire scientific campaigns. However, certain challenges remain resistant to AI acceleration, such as human-to-human communication, large-scale systems integration, and assessing creative contributions. Taken together, these developments signify a shift toward more capital-intensive science, as productivity gains from AI will drive resource allocations to groups that can effectively translate AI into scientific outputs, while others will lag. In addition, with AI becoming the major driver of innovation in high-performance computing, we also expect major shifts in the computing marketplace over the next decade: we see a growing performance gap between systems designed for traditional scientific computing and those optimized for large-scale AI, such as large language models. In part as a response to these trends, but also in recognition of the role of government-supported research in shaping the future research landscape, the U.S. Department of Energy has created the FASST (Frontiers in AI for Science, Security and Technology) initiative. FASST is a decadal research and infrastructure development initiative aimed at accelerating the creation and deployment of frontier AI systems for science, energy research, and national security. I will review the goals of FASST and how we imagine it transforming research at the national laboratories. Along with FASST, I'll discuss the goals of the recently established Trillion Parameter Consortium (TPC), whose aim is to foster a community-wide effort to accelerate the creation of large-scale generative AI for science. Additionally, I'll introduce the AuroraGPT project, an international collaboration to build a series of multilingual, multimodal foundation models for science that are pretrained on deep domain knowledge, enabling them to play key roles in future scientific enterprises.