The 38th Annual AAAI Conference on Artificial Intelligence

February 20-27, 2024 | Vancouver, Canada

AAAI-24 Panels

Sponsored by the Association for the Advancement of Artificial Intelligence
February 22-25, 2024 | Vancouver Convention Centre – West Building | Vancouver, BC, Canada


AI Strategic Initiatives and Policies

Friday, February 23 – 11:15 AM – 12:30 PM
Location: Ballroom AB

As the widespread excitement about recent advances in AI indicates, AI is a transformative technology that will likely have a deep and systemic impact on national and global economies and security. Recognition of this potential impact has led many countries to launch national strategic research initiatives and policies for the development and deployment of AI. The goal of this panel is to discuss some of these strategic initiatives and policies from different perspectives and at different levels of aggregation. We will begin with an overview of the US NSF’s National AI Institutes Program (Donlon), illustrate it with a specific National AI Institute (Goel), describe the structures and processes in the US that result in a national strategic initiative (Littman), present current and potential AI-related policies in the US (Wagstaff), and compare them with similar initiatives and policies across the world (Walsh).

Panelists:

Ashok Goel
Georgia Institute of Technology, Chair

Ashok K. Goel is a Professor of Computer Science and Human-Centered Computing in the School of Interactive Computing at Georgia Institute of Technology and the Chief Scientist with Georgia Tech’s Center for 21st Century Universities. For almost forty years, he has conducted research into cognitive systems at the intersection of artificial intelligence and cognitive science with a focus on computational design and creativity. For almost two decades, much of his research has increasingly focused on AI in education and education in AI. He is a Fellow of AAAI and the Cognitive Science Society, an Editor Emeritus of AAAI’s AI Magazine, and a recipient of AAAI’s Outstanding AI Educator Award as well as the University System of Georgia’s Scholarship of Learning and Teaching Award. Ashok is the PI and Executive Director of the National AI Institute for Adult Learning and Online Education (aialoe.org), sponsored by the United States National Science Foundation.

James Donlon
National Science Foundation

James Donlon is a Program Director at NSF. He created and leads the National AI Research Institutes program, the nation’s flagship, multisector program for federally funded research pursuing the advancement of AI and AI-powered innovation in a wide range of use-inspired sectors. He also leads initiatives aimed at growing the AI Institutes into a richly interconnected research community, including the Expanding AI Innovation through Capacity Building and Partnerships (ExpandAI) program and the AI Institutes Virtual Organization (AIVO). Prior to NSF, from 2008 to 2013, Jim was a Program Manager for AI at the Defense Advanced Research Projects Agency (DARPA), where he created the Mind’s Eye program and led the Computer Science Study Group. Prior to federal civil service, Jim served 20 years in the U.S. military, where he conducted use-inspired research and development in knowledge-based systems, intelligent tutoring systems, evolutionary algorithms, and discrete optimization.

Michael Littman
National Science Foundation

Michael Littman is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation. The division is home to the programs and program officers that support researchers in artificial intelligence, human-centered computing, data management, and assistive technologies, as well as those exploring the impact of intelligent information systems on society. Littman is also University Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching and his research has been recognized with three best-paper awards and three influential paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.

Kiri Wagstaff
AAAS Congressional AI Fellow

Kiri L. Wagstaff is a machine learning researcher, educator, and AAAI Fellow.  She is currently serving a one-year term as a U.S. Congressional Fellow in Artificial Intelligence, sponsored by the American Association for the Advancement of Science (AAAS).  As a Principal Researcher at NASA’s Jet Propulsion Laboratory, she specialized in developing machine learning methods for use onboard spacecraft and in data archives for planetary science, astronomy, cosmology, and more.  She also investigates how we can understand and trust machine learning systems.  She co-founded the Symposium on Educational Advances in Artificial Intelligence (EAAI) and teaches graduate machine learning courses at Oregon State University.  She earned a Ph.D. in Computer Science from Cornell University followed by an M.S. in Geological Sciences and a Master of Library and Information Science (MLIS).  Her work has been recognized by two NASA Exceptional Technology Achievement Medals.  She is passionate about keeping machine learning relevant to real-world problems.

Toby Walsh
University of New South Wales

Toby Walsh is Chief Scientist of UNSW.AI, UNSW’s new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN, and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being “banned indefinitely” from Russia. He is a Fellow of the Australian Academy of Science and was named on the international “Who’s Who in AI” list of influencers. He has written four books on AI for a general audience, the most recent of which is “Faking It! Artificial Intelligence in A Human World”.

Special Session: Envisioning Open Research Resources for Artificial Intelligence in the US

Friday, February 23 – 1:00 PM – 2:00 PM (Bring your own lunch to the session)
Location: Room 220

Presenters: Yolanda Gil (USC), Shantenu Jha (Rutgers), Michael Littman (NSF), Cornelia Caragea (NSF)

The US National Artificial Intelligence Research Resource (NAIRR) Task Force report published a roadmap for shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support.  While the National Science Foundation (NSF) has funded significant cyberinfrastructure efforts for research in different scientific disciplines, the use of national cyberinfrastructure is not very common in the AI research community.  As AI becomes more experimental and important research breakthroughs require large-scale computations, the availability of advanced cyberinfrastructure for AI research is paramount to new AI innovations.  We invite the AI community to share ideas on how AI researchers currently access the infrastructure necessary for experimental work, desiderata for resources and infrastructure for AI research, and the resource requirements that may be unique to AI as a discipline.  The discussion will inform an upcoming NSF workshop on this topic as well as other planning activities for NAIRR.  

Implications of LLMs

Saturday, February 24 – 4:30 PM – 6:00 PM
Location: Ballroom AB

Moderator: Kevin Leyton-Brown, University of British Columbia

Kevin Leyton-Brown is a professor of Computer Science and a Distinguished University Scholar at the University of British Columbia. He also holds a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute and is an associate member of the Vancouver School of Economics. He received a PhD and an M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies artificial intelligence, mostly at the intersection of machine learning and either the design and operation of electronic markets or the design of heuristic algorithms. He is increasingly interested in large language models, particularly as components of agent architectures. He is passionate about leveraging AI to benefit underserved communities, particularly in the developing world.

Panelists: Christopher Manning, Subbarao Kambhampati, Sheila McIlraith, and Charles Sutton

Christopher Manning
Stanford University

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and an ACL Test of Time award for his pioneering work on applying neural network or deep learning approaches to human language understanding. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.

Subbarao Kambhampati
Arizona State University

Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, and has recently been interested in the role generative AI systems can play there (a topic on which he is also delivering a tutorial at AAAI-24). His research group also studies the challenges of human-aware AI systems. He is a fellow of AAAI, AAAS and ACM. He served as the president of AAAI, was a trustee of IJCAI, and was a founding board member of the Partnership on AI. He is the current chair of AAAS Section T (Information, Communication and Computation). Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Sheila McIlraith
University of Toronto

Sheila McIlraith is a Professor in the Department of Computer Science, University of Toronto, Canada CIFAR AI Chair (Vector Institute for Artificial Intelligence), and Associate Director and Research Lead of the Schwartz Reisman Institute for Technology and Society. Prior to joining U of T, McIlraith spent six years as a Research Scientist at Stanford University, and one year at Xerox PARC. McIlraith is the author of over 100 scholarly publications in the area of knowledge representation, automated reasoning and machine learning. Her work focuses on AI sequential decision making broadly construed, through the lens of human-compatible AI. McIlraith is a fellow of the ACM, a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a past President of KR Inc., the international scientific foundation concerned with fostering research and communication on knowledge representation and reasoning. She is currently serving on the Standing Committee of the Stanford One Hundred Year Study on Artificial Intelligence (AI100). McIlraith is an associate editor of the Journal of Artificial Intelligence Research (JAIR), a past associate editor of the journal Artificial Intelligence (AIJ), and a past board member of Artificial Intelligence Magazine. In 2018, McIlraith served as program co-chair of the 32nd AAAI Conference on Artificial Intelligence (AAAI-18). She also served as program co-chair of the International Conference on Principles of Knowledge Representation and Reasoning (KR2012), and the International Semantic Web Conference (ISWC2004). McIlraith’s early work on Semantic Web Services has had notable impact. In 2011 she and her co-authors were honoured with the SWSA 10-year Award, a test of time award recognizing the highest impact paper from the International Semantic Web Conference, 10 years prior; in 2022 McIlraith and co-authors were honoured with the 2022 ICAPS Influential Paper Award, recognizing a significant and influential paper published 10 years prior at the International Conference on Automated Planning and Scheduling; and in 2023 McIlraith and co-authors were honoured with the IJCAI-JAIR Best Paper Prize, awarded annually to an outstanding paper published in JAIR in the preceding five years.

Charles Sutton
University of Edinburgh

Doug Lenat, CYC, and Future Directions in Reasoning and Knowledge Representation

Sunday, February 25 – 9:30 AM – 10:30 AM
Location: Ballroom AB

Co-organizers: Gary Marcus and Michael Witbrock

At AAAI 2024, we honour the legacy of Doug Lenat, a pioneering figure in artificial intelligence who founded Cycorp and profoundly impacted AI research by scaling both the extent and the ambition of logic-based and common-sense reasoning. This memorial and retrospective session brings together a distinguished panel of speakers close to Doug and the Cyc project to reflect on Doug’s contributions to AI, particularly through his work on the Cyc project. Panelists, including Blake Shepard, Francesca Rossi, Gary Marcus, and Michael Witbrock, will discuss Lenat’s vision for AI, his groundbreaking approaches to knowledge representation and reasoning, and the enduring influence of his ideas on current and future AI research. We’ll explore how Lenat’s work on Cyc laid foundational principles for building intelligent systems and his advocacy for a comprehensive, inferentially powerful AI knowledge base. Attendees will gain insights into Lenat’s impact on AI’s past and present, and on its trajectory towards more sophisticated and human-like reasoning capabilities.

Francesca Rossi
IBM

Francesca Rossi is an IBM fellow and the IBM AI Ethics Global Leader. She works at the T.J. Watson IBM Research Lab, New York.

Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular for decision support systems for group decision making. She has published over 200 scientific articles in journals and conference proceedings, as well as book chapters. She has co-authored a book and edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and a handbook.
She is a fellow of both the worldwide association of AI (AAAI) and the European one (EurAI). She has been president of IJCAI (International Joint Conference on AI), an executive councillor of AAAI, and the Editor in Chief of the Journal of AI Research. She is a member of the scientific advisory board of the Future of Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence (Cambridge, UK). She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners.
She has been a member of the European Commission High Level Expert Group on AI and the general chair of the AAAI 2020 conference. She is a member of the Responsible AI working group of the Global Partnership on AI and the industry representative in its Steering Committee. She has been AAAI President since 2022.

Blake Shepard
Cycorp

Blake Shepard is the Director of Ontological Engineering at Cycorp. Over the course of his 25-year career at Cycorp, he has directed a wide range of extensions of the Cyc platform for commercial and government applications, and he has published numerous articles on Cyc. Some areas in which he has led Cyc platform development include integrating Cyc with LLMs to automatically expand the Cyc knowledge base and to validate LLM output with Cyc reasoning, abductive planning for embodied AI, abductive reasoning and scenario generation for terrorist threat anticipation, computer network risk assessment, decision support for space launch facilities, learning-by-teaching, simulation of realistic emotional engagement with fictional characters in rich fictional universes, and root-cause anomaly understanding for complex systems including deep-sea and unconventional oil wells. He holds a Ph.D. in Philosophy from The University of Texas at Austin.

Michael Witbrock
The University of Auckland

Michael Witbrock is a Computer Science professor at The University of Auckland, leading its Broad AI Lab. With a PhD from Carnegie Mellon University and a rich background in AI research and development, he worked alongside Doug Lenat at Cycorp for 15 years, serving as Vice President of Research. His work focuses on blending formal logic with machine learning to create intelligent systems. A passionate advocate for AI’s positive impact, Witbrock’s contributions span academia and industry, aiming to advance the field toward more human-like reasoning and social good.
