Deployable AI Workshop
AAAI 2026
Topaz Concourse, Singapore EXPO, Singapore, January 26, 2026




News

  • Detailed poster session information
  • Workshop Location and Date: Topaz Concourse, Singapore EXPO, Singapore, 26 January 2026, 8:30 AM to 5:00 PM
  • We have updated the tentative schedule.
  • Please follow us on Twitter for the latest news.
  • Submission Deadline: 22 October 2025 (AoE). Due to an OpenReview outage, we have extended the deadline to 23 October 2025 (AoE).
  • Speaker list updated.
  • The AAAI early registration deadline of 19 November 2025 (AoE) is approaching; please visit Link.

About

Artificial Intelligence (AI) has evolved into a vast interdisciplinary research area, and we have reached an era in which AI is beginning to be deployed as real-world solutions across various sectors and domains. Moving toward wider deployment of AI models in the real world is not simply a matter of translational research and engineering; it also requires addressing several fundamental research questions and issues spanning algorithmic, systemic, and societal aspects. It is therefore crucial to advance research on Deployable AI and to understand how the various deployability aspects of AI models can ensure a positive impact on society.

Call for Papers

Key Dates

  • Paper Deadline: October 23, 2025 (AoE) (extended from October 22, 2025)
  • Notification: November 8, 2025 (AoE) (extended from November 5, 2025)
  • Camera-ready: December 1, 2025 (AoE). Authors of accepted papers may deanonymize and update their papers on OpenReview.

All deadlines follow the Anywhere on Earth (AoE) timezone.

Submission Site

Submit papers through the Deployable AI Workshop Submission Portal on OpenReview.

Scope

We welcome contributions across a broad spectrum of topics, including but not limited to:

  • Deployable AI: Concepts and Models
  • Privacy-Preserving AI
  • Language Models & Deployability
  • Explainable and Interpretable AI
  • Fairness and Ethics in AI
  • AI models and social impact
  • Trustworthy AI models

Submission Guidelines

Format:  All submissions must be a single PDF file. We welcome high-quality original papers in the following two tracks:

  • Short Papers: 4 pages
  • Long Papers: 7 pages
References and appendices are not included in the page limit, but the main text must be self-contained; reviewers are not required to read beyond it.

IAAI-26 Special Track Resubmission:  Authors of papers submitted to IAAI-26 that were not accepted are invited to consider submitting their work to the DAI-2026 Workshop, which focuses on Deployable AI. Submissions should align with the scope of the workshop.

  • You will need to submit a 2-page review-response PDF that includes the reviews. Please follow the AAAI-style format for the review response as well.

Style file:   You must format your submission using the AAAI 2026 LaTeX style file. Please include the references and supplementary materials in the same PDF. The maximum file size for submissions is 50MB. Submissions that violate the AAAI style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.

Dual-submission and non-archival policy:  We welcome ongoing and unpublished work. We will also accept papers that are under review at the time of submission, or that have been recently accepted, provided they do not breach any dual-submission or anonymity policies of those venues. The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility:   Submissions and reviews will not be public. Only accepted papers will be made public.

Double-blind reviewing:   All submissions must be anonymized and may not contain any identifying information that may violate the double-blind reviewing policy. This policy applies to any supplementary or linked material as well, including code. If you are including links to any external material, it is your responsibility to guarantee anonymous browsing. Please do not include acknowledgements at submission time. If you need to cite one of your own papers, you should do so with adequate anonymization to preserve double-blind reviewing. Any papers found to be violating this policy will be rejected.

Contact:   For any questions, please contact us at dai2023workshop@gmail.com.


Tentative Schedule

This is the tentative schedule of the workshop. All slots are provided in local time.

Morning Session

09:00 AM - 09:05 AM Introduction and Opening Remarks
09:05 AM - 09:40 AM Keynote 1 - AI Capabilities vs. AI Deployment: Models and methods to fill the gap, Prof. Ramayya Krishnan, Carnegie Mellon University
09:40 AM - 10:15 AM Keynote 2 - Towards Reliable Assistance: Safety and Security in Sequential Decision-Making, Pradeep Varakantham, Singapore Management University
10:15 AM - 10:25 AM Oral Talk 1 - DETNO: A Diffusion-Enhanced Transformer Neural Operator for Long-Term Traffic Forecasting
10:25 AM - 10:35 AM Oral Talk 2 - Dynamic Orthogonal Continual Fine-tuning for Mitigating Catastrophic Forgetting of LLMs
10:35 AM - 11:30 AM Break and Poster Session 1
11:30 AM - 11:40 AM Oral Talk 3 - Alignment-Constrained Dynamic Pruning for LLMs: Identifying and Preserving Alignment-Critical Circuits
11:40 AM - 11:50 AM Oral Talk 4 - Efficient Multi-Model Orchestration for Self-Hosted Large Language Models
11:50 AM - 12:00 PM Oral Talk 5 - A Task-Level Explanation Framework for Meta-Learning Algorithms
12:00 PM - 12:50 PM Poster Session 2
12:50 PM - 2:00 PM Lunch Break
2:00 PM - 2:30 PM Fireside Chat - Global North, South & the Future: A Fireside Chat on Responsible AI, Prof. Gopal Ramchurn (University of Southampton, RAI UK) and Prof. Balaraman Ravindran (CeRAI & WSAI, IIT Madras)
2:30 PM - 3:00 PM Keynote 3 - FoundationMotion: Auto-Labeling and Reasoning about Spatial Movement in Videos, Boyi Li, NVIDIA Research, UC Berkeley
3:00 PM - 3:10 PM Oral Talk 6 - Explainability Methods Can Be Biased: An Empirical Investigation of Gender Disparity in Post-hoc Methods
3:10 PM - 4:05 PM Break 2 and Poster Session 3
4:05 PM - 4:40 PM Keynote 4 - Finding supervision for complex tasks, Pang Wei Koh, University of Washington
4:40 PM - 4:50 PM Oral Talk 7 - V-OCBF: Learning Safe Filters from Offline Data via Value-Guided Offline Control Barrier Functions
4:50 PM - 5:00 PM Oral Talk 8 - Quantifying Strategic Ambiguity in Corporate Language for AI-Driven Trading Strategies
5:00 PM - 5:10 PM Oral Talk 9 - Safe and Deployable LLM Adaptation: Directional Deviation Index–Guided Model Pruning
5:10 PM - 5:15 PM Closing Remarks

Invited Speakers




Ramayya Krishnan

Carnegie Mellon University

AI Capabilities vs. AI Deployment: Models and methods to fill the gap

Abstract
AI capabilities are developing rapidly, with new benchmarks focusing on realistic tasks. While models approach human-level performance, organizations struggle to deploy them effectively. This talk examines the capability–deployment gap and argues for a systems approach that incorporates user, organizational, and policy constraints. Methods from operations research and statistics are used to inform evaluation, robustness, and workflow redesign for human–AI collaboration.

Pradeep Varakantham

Singapore Management University

Towards Reliable Assistance: Safety and Security in Sequential Decision-Making

Abstract
This talk outlines a research agenda on developing sequential decision-making systems that assist humans. It highlights recent work on constrained reinforcement learning and defenses against adversarial attacks, ensuring agents remain safe under constraints and robust to malicious interference.

Pang Wei Koh

University of Washington

Finding supervision for complex tasks

Abstract
Language models are tackling tasks so complex that solving or even verifying them requires significant time and expertise, making it challenging to acquire training data at scale. In this talk, we will discuss three approaches to this problem. First, we will show that models can learn a surprising amount through "delta learning", that is, from relative quality differences between paired data, even if this data is of worse quality than what our model already produces. Second, we will introduce EvalTree, our approach to profiling weaknesses of language models; in turn, this lets us efficiently collect training data tailored for a given model. Finally, we will discuss our work on training DR Tulu, a long-form deep research model, by using RL with rubrics that evolve as the model trains, thereby providing discriminative training signals even on complex deep research tasks.

Boyi Li

NVIDIA Research, UC Berkeley

FoundationMotion: Auto-Labeling and Reasoning about Spatial Movement in Videos

Abstract
Understanding motion is central to physical reasoning, yet existing datasets are costly to scale. This talk introduces FoundationMotion, an automated pipeline that constructs large-scale motion datasets from videos using object tracking and large language models. Models trained on these datasets achieve substantial gains in motion understanding and spatial reasoning, outperforming strong closed- and open-source baselines.

Workshop Organizers




Aravindan Raghuveer

Google DeepMind

Arpita Biswas

Rutgers University

Arun Rajkumar

IIT Madras

Devika Jay

GridSentry

Rahul Vashisht

IIT Madras

Program Committee