Published October 30, 2025

VIABLE Lab Presents at AECT 2025 in Las Vegas

VIABLE Lab presented research at AECT 2025 in Las Vegas, exploring human-centered AI in education through tools for instructional design, critical thinking, and student engagement.

The VIABLE Lab participated in the AECT 2025 International Convention held at Planet Hollywood in Las Vegas from October 20–24, 2025. Hongming (Chip) Li represented the lab on-site, presenting collaborative research involving multiple team members.

Emerging Learning Technology Award

Chip Li received the Emerging Learning Technology Award from the Division of Emerging Learning Technologies (DELT) at AECT 2025 for SmartFlash, an AI-powered adaptive learning tool that transforms learning materials into personalized flashcard sets and interactive games. The project was developed in collaboration with Hai Li, Salah Esmaeili, Nazanin Adhami, and Dr. Rui Tammy Huang.

Navigating the AI Era in Education

As artificial intelligence rapidly transforms educational landscapes, educators face a pressing question: How can we harness AI's potential while preserving what makes teaching and learning fundamentally human? The VIABLE Lab's presentations at AECT 2025 explored this challenge from multiple angles, addressing both opportunities and concerns in AI-enhanced education.

Supporting Educators in the Age of AI

Teachers today stand at a crossroads. On one hand, AI tools promise to streamline time-consuming tasks like curriculum planning and assessment design. On the other, many educators worry about losing their professional agency to automated systems that obscure their decision-making processes. How can AI truly support, rather than replace, educator expertise?

Our team addressed this challenge through LOGEN AI[1], a collaborative tool that reimagines the relationship between educators and AI in instructional design. Rather than generating complete learning objectives automatically, LOGEN AI implements a structured multi-stage workflow where educators remain the primary decision-makers. The system combines AI capabilities with established educational frameworks like Bloom's Taxonomy, achieving substantial agreement with expert classification while maintaining pedagogical transparency. This approach demonstrates that AI can augment educator expertise without diminishing professional judgment.
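
To make this concrete, here is a minimal, hypothetical sketch of such a staged workflow in Python. It is not the LOGEN AI implementation; the function names, stub data, and console prompt are invented purely to illustrate the pattern of the AI proposing and the educator deciding.

```python
# Illustrative sketch only -- not the actual LOGEN AI codebase.
# Stage 1: an AI model proposes learning objectives tagged with Bloom's
# Taxonomy levels. Stage 2: the educator reviews each proposal, and nothing
# is finalized without explicit approval.
from dataclasses import dataclass

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

@dataclass
class ObjectiveDraft:
    text: str            # AI-proposed learning objective
    bloom_level: str     # AI-proposed Bloom's Taxonomy classification
    approved: bool = False

def propose_objectives(topic: str) -> list[ObjectiveDraft]:
    """Stage 1 (AI): draft objectives for a topic. A real system would call
    an LLM here; this stub returns canned examples."""
    return [
        ObjectiveDraft(f"Explain the key concepts of {topic}.", "understand"),
        ObjectiveDraft(f"Design an experiment that applies {topic}.", "create"),
    ]

def educator_review(drafts: list[ObjectiveDraft]) -> list[ObjectiveDraft]:
    """Stage 2 (human): the educator accepts or rejects each draft; a full
    tool would also let them edit the wording and re-tag the Bloom level."""
    approved = []
    for draft in drafts:
        decision = input(f"[{draft.bloom_level}] {draft.text}  (keep/skip): ")
        if decision.strip().lower() == "keep":
            draft.approved = True
            approved.append(draft)
    return approved

if __name__ == "__main__":
    finalized = educator_review(propose_objectives("machine learning"))
    for obj in finalized:
        print(f"APPROVED ({obj.bloom_level}): {obj.text}")
```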

Taking this human-centered philosophy further, our work on the Pro-CaRE platform[2] explored how students themselves can participate in designing AI tools that serve them. Through a participatory Learning Experience Design approach, we examined how engineering students co-designed this explainable recommender system for internships, ensuring that AI explanations align with actual student needs and decision-making processes.

Rethinking Student Engagement with AI

If we want students to think critically in an AI-saturated world, we need to fundamentally rethink how they interact with these technologies. The VIABLE Lab presented two contrasting approaches to this challenge.

First, our Provoking Minds platform[3] flips the traditional AI tutoring model on its head. Instead of providing students with answers, the system orchestrates debates between AI agents representing different scientific perspectives. Students observe these debates, actively analyze arguments, and receive adaptive feedback on their analytical reasoning. By positioning AI as a tool for intellectual provocation rather than information delivery, we aim to cultivate the critical thinking skills students desperately need in an era of AI-generated content.
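
As a rough illustration of what orchestrating such a debate can look like, here is a generic sketch. It is not the Provoking Minds implementation; the agent stances and stub replies are invented, and a real system would call an LLM where the stub stands.

```python
# Minimal, hypothetical sketch of agent-debate orchestration -- not the
# Provoking Minds codebase. Two "agents" with opposing scientific stances take
# alternating turns; the resulting transcript is what a student would analyze
# before receiving feedback on their reasoning.

def agent_reply(stance: str, transcript: list[str]) -> str:
    """Stand-in for an LLM call: a real system would prompt a model with the
    agent's assigned perspective plus the debate so far."""
    return f"[{stance}] responds to: {transcript[-1]}"

def run_debate(question: str, stances: tuple[str, str], rounds: int = 3) -> list[str]:
    """Alternate turns between the two agents for a fixed number of rounds."""
    transcript = [f"[moderator] {question}"]
    for turn in range(rounds * 2):
        stance = stances[turn % 2]          # alternate perspectives each turn
        transcript.append(agent_reply(stance, transcript))
    return transcript

if __name__ == "__main__":
    debate = run_debate(
        "Is a virus a living organism?",
        stances=("cell-theory perspective", "genetic-replicator perspective"),
    )
    print("\n".join(debate))
    # A student would then annotate which arguments were strongest and why,
    # and adaptive feedback would target gaps in that analysis.
```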

Our complementary research on the VETTING Chat platform[4] revealed the challenges of realizing this vision. When we examined how undergraduate students actually interacted with both general and filtered ChatGPT models in a machine learning course, we found that students often engaged in transactional rather than exploratory dialogue. Even with verification layers designed to prevent direct answer provision, students tended to seek quick solutions rather than deeper understanding. This finding underscores a critical point: changing the AI tool alone isn't enough—we must also cultivate different habits of engagement.
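
For readers unfamiliar with the idea, a verification layer can be pictured as a simple filter sitting between the student and the model. The sketch below is a generic illustration included for clarity, not the actual VETTING Chat logic; the cue list and messages are invented.

```python
# Hypothetical sketch of a "verification layer" between a student and a chat
# model -- not the actual VETTING Chat implementation. The filter withholds a
# worked solution and returns a guiding prompt instead, nudging exploratory
# rather than transactional dialogue.

DIRECT_ANSWER_CUES = ("the answer is", "final answer", "here is the solution")

def verify_response(model_response: str) -> str:
    """Pass the response through only if it doesn't hand over a direct answer;
    otherwise replace it with a prompt that pushes the student to reason."""
    lowered = model_response.lower()
    if any(cue in lowered for cue in DIRECT_ANSWER_CUES):
        return ("I can't give the solution directly. Which step have you tried "
                "so far, and where does your reasoning get stuck?")
    return model_response

if __name__ == "__main__":
    print(verify_response("The answer is k=5 for this k-means problem."))
    print(verify_response("Think about how cluster variance changes as k grows."))
```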

Behind the Scenes: AI as Research Partner

Even as we develop AI tools for learners and educators, our lab is also examining how AI can accelerate educational research itself—with appropriate human oversight. Our work on systematic review methodology[5] demonstrated both the promise and limitations of using large language models for data extraction. Testing three leading LLMs (Google Gemini 1.5 Flash, Gemini 1.5 Pro, and Mistral Large 2) on 112 studies, we found that while AI can dramatically speed up literature reviews, human-in-the-loop validation remains essential. We developed an open-source software tool to facilitate this collaborative process, exemplifying our philosophy that AI should augment rather than replace human expertise.
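
Conceptually, the human-in-the-loop step can be as simple as comparing the models' extractions field by field and routing any disagreement to a reviewer. The sketch below illustrates that idea with invented function names and toy data; it is not the open-source tool described in [5].

```python
# Hypothetical sketch of human-in-the-loop extraction for systematic reviews --
# not the tool described in [5]. The idea: collect each model's extraction for
# a field, auto-accept only when all models agree, and queue disagreements for
# a human reviewer instead of resolving them silently.
from collections import Counter

def reconcile(field: str, model_outputs: dict[str, str]) -> dict:
    """Compare extractions from several LLMs for one field of one study."""
    counts = Counter(v.strip().lower() for v in model_outputs.values())
    value, votes = counts.most_common(1)[0]
    if votes == len(model_outputs):
        return {"field": field, "value": value, "status": "auto-accepted"}
    # Any disagreement goes to a human rather than being silently resolved.
    return {"field": field, "value": None, "status": "needs human review",
            "candidates": dict(model_outputs)}

if __name__ == "__main__":
    extractions = {
        "model_a": "112 participants",
        "model_b": "112 participants",
        "model_c": "121 participants",   # transposed digits -> flagged for review
    }
    print(reconcile("sample_size", extractions))
```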

Similarly, our research on analyzing student persistence[6] explored how different LLM types (ChatGPT-4o, ChatGPT-o3-mini, DeepSeek/Amazon Titan) can extract psychological constructs from conversational data. This work, conducted in partnership with ETS, advances methods for scalable qualitative analysis while maintaining the rigor that educational research demands.

Zooming Out: What Does It All Mean?

Our umbrella review of AI in STEM education[7] synthesized findings across multiple systematic reviews, providing a comprehensive view of how AI applications are reshaping science and mathematics learning. This meta-perspective helps us situate our individual projects within broader trends and challenges in the field.

Together, these seven presentations painted a picture of educational technology research that refuses simple narratives about AI as either savior or threat. Instead, we demonstrated through concrete tools and rigorous research that the future of AI in education depends on how thoughtfully we design these systems, how critically we evaluate their impact, and how deliberately we preserve human agency in the learning process.

While only Chip attended the convention in person, the research represented collaborative efforts across our entire lab team and valued partners from Johns Hopkins University, Wright State University, and ETS. The conversations sparked by these presentations reinforced our commitment to developing AI tools that enhance rather than replace the irreplaceable human elements of teaching and learning.

Conference Presentations

[1] Li, H., Zhang, S., Lee, S., Trexler, M., & Botelho, A. F. (2025, October). LOGEN AI: Implementing a Transparent Human-AI Collaborative Tool for Enhanced Instructional Design. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA. [Platform] [Details & Slides] [ResearchGate]

[2] Hwang, W., Li, H., & Shin, J. (2025, October). Co-Designing an Explainable AI Educational Recommender System: A Student-Centered Approach. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA. [Platform]

[3] Li, H., Lee, S., Zhang, S., & Botelho, A. F. (2025, October). When AI Agents Battle: Can Their Debates Transform Learning and Spark Critical Thinking? In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA. [Platform] [Details & Slides] [ResearchGate]

[4] Zhang, S., Li, H., Lee, S., Schroeder, N. L., & Botelho, A. F. (2025, October). Investigating Student Interactions with General and Filtered ChatGPT. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA. [Platform] [ResearchGate]

[5] Schroeder, N. L., Jaldi, C. D., & Zhang, S. (2025, October). Extracting Data for Systematic Reviews with Large Language Models: A Study and Software Tool. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA.

[6] Ober, T., Zhang, S., Zapata, D., Schroeder, N. L., & Rivero, M. (2025, October). Comparing Different LLM Types for Extracting Indicators of Persistence from Students' Dialogues with a Pedagogical Agent. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA.

[7] Zhang, S., Jaldi, C. D., & Schroeder, N. L. (2025, October). Integrating AI in STEM Education: Insights from an Umbrella Review. In Association for Educational Communications and Technology (AECT) International Convention 2025, Las Vegas, NV, USA.

Note: These are conference presentations. For more information about specific presentations, please contact the authors.

Las Vegas Adventures 🎰

Beyond the conference sessions, Planet Hollywood's location in the heart of the Strip made it impossible to ignore Vegas's unique energy. Conference mornings started with Eggslut's breakfast burgers – proper fuel before diving into the day's presentations. The city operates on its own temporal logic, and it didn't take long to adapt: Gordon Ramsay Burger at midnight became a natural part of the rhythm. Many restaurants here run 24 hours, which actually worked perfectly for late-night conversations when the formal sessions had ended but ideas were still percolating.

Between sessions, fresh seafood plates offered surprisingly refined moments amid the Strip's sensory intensity. Walking around during breaks, The Sphere dominated the skyline – Vegas's newest architectural landmark with its massive LED displays currently featuring Wizard of Oz visuals. During the day it's an impressive piece of engineering; at night, the Strip transforms into something else entirely as natural light fades and the neon spectacle takes over.

The city's 24-hour coffee shops proved essential for those inevitable late-night presentation prep sessions. There's a particular energy to rehearsing a talk at 2 AM in a place that never sleeps – whether it's productive momentum or just caffeine is debatable, but it worked! The temporal flexibility, the constant availability of food and conversation spaces, even the sensory intensity – all of it aligned surprisingly well with the conference's packed schedule and intellectual demands.

Leaving Las Vegas, watching the sunset paint the desert sky in oranges and golds from the airport, provided an unexpected moment of clarity. We'd spent days presenting research on human-centered AI, on designing technologies that respect rather than exploit human attention, on preserving agency in automated systems. And we'd done it in a city that represents the opposite approach – an environment perfected over decades to capture and hold attention, to guide behavior, to make disengagement psychologically difficult.

Vegas has solved engagement through constant novelty and optimized reward schedules. Education faces a different challenge: creating engagement that builds capacity rather than dependency, that makes people more autonomous rather than more responsive to external triggers. Holding a conference about human-centered AI in a city built on attention capture made the stakes of our work visible in a way no paper could. Every technology we design is making implicit claims about human agency and development. The question is whether we're building tools that expand human capacity or merely capture human attention. In Vegas, that difference is impossible to ignore.