Breakout Sessions - 10 a.m.
Introducing "Curated Pairs"
To provide diverse perspectives, select sessions are paired. These 60-minute blocks feature two 30-minute talks on complementary themes. Look for matching labels (e.g., [Pair A]) and please plan to stay for the full hour.
Room Assignments
Location details will be posted by May 12.
Track: Teaching, Learning & Student Formation
- Nathaniel McLeroy, Graduate Student, School of Social Work, and Graduate Production Assistant, Media Technology Services
[Two Presentations in 1 hour] [Pair A]
As AI reshapes labor, education, health, housing, and public safety, its impacts fall earliest and hardest on communities already facing structural vulnerability. Yet higher education’s AI discourse often centers on pedagogy and productivity, overlooking the social systems into which these technologies are deployed.
This session argues that AI governance is fundamentally a social challenge. Social work offers tools that AI efforts lack: systems thinking, equity analysis, community engagement, and an understanding of unintended consequences. But the field itself remains underprepared, with limited training on how emerging technologies shape policy, practice, and social outcomes.
The session examines this dual gap and makes the case for integrating social-work-informed frameworks into AI education across disciplines. Using examples from workforce development, community practice, and AI governance, it outlines practical steps universities can take to build more ethical AI infrastructure and prepare a workforce capable of navigating technological change.
This is a 30-minute presentation paired with "Linguistics and AI: an inquiry-based course from a disciplinary perspective."
- Christopher Geissler, Visiting Assistant Professor of Linguistics, Eastern, Slavic, and German Studies (MCAS)
- Emily Hay '26, Linguistics major
- Jasmine Maas '26, Linguistics major
[Two Presentations in 1 hour] [Pair A]
Linguistics and Artificial Intelligence, a new course taught in Fall 2025, turned the tools of linguistics to the study of text-to-speech (TTS) systems and large language models (LLMs). Each student conducted five small-scale research projects, writing two-page abstracts formatted like submissions to computational linguistics conferences. Students defined their own research questions but were constrained to topics appropriate for a particular methodology: vocal resonance, pronunciation, syntactic variation, and discourse analysis. This format required students to critically examine AI systems while leveraging their developing knowledge of linguistics to understand this new topic. Students report developing a greater understanding of what AI systems are and reflecting on them in new ways. Overall, the course provides a case study in how disciplinary study and learning about AI can benefit each other.
This is a 30-minute presentation paired with "AI as Social Infrastructure: Why Tech Literacy Needs A Social Work Lens."
- Vincent Cho, Associate Professor, Educational Leadership and Higher Education, LSEHD
[Two Presentations in 1 hour] [Pair B]
This hands-on workshop invites faculty to confront a practical question: where in your teaching do students get stuck, and could a well-designed chatbot help? Drawing on the facilitator's experience developing custom chatbots for graduate students in professional programs, the session moves participants from identifying a specific friction point in their teaching to drafting a deployable chatbot prompt. Along the way, it surfaces the design decisions that matter: how rubrics and assignment expectations can be made visible through prompt design, how guardrails shape what a chatbot will and will not do, and how involving students in that process deepens their understanding of what is being asked of them. The workshop assumes no prior experience with custom chatbot design and is intended to leave participants with both a working tool and a framework for thinking about when one belongs in their teaching.
This is a 30-minute presentation paired with "Socratic AI: Adaptive Oral Assessments, AI-Driven Conversations and Pedagogy in the Classroom."
- Can Erbil, Professor of the Practice, Economics
[Two Presentations in 1 hour] [Pair B]
In first-year courses, generative AI has quietly broken traditional assessment. Written homework and short answers no longer reveal what students actually understand. This talk presents a set of applied, classroom-tested strategies for redesigning introductory courses in an AI-rich environment. Drawing on large-enrollment teaching at Boston College, it shows how AI-supported oral assessments, AI-augmented lectures, and adaptive learning tools can be used to require students to explain concepts aloud, apply ideas in real time, and demonstrate understanding that cannot be outsourced to a chatbot. Rather than banning AI, these approaches integrate it directly into course design while restoring clarity, rigor, and engagement in foundational courses. The session focuses on concrete implementation choices, what worked, what failed, and how instructors can immediately adapt these methods to first-year classrooms.
This is a 30-minute presentation paired with "Making the Invisible Visible: Designing Custom Chatbots to Support Student Work."
- Chris Strauber, Senior Liaison Librarian, University Libraries
- Steve Runge, Senior Liaison Librarian, O'Neill Library
[Two Presentations in 1 hour] [Pair C]
Conversations about Generative AI at the BC libraries tend to ask some ethical and epistemological questions the general discourse does not. This presentation will discuss how LLMs include only a fraction of human knowledge, and how LLMs obscure scholarly communication.
The Internet is at best a convenience sample of human knowledge. Most archives exist primarily on paper, and archives are only one possible store of knowledge. Much knowledge is lost entirely, much is in minority languages with limited web presence, and much is yet to be found or archived, let alone digitized.
LLM chatbots hide their sources. Most companies have been evasive about what data their LLMs have ingested, and that data is, in the words of one researcher, a “slurry” of information. Few people know how LLMs develop parameters during training, and fewer still understand the programmatic sources of unpredictable or inaccurate output.
This is a 30-minute presentation paired with "The Coming AI Crisis: Why Most Companies Are Already Out of Bounds."
- Lindsay Timcke, Managing Partner, Timcke Risk Management LLC, Accounting
- Francis J Nemia, Audit Committee Member, Board Director
[Two Presentations in 1 hour] [Pair C]
AI adoption is accelerating far faster than the governance structures required to manage its ethical, societal, and operational risks. More than 70% of organizations now deploy generative AI, yet fewer than 20% maintain formal governance frameworks, and only a small minority document model lineage or training‑data provenance. This gap has become a defining ethical risk: opaque systems are making consequential decisions without explainability, auditability, or accountability. Regulators are responding with escalating expectations around transparency, safety testing, and verifiable control, signaling that AI will be treated as a regulated system rather than a productivity tool. Meanwhile, structural weaknesses—unstructured data, fragile pipelines, shadow AI use, and undisclosed vendor models—are amplifying systemic exposure. This presentation examines why AI opacity is emerging as a societal risk, how liability shifts to deploying organizations, and why verifiable, governed, and independently validated AI is now the ethical baseline for responsible enterprise adoption.
This is a 30-minute presentation paired with "LLM’s Are Not What They Say They Are: A View from the Library."
- Kyle Fidalgo, Academic Technologist, Law School
- Maureen Van Neste, Associate Professor of the Practice, Law School
- Jake Samuelson, Legal Information Librarian & Lecturer in Law, Law School
- Raul Carrillo, Assistant Professor, Law School
- Ross Martin, Adjunct Professor, Law School
[Panel Discussion]
This session brings together faculty, librarians, and academic support staff from BC Law to share practical approaches to AI integration across teaching, learning, and administrative functions. Through lightning-style presentations, panelists will demonstrate how they've implemented AI tools in their daily work—from developing educational workshops using custom AI assistants and templates, to integrating chatbots into classroom pedagogy, to designing immersive simulation exercises for 1L students. Rather than theoretical speculation, these short talks focus on real workflows, lessons learned, and tangible outcomes from our ongoing AI initiatives. Attendees will gain insight into the BC Law AI Fluency program, see examples of AI-enhanced course design in legal education, and learn how different roles across a school can collaborate to build institutional AI literacy. The session concludes with Q&A, offering an opportunity to discuss challenges, considerations for legal education contexts, and strategies for cross-functional AI adoption.
- Adrian Aziza, Boston College student, Applied Data Science
- Julia DeVoy, PhD, MTS, MBA, MLS '26, Associate Dean of Undergraduate Programs and Students, LSEHD & Co-Founder of Inter-institutional Design for Impact Initiative
[Workshop/Demo]
This session presents an epistemic, student-facing AI study tool that turns course materials into immediately actionable steps while supporting knowledge-making rather than rote memory. The tool's workflow scaffolds core epistemic moves: question formation, claim–evidence mapping, uncertainty calibration, counterargument testing, and next-action experiments (what to verify, read, ask, or measure), and it tags each step to course learning objectives so individual progress becomes visible. Instead of producing final submissions, the AI Study Tool uses a voice-preserving refactor loop (the student inputs a draft; the AI offers cited revision strategies and reasoning; the student then chooses and rewrites) plus explicit constraint capture (rubric, audience, citation rules). A provenance layer (AI Dialogue Log, change history, and verification ledger) records prompts, short output excerpts, what the student adopted or overruled, and what was fact-checked, producing a concise ‘progress brief’ that enables more individualized, strategic feedback. The result is a learning-centered pattern for ethical, transparent AI use that strengthens reasoning and authorship.
- Tim Lindgren, Assistant Director, Design Innovation, Center for Digital Innovation in Learning
- Noël Ingram, Digital Teaching Programs Administrator, Center for Digital Innovation in Learning
- Colleen Dallavalle, Associate Vice President, Student Engagement & Formation, Division of Student Affairs
- Belle Liang, Professor & Ascione Family Formation Fellow, Lynch School of Education & Human Development
- Elisa Liang, Ph.D. Student in Counseling Psychology, Lynch School of Education and Human Development
- Student panelists:
- Toby Ting, Junior theology/philosophy major
- Mackenzie Duffy, Sophomore
[Panel Discussion]
How do students actually think about AI—and how can we create space for honest conversation at Boston College? This panel shares two connected programs that grew out of CDIL's 2024-25 Student AI Advisors working group. First, a summer research internship (CDIL and the Purpose Lab) trained undergraduate interns in qualitative research methods so they could interview other students about formative experiences of AI at BC. Those findings informed a second initiative: "BC Students Talk AI," a peer-led pizza conversation program developed in partnership with Student Affairs. Students hosted small-group discussions with friends and classmates, then completed reflections on what they heard and learned. The panel will share preliminary findings from both programs and discuss the collaborative process—how units across campus worked together to treat AI as a conversation starter about student formation, trust, and dialogue.
- Philip Aldrich, CTO, Verterim, and Adjunct Professor, BC
- Alexia Prichard, Senior Instructional Media Producer, BC; Winner, 2025 MIT AI Film Hackathon; SXSW FuturePIXEL House 2026 Official Screening
[Presentation]
Complex concepts are difficult to learn, much less understand and retain. What if students could "experience" a concept through AI and VR? This presentation will outline real-world problems that Chief Information Security Officers and Chief Risk Officers face at many organizations. The traditional teaching method would be to show students a busy visual meant to convey intricate scenarios and problems without experience or context. This presentation will show how AI and VR can bring these concepts to life for students to watch, interact with, and engage within the reality of the concept itself.
- Melanie Hubbard, Head of Digital Scholarship & Data Services
- Micah Lott, Associate Professor of Philosophy
- Paula Mathieu, Associate Professor of English
- Dave Thomas, Digital Scholarship Specialist (Libraries)
- Andrea Vicini, SJ, Michael P. Walsh Professor of Bioethics (Theology)
[Panel Discussion]
The “What Is Wrong with Generative AI” panel will feature contributors from the Departments of English, Theology, and Philosophy, as well as the Libraries. Drawing on their disciplinary perspectives and technical expertise, they will articulate the issues they believe students and faculty should understand to engage with generative AI critically and ethically within and beyond the classroom. Generative AI’s embedded biases, frequently misunderstood capabilities, depersonalizing tendencies, effects on critical thinking, and material impacts, among other topics, will be discussed.
Despite the deliberately provocative title, this session is not intended to be a takedown of generative AI. Instead, it aims to give attendees a more holistic and realistic understanding of what it is, its capabilities, and its impacts, recognizing that it is here to stay. There will be ample time for audience responses and questions.
Track: Research
- Cristina Maier, Assistant Professor of the Practice, Computer Science
[Presentation]
Association Rule Mining aims to discover frequent co-occurrence patterns in data and has been widely applied in domains such as market basket analysis, recommendation systems, and customer behavior analysis. Traditional association rule mining treats items as flat symbols in structured datasets without incorporating semantic relationships. As a result, the discovered rules are often redundant, fragmented, or overly specific, limiting their interpretability and practical usefulness. This study explores the use of generative AI to identify high-level semantic concepts that improve scalability and enable the discovery of more meaningful patterns. Experimental results demonstrate that concept-level rules uncover broader and more meaningful patterns than traditional item-level rules while maintaining relevance and precision.
- Katie Kidwell, Nursing & Health Sciences Liaison Librarian, University Libraries
- Elliott Hibbler, Head Librarian, Scholarly Platforms and Discovery Services, University Libraries
- Melissa Uveges, Ph.D., M.A.R., RN, HEC-C, FAHA, Assistant Professor, Connell School of Nursing
[Presentation]
Artificial intelligence is moving fast, which can make the research landscape feel like the Wild West. This session offers a high-level "drive-by" of how AI and automation tools can support the research lifecycle, from the first spark of an idea to the final published paper. We’ll explore how these diverse tools can be strategically and safely integrated into your workflow. Using a librarian’s lens, we’ll do a quick tour of where automation & generative AI can actually save you time (like keyword discovery and data extraction) and where it’s likely to steer you off course (hallucinations and lack of context). We’ll discuss specific tools as well as ethical considerations and mandates in publishing. This isn't a deep-dive tutorial, but a chance to see what’s possible, navigate risk, and connect with other researchers across campus who are navigating these same tools, so come with questions and suggestions!
- Matt Gregas, Director, Research Services, ITS
- Leonard Faul, PhD, Research Statistician, Research Services, ITS
- Melissa McTernan, Academic Research Services, ITS
[Presentation]
This presentation introduces a series of NotebookLM notebooks designed as interactive guides for exploring advanced statistical topics. Because the notebooks are curated by expert statisticians at Research Services, users can be reassured that responses are accurate and up-to-date. These resources are specifically intended to help researchers deep-dive into selected statistical methodologies. Researchers may interact with the notebooks to understand the intended use case of each method, access relevant and vetted literature, receive guidance on when the methodology is appropriate based on data structure and research questions, or even generate audio overviews to learn more about the topic. The notebooks may offer practical advice on implementation using industry-standard software, including R and STATA. By bridging the gap between statistical consultation and independent execution, these tools empower our research community to apply rigorous analytical techniques with greater precision and efficiency, while ensuring that AI assistance is grounded in verified documentation and high-quality data sources. As always, we advise researchers to schedule a consultation with the Research Services Statistical Team in advance of putting the methodologies into practice, a reminder that is built into the notebooks' conversational framework.
Track: Operational Efficiency
- Catherine Conahan, DNP, CSON
[Case Study Presentation]
The transition to the American Association of Colleges of Nursing (AACN) Competency-Based Essentials requires nursing programs to map curricula to complex domains, competencies, and sub-competencies. This presentation describes the development and implementation of an AI-driven chatbot, Florence™, designed to support nursing faculty in curriculum mapping and evaluation. The chatbot analyzes course syllabi and learning activities, aligning them with the AACN Essentials using natural language processing and structured competency frameworks. Faculty users can query the tool to identify gaps, redundancies, and concordance. Preliminary use demonstrates improved efficiency, time savings, increased consistency in mapping, and enhanced faculty engagement in competency-based curriculum review. This AI-enabled approach offers a scalable, transparent, and faculty-centered solution to support curricular transformation for ongoing accreditation and program evaluation efforts in nursing education.
- Debbie Hogan, Assistant Doctoral Program Director/Adjunct Instructor, School of Social Work
[Presentation]
Higher education administrators manage an expanding range of responsibilities, often leaving limited time for strategic thinking and innovation. AI offers a practical solution by supporting routine administrative tasks and creating space for deeper, creative work. In my role at the School of Social Work, I have used built‑in AI tools to streamline email communication and newsletter production, improving both efficiency and clarity. More recently, I have explored how AI doctoral program assistants can function as interactive “Content Navigators” for multiple stakeholders. These assistants help faculty and PhD students locate and interpret doctoral program policies, and they guide prospective applicants through admissions requirements in accessible, real‑time conversations. By providing immediate, accurate information at the point of need, AI systems reduce repetitive inquiries and free human administrators to focus on program development and innovation. This presentation will highlight practical approaches and early outcomes from integrating AI into academic administrative workflows.
- Ravindra Harve, Enterprise Data Architect, Information Technology Services
- Lance Tucker, Associate Director (Data & Reporting), ITS
[Presentation]
In 2014, ITS explored address verification software to help departments maintain accurate postal addresses and improve communication with University constituents. Research with Gartner, vendors, and peer institutions revealed that this solution was popular among organizations with large customer bases that faced inconsistent address formats from unregulated sources.
The primary challenge was cost. Options ranged from tens of thousands of dollars for format standardization to hundreds of thousands for full address verification. University Advancement used an annual batch cleansing service. International address verification was the most expensive, with costs increasing with scope.
In 2026, we revisited this challenge, exploring AI tools as a cost-effective alternative. In this session, we will present five use cases and assess whether this approach was helpful.
- Scott Olivieri, Managing Director, Web Services, Office of University Communications, and Instructor, Capstone Program, MCAS
[Two Presentations in 1 hour] [Pair G]
Data is not just in back-office databases—it's everywhere. Chaotic email inboxes, massive Excel and Word documents, and scattered department web pages are critical sources of unstructured data we can tap into to gain a deeper understanding of our organizational operations, opportunities, target audiences, and challenges. I'll share three examples of how I used Gemini, NotebookLM, and Claude to transform this unstructured madness into actionable insights.
1. SUPPORT & CUSTOMER SERVICE. Our group email support inbox was stuffed with thousands of messages; we would respond to each one and then archive it.
AI tools helped us transform this into a useful report that categorized requests, documented workload, identified successes, and helped us be proactive rather than reactive.
2. WEBSITE EFFECTIVENESS. How do people perceive your department or school website? AI has transformed user testing. I will show you how to begin assessing website effectiveness by using synthetic personas and provide tips on optimizing your website for GenAI.
3. REPORTING. Finally, I'll share how we used AI to transform a 200-page Word document with dozens of tables and images into an interactive and engaging reporting application.
This is a 30-minute presentation paired with "More than just Chatbots: AI and the Future of Graduate Enrollment."
- Adam Poluzzi, Associate Vice Provost, Graduate Enrollment Management, Provost Office
- Frances Stearns, Director, Graduate Enrollment Services
- Brett DiMarzo, Director, Graduate Enrollment Digital Strategy
- Alyssa Volivar, Associate Director, Graduate CRM Operations
[Two Presentations in 1 hour] [Pair G]
AI is redefining Graduate Enrollment Management (GEM) amid a looming "demographic cliff" and tightening budgets. This session moves beyond the "if" of AI to provide a pragmatic roadmap for implementation, centered on operational efficiency and hyper-personalized engagement.
The presentation explores three critical pillars:
(1) Admissions: The shift from simple automation to evaluative decision-making and the impact of applicant-side AI usage.
(2) Marketing & Recruitment: Navigating radical changes in student search behavior while leveraging human-centered content as a differentiator.
(3) Boston College Case Study: Insights into our collaborative, cross-campus framework for ethical integration.
Attendees will learn to identify admissions friction points, optimize marketing for "AI discoverability," and navigate the ethical implications of automation. This session offers a collaborative framework for evolving operations and leading change together.
This is a 30-minute presentation paired with "Data is Everywhere: Start Learning from Your Emails, Documents, and Web Pages."
- Peter Salvitti, Chief Technologist, ITS
[Presentation]
University staff navigate complex processes every day: document lookups, first-draft writing, cross-departmental communication, all built for an earlier generation of tools. This session demonstrates how two tools already available through our Google Workspace, Gemini and NotebookLM, can immediately streamline those workflows, with no new software and no IT ticket required.
Through several live demos, attendees will see AI retrieve cited answers from policy documents, draft job descriptions from rough notes, analyze anonymized survey data, translate technical jargon into plain English, and more. Every workflow follows a single principle: Human-Led, AI-Supported. The AI handles the lookup and the first draft; you make the decisions.
Attendees leave with some ready-to-use prompts and instructions to turn any prompt into a permanent, reusable tool using Gemini Gems. All activities are covered under our university's Google Workspace Enterprise data protection agreement.
