A Companion, Not a Shortcut: Rethinking Support in a Modeling Course

It wasn’t the 11,000 questions that made me rethink how I teach. It was one.

A student who had never come to office hours, never raised their hand, sent a note: “I asked HiTA. It didn’t give me the answer, but it helped me figure it out.” No greeting, no sign-off. Just that.

There are moments in teaching—quiet, unassuming ones—when something shifts. Not the material. The posture toward it. This was one of those moments.

In Fall 2024, I redesigned HADM 4205/6205 with one central goal in mind: to embed a new kind of support into the heart of the course—not just more access, but a different kind of presence.

That support came in the form of HiTA, a GPT-powered virtual assistant trained on all course materials. HiTA wasn’t just bolted onto the class as a chatbot. It was woven into the course’s fabric, designed to act as a modeling companion students could turn to at any point. It answered content questions, clarified technical steps, pointed students to relevant examples, and offered gentle nudges when they got stuck.

During the 2024–2025 academic year, across four sections and 155 students, HiTA responded to over 11,000 queries. That volume alone signaled something important: students were engaging not just with the material, but with the process of working through confusion. And they were doing so at times and in ways that traditional teaching structures can’t always accommodate.

The course already used a sandbox-style architecture: students progressed through Excel-based modeling tasks, each building in complexity. A gamified self-assessment layer added immediate feedback—conditional formatting for errors, embedded scoring tabs—but these were tools to encourage autonomy, not end points in themselves. What made that structure work was HiTA. When students hit a wall, HiTA provided context, reminded them where a concept was introduced, or offered a new way to frame the problem.

I’ll admit, I wasn’t sure this would work. I was trained to value the clarity that comes from a live exchange—how a single follow-up question can shift a student’s understanding. I worried that an AI assistant might flatten that experience, or worse, become a crutch.

But that’s not what happened.

What shifted wasn’t the depth of engagement—it was its rhythm. Students still needed help. But they needed it at midnight. They needed it after a wrong submission. They needed it when the stakes felt low enough that asking a question didn’t seem worth a full email.

And they needed a space that didn’t judge the question itself.

HiTA became, as one student called it, “a quiet coach.” Not a shortcut, not a solution engine—but a guide. Students weren’t just solving problems; they were learning to articulate what was confusing. That shift—from waiting for help to initiating it—changed how they moved through the course.

Instead of stalling out and walking away, many stayed in the loop. They used HiTA to orient themselves, then went back to the spreadsheet. They used the self-grading tools to check progress, found where they’d gone wrong, then asked HiTA how to think through the mistake. Students didn’t become less independent; they became more willing to persist on their own terms.

Even the weakest students showed more resilience. Fewer dropped off mid-semester. More re-submitted revised models. They still needed guidance—but now they could get it in smaller increments, right when confusion began. They didn’t need someone to debug the file for them. They needed someone (or something) to say: you’re close, here’s a nudge.

And for many students, that made all the difference.

These shifts showed up not just in anecdote, but in data. Between Spring 2021 and Fall 2024, our average instructor rating increased from 4.64 to 4.88. Pedagogy scores improved by 0.41 points. Feedback scores jumped by more than 0.3. But I wasn’t focused on numerical gains. I was watching how the texture of the course changed—how the back-and-forth deepened, how students took more ownership of their modeling process.

The self-grading tools played a big role in this too. These templates flagged modeling errors immediately—wrong signs, broken references, inconsistent formulas—and gave students a score out of 100. But they didn’t correct the errors. They left that part to the student. Over the course of the semester, we removed the grading logic entirely, asking students to troubleshoot on their own. HiTA became a bridge in that transition: pointing students toward the right logic pattern, helping them isolate where a formula might be failing, or nudging them back to a concept they had skimmed over.
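The core pattern here, checks that flag errors and produce a score without ever correcting the student's work, can be sketched in a few lines. This is a hypothetical illustration, not the actual course templates; the check names, cell names, and `grade_model` function are all invented for the example.

```python
# Hypothetical sketch of a flag-don't-fix grader: each check inspects the
# submitted values, failures are flagged, and a score out of 100 is returned.
# The error itself is left for the student to diagnose and correct.

def grade_model(cells, checks):
    """Run each named check against the submission; return (score, flagged errors)."""
    flags = []
    for name, check in checks.items():
        if not check(cells):
            flags.append(name)  # record the failure; do not touch the cells
    score = round(100 * (len(checks) - len(flags)) / len(checks))
    return score, flags

# Two toy checks on an invented cash-flow submission.
checks = {
    "sign convention: initial outlay entered as negative": lambda c: c["initial_outlay"] < 0,
    "NOI formula: revenue minus operating expenses": lambda c: c["noi"] == c["revenue"] - c["expenses"],
}

submission = {"initial_outlay": 500_000, "revenue": 1_200_000, "expenses": 400_000, "noi": 800_000}
score, flags = grade_model(submission, checks)
# The sign-convention check fails (outlay entered as positive), so score is 50
# and the student sees which rule was violated, but not the corrected value.
```

In the course itself this logic lived inside Excel (conditional formatting and embedded scoring tabs) rather than Python, and was eventually removed so students had to troubleshoot unaided; the sketch only shows the shape of the feedback loop.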

Looking back, I think I underestimated how powerful anonymous, judgment-free support could be. We often talk about scaffolding in terms of content—breaking down a task into pieces. But emotional scaffolding matters too. It’s the difference between seeing an error as a challenge and seeing it as proof that you don’t belong.

HiTA gave us a way to build that kind of scaffolding at scale. It didn’t replace accountability. Students still had to do the work. But it made it easier to reengage after failure—to try again without shame.

There were, of course, limitations. HiTA struggled with multi-step logic. It couldn’t “see” Excel files, so its debugging advice was only as good as the training it received. Some students used it only sparingly; others needed orientation to learn how to ask effective questions. But the broader pattern held: usage was highest during take-home exams, deadline windows, and complex deliverables—precisely the moments when live help is hardest to provide.

This raised a deeper question for me—one I’m still working through: What kind of support actually helps students grow?

I used to believe that being available—visible, responsive, “present”—was the gold standard. And that still matters. But presence doesn’t always mean being in the room. Sometimes it means designing systems that anticipate confusion. Sometimes it means giving students ways to self-correct, reflect, or recalibrate before they even raise their hand.

HiTA didn’t change the rigor of the course. But it changed how students experienced that rigor. It turned stress points into learning moments. It gave them a guide—but not a shortcut.

Looking ahead, I’m interested in how AI might move from reactive to generative. We’ve begun experimenting with structured interactions where students prompt HiTA to generate components of a model—then evaluate its logic, refine it, and rebuild it. The goal isn’t to outsource the thinking, but to build fluency in reviewing machine-generated work—a skill many of these students will need in industry.

We’re also preparing to move the course out of the computer lab and into larger lecture halls. With HiTA and our self-assessment tools in place, students don’t need physical proximity to get support. They need responsive infrastructure and clear guidance—things we can now provide asynchronously, at scale.

I don’t know whether this model will work everywhere. I’m still refining how HiTA handles edge cases, how to keep it pedagogically aligned, and how to introduce it in ways that feel natural rather than intrusive. But I’m more convinced than ever that independence and support are not opposites. They can be designed to work together.

In a course built around real estate financial modeling, there’s no shortcut to learning. But there can be a companion. And for students learning how to navigate complexity—sometimes alone, sometimes with help—that can make all the difference.

About The Author

Daniel Lebret

Senior Lecturer, Nolan School of Hotel Administration, Cornell SC Johnson College of Business, Cornell University