AI Talk: Teach AI. Teach with AI.
- Juggy Jagannathan
- Dec 17, 2025
- 6 min read
I have been an adjunct professor in the Computer Science Department at West Virginia University for decades. The subject I usually volunteer to teach is Natural Language Processing, which I offer once every two to three years. And every time I teach it, the course is different. The last time I taught it—two years ago—the course was titled Natural Language Processing with LLMs. Apropos for the times.
This year, however, the chair of the CS department convinced me to teach Applications of Neural Networks—an entry-level graduate course on AI. I had never taught this course before. I don’t quite know what possessed me, but I said yes. Thus began my journey of teaching AI to graduate students.
This post chronicles that experience: 15 weekly three-hour sessions.

AI Policy
Given that I have been using language models for almost a decade—and LLMs ever since they entered our world—I knew I wasn’t going to restrict their use in class. In fact, I did the opposite.
Here is the AI usage policy published in my syllabus:
Academic Honesty and AI Usage Policy
Plagiarism will not be tolerated. However, the use of AI tools is allowed and encouraged in completing assignments.
Students should:
Use multiple AI systems (e.g., ChatGPT, Claude, Gemini, Copilot) when seeking help.
Provide a short analysis comparing the answers and describe how they integrated AI input into their final work.
Maintain academic integrity by making sure they understand the work they submit; in-class quizzes will be used to verify comprehension of assignment material.
Class Materials
As mentioned earlier, I had never taught this course before. However, the content itself was quite familiar. I had no desire to create everything from scratch—especially given that there is an enormous amount of high-quality, freely available material. Some of the content I used liberally (with attribution) included:
Geoffrey E. Hinton: the perfect place to “borrow” lectures on backpropagation.
Introduction to Deep Learning by Sebastian Raschka: 270 video lectures on deep learning!
Stanford CS224N: Natural Language Processing with Deep Learning: my go-to resource for NLP for over a decade.
That said, I did create some original content—largely with liberal help from AI assistants. I maintain paid subscriptions to ChatGPT, Gemini, and Claude, and I used all of them extensively.
The topics for which I created my own slide decks focused on more recent advances: AI ethics, diffusion models, and my favorite topic—Agentic AI.
With today’s AI tools, one can conjure up a slide deck on almost any topic. However, I quickly realized that wholesale generation doesn’t work well. Instead, I settled on a process that seemed effective:
Ask an AI assistant to generate a detailed outline of a topic (e.g., AI and ethics) for a one-hour class. Modify as needed.
Ask the assistant to generate a slide-by-slide outline.
Take each slide, one at a time, and use an AI assistant to create the actual PowerPoint slide. Claude, in particular, is excellent at producing visually appealing slides. Sometimes I used Microsoft Copilot as well.
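I did all of this interactively in the chat interfaces, but a step like the slide-by-slide outline is easy to script. Here is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative, not what I actually typed:

```python
# Sketch: generate a slide-by-slide outline for a one-hour class.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

topic = "AI and ethics"
prompt = (
    f"Create a slide-by-slide outline for a one-hour graduate lecture on {topic}. "
    "For each slide, give a title and three to five bullet points. "
    "The audience is an entry-level graduate course on neural networks."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```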
This is a painfully slow process. Each slide almost always requires tweaking, which makes one wonder whether it’s worth the effort. AI constantly amazes me with excellent content—and just as often confounds me with utterly idiotic output.
Assignments
The course was structured primarily around assignments and projects—once again with AI assistants coming to the rescue.
The assignments were designed to test understanding of various models: ANN, RNN, CNN, LSTM, and Transformers—essentially the evolutionary arc of neural networks.
All assignments were created as Google Colab notebooks, generated by AI. The notebooks were essentially stubs with instructions on what to build. AI also generated the solution notebooks. Since I was volunteering to teach the course, I convinced the chair that I needed a TA. The TA handled grading all assignments. AI agent technology is not quite ready for that task yet.
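To give a flavor of those notebooks, here is an illustrative reconstruction of the kind of stub students received for the LSTM assignment (not an actual assignment); the TODO comments mark what students had to fill in, shown here completed so the cell runs:

```python
# Illustrative Colab stub: students fill in the TODO lines themselves.
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    """Embedding -> LSTM -> linear head for sequence classification."""

    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # TODO (student): define an nn.LSTM with batch_first=True
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # TODO (student): define the final classification layer
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # hidden: (num_layers, batch, hidden_dim)
        return self.fc(hidden[-1])             # logits: (batch, num_classes)

# Quick smoke test: batch of 4 sequences, 20 tokens each.
model = SentimentLSTM(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (4, 20)))
assert logits.shape == (4, 2)
```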
Because students were encouraged to use AI for assignments, I also created in-class quizzes covering the assignment material—simple multiple-choice quizzes. Naturally, AI was used to create those quizzes as well.
Each quiz consisted of about 8–10 questions and was administered in class using old-school technology: paper and pen.
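Paraphrased, a quiz-generation prompt looked roughly like this (the wording is illustrative, not my exact prompt):

```python
# Illustrative quiz-generation prompt (paraphrased, not the exact wording).
QUIZ_PROMPT = """Based on the attached assignment notebook on LSTMs, write
8-10 multiple-choice questions that test whether a student understood the
code they were asked to build. Give four options per question, mark the
correct answer, and add a one-line explanation for each. The audience is
entry-level graduate students."""
```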
Research Paper Presentation
Each student had to select a research paper relevant to the course and present it to the class. They were encouraged to choose papers aligned with their project topics. On this front, AI played no role. I simply listened, attentively, to a series of short presentations from the students.
Research Project
Students selected their own research projects. The class had 22 graduate students, drawn from a wide range of departments: Chemistry, Biomedical Engineering, Mechanical Engineering, Robotics, Civil Engineering, and Computer Science.
Each project had to involve neural networks in some form. Students were required to:
Deliver a 15-minute in-class presentation
Submit a conference-style paper (8–10 pages) describing their work
Submit their code
I sat through all the presentations, two full three-hour sessions. Then came grading. I decided to delegate project grading not to my TA but to my AI assistants.
Here’s the process I arrived at (after much trial and error, with emphasis on trial):
I prompted an AI assistant (Gemini) with the course syllabus, project goals, and constraints, and asked it to generate a detailed evaluation rubric prompt. The result was about a page and a half long.
For each student, I created a separate project in ChatGPT and uploaded the student’s presentation, report, and code (often multiple files).
I generated evaluations and scores.
Initially, the scores were quite liberal, which aligns with my grading philosophy for graduate students. The first few evaluations looked reasonable, until I encountered a very high score whose justification contained substantial hallucinated content.
Back to step one.
This time, I explicitly required the AI to extract verbatim evidence from the submitted materials when justifying scores. The prompt became longer, but the results were far more stable. I still reviewed every evaluation manually, but the process was dramatically faster. Each output also included detailed feedback—about a paragraph long—which I shared with students as “AI feedback,” along with a brief one-line comment from me.
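For the curious, the skeleton of that final rubric prompt, heavily paraphrased, looked something like this:

```python
# Paraphrased skeleton of the evaluation-rubric prompt; the real one was
# about a page and a half and tailored to the course syllabus.
RUBRIC_PROMPT = """You are evaluating a graduate course project in a
neural networks class. You have been given the student's presentation,
report, and code.

Score each criterion from 0 to 10, and justify every score with a
VERBATIM quote from the submitted materials. If you cannot find a
supporting quote, say so explicitly and score conservatively; do not
invent evidence.

Criteria:
1. Problem formulation and motivation
2. Appropriate use of neural network models
3. Experimental rigor and analysis of results
4. Quality of the report and code

Finish with a paragraph of constructive feedback addressed to the student.
"""
```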
Some Reflections
A few weeks ago, my IIT classmate Venkat sent me a blog post by the President of Georgia Tech.
It’s a fascinating piece, and I realized I was, perhaps inadvertently, implementing at least the first half of his vision. His prescription:
Teach AI
Teach with AI
Research AI
Help others benefit from AI
Educational institutions clearly need to adapt to a rapidly evolving landscape. AI is no longer something only computer science students study—it is globally relevant to everyone and everything. Age is no barrier either.
Just this week, a 7th-grade student in our community developed an AI-based wildfire prevention app and was recognized by a congressman for her work.
So what skills are actually needed to harness this technology effectively? What will industry be looking for?
Current AI technology is incredibly powerful—and incredibly stupid at the same time. My mantra has always been:
Never trust, and always verify.
Using AI extensively throughout this neural networks course has not changed that belief one bit. That said, each new generation of models does seem to become incrementally more reliable.
Student Feedback
I created a survey (using AI, of course) to gauge student reactions to the use of AI in the course. The response was generally positive. A few representative comments:
“Indeed, learning with AI makes learning easier, but over-reliance on it could make us dull. Just use it as an additional tutor, not an answer-generating machine.”
“Since use of AI was allowed, I could achieve and learn more than I would have done on my own.”
“Some more engaging lectures could help.” (Ouch.)
Several students also said they needed better familiarity with Google Colab, particularly given its limited compute credits. The lack of institutional resources in this area is indeed a knock on the university system.
My goal from the outset was to ensure that students learned how to effectively use AI assistants. I believe I accomplished that.


