#160 - AI for Learning: G.B.U.B

The potential is there...as long as you play your part


Welcome back to On the Fly.

Prof Mike leading the way today as we continue our “Good, Bad, Ugly, Beautiful” series with AI for Learning. Because this is how most people use it, right? For search. For information.

But that’s just scratching the surface of its capabilities, as long as you’re willing to put in the work. Because that’s what true learning requires. It takes time and attention and focus.

If you turn to AI to learn something, there are two possible outcomes:

  • You put in the work and AI makes the learning more efficient, or

  • You mistake information for learning, put in no work, and exit the same as you entered.

Just like it’s always been. People who try, learn. People who don’t, don’t.

The difference now is that, for the people who do, AI is a godsend that takes effort farther, faster.

The Bad is stuff we already know - LLMs hallucinate; don’t always give (accurate) citations; default to surface-level, generic summary; and can be biased (while masking it in mundane, neutral language). All of this is to say, the information you get isn’t great. If it’s too broad, too slanted, or flat-out false, you’re not learning. Or worse, you’re learning the wrong things and using that wrong information to make important decisions.

A silver lining is that a lot of this is avoidable by A) being more specific in your prompt, and B) using better tools. 

But no matter what, the bad turns Ugly when people think they can outsource the work I spoke about in the intro. That silver lining? Writing better, more-focused prompts requires active engagement. Searching out, experimenting with, and then using better tools requires active engagement. If you’re someone who wants to just “let the AI do it,” odds are you’re not doing either of those things. Which means you won’t learn, and you’re wasting time essentially playing with a fidget toy.

What’s worse is that as you’re doing this, you’re also building bad habits, literally re-wiring your brain to “learn” in this way. It’s been a concern since the dawn of the internet —

Media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought…technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains…

“Is Google Making Us Stupid?” by Nicholas Carr

— but we’ve reached the next evolution of this fear, and there’s nothing we can do about it. We can’t make people actively engage. Having information at our fingertips via apps and social media has trained us to want things quickly and conveniently. Many will attempt to master material with minimal effort, which can and will be problematic down the line. The hope, I guess, is that the AI corrects the bad and saves the passive chatters from themselves.

The Good news is we won’t make this mistake. We see AI for what it is - a collaborator as opposed to replacement - and we’ll be much better for it down the line. We’ll be thoughtful with our prompts and re-prompts until we get the right information, and we won’t just scan over ChatGPT output absently and call it learning; we’ll interrogate the information and vet the sources and take enough time with the material that it’s embedded in our minds.

And when we need to, we’ll go beyond the chat. Because for me, that’s where the potential is Beautiful.  

I mentioned last week that I’m currently collaborating with Gemini, and it has encouraged me to use Notebook LM. I’m still very much in the “taking the tour” phase, just getting acquainted with its features and functions, but I can already tell it’s an incredible tool.

What many won’t like - not us, of course; everyone else - is that it requires active participation.

Here’s one of my experiences:

  • I created a Notebook with the sole purpose of helping me learn how to maximize my paid tier of Gemini Advanced.

  • I used Notebook LM’s search tool to track down sources related to my purpose. I conducted several searches, each one focused on some different aspect of Google Pro tools, and then selected which ones to add to the notebook and which ones to bypass.

  • I read the summaries (automatically generated by the notebook) of each source to get a better sense of what was in each one.

  • I’ve been chatting with it, asking it questions and getting answers pulled directly from said sources, tailoring the learning to my specific needs.

You might say to yourself, “Wait, can’t an LLM do this?”

I had a similar thought, so I asked an LLM to explain the difference:

The core difference lies in the source of authority. While LLMs are general-purpose models trained on a massive, broad dataset, Notebook LM is a grounded “research assistant” designed to work strictly within the boundaries of documents you provide.

Gemini

This is where YOU come in. You control which sources or documents it pulls from, thus regulating its ability to be biased or misinformed.

That last line is what stands out to me. The LLM is the starting point; Notebook is where deeper, true learning takes place.

  • If you want it to analyze a single document, you can focus on just that one.

  • If you want it to read and synthesize across 50, you can do that, too.

  • If you want to ask follow-up questions about the primary source material you’ve just read, you can do that.

  • If you want to add your own work to the Notebook and ask it to check your work against the sources, you can do that.

  • If you want to convert text into any number of other forms - audio, video, infographics, and more - you can do that.

I’m sure there’s plenty I haven’t discovered yet, but so far, so good. I’ve got several different notebooks created, each with its own dedicated purpose to keep my interests organized, which has helped focus my learning. And despite my inexperience, it’s already yielded fruit! I created a notebook with all my teaching materials for a course I’ve taught for 14 years now. You’d think I’d have it figured out by now, but in no time at all, Notebook LM read through everything, noted flaws in my scaffolding, and explained why I should reorganize my lesson plan.

No misinformation. No “You’re right, I made a mistake…” Just good, old-fashioned help with a modern, high-powered twist.

So this is where I see the beauty - in tools that function as assistants rather than servants, tools that require effort and attention to get the most out of them so that they, in turn, can get the most out of you.

⬆️ It reviewed select sources and converted material into an infographic with a single click ⬆️

Prof Mike breaking down AI for learning honestly made me take a step back. This is another level.

Using AI has clearly evolved. What started as a way to generate ideas, answer random questions, and expand on your thoughts has grown into something that builds apps and teaches you in ways that weren't possible three years ago. Mind blowing 🤯 

The beautiful part I love most about Google's Notebook LM is how it only focuses on the sources you share with it. Nothing else bleeds in. Be honest for a moment…your ChatGPT log is probably a mess at times 😭 And that clutter doesn't just sit there. It can sometimes creep into other conversations. I’d bet you’ve had that moment where ChatGPT pulls something from a completely different chat and you're sitting there like, "I didn't want that brought up. Why is it bringing that up now?" Now imagine that happening when you're trying to learn something new. That noise can become a distraction, which is a real problem when dealing with LLMs.

For fun, I want to show you how I would have used Notebook LM to help launch On The Fly if it had existed three years ago.

Before we launched this newsletter, Prof Mike and I hopped on a one-on-one. We had a sense of what to write about, who the audience was, and what format would work. We talked through it for a while and eventually figured it out.

If Notebook LM existed for me back then, here's what I would've done:

  • I would have had an LLM research 20 competing newsletters and compile them into a document.

  • Uploaded that into a dedicated Notebook.

  • Then asked it to study what's working with those newsletters - their content strategy, voice, tone, subscriber growth, what makes each one strong, different, etc.

  • I'd essentially be learning from operators who were already ahead of me, using sources I selected and controlled, and then I’d ask…”okay, now, how can we make On The Fly stand out from the crowd and be unique?”

  • Hopefully from there it would take me to the moon.

But that’s the shift. It's not just about asking AI for answers. It's about giving AI the right material and asking it to teach you so you get better.

PS — I’m excited to try out Google’s Notebook LM now. I’ll report back in another edition!

Before You Go!

Thanks for reading. Next week, we’ll be back with the next installment of our “Good, Bad, Ugly, Beautiful” series, focused on AI in your personal life. I can’t wait for this one.

See you then!

Find Dan on LinkedIn

You are now On The Fly & In The Know.