Being well-versed in AI

One of the advantages of teaching a class is the opportunity to listen to students’ questions, and in many cases, those questions open up new avenues of discussion. Last week, as I was talking about how to properly use and integrate AI into the learning process, one student brought up two interesting questions. The first: what does it mean to be well-versed in AI? After all, “well-versed” is an intangible metric. The second question, or rather perspective, is that they cannot shake the feeling of cheating when using AI, even when they only use it to help with the learning process. In this entry, I will attempt to address both issues.

Using AI (properly) is not cheating

I will address the second question/perspective by stating up front that using AI (properly) is not cheating. In some sense, the students have been learning the right (or traditional) way, so to speak: attending lectures, taking notes, and studying on their own with some assistance from Google Search. Now, as they start to learn how to use AI to help with the learning process, they skip many of the steps they are used to. Understandably, this brings up the feeling of cheating, or impostor syndrome (the AI is doing it, not me!).

How do we address this nagging feeling? Think of it as learning with a tutor from the learning center. That is not cheating, as long as you engage with your tutor actively: reasoning, questioning, and understanding what the tutor is telling you rather than simply asking the tutor to show you the final answers. Are you paying attention to what you need help with? Do you understand why you need help? Have you reflected on the explanations? As long as you are not just putting your brain on autopilot and cruising to the final answer, you are learning, and therefore it is not cheating.

Being well-versed in AI

Now that we have made peace with the fact that learning with AI is not cheating, we are back to the first question: What does it mean to be well-versed in AI? I propose the following three pillars of learning with AI.

The first pillar is intentionality. When you learn with AI, you have to be intentional. This goes back to what I said earlier about not putting your brain on cruise control. Set your own learning goals (and do not let “get the final answers for my assignments” be one of them). Ask the AI explicitly to show you its reasoning, and reflect on that reasoning afterward. Be very wary of mindless copy-and-paste. To be honest, even if all you did was type an entire AI answer into your document manually, I would argue that you would already have learned something from it, one way or the other. One example approach is to ask the AI for hints (or even solutions) to a problem in plain English. After looking at these hints/solutions, go back and write as much of the answer as you can reason through or remember, then ask the AI to evaluate your answer for correctness and point out the differences.

The second pillar is conversation. In order to be intentional, you have to engage. Engaging here means not treating AI like a glorified search engine. Think of it as your partner. If you are a programmer, AI is your proverbial rubber duck, except this rubber duck can give you actual feedback, unlike a literal one. And unlike your human counterparts, you will not have to bribe it with coffee afterward. Talk to the duck, argue with the duck, discuss with the duck. For example, ask for pros and cons, ask for holes in your reasoning, and ask for potential edge cases to test.
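To make the duck conversation concrete, here is a hypothetical exchange over a few lines of Python (the function and the edge case are made up purely for illustration):

```python
# You bring this snippet to the AI duck and ask:
# "What edge cases am I missing here?"
def average(numbers):
    return sum(numbers) / len(numbers)

# The duck might point out that an empty list raises ZeroDivisionError.
# You then argue about the right fix (return 0.0? raise ValueError?)
# and settle on one deliberately, rather than pasting blindly:
def average_safe(numbers):
    """Average that handles the empty-list edge case the duck flagged."""
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)
```

The point is not the fix itself; it is that you reasoned through the edge case and the design trade-off with the duck instead of accepting the first answer.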

The third pillar is balance. It is true that in learning through AI, you will lose the opportunity to train some of the traditional skills that you would build if you learned the traditional way. To make up for that, you have to balance AI-supported and non-AI learning activities. Use AI to help you with complex scaffolding issues, but then also do the easy things on your own to rebalance your learning portfolio. For example, you can have AI walk through three medium LeetCode problems with you, but then you should sit down and do two or three easy problems by yourself. Do this again and again. Eventually, you can move on to having AI help you with the difficult problems while the medium problems become your balancing act. This way, you build raw brain muscle and confidence that AI cannot substitute for.

With these pillars in place, after you have gone through the AI-enabled learning process above, how can you quantify your AI capability? I think you can assess yourself against some of the following competencies.

  • Prompt literacy: Are you able to properly form the questions? Are you able to evaluate the responses from your prompt and adjust it as necessary?
  • Skepticism: When do you smell that the AI is BSing you? When do you know that you have to verify its output?
  • Error diagnosis: Can you catch and fix errors from AI-generated code? Can you understand why these errors happen through the code generation process?
  • Tool selection: With thousands of models available (e.g., on Hugging Face’s model hub), can you choose the right one for your context?
  • Integration: Can you merge AI suggestions with your own reasoning into a coherent solution?
  • Transfer learning: Are you able to use a previous solution from AI and adapt it to a new problem on your own?

Being well-versed in AI, then, is not about memorizing prompts or chasing every new model. It is about building a disciplined relationship with the tool: learning with it, not from it; working with it, not offloading to it. If you approach AI intentionally, conversationally, and with balance, you will not only overcome the fear of cheating but also discover a way of thinking that makes you more independent, not less. In the end, AI becomes less of a crutch and more of a training partner, helping you grow the very critical thinking skills that no machine can replace.



