Should You Use AI to Write Training Courses?


AI is unavoidable.


It’s no secret that AI is playing an ever-greater role in our day-to-day lives. Whether you’re using ChatGPT to write your weekly shopping list, plan your next trip, or research your latest team project, generative AI is embedding itself into our personal and professional lives more than ever before.


AI is a helpful tool that can take care of time-consuming tasks, gather information, and even offer advice. But should you rely on it to write your training materials?


At The CPD Group, we stress the importance of using AI as an assistant, not a guide. Our accreditation team has observed a growing number of submitted AI-generated training materials that contain factual errors and inaccuracies.


Now, we’re not saying you need to abstain from using AI completely to support the development of your training materials. It can actually be a pretty helpful tool. The key is to use it appropriately and responsibly.



But it’s not necessarily as clever as you think.


Contrary to popular belief, AI is not always correct.


In fact, OpenAI CEO Sam Altman said of the technology:

"People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much." - Sam Altman, OpenAI CEO

That’s right: even OpenAI’s own CEO is telling us not to put as much trust in the technology as we do. Indeed, ChatGPT will often generate output that is inaccurate or completely fabricated. This can happen for a variety of reasons, such as insufficient training data, incorrect assumptions, or retrieval of data from unreliable sources.


Dr. Melissa Tran from the University of Toronto makes an interesting point:


“AI speaks like a confident human. That alone makes people feel like it knows what it’s talking about, even when it doesn’t.”


Just because ChatGPT sounds as though it knows what it’s talking about doesn’t mean the information it is confidently asserting is correct. This is why you should always fact-check any information that AI provides.


Here are three real-life examples of AI misuse


A "Bookie" Mistake

In May 2025, the Chicago Sun-Times and The Philadelphia Inquirer published a summer reading list that contained recommendations for novels that don’t exist. The author later admitted that he had used AI to assist in creating the article but hadn’t fact-checked the information presented to him.


Objection! These Cases Don’t Exist

Steven Schwartz, a lawyer at Levidow, Levidow & Oberman, turned to ChatGPT for case citations in an injury suit brought against the airline Avianca by passenger Roberto Mata.


Judge P. Kevin Castel later discovered that six of those citations, including their docket numbers and quotes, were false. Schwartz, whose partner Peter LoDuca signed the brief, admitted he hadn’t verified the AI’s output and pledged never to rely on generative AI again without first checking its accuracy. 


A False Statistic about False Information

When writing this blog, we wanted to test AI ourselves, to see whether it would generate a false statistic about people failing to fact-check false statistics.

We pasted a copy of this blog into ChatGPT and asked it to write a supporting caption using a statistic. Here’s what it produced:


Our prompt (excerpt): “…description for this blog, using a statistic”


ChatGPT said:
Sure! Here’s a blog description that includes a relevant statistic and ties directly into your topic:

Did you know that 77% of people say they’ve used AI tools like ChatGPT in the workplace—yet many don’t fact-check the results? In this blog, The CPD Group explores how training providers can responsibly use AI to support CPD content creation, avoid common pitfalls, and ensure materials remain accurate, credible, and compliant.

Let me know if you'd like it shorter or tailored for a specific platform!


We searched the internet to try to locate where ChatGPT had found this statistic, and found nothing. So we asked it to clarify where the information had come from:


"where did you get that 77% statistic from?"

"I actually made up that "77%" figure, so I don’t have a citation for it—sorry about that! If you’d like to include a real statistic, here are a couple of well-supported ones you could use instead:

77% of employees say AI has increased their workload because they spend more time reviewing and moderating AI output 


76% of offices worldwide used ChatGPT by the end of 2024 

Would you like me to revise the blog description using one of these legitimate stats?"


The 77% statistic was, in fact, completely fabricated.


This example is a case in point of why you should always check where ChatGPT has sourced its information.



ChatGPT: An Assistant, Not a Guide


As demonstrated in the examples above, it is absolutely critical that training providers do not rely solely on ChatGPT or any AI tool to generate their learning content. Training materials carry significant responsibility. They are expected to be accurate, trustworthy, and educationally rigorous.


Learners use these materials not just to acquire information, but to build real-world skills, earn qualifications, and in many cases, meet regulatory, industry, or legal standards. If these materials are built on false information, even unintentionally, the consequences can be serious.


Fact-check, check, and fact-check again. ✅


To mitigate these risks, any content created or assisted by AI must be thoroughly reviewed, verified, and validated by human subject matter experts before publication. This human oversight ensures that the final product meets the high standards of accuracy, clarity, and reliability expected from professional training materials.


After all: you are the expert!


Ultimately, AI should be seen as a helpful assistant in the content creation process, not a replacement for human expertise, critical thinking, or accountability. 


 

Some Key Takeaways:


  • AI is not always accurate: ChatGPT can produce false or misleading information, even when it sounds confident and credible.
  • Training content must meet high standards: Learners rely on materials to build knowledge, meet regulations, and gain qualifications, so accuracy is essential.


  • Unverified AI content can have serious consequences: Mistakes in training materials can lead to confusion, poor learning outcomes, legal issues, or non-compliance.


  • AI is a tool, not a substitute: Use AI to assist with drafting or brainstorming, but always involve human subject matter experts for review and final approval.


  • Always, ALWAYS fact-check AI-generated content: Every piece of AI-assisted content must be verified to ensure it aligns with industry standards and factual accuracy.


We hope this article has helped you understand how to properly utilise AI when creating training materials. If you have any questions, please get in touch with our team, who will be happy to guide you on best practice.