
Building trust and transparency in generative AI learning

The ethical implications of AI can be confusing, but we're here to demystify AI and help you take a responsible approach to AI usage.
Taylor Cole, Copywriter

In the past year, we’ve seen AI transform from a new phenomenon to a tool that’s becoming seamlessly integrated into our everyday lives. From powering chatbots to crafting personalized learning experiences, AI is rapidly transforming the landscape of learning and development. In fact, in 2023 we saw more than 8,000 enrollments in AI content! While we can’t predict the future of AI’s role in L&D, its potential to enhance learner engagement and boost team productivity is undeniable. 

The seemingly endless possibilities of AI have brought about excitement and enthusiasm, but the arrival of this new technology has also understandably stirred up some concern regarding its ethics. For example, machine learning algorithms rely on vast amounts of data to learn and improve over time.  

In the world of L&D, this might look like an AI-powered LMS collecting employee data to personalize their learning experience, which raises questions about how learner data is collected, used, and protected. 

This level of personalization is a positive use case – it allows employees to quickly and easily find the right content for their goals, so they can spend less time searching for courses and more time learning. However, it’s important to be aware of AI’s ethical considerations to ensure it’s used responsibly. We’ve compiled some tips and insights to help you do just that. 


Don’t ditch the human element

With GenAI tools like GPT-4 and DALL-E on the rise, it’s important to remember that these aren’t hands-off solutions to content creation. For example, if you prompt GPT-4 to create a learning strategy and then simply copy and paste the AI-generated text into your document, chances are most readers will notice. It takes a real person to review AI-written content for accuracy and brand voice alignment, and then make any needed changes. Sometimes heavy revision is necessary and sometimes light edits will do, but either way, AI rarely generates exactly what you need without any human involvement. 

Rather than using AI to replace human expertise, we suggest using it as a supplement to your or your team’s expertise. You’ll find that the combination of human proficiency and AI efficiency is where the real magic happens. 


Build trust through transparency 

Transparency in AI usage is crucial. According to a study by UKG, 78% of C-Suite leaders say their business is actively using AI, but 54% of employees have no idea how their company uses it. This knowledge gap can breed mistrust and uncertainty in the workforce, but the solution is simple. Of the survey participants, 75% said they would be more willing to embrace AI if their company was transparent about its AI use. The message is clear: when companies are honest about how they’re using AI, employees get on board. 


On the flip side, employees need to be truthful about AI’s role in their work. In a survey by Fishbowl, 68% of professionals said their boss doesn’t know they’re using AI at work. Passing off AI-generated content as your own can amount to plagiarism, so be clear about how you’re using AI. Remember, transparency builds trust. 


Prioritize policies

AI data privacy and security are hot topics these days. As an emerging technology, AI is complex, and its role in society is still being defined. Consequently, it’s continually being evaluated and regulated both locally and globally. To remain compliant, organizations and individuals alike need to stay current on new AI laws and requirements, adjusting their usage and privacy policies accordingly. 

Be sure to check privacy policies associated with your learning and development programs, including policies from your LMS and your content provider. Find out if their products use AI, and if so, understand how they’re accessing and using learner data. Make sure your learners are also aware that these privacy policies are available to them and that they understand how to access them. 

The inherent bias of AI 

AI algorithms are only as accurate as the data they’re trained on. When machine learning models are trained on faulty data, or when the person curating that data consciously or unconsciously introduces their own prejudice, the AI may produce incorrect or biased results. A related problem is “hallucination,” where a generative AI confidently presents fabricated information as fact. 

Additionally, “black box” artificial intelligence refers to AI that doesn’t allow users to see how it reaches its conclusions. We see point A (our prompt to the AI) and point B (the results the AI produces), but everything that happens in between is a mystery. Without knowing how the AI reached its conclusion, it can be difficult to understand whether the conclusion itself is biased. 

While mitigating bias within AI models themselves might be beyond your control, you can still take steps to ensure responsible AI usage. Along with reviewing vendors’ privacy policies for information on how they handle bias in AI, try reaching out to those organizations directly to ask how they address bias during research and development. When shopping for AI tools and L&D programs, look for those that prioritize fairness and transparency in their development and deployment. 

Invest in education 

Equipping your team with the knowledge and skills they need to navigate the world of generative AI learning is crucial. The Go1 library offers a wealth of AI-related content, including courses on AI ethics, safety, and security. By investing in employee education, you can foster a culture of responsible AI usage and build trust in this transformative technology. 

Our approach to AI

So, you might be wondering – how is Go1 ensuring it’s using AI responsibly? 

Image with text: Go1 AI – generative AI built on top of the largest content dataset. Features: discovery experiences, content management, AI-assisted curation, and smart content evaluation.

First and foremost, we are committed to adhering to ethical standards and treating your personal information with respect. While we aim to continuously enhance our product to better meet customer needs, we won’t make changes at the expense of breaking trust. The following are some steps we’re taking to maintain integrity in our product with the introduction of AI features: 

  • Guidelines: We've created a set of guidelines that all AI products and features must adhere to. These guidelines ensure our AI products are built ethically and function responsibly. 
  • Development: We prioritize ethics in the development stage of AI products, and we promote transparency by openly communicating how we make decisions about our products. 
  • Security: All customer data is encrypted. We don’t share your data with third parties or other Go1 customers, and we don’t use your training data or inputs to improve AI models. Data is hosted on Microsoft Azure and is not transferred to OpenAI. 
  • Bias: We prioritize fair outcomes for all users by actively working to identify biases in our AI models and creating feedback loops in the event of detected bias. 
  • Adaptation: We stay up to date on the latest ethical concerns regarding AI and we continually revisit and adapt our products accordingly. 

We believe trust and transparency are key to unlocking learning potential, and we’ll always prioritize customer safety and satisfaction. Learn more about how Go1 is using AI to enhance learning here.

For more insights, subscribe to the Go1 newsletter to stay on top of all the latest L&D trends. Or, you can book a demo today to discover how Go1 can help with your team’s learning needs. 
