
The Human Factor: Navigating Behavioral Barriers on the AI Frontier

While the excitement around AI grows, grasping the full potential of this development remains highly challenging. The opportunities, possibilities, concerns, and risks of AI seem immense. What we do know is that human behavior will play a crucial role in this development and will heavily affect the value AI can bring to organizations. As people use AI, they will adopt new behaviors that change how they and their organizations work.

AI comes with many behavioral challenges, such as: How will people use AI in our organization? Is AI going to be an advantage or a risk? Will our decision-making process rely heavily on AI in the future? Will we be at risk because people don’t act according to regulations? How does AI affect our way of working? Do we need to learn new skills? How does AI relate to our strategy?

In our experience, technical challenges and barriers are tough, but behavioral barriers might be harder to overcome. By understanding and addressing behavioral barriers, organizations can turn AI initiatives into transformative successes. In this post, we will dive into common behavioral barriers organizations face when adopting and leveraging AI. 

First things first: AI is just a means to an end 

Regardless of the technology or approach, behavioral change is the key to successful transformations, and AI is no exception. As with other developments, universal behavioral principles can help organizations navigate complex change processes.

It may be stating the obvious, but there is a noticeable trend of organizations embedding AI without having clear goals in mind. Questions like “Where can we embed AI in our processes?” are not uncommon. These questions make us wonder if the adoption of AI is driven by clear, envisioned objectives or simply by ‘the bandwagon effect’—a cognitive bias that refers to the human tendency to adopt certain behaviors, beliefs or attitudes because others are doing so.  

Whether you’re looking into the possibilities of AI - or any shiny new tool, framework or methodology - it starts with two fundamental questions: What are the results you want to achieve, and how do you want to get there? This is exactly where behavioral science comes in: it helps you specify your results and the behaviors that will achieve them within your specific organizational context. If you want to read more about how behavioral science helps specify results and behavior, we recommend reading this blog post.

And even if you have concluded that AI can help you achieve your goals, it’s still nothing more than a means to an end. ‘Embedding AI in our process’ is not a result. What will be the result of embedding AI in process X?  

For now, let’s assume you have specified your results properly. The next question is how to get there. Which human behaviors are needed? This is where understanding the most common behavioral barriers and proactive strategies to overcome them becomes crucial. 

Behavioral barriers in AI adoption 

Let's dive into three behavioral barriers you will most likely encounter on your AI adoption journey.   

  1. AI means change. Change can trigger resistance 
    Change introduces uncertainty, and AI is no exception, especially because it is complex and often feels abstract. As humans, we tend to want to eliminate uncertainty. So, when potentially disruptive developments like AI enter the stage, questions such as ‘How will AI influence my decisions?’ or ‘What happens if AI makes a mistake?’ can fuel resistance. For instance, in an organization implementing AI for customer service, an employee might wonder whether AI will make their role redundant or whether their interactions will constantly be monitored and evaluated by an algorithm.

    As with any other change or transformation, you must plan for potential resistance. From experience, we know that organizations tend to focus on triggering behavior: sending out a lot of information, telling people about the why, the importance and risks of the transformation, and what is expected of them. All of this is done in an attempt to make the change more appealing.

    Behavioral science shows that experienced consequences predict future behavior better than triggers. This means that if we exhibit a certain behavior and it’s followed by positive consequences for us, we are more likely to repeat that behavior in the future. On the other hand, if we exhibit a behavior and it’s followed by negative consequences, we are less likely to engage in that behavior again. For example, an employee experimenting with AI for customer service may receive recognition from leadership and colleagues for successfully streamlining operations, motivating others to adopt similar practices. This positive consequence might lead to the employee continuing to experiment, potentially leading to valuable new insights. While this is a simple example, it illustrates how focusing on positive consequences can help drive desired behavior.
  2. Skills and knowledge gaps
    Changes and transformations usually require different ways of working. They may also require new skills and knowledge not currently present in the organization, which can lead to uncertainty. For example, imagine a software development team adopting an AI-based code review and optimization tool to improve coding efficiency and reduce errors. While one developer excels at using the tool, consistently providing optimized solutions and insights, others might feel left behind: experiencing fear of missing out yet unsure how to leverage the tool effectively, or feeling less valuable. These consequences might trigger behavior you don’t want to see, such as people avoiding the tool, talking others out of using it, or conflicts within teams that distract from delivering.

    First, it’s important to know what these skills and knowledge gaps consist of and where they might occur; more importantly, this analysis should be in line with your goals and results. AI adoption often requires new skills, such as interpreting AI outputs or validating algorithms, and these gaps can create resistance, as employees may not fully understand what is expected of them. Consider a financial institution implementing an AI tool to assess creditworthiness: employees not only need to learn how the tool works but also how to explain its decisions to customers. Defining these skill gaps clearly, aligned with organizational goals, allows for more effective training and smoother adoption.

    The first step is to identify the gaps and how they relate to your results. Then, you can think about the behavior needed to close them. Being specific about results and behavior is crucial here. We’ve seen organizations communicate things like, “We need everyone to keep thinking critically when it comes to AI!” Although this sounds obvious and no one would disagree, ‘thinking critically’ is not behavior according to the definition in behavioral science: behavior refers to observable actions, what I can see you do or hear you say, which makes it essential to define it concretely.

    ‘Thinking critically’ is wide open to (mis)interpretation, and many behaviors fall under this container concept, like “reviewing AI-generated insights for logical consistency by adding comments in Jira” or “questioning assumptions behind AI recommendations by discussing them with colleagues.” Being specific and aligned on these concepts will prevent frustration and miscommunication, which can enable faster AI adoption.
  3. The need for strong leadership  
    Similar to many disruptive developments, leadership has its work cut out for it: it must lead organizations through the wilderness called AI, preferably with a compelling and well-thought-out strategy that reduces uncertainty and resistance among employees. Good luck!

    From a behavioral science perspective, leadership is responsible for creating and enabling an environment in which people can show desired behavior that is aligned with goals. To enable such an environment, leaders must focus on specifying and communicating results very clearly. They must also work with teams to define key behaviors and create an environment that supports them. This could involve, for example, recognizing efforts in a way that feels like reinforcement for the people showing the behavior or hosting forums for sharing AI-related lessons.

    Leaders heavily influence social norms and, thereby, the behavior of groups. The behavior of others can trigger our own behavior, which highlights the importance of the phrase ‘leading by example.’ Leadership must not only chart a clear AI strategy but also demonstrate the behavior they want to see in others. If you value experimenting with AI and embracing the failures and lessons learned as much as the successes, your own behavior must reflect that as well. More on the role of leaders in sustainable transformations can be found in this article.

Behavioral science as a universally applicable force 

While the technical challenges of AI are unique, this article shows that behavioral challenges remain universal. By applying the principles of behavioral science, organizations can not only implement AI successfully but also navigate any other transformation with greater confidence and effectiveness. 

If you are struggling with the role of behavior in adopting AI, feel free to contact us. We look forward to helping your organization navigate behavioral barriers as you embark on your AI journey.