AI at work: trust cannot be forced

By: Aaron Kants

Many organisations have already adopted artificial intelligence as part of their strategy. But even where they have not, everyone still has access to free AI tools. Whether it is ChatGPT, Claude, Gemini or another platform, these tools help people write texts, summarise information, organise ideas and perform analyses.

Yet AI is often used less than it could be. The problem is not that people need yet another reminder from their managers, but that trust is missing. And trust does not emerge automatically. If a tool feels unreliable, complicated or “not relevant to my job”, it is quickly set aside, especially in workplaces where the surrounding culture does not support its use.

Why don’t people trust AI?

In my view, the most common reason people do not trust or use AI lies in emotional factors. Often, the issue is not a lack of technical skills, but uncertainty about when AI can and cannot be used. Many also quietly ask themselves: if I teach AI everything I know, will it start taking over my tasks?

The first concrete reason is reliability. AI can produce answers that sound convincing but are fundamentally wrong. That creates caution. If a person still has to review all the work afterwards, it may feel easier to do it themselves from the start.

Another reason is fear of making mistakes. Many people are unsure what they are allowed to enter into AI tools. Is client information acceptable? What about contract text? Or internal company memos? When the rules are unclear, people choose the safer option and avoid using AI altogether.

The third reason is very human. A new way of working creates discomfort. If someone has spent years preparing reports, emails or analyses in a certain way, using AI can feel like questioning their own skills. In reality, it should be the opposite. AI does not need to replace experience. It can help present experience more effectively. That is why managers must create confidence and psychological safety around AI use.

Why does AI struggle to gain traction at work?

AI is often discussed in overly broad terms. The conversation focuses on complex models, automation and the future of work. Meanwhile, employees think very practically: how will this help me complete a frustrating task faster today? If the answer is unclear, adoption will not grow.

Introducing AI does not need to begin with a large-scale project. Even simple steps can create significant value.

For example, ChatGPT or Copilot can help:

  • turn long texts into short summaries;
  • make emails clearer and more polite;
  • create action points from meeting notes;
  • identify key risks or problematic sections in reports. 

These are not futuristic use cases. They are everyday situations where AI can reduce repetitive work and give people a better starting point for meaningful tasks. Once people feel more comfortable, they can start experimenting with more advanced tools, such as ChatGPT agents that automate time-consuming workflows.
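
To make the first of these use cases concrete, here is a minimal sketch in Python against the official openai package. It assumes an API key is set in the OPENAI_API_KEY environment variable; the model name, the three-bullet-point instruction and the file name are illustrative choices, not recommendations.

    # A minimal sketch: turning a long text into a short summary.
    # Assumes the official `openai` package and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def summarise(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model your team has
            messages=[
                {"role": "system",
                 "content": "Summarise the user's text in three short bullet points."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    with open("meeting_notes.txt", encoding="utf-8") as f:  # hypothetical file
        print(summarise(f.read()))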

Trust grows through controlled use

AI does not need to be trusted blindly. In fact, it should not be. A good principle is to treat AI as an assistant, not a decision-maker. It can provide a draft, but the person remains responsible for the final content. It can highlight possible errors, but the person decides whether they are actually important. It can suggest ideas, but the person chooses the direction.
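
One way to make “assistant, not decision-maker” tangible is to build the human check into the workflow as a mandatory step. The sketch below is illustrative only: generate_draft is a hypothetical stand-in for whatever model call the team actually uses, and nothing leaves the function without explicit approval.

    # A sketch of "assistant, not decision-maker": the AI produces a draft,
    # but nothing is used until a person has explicitly approved it.
    def generate_draft(prompt: str) -> str:
        # Hypothetical stand-in; replace with a real model call.
        return f"[AI draft for: {prompt}]"

    def review_then_use(prompt: str) -> str | None:
        draft = generate_draft(prompt)
        print("AI draft:\n" + draft)
        answer = input("Approve this draft? (y/n) ")
        # The person remains responsible: an unapproved draft goes nowhere.
        return draft if answer.strip().lower() == "y" else None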

Trust also grows when organisations create clear boundaries for AI use. For example, teams should agree:

  • what information may be entered into AI tools;
  • which tasks are suitable for AI support;
  • when outputs must always be reviewed;
  • who people can turn to if questions or doubts arise. 

When the rules are clear, using AI feels safer. Not only technically, but emotionally as well.

Start with small wins

The best way to expand AI adoption is to begin with simple examples. Not training sessions that try to cover everything at once, but practical situations that regularly occur in people’s daily work.

For example, a team could agree on a one-week experiment: we will use ChatGPT or Copilot only for organising meeting notes. The following week, we test email drafts. Then we evaluate what genuinely helped and what did not. This kind of approach reduces pressure. No one needs to become an expert immediately. It is enough for people to experience that AI helps them complete some tasks faster.

AI needs good questions

The quality of AI output depends heavily on how questions are asked. A vague question produces a vague answer. For example, “Write an email to the client” is not a particularly effective prompt.

A better approach would be: “Write a short and polite email to a client explaining that we need two additional documents. The tone should be friendly, professional and not overly formal.”

This does not require technical knowledge. It simply requires thinking clearly: what do I want, who is this for, and what should the outcome look like?
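
Those three questions can even be captured in a small reusable template. The sketch below is one possible shape, in Python; the function and field names are illustrative, not a standard.

    # The three questions expressed as a small prompt template.
    def build_prompt(task: str, audience: str, outcome: str) -> str:
        return (
            f"Task: {task}\n"
            f"Audience: {audience}\n"
            f"Desired outcome: {outcome}"
        )

    prompt = build_prompt(
        task="Write a short, polite email explaining that we need two more documents.",
        audience="An existing client we have a good working relationship with.",
        outcome="Friendly, professional, not overly formal; under 120 words.",
    )
    print(prompt)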

Habits matter more than technology

Adopting AI is not only an IT issue. It is also about work habits and workplace culture. Trust develops when people are allowed to experiment, make mistakes, ask questions and see real benefits.

We do not need to begin with complicated models or large systems. Often, ChatGPT or Copilot, combined with clear agreements and a few good examples, is enough.

If AI use in your team is still occasional, start with a simple question: which tasks take too much time but do not require all of our thinking capacity? That is often where the first practical AI success story begins. And that is also where trust starts to grow.

In the end, it is important to remember that today everyone has the technical ability to use public AI models. The real question is whether people are emotionally ready to embrace them.