20/10/2025
It’s Cybersecurity Month in the IT world. And this year, there’s a key question on many IT leaders’ minds: can we really trust AI-driven security at scale?
Trust has always been a fundamental concept in security. As organisations scale AI initiatives in critical areas like threat detection, fraud monitoring and access management, we believe that trust becomes even more decisive for success. After all, if your teams can’t trust the systems you’re putting in place, those systems wind up creating risks rather than mitigating them.
So, how can IT and security leaders build trust in new AI-driven tools while ensuring their employees trust them too? Here’s a practical action plan, especially for trust-driven industries like banking, insurance, retail and manufacturing. Get in touch to find out how Getronics can help you build that trust.
The double meaning of trust
First: what do we mean by trust in the context of AI and security? It has two distinct meanings:
- The human side: Do your employees actually trust the AI security tools you’re choosing for them, or do they see them as a threat, a burden or something to work around?
- The tech side: Can your AI security tools be trusted to do what their vendors or developers say they can do?
Both sides matter, and both determine whether AI creates value or simply creates new risks.
If you neglect the human side of trust when adopting new AI tools, you risk low adoption, shadow practices that bypass official systems and a false sense of security that leaves your organisation more exposed. It also erodes your team’s trust when you choose tools that add complexity rather than improve efficiency. That’s a big reason why 69% of CIOs suspect their employees are using unauthorised AI tools at work, according to Gartner’s 2025 AI Hype Cycle report.
On the tech side, CIOs have to look past the overwhelming hype and constant over-promising that we’ve seen from the AI community over the past several years. Just like any other security tool, AI-powered tools need to prove themselves under real conditions, with transparency, auditability and resilience against emerging threats like prompt injection and access control failures.
Integrating trust into AI adoption
Many organisations face adoption challenges when rolling out new AI tools. That’s probably because nearly half (46%) of employees consider AI a threat to their jobs, according to a BCG report. It certainly doesn’t help that the AI industry continues to hype the potential for their tools to redefine work as we know it. Employees have legitimate concerns. It’s up to IT and security leaders to address those through carefully planned adoption strategies.
AI success story: MinterEllison
A Gartner case study shows how MinterEllison, a global law firm, built trust amongst employees by launching an AI literacy programme that was both structured and social. Measures included setting aside 12 hours over 12 weeks for employees to learn, crediting training time toward performance targets and recruiting internal “digital coaches” to promote adoption.
Within months, weekly AI users jumped from 250 to 1,600 (a more than sixfold increase), with over 4,000 credited learning hours logged. This story shows that employees trust AI more when they feel structurally supported and changes are introduced gradually.
Education, transparency and communication
The need for structured education is only going to grow. In its “Predicts 2025: AI and the Future of Work” report, Gartner says that by 2028, 40% of employees will first be trained or coached by AI when entering a new role, up from less than 5% today. If employees expect AI to guide them from day one, companies can’t afford to leave literacy to chance.
Clear communication also matters. Be upfront and realistic about what AI tools can and can’t do. Show staff how much their human oversight matters. And don’t just say ‘trust the tool’. Teach and encourage them to question and interpret outputs instead. When employees know that their judgment is still valued, AI feels less like a threat and more like backup.
Choosing AI security tools worth trusting
The other side of the equation is whether IT and security leaders can trust AI systems to perform reliably and securely. Here too, the risks are real. Gartner warns that over 50% of successful cyberattacks against AI agents through 2029 will exploit access control issues such as prompt injection. At the same time, organisations are struggling with accuracy drift, bias and unpredictable cloud costs that erode ROI.
These challenges explain why cybersecurity remains a top-three priority for CIOs across banking, insurance and retail. But the good news is that examples of trustworthy AI deployments are already emerging, as the Gartner case studies below show.
Citizens Bank, for example, has rolled out a carefully selected orchestrator agent to manage back-office tasks safely within controlled workflows. In insurance, a Dutch firm is using AI to process straightforward motor claims automatically while routing complex cases to human adjusters. Both examples teach us the same lesson: leaders can trust AI when it is well suited to the use case and has clear guardrails, and when humans stay accountable for high-risk decisions.
Industry perspectives
The trust challenge looks slightly different from one sector to the next:
- Banking: Trust is inseparable from compliance. Leaders need AI systems that can cut false positives in fraud detection and keep audit trails regulators can follow without question.
- Insurance: Bias in underwriting or claims decisions isn’t just an ethical problem, it’s a regulatory and reputational risk. Bias checks and explainability tools are essential.
- Manufacturing: Safety is non-negotiable. Plant managers won’t rely on AI predictions about equipment failure unless they know when and how human review applies.
- Retail: With staff turnover high, shadow AI is the big risk. Retailers must treat AI literacy as seriously as data literacy to keep adoption safe and productive.
Making trust scalable
So how do leaders make trust scalable when launching AI security projects? A few patterns stand out across industries:
- AI literacy first. Employees won’t adopt what they don’t understand. Programmes like MinterEllison’s prove that structured training pays off in adoption and safe use.
- Clear oversight rules. Define when humans step in and make sure everyone knows it. This prevents both overreliance on AI and mistrust of its outputs.
- Auditability. Every AI-supported action should leave a trace that can stand up to regulatory and customer scrutiny.
- Cost and risk governance. Monitor cloud spend, accuracy drift and access control just as closely as you would financial controls.
- Culture change. AI will undoubtedly transform workplace culture in the years ahead. CIOs, CISOs and CHROs all play a role in making AI trustworthy at scale.
We’re all excited about the possibilities of scalable, AI-backed security. But this Cybersecurity Month, one thing is clear: organisations have to overcome the trust gap if they want to succeed. Employees won’t use tools they don’t believe in. Leaders can’t scale systems they can’t rely on.
By addressing both the human and tech sides of trust, you can start creating value from your AI initiatives faster, while strengthening your workforce and driving innovation in the long term.