Wrap-up: Responsible AI Leadership – Webinar Insights
If you missed AIM’s recent webinar, Responsible AI Leadership: What every leader needs to know, here is a wrap-up of the key ideas from the session. You can also watch the full recording.
Nobody approved AI's arrival in Australian workplaces. It just turned up
At AIM's recent webinar, Responsible AI Leadership: What Every Leader Needs to Know, held on 23 April 2026, the conversation centred on one issue: organisations are moving quickly on AI, but leadership direction and oversight are not keeping pace.
Host Lu Ngo, Head of Digital Skills at the Australian Institute of Management, shared 2025 statistics that made the gap hard to ignore. According to McKinsey, 92% of companies plan to invest in AI over the next three years. Yet only a third of those companies have proper governance controls in place (EY, 2025). And only 1% of leaders consider their organisation mature in AI deployment (McKinsey, 2025).
Joining Lu was Adam Morton, an independent AI and data leadership advisor and founder of Versix.ai, with over 20 years working with ASX 200 companies and global organisations across Australia, the UK and the US.
What’s the Biggest AI Challenge Leaders Are Facing?
Before Adam's talk began, Lu put a poll to the audience: where do you see the biggest challenge when leading AI use in your organisation?
When the responses came through, no single answer stood out: managing risk and governance (23%), workforce skills and capability (22%), setting clear expectations for AI (16%) and rethinking workflows (15%).
Adam picked up on it straight away: "There's no kind of outright winner... and that's half the challenge... There's not one standout risk. But because it's across everything, permeating every business unit, every process, it's really hard to manage."
Your Role as a Leader Isn't to Use AI: It's to Set the Conditions
Picture a radar screen full of flight paths. Adam opened with that image, and not by accident.
Air traffic controllers do not fly the planes. They do not own the airspace, and they are not approving every individual flight in real time. What they do is define the conditions under which flights operate: the rules of separation, which altitudes are available, the protocols for when something goes wrong. "For me, that's what responsible AI leadership looks like today."
In most organisations, those conditions are not clear. People are already using AI.
"It's just literally turned up on the doorstep of everyone's laptop."
A lot of leaders are now responsible for something they never really signed off on. "I'm really not sure I fully understand what's happened with AI in my workplace, in my team. But all I know is I'm responsible for it."
When AI Gets It Wrong, the Tool Usually Isn't the Problem
Adam runs with a Garmin. After a hard session it might tell him to rest for 20 hours. He usually ignores it. "The Garmin's right about what it measured, but it's got absolutely no idea about what it didn't measure." No idea about the race in three weeks, or that sitting at a desk all day might mean a run is exactly what he needs.
When something goes wrong, the instinct is to blame the tool. In Adam's experience, "nine times out of ten, the tool did exactly what you asked it to do. The problem was the question which was getting asked within that context."
He shared an example from a large automotive business across four countries. The team built a tool so sales staff could ask performance questions in plain language. The reports looked fine at a glance.
What nobody caught was that the AI was comparing revenue across countries without converting currencies: figures in all three foreign currencies were landing as Australian dollars.
The CEO spotted it before the reports went out, having been at the business long enough to sense that the numbers weren't right. The team blamed the tool. But the tool had done exactly what it was asked. Nobody had built in a check.
Adam's question: "What happens if the CEO hadn't been there to catch it? That's the risk, and that's the version that's playing out in most organisations today."
AI will not tell you if the question itself is off. Teams that stop checking that distinction gradually lose the habit. "By the time you recognise this, it's quite a slow, subtle change in behaviour. Often people have lost that capability, or it's diminished."
What Happens When Leaders Don't Set Clear AI Expectations
Adam has six chickens, a cat and two dogs. The dogs, he admitted, do whatever they like. Not because they are difficult, but because he has not been consistent about what happens when they misbehave. They have concluded that the rules are negotiable.
When a leader has not stated a clear position, people do not hold still waiting for one. Work still goes out. Decisions don't stop. Teams watch what gets praised and what gets questioned, they have conversations and they arrive at their own version of what’s acceptable. "The silence or that gap is filled by those organic rules that everyone just decides what they need to do."
Adam posed a practical test. If you pulled three people from your team right now and asked each of them individually, not what the policy document says, but what they genuinely believe is acceptable AI use in your organisation, would you get the same answer from all three? "Probably not. And that gap is really a leadership problem and not a technology one."
How AI Changes the Way Your Team Works Without Anyone Deciding It Should
Most organisations still treat AI like a rollout. There’s a launch, some training, and it’s done. “But AI is not a single event.” It shows up in day-to-day work and starts changing it, often before anyone has decided how it should be used.
Adam shared an example from a procurement team. Licences for their invoice system were expensive, so IT pulled the data into a warehouse and added an AI layer. Anyone could ask questions in plain language. It worked. Faster access, lower cost.
Except each person prompted the AI differently. And because it answers the question it is given, not the one you meant, results varied across the team.
A month passed before customers noticed: some were being chased for invoices they had already paid, while others with overdue accounts were left untouched.
"The tool did exactly what it was asked to do." The problem was everything around it.
When Adam talks about guardrails, he knows what people hear: process, control, red tape. "The aim isn't to slow things down… it's to enable a safer path." Work still gets done without guardrails. It just carries more risk.
So, the real question is this: in two years, do you want your people to be better at their job, or just better at using the tool?
What’s Keeping Leaders Up at Night
After Adam's presentation wrapped, Lu put the second poll to the room: which statement best reflects your organisation today?
Nearly 37% said teams in their organisation were experimenting with AI without clear guardrails. A further 18% said AI use was growing but leadership guidance was still evolving. Only 9% said leaders had clearly communicated expectations for AI use.
Adam was not surprised. Governance will never keep pace with AI adoption. It was not designed to. "That's the gap where leaders need to fill with that stated position. And often they don't. And it leaves teams to fly blind and fly on their own and make their own judgments."
The AI Leadership Questions the Audience Was Sitting With
Lu then opened the room to questions from the audience.
How do you know if a team is even asking AI the right thing? Adam's response: "Write the problem down in one sentence. Not the task. The problem." If that sentence is unclear, the output will be too. No model fixes a poorly framed question.
Once AI gives you an answer, ask it what it left out. "What did you consider and decide not to tell me?" That's where things tend to slip through, the assumptions and the context that never made it into the final response.
Lu mentioned she'd told a colleague she saved time using AI, only to be told she probably shouldn't say that out loud. In many teams, AI use is still something people keep quiet about. Better to ask, "which parts did you use AI for?" and the conversation opens. Treating AI as something to hide pushes it underground; asking about it brings it into the work.
Leading AI Use in Your Organisation Without Being a Technical Expert
Most leaders already have a sense of how they want AI used.
The question is whether someone in your team could actually act on that view when they're under pressure.
"Not having and communicating that stance… is pretty much the same as not having a position at all."
If your organisation doesn’t yet have a formal policy in place, that doesn’t mean standing still. It means aligning with your leadership team on a shared position and building a shared understanding of the space. From there, be clear with your team about where AI is appropriate, where it isn’t, and who is responsible for checking outputs before anything goes out.
"The leaders that I see getting this right aren't the most technically sophisticated people in the organisations... they're the ones who've been deliberate about what I call the airspace that they effectively own... what moves through it, under what conditions, who's accountable for something when something goes wrong."
It’s not about controlling everything that happens. It’s about being clear on the conditions and not leaving the responsibility unclear.
If you found this wrap-up of the Responsible AI Leadership: What every leader needs to know webinar useful, AIM has recently launched a two-day short course, Responsible AI Leadership, designed to help leaders build the practical frameworks to own and defend AI-informed decisions in their organisations. Find out more.
Explore more Digital Skills Short Courses, Microcredentials and Vocational Qualifications.
