Read on for:

🔓 Get access to our Anti-AI Slop Guide and ClearAI Gym
👮‍♀️ Our read on the new rules for cyber security in the AI era
💻 Link to our webinar where we answer: “What can Copilot actually do?”

📫 The new rules of AI security.

Like most children, I loved riding bikes. One day I was riding with friends near an industrial estate, apparently enjoying the views since I forgot to watch where I was going. Bam. I rode straight into the back of a parked semitrailer. 

I was winded. Eyes watering (crying). It hurt. 

But I couldn’t tell my friends. I brushed myself off, wished mum was there to pick me up, then rode to catch up with them. By some act of divine mercy none of them saw. 

The vibe around AI and cyber security is a bit like my 12 year old self – enjoying the view with very little awareness of what’s about to hit. Open endpoints, attacks on vendors (👀 Vercel), API key leaks, and more could be just around the corner. This drop is about the new rules, who they apply to, and what we can all do about it.
(Spoiler: they apply to all of us.)

Also, we’re officially launching analogue access – our referral program where you can get rewarded for telling your colleagues, friends and family about this newsletter. Read on to find out how.

Josh Phillips
CEO and Co-Founder

🇦🇺 AI News x Australia

  • $25b Microsoft investment down under: PM Albanese and Microsoft CEO Satya Nadella shook hands on a $25 billion deal – Microsoft's biggest-ever Australian investment. By 2029, the spend will expand Azure AI infrastructure, grow the Microsoft-ASD Cyber-Shield, partner with the AI Safety Institute, and skill three million Australians in AI.
    Comment: A “yuge” deal but a reminder that as AI infrastructure scales, so does the surface area we have to secure – more on that below.

  • ACMA flags the risk-reward balance: New reports from the Australian Communications and Media Authority show AI is turbocharging innovation in Aussie telco, media and gambling. The regulator is tempering the hype with a warning: businesses must sharpen their focus on misinformation and consumer protection as adoption outpaces current safeguards.

  • In the cloud: Gingerah Energy is planning to build “gigawatt-scale” AI data centres in Western Australia’s north. The strategy aims to turn the Kimberley into a massive AI training hub to power both Australian industry and the booming Southeast Asian market.

🤠 The AI Round-Up

The Good
Australia’s Cyber and Infrastructure Security Centre (CISC) has confirmed AI-related cyber incidents in critical infrastructure must now be reported under the SOCI Act – within 12 hours if significant.
Comment: The systems we all depend on – power, hospitals, banks, NBN – are now formally accountable for AI-related security failures. This is a welcome closing of a regulatory gap in a critical area and feeds into our Feature below.

The Bad
Web hosting giant Vercel recently got breached – not through its firewall, but through a single employee who'd signed up for an AI productivity tool (Context.ai) using their work Google account and clicked ‘Allow All’. Months later, malware on a Context.ai laptop handed attackers the OAuth token, and they walked straight into Vercel's internal systems. Customer credentials are now for sale on a hacker forum for US$2 million.
Comment: Every ‘Connect with Google’ or ‘Allow All’ click at work is a door – and it stays open until someone shuts it. If you've signed into AI tools with your work account, audit them. Again, this is a nice segue for The Feature.

& The Ugly
Iran is, by some measures, outmanoeuvring America in the digital narrative war by using AI to pivot from rigid state messaging to viral, satirical content. Leveraging AI to generate high-engagement memes and animations mocking US policy and amplify their influence across global social feeds.
Comment: AI collapses the cost of influence operations to near-zero. Satire and humour bypasses the defences propaganda (like a dodgy press release) usually triggers. It's another version of the same theme running through this drop: the rules of trust are changing faster than most of us can track. Cast a critical eye over memes as you would a questionable news article.

🎟️ Events + 🎁 Goodies

🎟️ ClearAI Presents: What Can Copilot Actually Do?
Tuesday 5 May 10:30-11:15am AEST
Honestly, this was the number one Google search suggestion related to Microsoft Copilot. So we thought we’d help answer the question for you.
Join us for a practical session on using Microsoft Copilot in the most productive way possible, including advanced prompting techniques, creating M365 agents, and using specialised frontier agents. As always, it’s free!

🎟️ Brisbane Business Leader’s Lunch
Friday 15 May, 12:30-3:30pm AEST, $150pp
If you’re a business leader in Brisbane and interested in an exclusive leader’s lunch near the CBD, reply to this email for tickets. Leaders will have a chance to discuss the real opportunities and challenges of AI in business with peers in a conversation hosted by seasoned executive Ruth Limkin (Founding CEO, The Banyans) and our very own Josh Phillips.

🎟️ Brisbane Claude Workshop
Friday 5 June, The Vita Nova, $1,500pp
We’re running an all day in-person Claude workshop going deep on what you can do with Chat, Cowork, Code and Design. Stay tuned for ticket releases. Spots will be limited.

🎁 How to Use AI Without Losing Your Mind
Robbie found this great essay by Dan Hockenmaier covering a productive framework for AI use without destroying your own cognitive abilities. A useful read for anyone who is grappling with this as they use AI.

📝 The Feature

The AI security conversation is heating up.

AI tools are getting better and better. Providers are racing to ship. Everyone seems to be building the next big AI product.
But when Jensen Huang said in March 2026 that AI agents “can access sensitive data, execute code, and communicate with external entities”, did anyone else have alarm bells going off? OpenClaw has been reshaping enterprise AI throughout 2025. And Claude Mythos, Anthropic's embargoed state-of-the-art model, is already dividing the security community despite limited access.

On the other side, Patrick Gray at Risky Business captured the bind perfectly: “Can this be secured to an acceptable degree? The answer is that it has to be – and it's going to be the definition of acceptable that changes, not the actual security threshold of the product.”

The debate about whether AI agents will enter the enterprise is over. We're now negotiating what we're willing to accept and how to secure them within a changing paradigm. And what we're accepting is significant. Security has long rested on three pillars: confidentiality, integrity and availability. Organisations are quietly trading away pieces of all three, not recklessly, but incrementally and (possibly) ignorantly.

This doesn't have to be a bad trade. Enterprise AI can be architected carefully, data can stay sovereign, the opportunity is real. But the assumptions governing enterprise security for decades are changing faster than most organisations (and leaders) can track. Here are the new rules:

i) Your perimeter disappears. Security used to mean building walls, patrolling them and patching up any cracks. But AI agents don't respect walls. Reaching out, connecting and acting across them can be the only job they’re given.

ii) Documents become weapons. Attackers are learning to hide malicious instructions inside files and emails that your AI will read and obey. It's phishing but machine against machine.

iii) Access control calculations change. A useful agent needs broad access. A secure one needs narrow access. Most organisations are quietly (and maybe unknowingly) choosing useful.

iv) When things go wrong, you may not know why. Can you reconstruct what your AI agent did last Tuesday? For most current deployments: not really.

v) The rules haven't caught up. Compliance frameworks were written before any of this existed. The gap between what's being deployed and what regulations require is growing fast.

So what does this mean for us?

Ahead of us is a wealth of opportunity and a need to not overexpose ourselves while capitalising on that growth. The temptation is to assume this is someone else's problem. It isn't. Every person who uses AI at work – or at home – shapes the security posture around them. Here's where to start.

If you’re a leader. Stop asking, “Is our AI secure?" Start asking, “What have we traded away to make this work, and did we know we were trading it?" Demand an inventory from your AI team: which agents are running, what data they touch, whose credentials they borrow, and what happens if one misbehaves on a Friday afternoon. If no one can answer that in plain English, you don't have a governance problem, you have a visibility problem. The Australian Signals Directorate's guidance on engaging with AI is a sensible place to ground the questions you put to your CIO or vendor – starting with visibility. Forward this to the person in your org who'll have to answer the questions above.

If you’re a worker. You're now part of the security perimeter, whether you signed up or not. Three habits:
1. Remember that anything you feed an AI – emails, shared docs, external PDFs – could contain instructions hidden for the AI, not you. So be careful what you point it at, especially from outside sources.
2. When an agent suggests an action that touches money, permissions or external recipients, slow down and verify it the way you'd verify a stranger claiming to be your CEO.
3. And if something feels off – an unexpected file, an instruction you didn't write, a tool behaving oddly – flag it. Noticing strange behaviour early could save your org a lot of pain later.

If you're using AI in your personal life. The cookie thing wasn’t just a meme – we’re seeing people hook up AI to all kinds of data. So the same logic from work applies, just higher personal stakes and fewer guardrails. Be deliberate about what you give an AI access to – your inbox, your calendar, your photos, your bank. 'Connect' is a one-click decision with a long tail, so read the permissions before you tap accept. And when an AI does something useful but slightly surprising, ask yourself how it knew, and investigate.

The opportunity ahead is real. So is the risk of sleepwalking into AI security risks (like riding your bike into a parked semitrailer because you weren’t looking around) but the way through isn't fear or paralysis. The solution is people (read: everyone) knowing what's changed, what's at stake, and taking relevant action to keep themselves safe.

🔓 Introducing: analogue access.

Speaking of people knowing what's changed…
The best newsletters grow because readers tell their mates. So we're making it worth your while with our new referral program: analogue access.

Send analogue to one person who'd love it, and we'll send you the Anti-AI Slop Guide. You can plug it into any AI – ChatGPT, Claude, Copilot, Gemini, whatever you use – to teach it how to preserve the things that make us human: diversity of thought, unique creative ideas, different voices and lived experiences. It’s designed to help you write clearer and better with AI.

Get four mates onboard, and you'll get early access to ClearAI Gym before the doors open to everyone else.

Welcome to the ClearAI community. Your link's below.

Thanks for joining us. See you in the next drop.

Yours in humanity,

Keep Reading