Read on for:

🤖 What all the new frontier model updates actually mean for us
📝 Our 15 AI policy recommendations to put Australia on a good path
💻 Link to our Claude Code 101 webinar

📫 All the gear and no idea.

I had a mate growing up who would start a new sport and buy every possible bit of kit. When it came to the sport, however, he was clueless. Cricket season rolled around and he had a new bat, pads and gloves, but he’d be out for a duck every time.
“All the gear and no idea”. 

AI has felt a bit like my mate for the past two years. All the intellectual capability, but no idea when it comes to real world application. Give it a spreadsheet and it fails hopelessly. Try and find an email and it’s still just as bad as Outlook search. 

But just like with my mate, who eventually learned to use his bat and managed to string together a few decent shots, AI has turned a corner and is increasingly adept at using the tools it’s given. 

In this edition of analogue, I’ll be breaking down how that change affects you, as a leader and user of AI.

P.S. Welcome to all our new subscribers. It’s so good to have you here.

Josh Phillips
CEO and Co-Founder

🇦🇺 AI News x Australia

A lot of news since our last drop. A lot of bad news too, including about wars:
🪖 US and Israel vs. Iran (pretty crazy stuff – was this on your 2026 bingo card?),
👩‍💻 Pentagon vs. Anthropic (we cover this below), and
🌻 though not a major news story: the constant tug-of-war between figuring out how to use AI well and logging off completely to start an organic, gluten-free, unplugged commune in Tasmania.

Even so, here are some handpicked AI headlines closer to home.

  • The WiseTech Wake-up Call: Australian logistics giant WiseTech Global has announced it will cut 2,000 jobs over the next two years, citing the “end of the coding era” as AI takes over software development tasks. The move has put unions on high alert, with the ACTU calling for greater transparency and “just transitions” for workers. Experts warn this is merely the beginning of a structural shift in the Australian white-collar workplace.

  • BYO-AI aka the Shadow IT Surge: Despite corporate hesitation, Aussie employees aren’t waiting for permission. New reports show a significant number of Australians are bypassing company policies to experiment with generative AI tools at work. For business leaders, it’s a clear signal: if you don’t provide a safe, official AI framework, your team will likely build their own – bringing a host of security and data privacy risks with them.

  • Cracking Down on Deepfakes: The Australian Government is tightening the net on digital harm with new laws targeting sexualised deepfakes. The legislation aims to modernise the criminal code, making the non-consensual sharing of AI-generated explicit imagery a serious offence. It’s a critical step in establishing guardrails as the line between ‘real’ and ‘rendered’ continues to blur.
    Tip: Keep reading for how we think deepfakes could be tackled to safeguard trust in our elected officials and democratic institutions.

🤠 The AI Round-Up

The Good
A team of Australian researchers at the Centenary Institute have just secured major funding to commercialise PanaceAI, a first-of-its-kind, Australian-made platform designed to help clinicians make better cancer treatment decisions. Early trials showed positive outcomes, including improved accuracy in risk assessments and a 25% reduction in unnecessary thyroid operations. It’s a perfect example of AI doing the heavy lifting on data so doctors can spend more time connecting with the patients in their care.

The Bad
Professor Toby Walsh addressed the National Press Club in Canberra on 24 February, warning that Australia is “dangerously unprepared” for AI and needs to “ramp up investment in AI significantly”. In a speech titled ‘AI: Boom or Doom’, he highlighted how a lack of timely and adequate AI regulation would have negative impacts on future generations of Australians. Watch the full speech on the Press Club’s YouTube channel here.

& The Ugly
You may already be tracking the very public spat between the Pentagon and Anthropic.
Following a new Pentagon directive requiring ‘unrestricted access’ for all lawful purposes, Anthropic CEO Dario Amodei drew two red lines: Claude cannot be used for mass domestic surveillance or fully autonomous weapons.
Capping off responses from others within his administration, the US President ultimately ordered a six-month phase-out of Anthropic’s technology across US government agencies. The clash has highlighted a tension between responsible AI, national security requirements and how the ethics of war should adapt to AI. Meanwhile, broad public support for Anthropic’s position has been widely reported.
But now, OpenAI has jumped in to fill the $200 million contract void that Anthropic will leave. OpenAI has stated that the same red lines Dario raised have been baked into its agreement with the Pentagon, with the addition of one more: “No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as ‘social credit’).”
Comment: Dario rightly cited governance (privacy, reliability and proper human oversight) and capability concerns, without being obstructionist to the Pentagon’s mission and objectives. The obvious issue is that AI chatbots are simply not reliable enough (or the right type of AI) for lethal, autonomous decision-making. Unlike traditional software, the models are probabilistic: they ‘guess’ the next step, making them prone to unpredictable behaviour (unhelpful in ‘the fog of war’). As for OpenAI’s contract, only time will tell whether those three red lines will be honoured as intended.
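For the technically curious, the probabilistic point is easy to demonstrate with a toy sketch. Nothing below comes from any real model – the three-word vocabulary and its probabilities are invented purely for illustration:

```python
import random
from collections import Counter

# Invented next-step distribution for a hypothetical decision system.
# A language model's decoder works on the same principle, just over
# tens of thousands of tokens instead of three.
next_step_probs = {"proceed": 0.55, "hold": 0.30, "abort": 0.15}

def sample_next_step(probs: dict, rng: random.Random) -> str:
    """Probabilistic decoding: sample a next step from a distribution
    rather than computing one guaranteed answer."""
    steps = list(probs)
    weights = [probs[s] for s in steps]
    return rng.choices(steps, weights=weights, k=1)[0]

# Ask the same "question" 1,000 times. Even the most likely answer
# only comes back roughly 55% of the time; traditional software
# would return the identical result on every run.
rng = random.Random(0)
counts = Counter(sample_next_step(next_step_probs, rng) for _ in range(1000))
print(counts)
```

Run it and the tallies land near 550/300/150 rather than a single fixed answer – exactly the kind of variance you can tolerate in a chatbot and cannot in a weapons system.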

🎟️ Events + 🎁 Goodies

🎟️ Claude Code 101 Webinar (via Microsoft Teams)
Tuesday 10 March, 10:30-11:15am AEST
Wondering why everyone is raving about Claude Code? We’ll show you how to use it so you can decide for yourself if it’s worth the hype.

🎟️ Sydney Business Leader’s Lunch
Thursday 19 March, $150pp
If you’re a business leader in Sydney and interested in an exclusive (read: limited seating and intentionally intimate) business leader’s lunch near the CBD, please send an email to [email protected] and we’ll send you more details. Under the Chatham House Rule and around one large lunch table, attendees will have the chance to discuss the real opportunities and challenges of AI in business with world-class leaders, including our very own CEO, Josh Phillips.

🎁 AI 2035: Australia’s Opportunity Playbook
Australia faces a simple choice: we can be an AI leader or an AI follower. We wanted to help shape the choices that would make us the former. We worked with the Menzies Research Centre on this playbook built on three foundations: economic growth, security and setting up Australia as a world-leader in AI. Read our 15 AI policy recommendations to secure Australia’s prosperity in the AI era ahead.

🎁 AI Can’t Save You from Unclear Thinking
We’re loving this article from our mate Lachy Nicolson at Leader Guide. It’s aimed at business owners who can choose to either be rewarded or exposed by using AI.

📝 The Feature

If you haven’t noticed, we’re back in the season of AI model releases. Although, to be honest, with new models from the frontier labs coming thick and fast, it feels less like a season and more like an “every week there’s a new model – forever” type of vibe.

So let Josh break down the things that actually matter for you.

Three and three

Three of the major labs – Google (Gemini), Anthropic (Claude) and OpenAI (ChatGPT) – have all released changes to their models in the last three months.

For the uninitiated, here’s a shortlist of models by each provider (please don’t mention the naming problem we have as an industry):

Google: Gemini 3.1 Pro, Gemini 3.0 Deep Think, Gemini 3.0 Flash

Anthropic (Claude): Opus 4.6, Sonnet 4.6, Haiku 4.5

OpenAI: GPT-5.2, GPT-5.3-Codex, GPT-5.2 pro

Checkmate.
The rivalry between AI frontier labs fuels the “every week there’s a new model – forever” feels.
Image: Jisoo’s imagination x Gemini

Alongside major releases, models are often being updated in the background, with tweaks to improve performance.

All of this is happening in real-time, while we users try to wrap our heads around how to apply AI in our regular context.

But there’s some interesting data from the latest range of releases. Read Josh’s full LinkedIn article here to see what all the model updates really mean for us.

🕰️ The Analogue Edit

This drop, our friend Sam (NSW) shared what’s been helping him live slowly:

🌿 Planting for a Future You Won’t See

As a digital native who grew up in the city, never far from a phone, I’ve had to really practice the art of being offline.

Lately, that practice has looked like sitting at the feet of experts – people like the farmer and poet Wendell Berry. He writes about people, nature, and place in a way that’s both soothing and a bit jarring, especially when you’re used to the high-speed hum of a screen.

In his poem Manifesto: The Mad Farmer Liberation Front, he delivers a sound reality check – and a reminder of what we’re doing here at Clear AI: we want technology that elevates humanity, serving people for good, today and well into the future.

Ask the questions that have no answers.
Invest in the millennium. Plant sequoias.
Say that your main crop is the forest
that you did not plant,
that you will not live to harvest.
Say that the leaves are harvested
when they have rotted into the mold.
Call that profit. Prophesy such returns.
Put your faith in the two inches of humus
that will build under the trees
every thousand years.

I encourage you to read the poem in full, ideally slowly and on paper. 

🪵 Carving Time

The virtual world is thin. It lacks weight. Most of us are starving for something vivid – something that still exists when the power goes out. Something you might do sitting around a candlelit table or a campfire.

Lately, I’ve been trying to disconnect by picking up a knife and a block of timber for a bit of wood carving – whittling, to be exact. Why do I love it? 

It’s tactile, occasionally leaving splinters, and there is a real risk of cutting yourself. You cannot rush or go against the grain. It’s subtractive: you find the form by taking away material. It puts you in a flow state that a glass screen can’t simulate. It’s meditative. It’s human-paced.

I’ve been enjoying the work of artist Carol Russell lately – take a look at this gorgeous wombat.

If you have something helping you live slowly that you’d like to share, let us know at [email protected]. We’d love to hear it and maybe include it in the next drop!

Yours in humanity,

Keep Reading