Read on for:
🥊 Deepfakes vs. democracy
💻 Link to our free Claude Cowork webinar next week
⚖️ What the Meta and Google lawsuit verdict means
📫 Fool me once.
Welcome to another edition of analogue. Glad you could make it.
You may have seen this meme doing the rounds last month. I’ve updated it à la Nano Banana to really capture the current collective feels.

Today is April Fool's Day. I thought about an elaborate AI-generated prank to pull on you (think: a video of a local petrol pump with Unleaded 91 at $1.20/litre) because I want to talk about deception. Not the harmless kind: the whoopee cushion, the fake spider, the “my friend has a crush on you. lol jks ew” (or was that just me who got fooled like that 🥲). The AI kind. The kind that is accelerating faster than our laws, institutions and instincts can keep up with.
Working in government for almost a decade, I saw firsthand how sophisticated influence operations were. And that was before generative AI handed advanced capabilities to anyone with a smartphone or laptop connected to the internet. So to get a more detailed (and hopeful) picture of the issues, I called Aussie-abroad-Oxford-PhD Callum Harvey who’s spent a lot of time thinking about and working on this stuff.
Keep reading for the full readout of that discussion (it’s quite grounding tbh) and a family box of AI knowledge nuggets curated by the ClearAI team.

Jisoo Kim
Co-Founder + Director
🇦🇺 AI News x Australia
Sovereignty over Scrapes: The government remains divided over whether AI giants can scrape Australian creative works for free. While Attorney-General Michelle Rowland ruled out a ‘free pass’ for AI training last year, tech lobbyists are now warning this could stall local investment. For now, Canberra is sticking to a ‘permission first’ model – prioritising the development of a paid licensing framework to protect local creators.
Comment: A clear licensing framework would give companies legal certainty while ensuring the people whose work trained these models actually benefit. But the threat that investment will walk if creators are protected deserves scrutiny. Big tech companies aren’t here for our lax IP rules. We should be reminding them that they’re here for stable governance, strong institutions, skilled talent, and access to a diverse market.
Supply Chain Surprise: Attackers poisoned LiteLLM – a popular, freely available software tool that developers use to connect AI applications to models like ChatGPT. This is known as a ‘supply chain attack’, where attackers compromise a component within your system that you already trust. The Australian Signals Directorate has flagged this type of attack as a growing risk to Australian organisations.
Comment: If your business uses AI tools, your risk doesn't stop at the tools you've chosen – it extends to everything those tools are built on. A compromised component that your team never directly chose or installed can still expose your data. We recommend doing an audit and knowing what's inside your AI stack, not just what's on top of it.
Laying the Ground Rules: Minister Tim Ayres released new expectations for data centre developers – requiring them to support renewable energy, use water responsibly, create local jobs, and make computing resources available to Australian start-ups. Projects that don’t align won’t be prioritised through federal approvals.
Comment: It seems like an attempt to get ahead of the backlash we’ve watched unfold in the US, where $64 billion worth of data centre projects were blocked or delayed in under a year as communities pushed back over power bills, water use, and the feeling that Big Tech was taking more than it was giving. Our government has answered the question of who should benefit (i.e. Australians) but hasn’t yet outlined how it plans to enforce that.
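For anyone wondering where to start with that supply-chain audit, here’s a minimal sketch in Python (standard library only, so nothing extra to install). It inventories every package installed in an environment along with the dependencies each one declares – the function name and the rough parsing of version specifiers are our own illustration, not any particular audit tool:

```python
# Minimal dependency-inventory sketch: list every installed package and the
# dependencies it declares, so you can see what your AI tools pull in
# underneath - not just what you installed yourself.
import re
from importlib import metadata

def inventory():
    """Return {package_name: [declared dependency names]} for this environment."""
    report = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "unknown"
        requires = dist.requires or []  # requirement strings, e.g. "httpx>=0.24"
        # Keep only the dependency name, dropping version specifiers and markers
        deps = sorted({re.split(r"[ ;<>=!~(\[]", req)[0] for req in requires})
        report[name] = deps
    return report

if __name__ == "__main__":
    for pkg, deps in sorted(inventory().items()):
        print(f"{pkg}: {', '.join(deps) or '(no declared dependencies)'}")
```

Running this against the environment your AI tooling lives in gives you a first map of the stack; a dedicated scanner can then check those names against known-compromise advisories.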
🤠 The AI Round-Up
The Good
OpenAI has indefinitely shelved plans for an erotic chatbot, citing concerns from staff and investors about the impact of sexualised AI content on society. One adviser warned the company could be building a “sexy suicide coach” – AI that pairs intimacy with vulnerability, with no clear safeguards. The fact that internal pressure killed this before launch is a rare win for conscience over commercialisation. It won't be the last time this question comes up – but this time, the right people said no.
Comment: OpenAI also just shut down Sora, having burned millions daily on a tool that could generate convincing synthetic video at scale. The market for “make any video you want” turned out to be smaller than expected – not least because the social value of that content is depreciating fast. Read on for The Feature, which touches more on this.
The Bad
AI data centre investment is holding up a large part of the US economy, accounting for 39% of recent GDP growth. The Iran war is now threatening that through energy disruptions, private credit wobbles, and oil-driven inflation all at once. Those shocks ripple out to Australians through rising costs (⛽️🥲) and superannuation exposure to global tech stocks. But the war has also exposed the core vulnerability: the global AI boom was built on cheap Middle Eastern energy, and that advantage is no longer certain.
Comment: This could push companies toward solar, wind and more efficient infrastructure – which is exactly where Australia has a real (still unrealised) advantage.
& The Ugly
A US jury found Meta and Google liable for designing deliberately addictive platforms accessible to children. The verdict went straight to design: infinite scroll, autoplay, and algorithms engineered to space out dopamine hits like a slot machine, targeted at children whose brains are still developing.
Comment: This is ugly on many fronts. Children were knowingly harmed for profit and, as a society, we have to sit with that. For Big Tech, the legal reckoning is just beginning: thousands of similar lawsuits are likely to follow. In Australia, our eSafety Commissioner is enforcing the U16s social media ban Big Tech lobbied against. This verdict just gave us more ammo.
🎟️ Events + 🎁 Goodies
🎟️ Claude Cowork Webinar
Tuesday 7 April, 10:30-11:15am AEST
Join us for a practical session on how Claude Cowork can automate your day-to-day file management, task handling, and business processes using nothing but basic language prompts. Oh, and it’s free!
🎁 Dabbling with Dispatch
Dispatch is the new feature that lets you assign tasks to Claude from your phone and have them completed on your computer – ready for when you’re back at your desk. If you can’t make our Claude Cowork webinar, read this guide on how to set it up (on Pro and Max plans only).
📝 The Feature
Something has changed in how we relate to information online. Most of us can feel it.
AI-generated images, videos and voices are now cheap, fast and convincing enough to fool anyone on a bad day. The result isn't a world full of people believing everything they see. It's a world where people have stopped trusting anything at all.
That loss of trust has a cost. Democracy doesn't run on perfect information – but it does need a shared baseline of reality. When that erodes, so does everything built on top of it.
So it’s probably good that I sat down with Callum Harvey for a chat. He’s currently doing his PhD at the Oxford Internet Institute researching cyber threat intelligence and AI policy. His prior roles span the Australian Department of Industry, Science and Resources, CyberCX, and the Harris Cyber Policy Initiative at the University of Chicago. We talked about where we are at with mis/disinformation and deepfakes: what's working, what isn't, and why the answer probably starts smaller and closer to home than we think.

Remember when today’s reality was something we’d only read about as fiction? At least it’s not a huge stretch of the imagination for us.
Image: Jisoo x Gemini
JK: I want to start with the landscape. When you look at AI-generated misinformation and disinformation right now, what are we actually dealing with?
CH: The way you can look at this in a very hopeful sort of way is that, yes, there is a lot of mis and disinformation online, but we also don't have a particularly great read as to whether any of it's effective. There is no easy way to measure the effectiveness of any kind of influence operation. That goes back to the Cold War if not before. If you read Thomas Rid's book, Active Measures, both sides were doing leaflet drops over East or West Berlin. It was a spectacle, but we don't know if any of that worked. And the same is true today.
JK: So we don't actually know if mis/disinformation and deepfakes are working?
CH: We don't. And that's actually kind of good news, in a way. But the inverse of that is you can't measure the policy response either. So people don't need to roll over and give up on this – they definitely shouldn't – because people do believe information that runs counter to fact, and that sucks. We need to build up people's media literacy to counter that. A lot of it is in the eye of the beholder, which I find really fascinating. A lot of people in government talk about how difficult it is to counter disinformation emerging from Russia and China, but we have no metric for whether it's doing what it says on the tin.
Read the full interview here.
🕰️ The Analogue Edit
From our mate Sam in NSW:
Lately, my four-year-old and I have been poring over Where’s Wally? books together. While I scan the page for Wally, Wenda, Wizard Whitebeard and friends, he sees more!
He notices the tiny, absurd sub-plots I’ve trained myself to filter out as ‘noise’: a knight tripping over his own shield, a seagull stealing a hat or a baker decorating a cake in the middle of a battle. He pauses, points, and breaks out into intermittent fits of laughter at the sheer silliness of it all. As adults, we’ve become experts at filtering, but this little human is seeing so much more because he isn’t looking for a result – he’s just looking.
It brings to mind the work of author Frederick Buechner, who issued a call to “pay attention” to the wonder of ordinary moments. He once noted: “It’s so easy to look and not see what we pass through in this world... If you’re like me, you see so little. You see what you expect to see rather than what’s there.”
Buechner diagnosed our modern condition as a kind of sleepwalking. He urged us to stop thinking, stop expecting, and stop living in the past or future – to simply stop ‘doing’ and start paying attention.
As we head into the Easter long weekend, I hope you find the time to slow down, and notice the things you usually filter out. And as the autumn air begins to crisp up, I’ll be practising what I preach by returning to a favourite slow-paced ritual: preparing a hearty bowl of Oyakodon (etymology: ‘parent and child’ chicken and egg bowl). There is something deeply grounding about the rhythmic slicing of spring onions and the gentle simmer of dashi that reminds me to stay in the room.
If you’ve made it this far, thanks for joining us again. We’d love to know your thoughts – please hit us up. Otherwise, Happy Easter and see you in the next drop.
Yours in humanity,
