
📣 Edinburgh-based author Ronee Hulk wrote a book on our AI-driven future, which came out in late 2025, called Dear Future: You Can Keep The Change.
I asked Early Line readers if anyone knew who Ronee - a pseudonym - actually is. Ronee themselves got in touch, almost straight away, and we began a correspondence… which led to three things.
First, I reviewed Ronee’s (excellent, thought-provoking) book. You can read that review in the ideas section of this Early Line, from mid-February. If you want an informed view of how AI might change our world, and the options we have as individuals and as a society, I think Ronee offers an accessible primer.
Second, Early Line readers sent in their questions about AI - those, and Ronee’s answers, are published below. They’re separate from the daily email simply because I didn’t want to edit them down too much to fit past Gmail’s censorious word-count limit: the questions, and answers, are interesting.
Finally, Ronee surprised me by putting together McIntoshbot.com, an AI version of me, Early Line editor Neil McIntosh. It is both flattering… and rather shocking. Ronee explains why they did it in one of the answers below. Do have a play around with it: it shows how straightforward it is for someone to put together a sophisticated AI-powered website, the likes of which would have been unimaginable three or four years ago - and Ronee insists it took no more time than a coffee break. (It’s worth saying: the Early Line is entirely powered by human effort, fuelled by coffee. The AI-created illustrations on this page are the most AI this newsletter uses.)
Readers’ questions answered
A Q&A with Ronee Hulk, author of Dear Future: You Can Keep The Change

An AI-generated image of Ronee Hulk, created by Google’s Gemini, based on this Q&A’s text
🗣️Paddy asks: What's the motivation for anyone to destroy the world? Haven't we always reabsorbed displaced labour? Are we really headed for a Matrix future?
RH: Your optimism makes sense, especially since it's grounded in history. The spinning jenny didn't end work 250 years ago, and neither did the internal combustion engine. Even self-checkout tills didn't eliminate entry-level retail jobs. Every new technology replaced some jobs but created others. We never knew what the new roles would be, but they always appeared. That's how it's always gone. But this time, there's an important difference.
Earlier technologies replaced physical labour. AI, on the other hand, replaces human thinking. When looms replaced weavers, designers were still needed. When spreadsheets replaced bookkeepers, accountants and auditors still had to check and approve the work. Now, AI can analyse data, write reports, make forecasts, and even make final decisions. For example, in the past two years, many big companies have quietly hired fewer graduates because their own AI tools can create presentations and market analyses in minutes. Most law firms now use AI to automate discovery. So, while JP Morgan's headline about AI adding $12 trillion to global GDP sounds exciting, this growth isn't because we're all working harder. It's because humans are becoming less central. The work will be better and cheaper.
You are right that businesses need customers. But productivity and employment can easily drift apart. If AI can make goods and services at no extra cost, profits can rise even as fewer people are employed. For a while, markets will likely celebrate this new efficiency and the higher profits it brings. But there will be a cost. Sooner or later, we'll see what kind of imbalance this causes. Someone will have to figure out how to support people who no longer have an income.
To address the opening question specifically, I would say that there isn't currently any evidence that the drivers of the revolution are motivated by a malicious intent to destroy the world. But what is underway is optimisation without brakes, which (I believe) will have a colossal impact on each of us, and not all of it good. Those who have read my book may be familiar with what I called 'The Extinction Equation' in the foreword; this is not about an evil machine, it is about compounding risk. We are not heading for a Matrix scenario where a single machine enslaves us overnight. The more probable future is softer and therefore more dangerous: a future where we delegate and willingly accept the convenience. We trade judgment for efficiency. It is incremental.
Are we still in charge? Sure, for now (in theory). In practice, increasingly not. The direction of travel is set by private laboratories, vast flows of capital and a geopolitical rivalry that is almost impossible to overstate. The illusion of control persists long after meaningful steering has gone. Your optimism should not be abandoned. But it would benefit from an update: Paddy v1.1.
🗣️Penny asks: What do you think of Palantir, government data and the NHS? What might this mean for UK citizens?
In Chapter 3, 'Behind the Curtain,' I discuss the power of algorithms and how they handle information. Chapter 4, 'The Cost of Perfect Health,' looks at healthcare. Palantir is central to both topics.
In the US, Palantir software has been used by ICE to integrate datasets informing immigration, travel and law enforcement. Here in the UK, its NHS contracts perform a number of functions: they consolidate hospital capacity, waiting lists and operational data into a single platform. On paper, this looks like efficiency. In practice, it is the centralisation of visibility. The NHS, as wonderful as it is, is ultimately a unified data estate linked to a single payer. That is both its strength and its vulnerability. A platform that can see national-level patient flow in real time could (theoretically) reduce waiting lists and optimise theatre use. If misused, it could also stratify access, model risk profiles and quietly influence resource allocation in ways that would be utterly invisible to the rest of us. The recent awarding of large NHS data platform contracts has quite rightly prompted questions from clinicians about data governance. So I would conclude that the issue is not whether optimisation is valuable, but who controls the optimisation logic. What, ultimately, will make the decisions that affect our access to healthcare?
Peter Mandelson's lobbying isn't the problem. The real issue is that once public data is built into private systems, it is very hard to switch providers. The government ends up depending too much on the vendor, which gives the company a great deal of power. That should worry all of us. If we take a moment to imagine a future where health models can label people as 'high cost' or 'low compliance', we can also picture a future in which the ethical risks get even bigger. Models trained on old data can, and do, carry bias. We've already seen policing in the US unfairly targeting minority groups. Healthcare risk scoring could do the same. The impact on UK citizens depends on how the systems are governed. With strong transparency and public audits, these systems could genuinely improve outcomes. Without that, there’s a risk they become black boxes making important decisions about care. That's why in Chapter 9, I call for an 'AI Ethics Charter' and an 'Actions and Consequences Register'. When data is collected and used at scale, it's only fair that people can see how, by whom, and for what purpose. Efficiency isn't neutral; it shifts power. The real question is whether that shift is open and accountable.
🗣️ Ewan asks: Should we bother learning AI? Or will it just become infrastructure like email?
Ten years ago, kids were encouraged to learn how to code, and governments bet heavily on this. Now, AI can write and deploy code better than most developers. Does that mean the advice was wrong? Not completely, but it was missing something. Education needs to quickly move away from training for specific careers and instead embrace adaptability. You don't need to know how models work to use AI, just like you don't need to understand email protocols to ping someone a message. But people who understand the way in which AI reasons, makes mistakes, and optimises will have a big advantage right now.
I'd guesstimate that over 99.99% of people in the UK don't realise they could build an app just by typing a description into a chatbot. That's the opportunity: if you can imagine it, you can build it. I don’t have any aspiration to become a motivational speaker, so I'll keep it brief and specific, with a real example. I spent no longer than five minutes typing some fairly basic instructions into a relatively accessible platform called Replit to create a cheeky homage to the Early Line’s editor, Neil McIntosh. It’s a real-world demonstration of what is possible right now, and it’s intended to be fun and light-hearted. Take a look at the output from this daft (and potentially libellous) misuse of Neil’s kind engagement with my book by visiting McIntoshBot.com. By way of background, I instructed Replit to create a series of games, an article generator and a chatbot themed around Neil McIntosh.

My point is this: if you have an idea that needs a web or mobile presence, you no longer have an excuse to sit there and do nothing. You don't have to rely on a third party to build and launch a web or mobile application. Over the past month, I've built several new web apps; just two years ago, each would have cost six figures or more and taken several months to build. Today, each misadventure cost me less than $300. These platforms currently let you retain complete ownership of the generated code without an exit fee; I doubt that will last forever, but it will for at least a few more months. That's the window of opportunity. Now is the time.
So the practical advice is this: learn how to prompt and interrogate AI systems. Learn where they are weak, not just where they are strong. You don’t need to master every tool under the bonnet. But you must not be passive. I'd encourage everyone to play around with replit.com, claude.com and Base44.com. The internet became basic infrastructure. AI will too. But before that happens, those who leverage AI as a partner will create the next wave of value.
🗣️David asks: Silicon Valley bosses send their children to tech-free schools. Would you let your children learn with AI?
The hypocrisy is real. Many tech leaders limit screen time for their own children, even while selling optimisation to others. That says a lot about how they view the risks. In Chapter 5, I criticise the factory model of education. In AI, we now have the beginnings of the first real way to personalise learning. A child who loves birds could learn statistics through migration data. A student interested in music could learn maths through harmony. Should we integrate AI into learning? Yes, but not uncritically. AI can be extraordinarily effective for foundational literacy and numeracy, and for critical thinking too. Evidence from pilot programmes already shows improved engagement and retention. But AI should not be a substitute for human mentorship.
Schools are not only knowledge factories. They are social laboratories. Conflict, boredom and collaboration are formative. So I would suggest that AI be deployed liberally for personalised pacing in mathematics and the sciences, for language learning through conversational immersion, and for experimentation (with guardrails). But I would insist on retaining human teachers as ethical anchors, and real-world projects that involve physical play. The real risk isn't the deployment of AI in learning spaces, but letting optimisation turn childhood into test scores and numbers.

I have four children. The two eldest (19 and 16) are familiar with some of the AI tools, and both are already fairly despairing of the direction of travel for the jobs market (without encouragement from their father). Neither of them has read my book from cover to cover, although they have read the Foreword, Chapter 2, Chapter 4 and the Epilogue. (I don’t wish to discourage anyone from reading the whole book, but I think a sufficiently clear picture of the threats and opportunities can be had from these four.) In terms of allowing my own children to learn using AI tools: absolutely, assuming the guardrails I have outlined above are in place.
🗣️David also asks: If we are the horses, whose needs are we serving?
In the Epilogue, I write that we are the horses, unaware that the car has already arrived.
In 1900, horses served human transport needs. When cars replaced them, it was because humans valued speed and efficiency. If we're like the horses, then right now we're serving the needs of optimisation itself. Corporations seek margin expansion. Governments seek growth. Consumers seek convenience. AI serves all three. No one wakes up wanting to replace humanity. But every board meeting that approves automation as a mechanism to reduce cost is contributing to that direction. We're not being replaced out of spite, but because the system values efficiency above everything else. The deeper question is not who wants us replaced, but what logic is driving the replacement. It's not about what people need. It's about the system's push for optimisation.
🗣️Colin asks: What about personal, private AI models instead of giant cloud systems?
My book largely focuses on cloud-based systems because that's where most power is concentrated right now. But on-device AI and edge models are improving quickly. Apple is embedding models locally. Open-source communities are compressing models that run on consumer hardware. Sovereign AI is emerging in multiple countries.
A decentralised AI system should mean that we don't have to depend on just a few big companies. It would also give us more privacy whilst reducing the risk of being monitored.
However, scale matters. The most sophisticated models need a lot of computing power, which gives an advantage to those who have it. Smaller models help protect independence, but they are not as capable as the largest systems. So yes, a less scary architecture is possible. But it requires investment in new open models, hardware sovereignty and a cultural preference for decentralisation. If we do nothing, things will naturally drift toward centralisation. If we try, decentralisation is still possible.
🗣️Christine asks: I wonder how far you see my job, teaching, replaced by AI - and I know that’s a common response, to think one was indispensable, but surely the personal input is actually vital in engaging with so many individual personalities over a career?
In Chapter 5, I say directly that secondary school classrooms as we know them could disappear within five years. But that does not mean the actual teaching of the material will be automated first. AI can explain algebra in many different ways, right away. It can spot when a student is confused and correct course. It does not tire. What remains human for longer is pastoral care, moral modelling, social arbitration and, most importantly, inspiration.
But we have to be honest. As AI improves, even some mentoring roles may be handled by machines. The key for teachers is to move up the value chain. They should become designers of educational experiences, not just deliverers of content. Teachers can interpret AI outputs whilst looking after students' development. Teachers who stick to the old lecture model may be replaced. Those who become guides for thinking and ethics will last a lot longer (but still not long enough).
🗣️Rosie asks: Have you bought land?
In Chapter 11, I say that land is the last asset you can't copy or make more of. You can't print land or create it with code. Land is the base for food, water and energy. We're already seeing tech leaders buy land for strategic reasons. That's no accident. Whether as a safety net or a core holding, land gives people optionality in a future where money might matter less. So the real question isn't whether I've bought land, but whether we understand why people at the forefront are quietly paying more for it. But yes, I have bought a manageable amount of land (a smallholding of circa 30 acres).
🗣️ Neil asks: Is there reassurance? Or have you given up hope?
I understand why the book might feel unsettling. It removes the comfort that comes from not knowing what will happen. But it isn’t nihilistic. The Epilogue is a call to action: let’s build things, start families, create and launch new concepts, and get involved instead of stepping back. I don't think we can stop where things are headed, but I do believe we can influence and shape how it happens. Reassurance doesn't come from pretending problems will just disappear. We need to recognise that human connection matters. Authentic creation still has value. We can still shape governance. Purpose can still be chosen. The worst thing we can do now is nothing. The future isn't fully in our control, but for now, we still have some influence over what happens. And that is enough reason to keep on keeping on.
Dear Future: You Can Keep The Change is available in a variety of formats from various sources: here’s a link to its Amazon page.