I’m sure everyone and their mom is now aware of AI. This thing just won’t leave the headlines.
Well, I might as well add a headline of my own, then.
First, to my “favourite” part, the disclaimer: I am no IT professional, but I do work in an IT-adjacent department of a management consulting firm (and some of my colleagues seem to be fully on the AI bandwagon). I am also no lawyer, so I’m going to omit the discourse on intellectual property rights altogether. I am an Anglophone Literatures and Cultures undergrad, and quite a fan of Friedrich Nietzsche and, lately, Nassim Nicholas Taleb, so there goes my countercultural critical thinking cred. Most of my knowledge regarding generative AI was formed by various reporting in The Guardian and a few thinkpieces I found elsewhere. In light of all this, consider this post an educated rant, but don’t expect technical intricacies and links upon links of proofs (although I will add some links). And, last but not least: this rant is going to be primarily about generative AI (I’ll be referring to it as “genAI” below, to save me some typing), because that’s what the hype is primarily about, and I feel that this distinction is often so implicit that it gets lost. As for AI as such, I’ve no strong opinion on it yet: I’m not informed enough, and it’s too wide a subject.
With the housekeeping out of the way, let me present the two aspects that I consider absolutely crucial when it comes to genAI and that I feel are not stressed enough, with dire consequences. I will designate them as Human and Environmental.
While the latter is straightforward enough, the former needs some definition. I opted for the broader name because the aspect involves too many things in a highly interconnected manner: epistemology, ontology, ethics, etc., but all pertaining to us, people. So I’d rather split the issue of genAI, as I see it, along the lines of what it means for us as humans, and what it means for the not-us, the environment.
I. The Human Aspect
Let’s start with the creatives. A lot of the panic around genAI comes from artists and writers who feel that their livelihoods are in danger. At the risk of coming across as a b*tch, my advice to most of these people would be to seriously consider whether they haven’t overstretched their hobby. Not every skill of yours must be monetised (and I blame so-called hustle culture for pushing people towards that idea). A hobby isn’t supposed to bring you money; it’s supposed to bring you joy. And while it’s certainly admirable that someone works on improving their drawing or writing skills, doing so at the expense of your professional or financial wellbeing goes against common sense, really, because in the long term it undermines your own interests. Also, here’s a cute picture I saw on Pinterest that puts this matter into a different perspective:
Besides, while a good majority of us have a creative streak (hell, I’d go as far as to say that’s a healthy human feature!), not all of us are artists, so let’s chill. Let those who are hopelessly possessed by the muses do their thing.
The point is that if your industry (let’s call it that) gets upended by genAI, then something tells me your industry has been in trouble for a while now. Too many people can display their art and writing these days anyway; the whole place is oversaturated. The same applies to copywriting, and even to academia. Maybe, just maybe, there are industries for which genAI should serve as the last, not the first, nail in the coffin.
And yet, in my mind, for all kinds of creatives, there is a silver lining in the form of the biggest problem with genAI:
Generative AI has no concept of truth.
How on earth this fact isn’t emphasised in every single article on genAI is beyond me. One might suggest that the statement is always implicit, but my impression is that it’s a bit too implicit, to the point that the general public is blind to this simple but crucial fact.
I mean, a colleague of mine (and I reiterate, I kinda work in IT, some of my colleagues are into genAI, and we’ve had internal presentations about it) shared in our group chat that she screwed up a recipe and asked genAI for advice on how to fix it.
I sincerely hope it was tongue-in-cheek on her part, and maybe it was. But then I remember seeing stories along the lines of people making their dogs drink bleach to treat some tummy trouble, or what have you, because that’s what ChatGPT suggested they do. For the suffering pets and kids alone, the creators of such tools deserve a separate, unbearably stinky cauldron in hell.
Honestly, your guess is as good as mine as to how this is possible. Poor media literacy, that is, people are simply not informed that you cannot program truth into an algorithm? Or maybe post-truth culture, meaning people have stopped caring about truth altogether, as long as they’re either having fun or their lives are made superficially simpler, even if only by saving themselves the effort of typing a question into a proper search engine and going through several results? Rampant techno-optimism, whereby the more “advanced“ the thing, the more correct it is?
Regardless of why people are oblivious to the fact that genAI doesn’t actually know anything, one thing remains as clear as day: genAI is by its nature unreliable, in two ways. One, you cannot vouch for the correctness of its output (that what it returns when you ask it to draw a cat is actually a cat, and not some malformed creature). Two, you cannot vouch for the replicability of its output (that you would get the same cat every time you ask for a cat).
Although arguably you don’t always require the latter kind of reliability, you most certainly need the former. And that’s why the creative folk, in my opinion, should chill a bit, because you actually know what “cat“ is. And, more importantly, you can make connections: a “cool cat“ may look like Salem from Sabrina the Teenage Witch, or like Steve McQueen.
And yet, the hype is on. I can only wonder if its main propeller is all the money being thrown at it: at its development, its propagation, and research into it. I’ve just read an article in The Economist about several research groups trying to understand the nature of genAI’s so-called “hallucinations“ (what a cute name for blatant mistakes!) and how they want “everybody doing it“. Imagine all that drive, money and time going towards something tangibly positive?!
Also, notice all of that deceptive humanisation of AI. It’s “learning“. It’s “hallucinating“. Guys, no: it’s just a very, very complex statistical algorithm.
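To make “statistical algorithm” concrete, here is a toy sketch in Python. To be clear, this is not any real model’s code: the vocabulary and counts are made up, and real LLMs use billions of learned parameters rather than a little table. But the principle is the same: the next word is picked by rolling weighted dice over what tended to follow in the training text, with no fact-checking step anywhere.

```python
import random

# A toy "language model": for each word, counts of which words followed it
# in some (imaginary) training text. The numbers are invented for illustration.
bigram_counts = {
    "the": {"cat": 5, "dog": 3, "moon": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
}

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = bigram_counts.get(word)
    if not counts:
        return None  # nothing ever followed this word; stop generating
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, rng, max_len=5):
    """Keep sampling next words until a dead end or the length limit."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# Different random seeds can produce different continuations of the exact
# same prompt: there is no "truth" involved, only weighted dice rolls.
print(generate("the", random.Random(1)))
print(generate("the", random.Random(2)))
```

Nothing in this sketch knows what a cat is; it only knows that “cat” often followed “the”. That is the whole trick, just scaled up enormously.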
But hey, I probably just don’t get it. It’s the future! It’s progress! How exactly that’s supposed to happen is anyone’s guess. I mean, is it going to be better chatbots? Or indeed, as some seem to speculate, you’ll be able to get a full book or even a TV series based on your prompt, for your own amusement, for a fraction of the cost? And then what?
What really bothers me in this discourse is the umpteenth iteration of progress meaning our lives getting more convenient. Not better or more just, but simply faster, more comfortable. Less interaction with real people, too. Yay! Surely, progress in, say, the design of trains really is about making them faster and more comfortable. But even there you want them to also be affordable and sustainable, neither of which genAI is, and we’ll get to that in the second section, after I finish expressing my concern about the direction we’ve been bent on taking for the past few decades at least.
Comfort is good, but as with anything, too much comfort is plain bad for you, a point that runs through millennia of human thought save for the last few centuries. And then they dare to complain about how lazy people are, and whatnot. Of course they would be, if their environment is designed like that, if the implicit message is that a life of leisure and convenience is The Real Thing. I mean, Kim K can talk about getting your ass off the couch and working however many times she’d like, but she herself is a walking representation of leisure and comfort; all her work is about that, from reality TV to her underwear brand. Not that it doesn’t take a huge amount of work by many people to put it all together, but that’s something you have to think about; it’s not what you’re generally presented with. What you see everywhere is how your life should be easy and fun, how you deserve it, how it’s attainable in just a few clicks and, sometimes, dollars (and in the future even fewer clicks will be necessary!). And then you discover that you can’t put in any sustained effort without sabotaging yourself, because you shouldn’t be exerting yourself at all, it’s unfair, you deserve better than this! And god forbid you have to make a phone call.
There’s a cognitive dissonance between the constant messaging coming from entertainment and consumerist culture and, well, the plain reality of life: that you should, you know, do your work, take care of your loved ones, run errands, exercise, do your dishes; that there are some responsibilities beyond finishing the newest season of Bridgerton (gasp!). And, believe it or not, those responsibilities are actually good for you! They are part and parcel of The Real Thing. The (not so) funny thing is that this progress, a child of speed and comfort, is viewed as the pursuit of happiness and liberty, whereas if you ask me, we’re digging our own graves with it, spiralling out of control and burning our planet.
Which goes to show that genAI is a symptom of the times, not the great disruptor.
Now to that burning planet.
II. The Environmental Aspect
This, I feel, gets even less attention than the lack of the concept of truth in genAI.
As if we weren’t abusing the Earth’s resources enough, we needed to produce a sh*tton of data, store it in huge data centers, and then, on top of that, run AI models over it.
Would you believe it, “Google’s goal of reducing its climate footprint is in jeopardy as it relies on more and more energy-hungry data centres to power its new artificial intelligence products. The tech giant revealed Tuesday that its greenhouse gas emissions have climbed 48% over the past five years.“ By 2026 electricity usage of data centers worldwide is projected to double from its 2022 levels (2022! Not 2012 even: 2022!). And these things are also very thirsty: “Water usage is another environmental factor in the AI boom, with one study estimating that AI could account for up to 6.6bn cubic metres of water use by 2027 – nearly two-thirds of England’s annual consumption.“
Additionally, data centers take up a lot of space while creating few jobs. What they might create for local communities, however, is noise: not loud enough to be officially considered a hazard, but present enough to mess with the health of people living in the vicinity.
The funniest thing is then reading how these companies claim that they’re still determined to use clean energy, to invent better chips, to come up with more efficient ways of running the whole thing, etc. (because they’re so rich they can afford to at least try throwing even more money into the pit). Like, guys, that’s cute, but… promises, promises, and we know full well how that usually goes with rich guys with plenty of power. It just doesn’t go anywhere.
Because the truth is that these Tech Bros don’t care about the environment: why would they, if they aren’t going to suffer any consequences whatsoever? Are they gonna toil in 40°C heat because they work outdoors? No. Is their one and only house gonna be washed away by a tropical hurricane strengthened by the climate crisis? No. Are they gonna miss coffee or chocolate because it’s become too expensive due to the climate crisis? Nope. These guys have enough money to throw at almost any problem that comes their way (hell, the amount of money some of them are already throwing at “biohacking“ is mindblowing). What they care about is not our overall wellbeing (even if they say so, that’s a marketing ploy); what they care about is outdoing their competition.
All of that is just a very elaborate pissing contest. Just boys being boys at the expense of the entire planet.
But to get funding for their expensive toys, they have to figure out how to sell this thing to as many people as possible. Cue all the “it’s the future!“ and “it’s gonna make our lives so much easier, you guys“. I don’t know, I’d rather take a less overheated planet, please. Even if I have to write that cover letter or whatever myself. The same goes for clothes and whatever things they sell on Temu, but that’s another discussion.
Also, a single training run of a large language model (LLM), the kind of model behind ChatGPT, apparently costs 100 million dollars. 100 000 000. One run. I was kinda happy that the article didn’t mention how many runs usually happen, or how often, because I would’ve had to go search for the proverbial pitchfork.
You know, back in the Tsarist Russia of the 19th century, guys who made fortunes tended to spend at least some of them on charity: hospitals, schools, and the like. Was that the best system? Of course not. But at least the rich felt some obligation to give back. Even if it was out of the peer pressure of Christianity or honour culture, some people got their situation tangibly improved at no cost to themselves.
Oh, I almost forgot to mention another area of spending of those rich guys: Art.

This painting, Matisse’s La Danse, was commissioned by Sergei Shchukin, along with two others, for the stairway of his house, which was stuffed with contemporary art. And Shchukin opened parts of the house to visitors and even conducted tours himself from time to time. A whole generation of Russian artists got their education there.
I’m not even sure what it’d take for someone to persuade me that genAI and its creators are anywhere near this level of humanity.
On second thought, no, I’d still like to go for genAI with a mental pitchfork.
I want you, my dear reader, to consider getting one, too.
At least in the form of actively not engaging with genAI stuff like ChatGPT. If you’re in a position to, avoid or delay integrating genAI into whatever product or service you work on, startup or corporate.
Also, maybe ignore (gen)AI articles and videos, unless they cover its environmental impact, so as to lessen the traffic to content on the topic.
Because from where I’m standing, the whole thing is running on pure hype. I truly believe that genAI will implode within the next five years because of how financially unsustainable it is, which, alas, is the only metric people in power care about. There’s no good business case to support burning that amount of money. Unless they decide to funnel money from other areas of their businesses because it’s such a dear toy; plus, the sunk cost fallacy is a thing.
Of course, that doesn’t solve the bigger issue of AI and data centers (aka “the Cloud“). The only thing I can think of as of now, and this is gonna sound too radical and nigh impossible even for me, yet I must add it in good faith: don’t engage with content too much. Save your likes, save your comments: they have to be stored somewhere, and then processed! Even your views! None of that is truly free. So, go read a book. And turn off the cloud backups of your photos on your phone. On that note, I turned off the archive feature on my Instagram profile. Even more radically, you could also post and comment less, but I’ll leave that one up to you, since that’s key to communication, and we need that. But the next time you wanna post something hateful or overshare-y, maybe reconsider. For the sake of the planet.
Thank you for bearing with me through my rant. Of course, I didn’t cover everything there is to genAI. Of course, there are probably one or two decent and beneficial use cases for it. But my belief is that even so, it’s not worth it. As I said, I think it’s a bubble that’s gonna burst, big time.
In the meantime, I might update this programme post, my manifesto if you will, with new ideas along the two declared lines. Or maybe some of the readers will come up with more tips?
In any case, time will tell.
Articles for your consideration
Subprime Intelligence (Ed Zitron's Where's Your Ed At) — on how genAI doesn’t really know anything, plus the revealing nature of business ties between genAI companies and Big Tech giants like Amazon
The Staggering Ecological Impacts of Computation and the Cloud (The MIT Press Reader)
Power grab: the hidden costs of Ireland’s datacentre boom (The Guardian)
Google’s emissions climb nearly 50% in five years due to AI energy demand (The Guardian) — some stats I mentioned in the post
Real criminals, fake victims: how chatbots are being deployed in the global fight against phone scammers (The Guardian) — probably the only worthy use case of genAI I’ve seen so far. But then, the scams themselves are now powered by genAI, so why support this ouroboros at all?
And on a final note: go (re)watch WALL-E! See what handing over our personalities, our responsibilities, and our growth in skills for the sake of convenience does to us, and to the planet! And to think that it’s some 15 years old already!