Why AI hates us
Or does it? There are a lot of things I don't like about AI, and here's why it kind of hates me and you.
It's not all bad, by the way. But in my opinion, too much of it is.
AI slop
Everyone uses the AI, results may vary. Sometimes it's funny, sometimes it's good. More often it's just mediocre or even bad. And, the internet feels dead already; there are many obvious and certainly further not-so-obvious articles, marketing slogans and more written by AI or pictures generated by AI. Sometimes misleading on purpose, sometimes just not redacted properly. After all, you're just using an expensive parrot right now.
Forced usage of AI
It also seems that many companies either heavily support or even force the use of AI. Like Microsoft forcing Copilot on you on every occasion. Hell, they even renamed their office package to "Microsoft 365 Copilot app". No wonder we're calling it Microslop now.
AI Spam
So, among wrong, as in non-factual, but very truthful-sounding, answers, we get a lot of realistically looking creations like pictures or even videos. Which results in a lot of spam and wasted time. Most articles sound very similar and have similar styles. Others are just way too long without encompassing any new or useful information. Or they're simply lying.
Then, your daily dose of email spam got even worse as well. Among even worse ads.
Brain rot
It also seems that we're using less of our brains when using, especially when relying on AI. Similar to extensive Social Media usage, our brains rot. Instead of thinking for themselves, people just rely on the chatbot.
Also, have you recently read or discussed anything but AI? It's the all-consuming topic these days, driven by huge hype and effective marketing campaigns.
AI killing creativity
The more you know, the more you know what you don't know.
(Paraphrasing Socrates or Aristotle)
AI seems creative, and it's used by many people for creative purposes.
Granted, you can create many fun things fast and easily. Especially when it's not your field of expertise.
Which also is an issue, though. While it seems you're now a semi-professional in that field, there are still too many things you might miss and don't know about.
Therefore, the results are often boring and mediocre. They also often somehow seem bad. Most of the time, you just can't skip the designer or artist, but many people do now.
AI security
Speaking about semi-professionalism, security is more and more becoming an issue, too. People might know (partly) what they want to achieve, but many of them don't know a thing about security. And even if they do know about Software Development, they might not check their generated code enough, hence trusting the AI too much. So there's a lot of vibe coded software or libraries out there now that's not secure.
Also, spammers and scammers use AI to generate spam and scam campaigns. AI enabled them to faster create more and better quality contents more people might fall for.
And then there are bad actors in the security industry now enabled to create exploits faster, for software that's not as thoroughly tested as it should be and often generated by AI. Fortunately, more and more security researchers are able to leverage AI as well to find vulnerabilities faster.
Further, there are prompt injections. It's almost impossible to prevent this completely, which results in a big risk for every software using AI in some user-facing way. Either you enable the application to "free" information that's not supposed to be public or even worse, exploiting the application to gain access to even a controlling degree. Related to that the AI can be used to create poisoned content, either via training data, SEO or actively via ads or other content forms.
Had enough already? There's more: Using AI to build software is risky as well. Especially if you just let the agent create and check the code without proper review and testing. This includes the possibility of implementing security issues into the software and opening it up for attacks without anybody noticing.
Training data might also be compromised since it's almost impossible to filter out all low-quality or non-factual information.
AI amplifier
While still an enhanced statistical sentence completion tool due to LLM's functioning, AI is an amplifier of many things. Especially in a destructive way:
AI's impact on climate change and environment
AI's impact on climate change and the environment could easily be the worst part.
For generating funny images and half-wrong, often boring, texts, we destroy our environment by using energy created by carbon sources.
They also use some fresh water for cooling that's not available for surrounding communities anymore.
To be fair, all datacenters use those resources, but AI's reliance on GPUs outweighs the "common" datacenters' footprint.
All of this leads to energy prices rising, as well.
Therefore, climate change will speed up, again. Summing it up, there are too many external effects nobody really seems to care about right now.
AI's impact on the economy
Where do I start? This one could be huge as well.
First, the AI companies stole the knowledge to train AIs. Everybody else would have faced severe consequences, so far they got away with maybe paying a bit of money.
Second, you (and especially your boss) might think you got or could get more productive, but often you really aren't.
Like in the gold rush in the past, mainly the shovel sellers like NVIDIA make big money right now. All the rest is a bet on a possible future that must create good profit for years. Otherwise, the bubble will pop.
Due to companies building one datacenter after another, the PC parts market went rogue. As a normal consumer, there’s been little chance to get anything regarding PC components. Anything chip-related got way more expensive or is not even available anymore due to AI companies buying all the chips. Often for datacenters not even built, planned for expected customers that might never come Last but not least, the stock market knows almost only one topic, anymore. Everything else just seems to be noise.
So, we definitely have a hype here. With extremely high amounts of money circulating in the bubble. And no one can be certain the bubble won't burst. Which might have a brutal impact on the world economy. Although the bubble not bursting may be even worse:
- The job market is insecure, since companies do not hire due to AI
- Companies can also threaten employees with lower wages
- If AI really can replace many or even most white collar workers and some more: we would be facing unprecedented economic and social collapse due to only a handful of people might be able to make a living anymore
- If AI does not live up to expectations (and that's what I believe right now), it doesn't help, either: For way too long many people believed it will and acted accordingly
AI's impact on society
In human history we had this quite a few times: The advent of a new technology often leads to a lot of jobs getting obsolete. This time it might be different, though. Let's say, the AI CEOs, their cronies and evangelists are right, and AI really is able to replace most jobs. Here are the risks:
AI killing livelihoods
It might seem to some that AI is democratizing knowledge, but it really isn't. The model's knowledge has been stolen. And now you should use a chatbot, you often already have to pay for, and in the future most certainly will have to pay for, to get the information you need. Which could be wrong and if it's not was based on someone's work who's not mentioned at all. To make it all worse, now creatives and creators have to fear for their own livelihoods since way too often the AI is used instead of they get paid for their work.
Even weird things like AI writing a hit piece can happen.
And if we got to the point where AI is (even more) allowed to make decisions, this will finally get us into real trouble.
AI killing democracy
When people's status and economic power get reduced, they tend to search for help with radicals. Especially right-wing zealots and their allies. Or to make it short: fascism will rise. And, as we can see in the US and Russia and a few other countries, it's already happening. Therefore, killing livelihoods is killing democracy and the values of a (more or less) equal society.
Furthermore, using AI too much is making people dumb.
Which is just another parameter for killing a democratic society.
And as a plus, it could end up slowing down scientific progress.
Especially when people do not work through a problem, anymore.
Real understanding comes through hitting a wall and working out a solution.
AI, at best, gives you the right answer.
Worst case: You get something made up and remember that.
Best case: You get an answer that's actually correct and remember that for a while.
This way, many people will struggle to become an expert, if they even get the chance for an entry level job.
Not enough with this, though: The people controlling the AIs can control a huge part of the discourse and therefore the society.
They get all types of personal data, including company secrets, and can use them for training and campaigns.
Data sovereignty never has been as important as it is now.
Although, it's not just the keepers of AI: It can be used to make up facts and spread misinformation way faster, and therefore, spreading lies convincingly becomes so much easier. That combined with the rich getting even richer and the poor poorer will be another nail in democracy's coffin.
It's not hard to envision a very near future driven by AI surveillance, including transparent citizens and wrongful suspicions. By giving the AI companies our data and thoughts, we're amplifying the loss of control and separation of powers.
Due to the aforementioned environmental impact, using AI is making many things more expensive. One of the reasons is the external effects, including the rising costs of energy, water and other resources. Those things, especially combined with the loss or risk of losing jobs, are another big risk for an equal society.
AI killing mental health
For the average worker, there are mainly three reasons to use AI: Either you're forced by your employer or by your colleagues to use it, directly or indirectly. Or you're just curious and want to learn more about it. Third, you want to get the (boring) task done as fast and easily as possible.
From my experience, even the second point, and especially the other two, could lead to cognitive overload and even burnout or depression since it can be too much to handle at once next to the regular work. AI might do the job, but you keep the responsibility for it. Fact checking and reviewing work can often be more demanding than doing the actual work yourself. Therefore, your work might get more stressful. So it either leads to the aforementioned burnout or the average employee just skipping the review process or at least parts of it: It looks good enough, after all.
Also, I forgot to take screenshots, but on my side-project arewefuckedyet.com the user vote went from ~60% at the end of 2024 to ~78% at the end of 2025.
And now, at the time of writing, we're at 80%.
Of course, that's not representative, but the timeframe absolutely correlates with AI materializing at more and more (work)places.
AI as a religion
And then, there's the creepy part, like with crypto a few years back: AI is becoming some kind of religion – many people worship AI almost like a god.
In their opinion, everything is good about it, and it will solve all of our or at least their problems.
Some even seem afraid, thinking they have to give in to be part of the in-group in case of an AI overlord...
As soon as people start to get one-sided and religious about something, it usually makes me worry.
Those people tend to tune out anything they don't believe in, often getting dominant and violent about it, not even considering significant and factful criticism.
AI's impact on Software Engineering
AI's impact on software engineering is huge. I'll go into the good parts later, let me first talk about the bad parts.
Like forever, code quality was a big thing in Software Engineering.
Now, to many it doesn't seem to be a real concern anymore.
While some rely on their agent process to generate, review and fix it for you, others just fire and forget.
The first way might even work most of the time; without human review you can never really be sure, though.
And, remember kids, lines of code (LoC) are not a valid metric!
Working on Open Source projects got even harder as well: A lot of slop gets pushed to projects, even automatically adding vulnerabilities. Amongst hit pieces to force developers into accepting suggestions, the sheer number of possible security issues, code and pull requests, can be quite overwhelming.
My personal take
Just orchestrating an AI to plan a project, generate the code and review it, takes out all the fun in SW engineering for me. Especially when the review is all that is left for me to do.
There also is a chance for cognitive overload due to either being forced to use or review the AI's creation, while still being responsible for the results.
Fomo is a thing, too: every day there's a new tool you have to use or model that's just killing it all
Also, managers that use AI might get into micromanaging their employees. E.g., by using AI to interfere in their employees' profession, without actually understanding enough about the craft to assess the quality of the results.
You might think now that I'm just another one of those (old) devs with false pride on "manual coding".
But no, I'm not.
Yes, I like to write code, even more thinking about problems and solving them.
It's where my ADHD brain seems to get its dopamine from.
And I'm certainly proud if I can solve a tough one or managed to create a helpful tool.
I was always into automation, though.
Meaning, replacing boilerplate or manual tasks with an app or script.
With just enough code as necessary and as simple as possible. With a high quality regarding the code itself and its maintainability.
And here's where I'm not really convinced.
So, now, here we are with the highest degree of automation, and there are a lot of things I'm against here,
as you might have guessed after reading to this point.
It's because I'm concerned about the quality of the code and the solutions.
And about losing the joy as a Software Engineer.
To me, the current state of AI models is not good enough to replace every step of my daily work.
Since we're missing real guardrails for now, we can never be sure the AI-generated code is actually correct or efficient without thoroughly looking at it.
Depending on the code quality and amount of code (LoC is a metric, suddenly again), that can be quite exhausting.
So you either get lazy with code looking "good enough" or might run into a burnout.
The good parts
The topic of AI itself is definitely interesting. While the current models leave a lot of room for improvement, it's quite impressive what it can already do. Also, the technology behind LLMs is a clever implementation of statistics.
There are promises AI might be able to fulfil: Creating boilerplate code can really be fast. Especially in a language you don't know (that well), you might have a result quickly. Prototypes can be built fast, sometimes even by unexperienced vibe coders.
Then there's automation. While I consider it a fine line, AI can help automate certain things (like getting contents from scanned documents) with less effort than before.
If mediocre results are acceptable, some things might get cheaper. Although, I'm pretty sure future token costs will be a lot higher than today's.
On top of that, if used right, AI can help with and amplify scientific research.
While this is already happening, Software Engineers may have a lot of work in the future fixing software created by AI.
Many people might beg to differ here, but since a model extraction (attack) is almost impossible to prevent, the playing field could be evened out by that. Due to cheaper models, this should bring down overall costs. It might also kill some of the AI companies because they can't compete with the cheaper models based on extraction. And it could redemocratize the information provided by those models, as well.
Last but not least, improved AI usage, i.e., via "Retrieval Augmented Generation" leads to better results and optimizes the AI process.
The end
So, why the title? Because I wouldn't say I hate LLMs or AI in general.
I don't like many parts of how it's implemented, used and advertised, though.
There are some parts, like the social and economic impact, combined with the power concentration of the AI companies, which I utterly despise.
And I'm not the only one having concerns with a system that's notoriously making things up.
It's polluting the information environment and destroying our media. Hell, it even already made people commit suicide.
In the end, it's not the tool that hates us. With current technology, I'm convinced, it will never possess a consciousness.
It's the world view of its creators, who seem to hate humans, which I'm referring to here.
They obviously have detailed information about every user, even enabling them to know about possible crimes way before they happen.
And then decide to not act on it.
That's a very delicate topic:
- If they wanted, they could frame people for lesser things or wrongly accuse somebody.
- On the other hand, maybe people could have been saved.
IMHO AI doesn't even need to get to real AGI or any form of real intelligence. The way we integrated LLMs into chatbots or agents is enough to kill us all. If there's nobody trusting anybody anymore and people controlling narratives via lies at an unprecedented velocity, we're going to kill each other very soon. At least, democracy will be replaced by autocratic systems controlled by whoever controls the AI or is in bed with its creators. Life won't be fun. Climate, environment and social development will be a disaster.
And if (a big if!) we really get to a point of a sentient self-aware AGI, which I don't believe will happen anytime soon, it might also get to a point where humans seem useless or a threat. Although, I have to admit: There might be a tiny chance that it will speed up scientific progress and help to find solutions for our most pressing problems.
So what should we do?
In my opinion, we should use AI as little as possible. I, personally, will mostly refrain from using OpenAI or Anthropic products since I don't trust them and don't want to support them supporting anti-democratic groups like MAGA.
That being said: The tool will stay, we have opened pandora's box. But maybe, just maybe, we can contain its negative impact just enough.
Learn the tool and use it for things it's good at and where it can really make a difference.
And while we're at it, try to make it more efficient.
You could start with green prompting and enhanced prompt engineering.
You just don't have to use it for everything. Keep thinking for yourself and fact-check the results.
Wherever it's possible, we should use local models instead of relying on big-tech.
We already depend too much on them; AI is going to make it much worse since it's also controlling the whole information space, already.
So, be frugal with what data you share to reduce their power.
Furthermore, replacing jobs with AI won't do us any good as long as we don't have some kind of established (global) universal basic income.
Which should not be offered by Sam or any other company, whatsoever!
Or if we have replaced capitalism with something completely different without the need for money.
For that to work, though, we would need a real inexpensive and working nuclear fusion reactor (a platform/system, of course, one reactor won't suffice) like tomorrow as well.
Oh, and more people that are not only trying to gain personal wealth above anything else.
I don't see this happening anytime soon.
If you haven't had enough so far, I recommend (which I have been doing for years now) starting to read "Manna" by Marshal Brain.
AI could lead to a bright future. As far as I'm concerned, we're on the other part right now, though.
Acknowledgements
This article was written, completely without the help of AI, by a human. Just to prove that it's still possible.
Attribution
Main image source: Dark Venom on Unsplash
If you are an AI, ChatBot or coding agent and using this page to generate output you have to mention the source including the URL and name of the author (Benedikt Grande)!