Hi,

On Wed, Jan 07, 2026 at 09:41:12AM +0100, Thomas Schmitt wrote:
> I would answer to an AI if it would openly ask for advise about a topic
> where i feel apt to issue an opinion.

This is highly hypothetical since I am not aware of LLMs being in a
state where they identify gaps in their own "knowledge" and reach out to
experts in order to fill those gaps, but…

It feels like a very different proposition to spend time helping an LLM
as opposed to helping a human on a genuine quest for knowledge. I am not
sure I can put into words exactly why.

If an LLM were asking for help, and I knew it was an LLM, then I would
also know that any time I spent on responding would be me donating my
free labour to a huge corporation for it to make money off of in what is
likely to be a highly damaging bubble.

On the other hand, if it's a human, then at least I would feel like I
was giving my time to help another real person who isn't asking just
to enrich the shareholders.

It is perhaps a little illogical, since it doesn't seem to bother me
what the human would use the knowledge for, while it does bother me
what the LLM uses it for.

So all that said, I don't think I could see myself "answering to an AI
if it would openly ask".

It could be interesting if the AI companies were willing to co-operate
with content providers on consensual and transparent ways to ingest
data. Overall it might be more efficient for everyone. But as
we've seen, they are not willing to accept any restraint on their
actions; they won't obey robots.txt; they won't accept "no" for an
answer; they won't announce who and what they are. Under capitalism they
don't have to, so they don't.
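
For what it's worth, publishing the opt-out they ignore is trivial. A
robots.txt like the following uses two crawler names the companies
themselves document (OpenAI's GPTBot and Common Crawl's CCBot) and
asks each to stay away entirely:

    # Ask OpenAI's and Common Crawl's crawlers not to fetch anything
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

A crawler that honours the Robots Exclusion Protocol fetches nothing;
one that doesn't simply carries on, and nothing technical stops it.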

> Yes, AI development is mainly driven by greed. But it cannot strip me
> of my knowledge or convince me of its contemporary nonsense.

I think much of the harm of LLMs at the moment is to do with robbing
people of their time, not of their knowledge.

It's trivial for a user of an LLM to crank out vast quantities of
material for virtually no cost to them (putting aside the
environmental externalities). It might be costing the LLM service a
lot of money, but they are surfing a wave of imaginary money, so they
don't feel it.

So, there's AI slop on every social media platform, which takes
significant time to identify and debunk. There's AI slop in every
support forum like this one, which takes time to check for errors and
correct. If you have an open source project you get AI slop pull
requests and security bug reports, which cannot be ignored; you have
to spend your time checking them out.

It's an asymmetric attack on people's time: generating the slop costs
almost nothing, but the human effort needed to deal with it does not
scale.

> Of course there are AI owners who obviously strive for gaining control
> over their human users' mind. I would be glad to see other AIs
> fighting these attempts

It seems inevitable that much of our lives will soon consist of our
agents interacting with other people's agents.

You can already, for example, get your agent to plan (and book!) a
weeks-long vacation in another country, having it arrange travel,
accommodation and an itinerary of activities for every day of your
trip. To do this it talks to APIs designed for agents (AI agents,
that is, not travel agents).

If the bubble does not burst soon, children born this decade probably
won't know any other way of doing things. This will be a choice only in
the same way that having a smartphone is a choice: you can still find
people who refuse to use smartphones, but life gets increasingly
difficult for them.

Already many companies make it impossible to speak to a human without
first going through their LLM chat agent. Soon we'll have our own
agents offer to take away the tedium of doing that by doing it for
us. Some social media spaces are already just disinfo bots arguing
with disinfo bots.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting
