On 2/8/26 3:38 PM, Steve Litt wrote:
> P.S. One can change one anniversary date by cancelling and
> resubscribing. I don't know what context might be lost in the process.
> I know that ChatGPT has no facilities for backup, so you have to do
> manual copy and paste, and their CSS use makes those pastes pretty hard
> to read. This is not unintentional, and I'll bet you 2 to 1 that if not
> now, then very soon, when you stop paying the money, all your info
> disappears. I'll bet Claude will be the same.

Claude in a browser certainly has limitations along the lines you describe. But Claude Code, though it has (pretty much?) the same LLM backend as the browser version, also has a binary installed on my computer (well, in a rather isolated VM). It has read-write permissions in the source directory tree I am working on, and in its own ~claude directory, but nowhere else*. When I fire up a new instance of the LLM, which I do frequently because managing how big the context is and what is in it matters (a lot of what I am learning), it can see what has been created up to that point, make changes, run tests, commit the changes to git (locally only), etc. In a sense the local software does all that copying and pasting (and grepping, and much more) on behalf of the LLM. That is what makes it practical to do real work.
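To make that "local software acting on behalf of the LLM" idea concrete, here is a rough Python sketch of the loop as I understand it. The tool names, the ask_model() placeholder, and the single writable directory are my own illustrative assumptions, not Claude Code's actual interface:

import json
import pathlib
import subprocess

WORKDIR = pathlib.Path("~/project").expanduser()  # the only writable tree

def run_tool(request):
    """Execute one whitelisted tool request inside WORKDIR, return text."""
    tool, arg = request["tool"], request["arg"]
    if tool == "read":
        return (WORKDIR / arg).read_text()
    if tool == "grep":
        result = subprocess.run(["grep", "-rn", arg, str(WORKDIR)],
                                capture_output=True, text=True)
        return result.stdout
    if tool == "run_tests":
        result = subprocess.run(["pytest", str(WORKDIR)],
                                capture_output=True, text=True)
        return result.stdout
    return "unknown tool: " + tool

def ask_model(history):
    """Stand-in for the real LLM call; here it always asks for a grep."""
    return {"tool": "grep", "arg": "TODO"}

history = []
request = ask_model(history)
history.append({"request": request, "result": run_tool(request)})
print(json.dumps(history, indent=2))

The point is only that the model never touches the filesystem directly; a small local harness does, and only inside one tree.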

Anthropic would like me to use Claude Code for everything, let it loose as user "kentborg", on my real machine, on all my real data, doing everything! No. I won't do that. I am not as negative on LLMs as you are, but I can go on at great length and depth about the problems and limitations they have, more of them every day. Don't mistake me for one of the AI bros.


> One more thing: Put everything you learn into one or many text files.
> Then, when you get a computer with 64GB RAM or 128GB RAM and a big
> processor, you can install a SLM (Small Language Model) on your
> computer […]

And don't forget to buy some powerful hardware for doing lots of matrix multiplication very quickly and a special memory architecture to keep it fed with new matrices to be multiplied (an Nvidia card, for example). I periodically muse in that direction, and were I dealing with more personal data I would have to do that.

(I do have another VM with Ollama installed on it, BTW.)
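For anyone curious, talking to it is just an HTTP call to Ollama's default port; this assumes some model ("llama3" here purely as an example) has already been pulled:

import json
import urllib.request

payload = json.dumps({
    "model": "llama3",        # assumption: whatever model you have pulled
    "prompt": "Say hello in one short sentence.",
    "stream": False,
}).encode()

request = urllib.request.Request(
    "http://localhost:11434/api/generate",   # Ollama's default endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])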

As for searching through my own content, I recently realized that a vector database, plus just enough AI to compute the embeddings, might be all I need; maybe no LLM layer on top of it at all, in which case there is no need for special hardware either.
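A minimal sketch of what I mean, using sentence-transformers and brute-force cosine similarity in place of a real vector database (the model name and the sample notes are just placeholders):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")   # small, runs fine on CPU

notes = [
    "Claude Code needs its context pruned regularly.",
    "Ollama lives in its own VM.",
    "Chat transcripts get saved as plain text files.",
]

# Embed each note once; normalized vectors make dot product = cosine similarity.
vectors = model.encode(notes, normalize_embeddings=True)

query = model.encode(["how do I manage LLM context size?"],
                     normalize_embeddings=True)
scores = (vectors @ query.T).ravel()
best = int(np.argmax(scores))
print(notes[best], scores[best])

At the scale of personal notes, none of this needs a GPU; it all runs on a laptop CPU.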

Again, I am *not* part of the AI hype. It is a crazy, unsustainable bubble based on an innovation that is a dead end on any road to "general artificial intelligence". But I am discovering that LLMs do have real value. My current project is to understand that value in practical terms.


-kb, the Kent who was never on the bitcoin bandwagon, either, because once he actually learned about it he concluded it was stupid.


* Running Claude Code as its own user is decidedly non-standard. That is me being conservative.
