Sure. But tell me: what is faster? A tiny PicoLisp interpreter binary that
fits entirely into the 1st/2nd/3rd level cache and accesses memory without
wait states - or a huge, multi-gigabyte JIT engine that is, in itself, a
pure memory monster? (The sketch below lets you check the numbers on your
own machine.)
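
Here is a minimal sketch, plain C on Linux, to put numbers behind that
comparison. The binary path /usr/bin/picolisp is an assumption (adjust it
to your install); the sysfs files are the standard Linux cache topology
interface:

/* Sketch: compare an interpreter binary's size with the CPU cache sizes.
 * Assumes Linux sysfs cache topology and an assumed binary path. */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *binary = "/usr/bin/picolisp";  /* assumed install path */
    struct stat st;
    if (stat(binary, &st) == 0)
        printf("%s: %lld bytes\n", binary, (long long)st.st_size);

    /* L1d/L1i/L2/L3 sizes as reported by sysfs (index0..index3) */
    for (int i = 0; i < 4; i++) {
        char path[128], size[32];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cache/index%d/size", i);
        FILE *f = fopen(path, "r");
        if (!f) continue;
        if (fgets(size, sizeof(size), f))
            printf("cache index%d: %s", i, size);  /* e.g. "32K", "8192K" */
        fclose(f);
    }
    return 0;
}

If the binary size is well below the L2/L3 figure, the interpreter's hot
code path can stay cache-resident.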

My measurements show that small, tiny interpreters - especially for lambda
microservices - are much faster than any Microsoft/Oracle/Apple (LLVM is
heavily sponsored by Apple!) technology.

And then you will also notice that your "cloud memory footprint" (tens of
thousands of microservices running at the same time, each with different
customer data) goes down tremendously when you simply don't use any
"Wintel Alliance" technology - "We make slower software so you can sell
faster hardware!" (a club to which Apple and Oracle certainly belong!)

It saves you plenty of money when you simply don't use U.S. technology
(neither closed source nor open source) that takes a sledgehammer to crack
a nut.

Tiny interpreters like PicoLisp have tremendous advantages here. Also
don't forget to activate KSM (Kernel Same-page Merging) in Linux:
identical 4 KiB memory pages get merged, so thousands of instances share a
single copy in DRAM. (Strictly speaking, identical read-only pages of the
same binary are already shared via the page cache; KSM additionally merges
identical anonymous pages.) The sketch below shows how to opt memory into
it.
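
A minimal sketch of both steps, assuming a kernel built with CONFIG_KSM:
KSM only scans memory that a process has explicitly marked with
madvise(MADV_MERGEABLE), and the scanner itself is switched on through
/sys/kernel/mm/ksm/run (as root):

/* Sketch: opt a region of anonymous memory into KSM merging.
 * Requires a Linux kernel with CONFIG_KSM. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 4096;  /* 64 pages */
    /* Anonymous mapping filled with identical content - the kind of
     * pages KSM can merge across processes. */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memset(buf, 'x', len);

    /* Register the region with KSM; without this, KSM ignores it. */
    if (madvise(buf, len, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");

    /* Start the scanner once, system-wide (as root):
     *   echo 1 > /sys/kernel/mm/ksm/run
     * Merge progress shows up in /sys/kernel/mm/ksm/pages_shared. */
    return 0;
}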

Remember: *PicoLisp is a stroke of genius!*

Most people simply don't understand why, because they have fallen victim
to long-term U.S. advertising strategies: selling more and more hardware
to host bigger and bigger software packages. That nonsense has kept
Silicon Valley going for two decades now, pulling billions from our
pockets.

Have fun!

Guido Stepken


On Thursday, 26 March 2020, <[email protected]> wrote:

> Does anyone realize that there's an LLVM-based port of picolisp being
> worked on by Alex? :)
>
