When you have a short-running task that completes in 0.01 sec, a startup
delay of 1.3 seconds (or more) *is* the total execution time.
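To make the point concrete (a sketch, assuming a Unix shell with python3 on the PATH; a JVM's fixed startup cost is larger still):

```shell
# /bin/echo is a small native binary: its startup cost is negligible,
# so `time` reports a figure close to zero.
time /bin/echo 'Hello world!'

# python3 -c 'pass' does no useful work at all, so everything `time`
# reports here is pure interpreter-startup overhead.
time python3 -c 'pass'
```

Whatever the "real" figure is for the second command, it is all startup; the task itself is free.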
On Fri, Nov 29, 2019 at 10:32 AM david hoyt wrote:
> This kind of thing is really only interesting for shell piping in bash. It
> won’t help numerical, tensor, […]

This kind of thing is really only interesting for shell piping in bash. It
won’t help numerical, tensor, neural, simulation, or business codes. Good
FORTRAN environments can do hard numerical work faster. To a lesser extent, so
can C. In supercomputing, that advantage is reduced. I/O bandwidth […]
> They are probably much less interested in using these methods for
> long-running server processes.

At work, we were quite interested in making a native image of our API
server recently. Having a fast boot would have opened up a lot more
possibilities around where we could run it. Serverless […]

I believe at least some of the people working on this, and interested in
these results, would like to use Clojure for command-line utilities and
such, which tend to have quite short run times when implemented in
C/C++/Python/etc. They are probably much less interested in using these
methods for long-running server processes.
Hello world is fun, but doesn't say much. I would like to see benchmarks on an
actual application. Ideally it would take several JVMs (so also Graal and J9),
also use the commercial version for making a native image, and assess how much
memory is needed when run on the JVM and limit that, since […]
A quick comparison with Python:
> time python -c 'print("Hello world!")'
Hello world!
0.03s user 0.01s system 80% cpu 0.048 total
> /usr/bin/time -l python -c 'print("Hello world!")'
Hello world!
0.04 real 0.02 user 0.01 sys
6 maximum resident set size (MB)
Another way to achieve fast startup is to compile clojurescript to a nodejs
target and then use the nodejs library called pkg to bundle the nodejs
binary with the script. I haven't timed it but it's an interesting
alternative.
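For anyone curious, that pipeline would look roughly like this (a sketch only; it assumes shadow-cljs for the ClojureScript build, and the build name `app`, output path `out/app.js`, and binary name `mycli` are placeholders, not from the original post):

```shell
# 1. Compile ClojureScript for a Node.js target (shadow-cljs assumed).
npx shadow-cljs release app          # emits e.g. out/app.js

# 2. Bundle the Node.js runtime and the script into one executable with pkg.
npx pkg out/app.js --output mycli

# 3. The result starts without JVM (or separate Node install) startup cost.
./mycli
```

The tradeoff is binary size: the bundled Node runtime is tens of megabytes.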
On Tue, Nov 12, 2019 at 12:46 PM Colin Yates wrote:
> Do we have any […]

Do we have any idea how that memory saving scales?
I know a bunch of metadata isn't needed as it is HotSpot-specific, but are
there any other memory savings?
Sent from my iPhone
> On 12 Nov 2019, at 18:42, Alan Thompson wrote:
>
> In my initial post, I failed to mention the huge memory […]

In my initial post, I failed to mention the huge memory savings achieved by
the standalone executable (in addition to the startup-time savings).
Note that using *time* at the command line resolves to a shell built-in
command. We can get more information from the standard Unix version of time:
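For example (assuming bash; the external program usually lives at /usr/bin/time, with richer flags such as -v on GNU and -l on macOS/BSD):

```shell
# `time` here is the shell's own keyword: it can time a whole pipeline,
# but prints only real/user/sys.
time python3 -c 'print("Hello world!")'

# The external program can report more detail, e.g. peak memory; invoked
# plainly it works on both GNU and BSD userlands. Guard in case it is
# not installed.
if [ -x /usr/bin/time ]; then
  /usr/bin/time python3 -c 'print("Hello world!")'
fi
```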
Might be worth mentioning that lread and I are collecting information about
GraalVM here:
https://github.com/lread/clj-graal-docs
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note: you're probably aware of this, but in case you aren't. Running a JVM
application as a native image does start up faster, but throughput is lower
and latency is higher than running on the JVM.
I have found the same improvements with GraalVM binaries! It is amazing!
I was just testing zprint before releasing it, and here are the numbers for
a moderately sized program to start on a 2012 MacBook Air:
> java -jar zprint-filter-0.5.3    (same as above, using AppCDS) […]