Hello,
Just an update about performance. After more work with the
1.4M4-SNAPSHOT (as of 5/11/2013) on the Raspberry Pi, I noticed using htop
that this version consumes more CPU compared to the 1.4M3 version in
the same context (don't be alarmed by the startup times of M3 on the Pi,
remember: it is indeed a slow machine ...).
1) initial startup time (fresh install, time to go to the alpacas
banner):
- 5 min 47 s with 1.4M3
- 14 min 23 s with 1.4M4-SNAPSHOT
2) when Archiva is not used, htop shows that:
- the 1.4M3 version consumes no CPU
- the 1.4M4 snapshot version has a thread consuming roughly 88%
of the ARMv6 CPU
So I ran the same test on an OpenSuse 12.2 VMware VM on an i7 + SSD
(using OpenJDK 7 1.7.0_21, 64-bit) and obtained the following results:
1) initial startup time (fresh install, time to go to the alpacas
banner):
- 20 s with 1.4M3
- 20 s with 1.4M4-SNAPSHOT
2) when Archiva is not used, htop shows that:
- the 1.4M3 version consumes no CPU (just like the 1.3.6 version)
- the 1.4M4 snapshot version has a thread consuming roughly 10%
of the i7 CPU
Running Archiva on the Raspberry Pi with its preview JDK 8 is definitely
not a common use case for Archiva (maybe it will be in one or two years,
with 2x or 4x more powerful boards?).
But still, the difference between the M3 and M4 versions exists even on a
more conventional Archiva target.
Is this something new, or did I do something wrong?
Regards
Patrice
PS: using jstack and matching htop's thread pid (decimal) against jstack's
nid (hex), it seems that the thread involved is the same in both cases:
Raspbian/Oracle JDK8:
"pool-2-thread-1" #28 prio=5 os_prio=0 tid=0x00ec62c0 nid=0x1434 runnable [0x9e644000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:349)
        at com.lmax.disruptor.SleepingWaitStrategy.applyWaitMethod(SleepingWaitStrategy.java:66)
        at com.lmax.disruptor.SleepingWaitStrategy.waitFor(SleepingWaitStrategy.java:39)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
        at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:115)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
OpenSuse/OpenJDK7:
"pool-2-thread-1" prio=10 tid=0x00007faa28c6c000 nid=0x6082 runnable [0x00007faa76bcb000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:349)
        at com.lmax.disruptor.SleepingWaitStrategy.applyWaitMethod(SleepingWaitStrategy.java:66)
        at com.lmax.disruptor.SleepingWaitStrategy.waitFor(SleepingWaitStrategy.java:39)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
        at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:115)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
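For what it's worth, here is a minimal stdlib sketch (class and method names are mine, this is not the Disruptor code itself) of why a parking/polling wait like the one in the trace above shows up as CPU usage in htop while a truly blocking wait does not: parkNanos with a tiny timeout expires almost immediately, so the idle thread keeps waking up to re-check its condition. My reading, to be confirmed, is that those constant wake-ups are what the slow ARM core spends its ~88% on.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class WaitStrategies {

    // Polling/parking wait, as in the trace above: check a condition, then
    // LockSupport.parkNanos for a tiny interval, and repeat. Each park
    // expires almost immediately, so an *idle* consumer thread keeps waking
    // up -- which is what shows up as steady CPU usage in htop.
    static long sleepingWait(AtomicBoolean ready) {
        long wakeUps = 0;
        while (!ready.get()) {
            LockSupport.parkNanos(100L); // park ~100 ns, then poll again
            wakeUps++;
        }
        return wakeUps;
    }

    // Truly blocking wait: sleep on a latch and wake exactly once when
    // signalled; no CPU is consumed while idle.
    static void blockingWait(CountDownLatch signal) throws InterruptedException {
        signal.await();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean ready = new AtomicBoolean(false);
        CountDownLatch signal = new CountDownLatch(1);
        Thread producer = new Thread(() -> {
            LockSupport.parkNanos(50_000_000L); // "publish" after ~50 ms
            ready.set(true);
            signal.countDown();
        });
        producer.start();
        long wakeUps = sleepingWait(ready); // typically thousands of wake-ups
        blockingWait(signal);               // exactly one wake-up
        producer.join();
        System.out.println("polling wake-ups while waiting ~50 ms: " + wakeUps);
    }
}
```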
On 12/05/2013 09:21, Olivier Lamy wrote:
2013/5/12 Patrice Ringot <[email protected]>:
Hello
Just for information: the latest snapshot of 1.4-M4, which I downloaded
today, runs decently on a Raspberry Pi Model B (512 MB) under Raspbian
Wheezy and the Nov 12 preview of the Oracle JDK 8 for ARM processors. The
UI is responsive, and usage from Maven is very acceptable for a home
network (I just changed the -Xmx parameter to 256M instead of the 512M
default).
Good to know, thanks for sharing!
The only thing that does not work out of the box is the Tanuki wrapper as
packaged in the current distribution; it is too old to include the
required ARM files (the latest one from SourceForge is OK).
For license reasons we cannot upgrade: the license has changed to GPL, see
http://wrapper.tanukisoftware.com/doc/english/licenseOverview.html
Another thing I have noted (unrelated to the Raspberry Pi or this
particular version of Archiva) concerns the content of the wrapper.conf
file: it contains the version of each jar in the classpath. As I initially
used it as a template in my Archiva Puppet module, I had a problem when I
did the 1.4-M3 -> 1.4-M4 update "in place" (class-not-found errors, since
the jar filenames naturally changed between these two versions).
Extract from wrapper.conf:
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=lib/wrapper.jar
wrapper.java.classpath.2=%REPO_DIR%/archiva-jetty-1.4-M4-SNAPSHOT.pom
wrapper.java.classpath.3=%REPO_DIR%/jetty-server-8.1.9.v20130131.jar
wrapper.java.classpath.4=%REPO_DIR%/javax.servlet-3.0.0.v201112011016.jar
wrapper.java.classpath.5=%REPO_DIR%/jetty-continuation-8.1.9.v20130131.jar
...
So I have changed my way of dealing with this file, but I remembered that
I did not encounter this problem with the ElasticSearch service wrapper;
they actually manage the classpath differently:
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=%ES_HOME%/bin/service/lib/wrapper.jar
wrapper.java.classpath.2=%ES_HOME%/lib/elasticsearch*.jar
wrapper.java.classpath.3=%ES_HOME%/lib/*.jar
wrapper.java.classpath.4=%ES_HOME%/lib/sigar/*.jar
Using the * wildcard makes wrapper.conf and the Archiva version much more
independent of each other.
Good idea, that's something to test. Maybe even a single line:
wrapper.java.classpath.2=%REPO_DIR%/*.jar
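One detail that may be worth checking when testing this (an untested observation): the current wrapper.conf also puts a .pom file on the classpath (classpath.2 = archiva-jetty-1.4-M4-SNAPSHOT.pom in the extract above), which a %REPO_DIR%/*.jar glob would not match. A sketch that keeps a wildcard entry for it too, assuming the Tanuki wrapper expands * in any classpath entry:

# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=lib/wrapper.jar
wrapper.java.classpath.2=%REPO_DIR%/archiva-jetty-*.pom
wrapper.java.classpath.3=%REPO_DIR%/*.jar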
Any time to propose a patch?
Regards
Patrice
PS: I have also tested artifact deletion in the snapshot repository, and
it worked for me. This is very convenient.
--
Olivier Lamy
Ecetera: http://ecetera.com.au
http://twitter.com/olamy | http://linkedin.com/in/olamy