Re: Nexus vs rsync-repository releases

2011-08-03 Thread Jochen Wiedmann
On Wed, Aug 3, 2011 at 7:51 AM, Phil Steitz phil.ste...@gmail.com wrote:

 I am not so sure about that.  I guess it depends on what kind of
 simple scripts you are willing to use to automate things and how
 much you care about being certain that that things are working and
 you know exactly what is going on.  Personally, I would prefer that
 this machinery be visible and not require web gui games or plaintext
 passwords to work.  I like the way tomcat does this and I will
 volunteer to develop and document a similar set of Ant tasks for
 Commons if others are willing to move back to replicating o.a.c
 groupIds from p.a.o.

Phil, my simple script is the POM. With Nexus, my manual steps are
typically reduced to copying the -src and -bin files out of the
repository (wget's good for that and it copies the whole directory at
one go) and copying these to the dist directory. Honestly, there are
some things remaining like setting the -current links, updating the
README and perhaps the KEYS file and fixing the permissions. But these
are left to do anyways.

What's wrong with that procedure?

Jochen

-- 
Capitalism is the astounding belief that the most wickedest of men
will do the most wickedest of things for the greatest good of
everyone.

John Maynard Keynes (http://en.wikiquote.org/wiki/Keynes)

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Phil Steitz
On 8/2/11 11:09 PM, Jochen Wiedmann wrote:
 On Wed, Aug 3, 2011 at 7:51 AM, Phil Steitz phil.ste...@gmail.com wrote:

 I am not so sure about that.  I guess it depends on what kind of
 simple scripts you are willing to use to automate things and how
 much you care about being certain that that things are working and
 you know exactly what is going on.  Personally, I would prefer that
 this machinery be visible and not require web gui games or plaintext
 passwords to work.  I like the way tomcat does this and I will
 volunteer to develop and document a similar set of Ant tasks for
 Commons if others are willing to move back to replicating o.a.c
 groupIds from p.a.o.
 Phil, my simple script is the POM. With Nexus, my manual steps are
 typically reduced to copying the -src and -bin files out of the
 repository (wget's good for that and it copies the whole directory at
 one go) and copying these to the dist directory. Honestly, there are
 some things remaining like setting the -current links, updating the
 README and perhaps the KEYS file and fixing the permissions. But these
 are left to do anyways.

 What's wrong with that procedure?

Nothing, other than that is exactly what I do and I know exactly
what is going on.  I don't see any real saving, that's all and I see
no need to bring in proprietary gui-based software into the mix or
to wget stuff from remote hosts.  Nobody else seems to care about
this, but you actually should perform a step that I don't - which is
to verify the hashes after you move the stuff to dist/

Phil

 Jochen



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Jochen Wiedmann
On Wed, Aug 3, 2011 at 8:28 AM, Phil Steitz phil.ste...@gmail.com wrote:

 Nothing, other than that is exactly what I do and I know exactly
 what is going on.  I don't see any real saving, that's all and I see
 no need to bring in proprietary gui-based software into the mix or
 to wget stuff from remote hosts.  Nobody else seems to care about
 this, but you actually should perform a step that I don't - which is
 to verify the hashes after you move the stuff to dist/

Why should I? They have been built by Maven or Nexus automatically?


-- 
Capitalism is the astounding belief that the most wickedest of men
will do the most wickedest of things for the greatest good of
everyone.

John Maynard Keynes (http://en.wikiquote.org/wiki/Keynes)

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



[continuum] BUILD FAILURE: Apache Commons - Commons CLI - Default Maven 2 Build Definition (Java 1.5)

2011-08-03 Thread Continuum@vmbuild
Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=10810projectId=67

Build statistics:
  State: Failed
  Previous State: Failed
  Started at: Wed 3 Aug 2011 06:38:40 +
  Finished at: Wed 3 Aug 2011 06:38:52 +
  Total time: 12s
  Build Trigger: Forced
  Build Number: 23
  Exit code: 1
  Building machine hostname: vmbuild
  Operating system : Linux(unknown)
  Java Home version : 
  java version 1.6.0_24
  Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
  Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

  Builder version :
  Apache Maven 2.2.1 (r801777; 2009-08-06 19:16:01+)
  Java version: 1.6.0_24
  Java home: /usr/lib/jvm/java-6-sun-1.6.0.24/jre
  Default locale: en_US, platform encoding: ANSI_X3.4-1968
  OS name: linux version: 2.6.32-31-server arch: amd64 Family: 
unix


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean install deploy   
Arguments: --batch-mode -Pjava-1.5
Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: COMMONS_SCHEDULE
Profile Name: Maven 2.2.1
Description: Default Maven 2 Build Definition (Java 1.5)


Test Summary:

Tests: 361
Failures: 0
Errors: 0
Success Rate: 100
Total time: 0.3438





-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Phil Steitz
On 8/2/11 11:30 PM, Jochen Wiedmann wrote:
 On Wed, Aug 3, 2011 at 8:28 AM, Phil Steitz phil.ste...@gmail.com wrote:

 Nothing, other than that is exactly what I do and I know exactly
 what is going on.  I don't see any real saving, that's all and I see
 no need to bring in proprietary gui-based software into the mix or
 to wget stuff from remote hosts.  Nobody else seems to care about
 this, but you actually should perform a step that I don't - which is
 to verify the hashes after you move the stuff to dist/
 Why should I? They have been built by Maven or Nexus automatically?

When you transfer files on the Internet, they can get corrupted in
transit.  This is why we *have* hashes on the files.   When you put
the files on p.a.o and we VOTE on the bits there, you can just move
them to /dist without worrying about integrity.
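[For reference, a minimal sketch of the kind of integrity check being talked
about here: recompute a downloaded artifact's digest and compare it against the
published .sha1 sidecar file. The file names and the standalone class are only
illustrative, not the actual Commons release layout or tooling.]

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FileReader;
    import java.security.MessageDigest;

    public class VerifyHash {
        // Usage: java VerifyHash commons-foo-1.0-src.tar.gz
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            FileInputStream in = new FileInputStream(args[0]);
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);            // hash the artifact as downloaded
            }
            in.close();
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            // The .sha1 sidecar published next to the artifact holds the expected value.
            BufferedReader r = new BufferedReader(new FileReader(args[0] + ".sha1"));
            String expected = r.readLine().trim().split("\\s+")[0];
            r.close();
            System.out.println(expected.equalsIgnoreCase(hex.toString()) ? "OK" : "MISMATCH");
        }
    }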

And I seriously hope nothing is being built by nexus :)

Phil




-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Luc Maisonobe

On 03/08/2011 08:40, Phil Steitz wrote:

On 8/2/11 11:30 PM, Jochen Wiedmann wrote:

On Wed, Aug 3, 2011 at 8:28 AM, Phil Steitz phil.ste...@gmail.com wrote:


Nothing, other than that is exactly what I do and I know exactly
what is going on.  I don't see any real saving, that's all and I see
no need to bring in proprietary gui-based software into the mix or
to wget stuff from remote hosts.  Nobody else seems to care about
this, but you actually should perform a step that I don't - which is
to verify the hashes after you move the stuff to dist/

Why should I? They have been built by Maven or Nexus automatically?


When you transfer files on the Internet, they can get corrupted in
transit.  This is why we *have* hashes on the files.   When you put
the files on p.a.o and we VOTE on the bits there, you can just move
them to /dist without worrying about integrity.

And I seriously hope nothing is being built by nexus :)


I am also worried about a closed tool like Nexus being used to publish 
Apache stuff.


Luc



Phil






-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Simone Tripodi

 I am also worried about a closed tool like Nexus being used to publish
 Apache stuff.

 Luc


This is curious, indeed. I always wondered why we use Nexus instead of
eating our own dog food, Apache Archiva [1], but I was worried about getting a
reply like "why didn't you RTFM on http://xyz..." :P
Have a nice day!!!
Simo

[1] http://archiva.apache.org/


http://people.apache.org/~simonetripodi/
http://www.99soft.org/

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] Improving tests and performance of RandomGenerator implementations

2011-08-03 Thread Luc Maisonobe

On 01/08/2011 22:40, Luc Maisonobe wrote:

Hi Phil,

On 01/08/2011 20:39, Phil Steitz wrote:

On 8/1/11 1:31 AM, luc.maison...@free.fr wrote:

Hi Phil,

- Mail original -

In my own applications, I noticed what appears to be poor
performance in the nextInt(int) method of the Mersenne twister,
which I was using to *improve* speed. I think that for small n, the
default implementation in BitStreamGenerator may be running too many
iterations.

Mersenne twister uses a quite large pool. It creates pseudo-random bits
by twisting it and creates large bunches at a time (624 words at a
time).
Hence when you ask for large sets, you should have several calls that
return fast, and one call that takes a longer time to generate another
large pool.

So good performances are obtained for generating large sets, not
small sets.
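
[A small sketch of the pooled behaviour described above - not the actual
Mersenne twister code, just the shape of it: most calls only read from the
pool, and one call in every POOL_SIZE refills the whole pool in bulk, which is
why individual call times are uneven but large batches are cheap.]

    abstract class PooledGenerator {
        private static final int POOL_SIZE = 624;    // words regenerated per refill
        private final int[] pool = new int[POOL_SIZE];
        private int index = POOL_SIZE;                // forces a refill on first use

        /** The expensive step: recompute the whole pool at once ("twisting"). */
        protected abstract void refill(int[] pool);

        /** Cheap on most calls; pays the refill cost once per POOL_SIZE words. */
        public int nextWord() {
            if (index == POOL_SIZE) {
                refill(pool);
                index = 0;
            }
            return pool[index++];
        }
    }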

Well generators should be faster and are preferred over Mersenne
twister now,
which is now an old generator. Well generators also have large pools,
but they
don't generate bits in large batches in advance, they do generate a
few words
at a time.


Yeah, I know. Both are faster than the JDK, though, even for just
32-bit chunks in my tests at least.

One thing I have been thinking about is exposing nextInt[],
nextDouble[], nextGaussian[] etc methods that take advantage of the
pools. So you basically generate a large block of bits use this to
fill the output arrays.


Seems a very good idea. Most of the time, people generate only one kind
of numbers several times, so it really does make sense.
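
[A rough sketch of the kind of bulk method being proposed - the name and
signature are invented here, and next(int bits) stands in for the generator's
raw bit source; nothing like this exists in the library as of this thread.]

    /** Hypothetical bulk variant: fill an array with uniform doubles in [0, 1)
     *  straight from the raw bit pool, one pass, no per-value overhead. */
    void nextDoubles(double[] dest) {
        for (int i = 0; i < dest.length; i++) {
            long high = ((long) next(26)) << 26;   // top 26 random bits
            int  low  = next(26);                  // bottom 26 random bits
            dest[i] = (high | low) * 0x1.0p-52;    // 52 bits scaled into [0, 1)
        }
    }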




I am still figuring out how the code works, but I
thought it would be good to run some benchmarks - using Gilles' new
stuff - against the Harmony implementation in java.util.Random of
this method. That led me to notice that there are no unit tests for
BitstreamGenerator. I propose that we add
0) RandomGeneratorAbstractTest with an abstract makeGenerator
method including fixed seed tests for all RandomGenerator methods
1) BitstreamGeneratorTest extending RandomGeneratorAbstractTest
implementing makeGenerator with a BitStreamGenerator that uses the
JDK generator for next(int)
2) Make the test classes for Mersenne and Well generators extend
RandomGeneratorAbstractTest, moving redundant tests up into the base
class

Sound reasonable?

+1
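
[A skeleton of the test layout proposed in 0)-2) above, as a sketch only: the
class names follow the proposal, the imports assume the Commons Math 2.x
org.apache.commons.math.random types, and the test body is a placeholder
rather than the real fixed-seed assertions.]

    import org.apache.commons.math.random.MersenneTwister;
    import org.apache.commons.math.random.RandomGenerator;
    import org.junit.Assert;
    import org.junit.Test;

    /** 0) Base class: fixed-seed checks shared by every RandomGenerator impl. */
    public abstract class RandomGeneratorAbstractTest {

        /** Each concrete test class supplies the generator under test. */
        protected abstract RandomGenerator makeGenerator();

        @Test
        public void testNextIntBoundedRange() {
            RandomGenerator g = makeGenerator();
            g.setSeed(42L);                       // fixed seed => reproducible run
            for (int i = 0; i < 1000; i++) {
                int v = g.nextInt(37);
                Assert.assertTrue(0 <= v && v < 37);
            }
        }
        // ... further fixed-seed tests for nextDouble(), nextGaussian(), etc.
    }

    /** 2) Existing generator tests extend the base; redundant tests move up. */
    class MersenneTwisterTest extends RandomGeneratorAbstractTest {
        @Override
        protected RandomGenerator makeGenerator() {
            return new MersenneTwister();
        }
    }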


Also, any recollection why we are using a
different implementation in BitStreamGenerator for next(int) than
Harmony and the JDK use?

I don't understand what you mean. next(int) is used to generate the raw
bits and is the heart of each generator. Each generator has its own
implementation. Replacing next(int) by the JDK generation would imply
dropping completely Mersenne twister and Well generators.


I am sorry. I meant nextInt(int). It is that code that seems to be
slow in BitStreamGenerator and different from the JDK and Harmony.


Could you point me at some code? There are many pitfalls in nextInt(int)
if one wants to make sure the generator is uniform, which explains the
strange implementation, with the mask computation and the loop. By the
way, even this implementation would benefit from your proposed array
generation, as the mask could be computed only once.


I have looked at the implementation for JDK and Harmony and am a little 
puzzled.


The trick for the power of two (i.e. if ((n & -n) == n)) is not useful
for the very elaborate generators like Mersenne twister or Well. Both 
are proven to be equidistributed even for the low order bits. They are 
based on linear recurrences but not linear congruences and do not suffer 
from the drawbacks of the latter.


What puzzles me more is the loop. It is documented as avoiding the 
uneven distributions, but at first glance the modulo operation bothers 
me. As documentation explicitly states it is designed for this, it is 
most probably true, I simply don't understand how yet.


So our current implementation is slow, then go ahead and change it to 
the one you showed me. I would simply suggest to get rid of the
((n & -n) == n) test. I'll try to understand the condition in the while loop
to understand how it rejects uneven distributions, just out of curiosity 
for myself.


Luc



Luc




Phil


Mersenne twister and Well should be fast for generating large sets, but
most importantly they have very good and *proven* properties
(equidistribution
on large dimensions, null correlation, maximal period ...). These
properties
are essential for example in Monte-Carlo simulations with lots of
variables that
must be independent or have controlled correlations.

Luc


The Harmony impl is almost identical to
what is documented in the JDK javadoc.

Phil

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org




Re: [math] Improving tests and performance of RandomGenerator implementations

2011-08-03 Thread Luc Maisonobe

On 03/08/2011 09:38, Luc Maisonobe wrote:

On 01/08/2011 22:40, Luc Maisonobe wrote:

Hi Phil,

On 01/08/2011 20:39, Phil Steitz wrote:

On 8/1/11 1:31 AM, luc.maison...@free.fr wrote:

Hi Phil,

- Mail original -

In my own applications, I noticed what appears to be poor
performance in the nextInt(int) method of the Mersenne twister,
which I was using to *improve* speed. I think that for small n, the
default implementation in BitStreamGenerator may be running too many
iterations.

Mersenne twister uses a quite large pool. It creates pseudo-random bits
by twisting it and creates large bunches at a time (624 words at a
time).
Hence when you ask for large sets, you should have several calls that
return fast, and one call that takes a longer time to generate another
large pool.

So good performances are obtained for generating large sets, not
small sets.

Well generators should be faster and are preferred over Mersenne
twister now,
which is now an old generator. Well generators also have large pools,
but they
don't generate bits in large batches in advance, they do generate a
few words
at a time.


Yeah, I know. Both are faster than the JDK, though, even for just
32-bit chunks in my tests at least.

One thing I have been thinking about is exposing nextInt[],
nextDouble[], nextGaussian[] etc methods that take advantage of the
pools. So you basically generate a large block of bits use this to
fill the output arrays.


Seems a very good idea. Most of the time, people generate only one kind
of numbers several times, so it really does make sense.




I am still figuring out how the code works, but I
thought it would be good to run some benchmarks - using Gilles' new
stuff - against the Harmony implementation in java.util.Random of
this method. That led me to notice that there are no unit tests for
BitstreamGenerator. I propose that we add
0) RandomGeneratorAbstractTest with an abstract makeGenerator
method including fixed seed tests for all RandomGenerator methods
1) BitstreamGeneratorTest extending RandomGeneratorAbstractTest
implementing makeGenerator with a BitStreamGenerator that uses the
JDK generator for next(int)
2) Make the test classes for Mersenne and Well generators extend
RandomGeneratorAbstractTest, moving redundant tests up into the base
class

Sound reasonable?

+1


Also, any recollection why we are using a
different implementation in BitStreamGenerator for next(int) than
Harmony and the JDK use?

I don't understand what you mean. next(int) is used to generate the raw
bits and is the heart of each generator. Each generator has its own
implementation. Replacing next(int) by the JDK generation would imply
dropping completely Mersenne twister and Well generators.


I am sorry. I meant nextInt(int). It is that code that seems to be
slow in BitStreamGenerator and different from the JDK and Harmony.


Could you point me at some code? There are many pitfalls in nextInt(int)
if one wants to make sure the generator is uniform, which explains the
strange implementation, with the mask computation and the loop. By the
way, even this implementation would benefit from your proposed array
generation, as the mask could be computed only once.


I have looked at the implementation for JDK and Harmony and am a little
puzzled.

The trick for the power of two (i.e. if ((n & -n) == n)) is not useful
for the very elaborate generators like Mersenne twister or Well. Both
are proven to be equidistributed even for the low order bits. They are
based on linear recurrences but not linear congruences and do not suffer
from the drawbacks of the latter.

What puzzles me more is the loop. It is documented as avoiding the
uneven distributions, but at first glance the modulo operation bothers
me. As documentation explicitly states it is designed for this, it is
most probably true, I simply don't understand how yet.

So our current implementation is slow, then go ahead and change it to
the one you showed me. I would simply suggest to get rid of the
((n & -n) == n) test. I'll try to understand the condition in the while loop
to understand how it rejects uneven distributions, just out of curiosity
for myself.


OK, I finally understood the algorithm and how it rejects the largest 
incomplete numbers from k*n to (2^31)-1 where k*n is the largest 
multiple of n that fits in a positive integer. The trick lies in the 
addition of (n-1) which overflows the integer and wraps the result back 
to negative values. It is smart.
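
[For reference, the algorithm under discussion written out - essentially the
one documented in the java.util.Random javadoc, with the rejection condition
Luc describes spelled out in comments; next(31) stands for the generator's
raw 31-bit source.]

    /** Uniform int in [0, n); next(31) returns 31 random bits (0 .. 2^31 - 1). */
    int nextIntBounded(int n) {
        if (n <= 0) {
            throw new IllegalArgumentException("n must be positive");
        }
        if ((n & -n) == n) {                      // n is a power of two
            return (int) ((n * (long) next(31)) >> 31);
        }
        int bits, val;
        do {
            bits = next(31);
            val  = bits % n;
        } while (bits - val + (n - 1) < 0);       // draw fell in the incomplete top block
        // bits - val is floor(bits/n)*n; only when bits >= k*n (the largest multiple
        // of n that fits in a positive int) does adding (n - 1) overflow past
        // 2^31 - 1 and wrap negative, so exactly those draws are rejected and
        // retried, which keeps val uniform over [0, n).
        return val;
    }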


+1 to use it.

Luc



Luc



Luc




Phil


Mersenne twister and Well should be fast for generating large sets, but
most importantly they have very good and *proven* properties
(equidistribution
on large dimensions, null correlation, maximal period ...). These
properties
are essential for example in Monte-Carlo simulations with lots of
variables that
must be independent or have controlled correlations.

Luc


The Harmony impl is almost identical to
what is documented in the JDK javadoc.

Phil


Re: Nexus vs rsync-repository releases

2011-08-03 Thread Jochen Wiedmann
On Wed, Aug 3, 2011 at 8:45 AM, Luc Maisonobe luc.maison...@free.fr wrote:

 I am also worried about a closed tool like Nexus being used to publish
 Apache stuff.

That discussion took place years ago, when Nexus was introduced. And,
for the record, I strongly opposed Nexus at that time in favour of
other solutions, in particular Archiva, for that very reason.

Now that we have Nexus, I won't fight such wars again and again. It's
there and it's gonna stay. Just like Jira, or Confluence, for that
matter, and I can't remember similar discussions in that area.

Jochen



-- 
Capitalism is the astounding belief that the most wickedest of men
will do the most wickedest of things for the greatest good of
everyone.

John Maynard Keynes (http://en.wikiquote.org/wiki/Keynes)

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [GUMP@vmgump]: Project jakarta-turbine-jcs (in module apache-commons) failed

2011-08-03 Thread Thomas Vandahl
On 01.08.11 09:08, Stefan Bodewig wrote:
 I've added a dependency on velocity-tools-view to the Gump descriptor
 which may or may not fix the problem (velocity-tools-view currently
 doesn't build itself in Gump and unless I managed to fix that as well,
 JCS will simply not be built at all).

Thanks for the fix.

 JCS is using a special Ant build file just for Gump.  Do you intend to
 keep it that way or would you prefer to switch to building with
 mvn(2|3)?  Here I assume mvn is the devs' build tool of choice and the
 Gump build file just an artifact of the times where Gump didn't support
 mvn properly.

I'm planning to switch to Maven 2/3 completely, just keeping the old
build files for reference for a while. I need to find the time to clean
this up, though.

Bye, Thomas.

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Mark Thomas
On 03/08/2011 06:36, Jochen Wiedmann wrote:
 On Tue, Aug 2, 2011 at 4:29 PM, Mark Thomas ma...@apache.org wrote:
 
 What benefits do we get by using Nexus
 
 A real lot of work saved. (Believe me. I've been RM for the same
 projects with and without Nexus and it's really a difference.)

I fail to see why this is the case. Tomcat releases to the Maven repos
and to /dist and doesn't go anywhere near Nexus yet is still minimal
effort. An entire release takes 2-3 minutes of actual effort (and about
90 minutes of waiting for stuff to upload).

Mark

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [logging] logging vs slf4j

2011-08-03 Thread Stephen Colebourne
My thought is that there might be some java.util.logging helpers that
could be written, and perhaps they might go in [lang] if there are 5
or fewer classes.
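
[Purely illustrative, since the idea is only floated here: one example of the
sort of small java.util.logging helper that might be meant - a guarded
formatting wrapper. The class and its name are invented.]

    import java.util.logging.Level;
    import java.util.logging.Logger;

    /** Hypothetical helper: pays the String.format cost only if the level is on,
     *  something plain java.util.logging callers otherwise repeat by hand. */
    public final class JulHelper {

        private JulHelper() {
        }

        public static void log(Logger logger, Level level, String format, Object... args) {
            if (logger.isLoggable(level)) {
                logger.log(level, String.format(format, args));
            }
        }
    }

For example: JulHelper.log(log, Level.FINE, "loaded %d entries in %d ms", count, elapsed);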

I assume that slf4j and log4j have their own j.u.logging connections,
so that end is dealt with.

The time of [logging] has probably passed.

Stephen


On 3 August 2011 06:50, Henri Yandell flame...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 1:59 AM, Emmanuel Bourg ebo...@apache.org wrote:
 On 28/07/2011 22:01, Henri Yandell wrote:

 Personally I'm happy for commons-logging to die. :)

 Yeah let's use java.util.logging instead :)

 Primarily that I don't get the feeling we have a major community of
 developers on c-logging. We implemented it because we needed something
 for our other components (though many simply chose not to log), but it
 was never the passion of anybody here (hopefully not an incorrect
 statement). Robert, Simon and others put in tons of good work, but I
 feel that was duty not passion.

 So happy to see it die because it's something that's headed to
 dormancy (be it stable or not).

 Hen

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



[GUMP@vmgump]: Project commons-proxy-test (in module apache-commons) failed

2011-08-03 Thread Gump
To whom it may engage...

This is an automated request, but not an unsolicited one. For 
more information please visit http://gump.apache.org/nagged.html, 
and/or contact the folk at gene...@gump.apache.org.

Project commons-proxy-test has an issue affecting its community integration.
This issue affects 1 projects,
 and has been outstanding for 14 runs.
The current state of this project is 'Failed', with reason 'Build Failed'.
For reference only, the following projects are affected by this:
- commons-proxy-test :  Apache Commons


Full details are available at:

http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/index.html

That said, some information snippets are provided here.

The following annotations (debug/informational/warning/error messages) were 
provided:
 -WARNING- Overriding Maven settings: 
[/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml]
 -DEBUG- (Apache Gump generated) Apache Maven Settings in: 
/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml
 -INFO- Failed with reason build failed
 -DEBUG- Maven POM in: /srv/gump/public/workspace/apache-commons/proxy/pom.xml
 -INFO- Project Reports in: 
/srv/gump/public/workspace/apache-commons/proxy/target/surefire-reports



The following work was performed:
http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/gump_work/build_apache-commons_commons-proxy-test.html
Work Name: build_apache-commons_commons-proxy-test (Type: Build)
Work ended in a state of : Failed
Elapsed: 15 secs
Command Line: /opt/maven2/bin/mvn --batch-mode --settings 
/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml test 
[Working Directory: /srv/gump/public/workspace/apache-commons/proxy]
M2_HOME: /opt/maven2
-
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.factory.util.TestMethodSignature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running org.apache.commons.proxy.provider.TestConstantProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec
Running org.apache.commons.proxy.interceptor.TestFilteredInterceptor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec
Running org.apache.commons.proxy.interceptor.filter.TestPatternFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.interceptor.TestSerializingInterceptor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec
Running org.apache.commons.proxy.interceptor.TestInterceptorChain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec
Running org.apache.commons.proxy.invoker.TestNullInvoker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec
Running org.apache.commons.proxy.provider.remoting.TestBurlapProvider
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec
Running org.apache.commons.proxy.exception.TestDelegateProviderException
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.invoker.TestChainInvoker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec
Running org.apache.commons.proxy.factory.javassist.TestJavassistProxyFactory
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.commons.proxy.exception.TestProxyFactoryException
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.interceptor.filter.TestReturnTypeFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.provider.TestBeanProvider
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec

Results :

Tests in error: 
  testInvalidHandlerName(org.apache.commons.proxy.invoker.TestXmlRpcInvoker)

Tests run: 179, Failures: 0, Errors: 1, Skipped: 0

[INFO] 
[ERROR] BUILD FAILURE
[INFO] 
[INFO] There are test failures.

Please refer to 
/srv/gump/public/workspace/apache-commons/proxy/target/surefire-reports for the 
individual test results.
[INFO] 
[INFO] For more information, run Maven with the -e switch
[INFO] 
[INFO] Total time: 13 seconds
[INFO] Finished at: Wed Aug 03 11:21:46 UTC 2011
[INFO] Final Memory: 24M/58M
[INFO] 
-

To subscribe to this information via syndicated feeds:
- RSS: 
http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/rss.xml
- Atom: 

Re: [logging] logging vs slf4j

2011-08-03 Thread David Karlsen
Hasn't the time for both CL and log4j passed by? The trend nowadays seems to
be slf4j/logback.
On 3 Aug 2011 15:03, Gary Gregory garydgreg...@gmail.com wrote:
 Or maybe Log4j 2 could replace [logging].

 Gary

 On Wed, Aug 3, 2011 at 5:33 AM, Stephen Colebourne scolebou...@joda.org
wrote:
 My thought is that there might be some java.util.logging helpers that
 could be written, and perhaps they might go in [lang] if there are 5
 or fewer classes.

 I assume that slf4j and log4j have their own j.u.logging connections,
 so that end is dealt with.

 The time of [logging] has probably passed.

 Stephen


 On 3 August 2011 06:50, Henri Yandell flame...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 1:59 AM, Emmanuel Bourg ebo...@apache.org
wrote:
 On 28/07/2011 22:01, Henri Yandell wrote:

 Personally I'm happy for commons-logging to die. :)

 Yeah let's use java.util.logging instead :)

 Primarily that I don't get the feeling we have a major community of
 developers on c-logging. We implemented it because we needed something
 for our other components (though many simply chose not to log), but it
 was never the passion of anybody here (hopefully not an incorrect
 statement). Robert, Simon and others put in tons of good work, but I
 feel that was duty not passion.

 So happy to see it die because it's something that's headed to
 dormancy (be it stable or not).

 Hen

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





 --
 Thank you,
 Gary

 http://garygregory.wordpress.com/
 http://garygregory.com/
 http://people.apache.org/~ggregory/
 http://twitter.com/GaryGregory

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



[collections] 4.0 release path

2011-08-03 Thread Simone Tripodi
Hi all guys,
I'm (re)starting having a good slot of spare time, I volunteered to
help Matt on finalizing the [collections] release, but after had a
look at the open issues I think we should agree on what including and
what not.
Does anyone already have a good overview/idea of collections roadmap?
Many thanks in advance, have a nice day!!!
Simo

http://people.apache.org/~simonetripodi/
http://www.99soft.org/

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [logging] logging vs slf4j

2011-08-03 Thread Ralph Goers

On Aug 3, 2011, at 6:07 AM, David Karlsen wrote:

 Hasn't the time for both CL and log4j passed by? The trend nowadays seems to
 be slf4j/logback.

If you read further back in this thread you will see where I highlighted the 
problems in Logback as well as difficulties with SLF4J. Plus, every time Ceki 
goes on vacation everything stops.

Ralph


-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Nexus vs rsync-repository releases

2011-08-03 Thread Ralph Goers

On Aug 2, 2011, at 11:45 PM, Luc Maisonobe wrote:

 I am also worried about a closed tool like Nexus being used to publish Apache 
 stuff.
 

Nexus is not completely closed. There is a community version that contains 90% 
of the functionality. That said, I really don't want to go look at the source 
code for any of the repository managers.

Ralph


-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [logging] logging vs slf4j

2011-08-03 Thread Paul Benedict
I prefer Apache driven projects when possible. If LOG4J2 takes off,
feature requests would be implemented quicker, I hope.

On Wed, Aug 3, 2011 at 9:27 AM, Ralph Goers ralph.go...@dslextreme.com wrote:

 On Aug 3, 2011, at 6:07 AM, David Karlsen wrote:

 Hasn't the time for both CL and log4j passed by? The trend nowadays seems to
 be slf4j/logback.

 If you read further back in this thread you will see where I highlighted the 
 problems in Logback as well as difficulties with SLF4J. Plus, every time Ceki 
 goes on vacation everything stops.

 Ralph


 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [collections] 4.0 release path

2011-08-03 Thread Gary Gregory
The most important theme IMO is generics. That's what has come up at
work recently in fact. Everything else except showstopper bugs can
wait IMO.

Gary

On Wed, Aug 3, 2011 at 9:16 AM, Simone Tripodi simonetrip...@apache.org wrote:
 Hi all guys,
 I'm (re)starting having a good slot of spare time, I volunteered to
 help Matt on finalizing the [collections] release, but after had a
 look at the open issues I think we should agree on what including and
 what not.
 Does anyone already have a good overview/idea of collections roadmap?
 Many thanks in advance, have a nice day!!!
 Simo

 http://people.apache.org/~simonetripodi/
 http://www.99soft.org/

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





-- 
Thank you,
Gary

http://garygregory.wordpress.com/
http://garygregory.com/
http://people.apache.org/~ggregory/
http://twitter.com/GaryGregory

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [LANG] Proposing 3.0.1

2011-08-03 Thread Gary Gregory
On Wed, Aug 3, 2011 at 10:02 AM, Jörg Schaible
joerg.schai...@scalaris.com wrote:
 Hi Hen,

 Henri Yandell wrote:

 I'd like to release 3.0.1 of Lang.

 Do we have any policies regarding Serializable types? I'd like to make
 StrMatcher, StrLookup and StrSubstitutor serializable. One step further to a
 serializable Configuration.

I do not recall such a thing. We should document somewhere the fact
that we do or do not have one though! In an overview.html or package
html perhaps.
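
[To make the request above concrete - a generic sketch of what opting a class
into serialization involves, not the Lang source; the class name is invented
and the single field is only an example.]

    import java.io.Serializable;

    public class StrSubstitutorLike implements Serializable {

        /** Fixed id so later field changes break compatibility deliberately, not by accident. */
        private static final long serialVersionUID = 1L;

        // Every non-transient field must itself be Serializable, which is why
        // StrMatcher and StrLookup would need to be made Serializable as well.
        private char escapeChar = '$';
    }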

Gary


 Cheers,
 Jörg



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





-- 
Thank you,
Gary

http://garygregory.wordpress.com/
http://garygregory.com/
http://people.apache.org/~ggregory/
http://twitter.com/GaryGregory

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [logging] logging vs slf4j

2011-08-03 Thread Paul Benedict
On Wed, Aug 3, 2011 at 9:51 AM, Gary Gregory garydgreg...@gmail.com wrote:

 I like Log4J just fine thank you very much :)

 I'm looking forward to 2.0.

 Gary


I concur with Gary. All my apps use LOG4J, not JCL or SLF4J. My
dependencies do, however, but LOG4J works great minus a few
enhancements I'd like to see.

BTW, in terms of swelling community development, if LOG4J+JCL were to
merge and just become JCL2, it could have the visibility of all
Commons committers. Isn't it much more of a common component than a
separate project? I think the logging project is dysfunctional anyway
-- make it a common component if possible.

Paul

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [collections] 4.0 release path

2011-08-03 Thread Paul Benedict
Or do a pure generics release as 3.5 to satisfy that need... which
allows 4.0 to have generics plus the benefit of major refactoring if
necessary (could also be called 4.0 and 5.0).

On Wed, Aug 3, 2011 at 9:55 AM, Matt Benson gudnabr...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 9:48 AM, Gary Gregory garydgreg...@gmail.com wrote:
 The most important theme IMO is generics. That's what has come up at
 work recently in fact. Everything else except showstopper bugs can
 wait IMO.

 Indeed, this seems to resonate with Hen's recent treatise on
 (paraphrased) why the hell we take so long.

 Matt


 Gary

 On Wed, Aug 3, 2011 at 9:16 AM, Simone Tripodi simonetrip...@apache.org 
 wrote:
 Hi all guys,
 I'm (re)starting having a good slot of spare time, I volunteered to
 help Matt on finalizing the [collections] release, but after had a
 look at the open issues I think we should agree on what including and
 what not.
 Does anyone already have a good overview/idea of collections roadmap?
 Many thanks in advance, have a nice day!!!
 Simo

 http://people.apache.org/~simonetripodi/
 http://www.99soft.org/

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





 --
 Thank you,
 Gary

 http://garygregory.wordpress.com/
 http://garygregory.com/
 http://people.apache.org/~ggregory/
 http://twitter.com/GaryGregory

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [logging] logging vs slf4j

2011-08-03 Thread Antonio Petrelli
First of all, sorry to jump in at this point of the discussion.

2011/7/28 Simone Tripodi simonetrip...@apache.org

 Hi all guys,
 I remember I saw a thread - not sure if I did it here at commons or
 somewhere else here at apache - where it was specified that we prefer adding
 [logging] as a component dependency instead of slf4j...
 Did I just go crazy, or can someone point me in the right direction, please?
 :)


I think that we should start from the fact that many application developers
use Log4j 1.2 for their purposes, simply because it is configurable *enough*
and useful *enough*.
Sincerely I think that we don't need anything more than what Log4j 1.2
already provides.
And I don't think that j.u.logging is useful enough, the configuration is
simply not there.

Stated this, obviously it is out of the question to adopt Log4j, simply
because Commons is made of libraries, and other libraries use other logging
frameworks. So we need a wrapper.
I would choose SLF4J for a simple consideration: there is a Commons Logging
substitute in SLF4J (jcl-over-slf4j) but not a SLF4J substitute in Commons
Logging.
From a Maven perspective, if Commons Logging is chosen, then whenever a common
library is included, SLF4J users must exclude the commons-logging dependency
and add jcl-over-slf4j, for all libraries.

However in my experience SLF4J has a big drawback: when used in a shared
classloader (JBoss Portal anyone?) it is needed to have the same stinky old
version of SLF4J in all applications during compile time, and the library
should be excluded from the package.

Antonio

P.S. The world was better when there was only Log4j :-D


Re: [logging] logging vs slf4j

2011-08-03 Thread Christian Grobmeier
Paul,

 BTW, in terms of swelling community development, if LOG4J+JCL were to
 merge and just become JCL2, it could have the visibility of all
 Commons committers. Isn't it much more of a common component than a
 separate project? I think the logging project is dysfunctional anyway
 -- make it a common component if possible.

At logging.apache.org there are other subprojects doing similar stuff, like
log4php, log4c and such. In addition there is the companions
subproject.

And there is some activity in the logging project. We have had a new
addition to the PMC recently, and log4php released a new version a
short while ago. Together with the log4j2 efforts and the efforts
put from time to time into companions and chainsaw, I would say the
project is a bit quiet, but not dead.

If you take out log4j, everything else in logging would become unstable.


 Paul

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





-- 
http://www.grobmeier.de

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [collections] 4.0 release path

2011-08-03 Thread Simone Tripodi
Thanks all for the feedbacks!!!
I think that having generics (already done, from what I can see) plus
COLLECTIONS-310/351/372/377, plus checking all issues requesting
generics, would be reasonable for publishing the 4.0 release.

I don't know if someone already has a complete list of COLLECTIONS-310
related deprecable/droppable features.

TIA, all the best!!!
Simo

http://people.apache.org/~simonetripodi/
http://www.99soft.org/



On Wed, Aug 3, 2011 at 5:01 PM, Paul Benedict pbened...@apache.org wrote:
 Or do a pure generics release as 3.5 to satisfy that need... which
 allows 4.0 to have generics plus the benefit of major refactoring if
 necessary (could also be called 4.0 and 5.0).

 On Wed, Aug 3, 2011 at 9:55 AM, Matt Benson gudnabr...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 9:48 AM, Gary Gregory garydgreg...@gmail.com wrote:
 The most important theme IMO is generics. That's what has come up at
 work recently in fact. Everything else except showstopper bugs can
 wait IMO.

 Indeed, this seems to resonate with Hen's recent treatise on
 (paraphrased) why the hell we take so long.

 Matt


 Gary

 On Wed, Aug 3, 2011 at 9:16 AM, Simone Tripodi simonetrip...@apache.org 
 wrote:
 Hi all guys,
 I'm (re)starting having a good slot of spare time, I volunteered to
 help Matt on finalizing the [collections] release, but after had a
 look at the open issues I think we should agree on what including and
 what not.
 Does anyone already have a good overview/idea of collections roadmap?
 Many thanks in advance, have a nice day!!!
 Simo

 http://people.apache.org/~simonetripodi/
 http://www.99soft.org/

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





 --
 Thank you,
 Gary

 http://garygregory.wordpress.com/
 http://garygregory.com/
 http://people.apache.org/~ggregory/
 http://twitter.com/GaryGregory

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [collections] 4.0 release path

2011-08-03 Thread Stephen Colebourne
I think that a key mistake was trying to do both generics and
refactoring. I'd suggest that quite a few users would simply like a
generified [collections] 3.5 that is fully backwards compatible (as
the JDK was) and with no refactoring.

Now, some of the API cannot be generified correctly, so for a v3.5,
those should simply be left as raw types.

Of course doing the above isn't fun, as it involves going back
(again), but it is probably the right approach.

Stephen



On 3 August 2011 16:01, Paul Benedict pbened...@apache.org wrote:
 Or do a pure generics release as 3.5 to satisfy that need... which
 allows 4.0 to have generics plus the benefit of major refactoring if
 necessary (could also be called 4.0 and 5.0).

 On Wed, Aug 3, 2011 at 9:55 AM, Matt Benson gudnabr...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 9:48 AM, Gary Gregory garydgreg...@gmail.com wrote:
 The most important theme IMO is generics. That's what has come up at
 work recently in fact. Everything else except showstopper bugs can
 wait IMO.

 Indeed, this seems to resonate with Hen's recent treatise on
 (paraphrased) why the hell we take so long.

 Matt


 Gary

 On Wed, Aug 3, 2011 at 9:16 AM, Simone Tripodi simonetrip...@apache.org 
 wrote:
 Hi all guys,
 I'm (re)starting having a good slot of spare time, I volunteered to
 help Matt on finalizing the [collections] release, but after had a
 look at the open issues I think we should agree on what including and
 what not.
 Does anyone already have a good overview/idea of collections roadmap?
 Many thanks in advance, have a nice day!!!
 Simo

 http://people.apache.org/~simonetripodi/
 http://www.99soft.org/

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org





 --
 Thank you,
 Gary

 http://garygregory.wordpress.com/
 http://garygregory.com/
 http://people.apache.org/~ggregory/
 http://twitter.com/GaryGregory

 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
 For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] Improving tests and performance of RandomGenerator implementations

2011-08-03 Thread sebb
On 3 August 2011 09:06, Luc Maisonobe luc.maison...@free.fr wrote:
 On 03/08/2011 09:38, Luc Maisonobe wrote:

 On 01/08/2011 22:40, Luc Maisonobe wrote:

 Hi Phil,

 On 01/08/2011 20:39, Phil Steitz wrote:

 On 8/1/11 1:31 AM, luc.maison...@free.fr wrote:

 Hi Phil,

 - Mail original -

 In my own applications, I noticed what appears to be poor
 performance in the nextInt(int) method of the Mersenne twister,
 which I was using to *improve* speed. I think that for small n, the
 default implementation in BitStreamGenerator may be running too many
 iterations.

 Mersenne twister uses a quite large pool. It creates pseudo-random bits
 by twisting it and creates large bunches at a time (624 words at a
 time).
 Hence when you ask for large sets, you should have several calls that
 return fast, and one call that takes a longer time to generate another
 large pool.

 So good performances are obtained for generating large sets, not
 small sets.

 Well generators should be faster and are preferred over Mersenne
 twister now,
 which is now an old generator. Well generators also have large pools,
 but they
 don't generate bits in large batches in advance, they do generate a
 few words
 at a time.

 Yeah, I know. Both are faster than the JDK, though, even for just
 32-bit chunks in my tests at least.

 One thing I have been thinking about is exposing nextInt[],
 nextDouble[], nextGaussian[] etc methods that take advantage of the
 pools. So you basically generate a large block of bits use this to
 fill the output arrays.

 Seems a very good idea. Most of the time, people generate only one kind
 of numbers several times, so it really does make sense.


 I am still figuring out how the code works, but I
 thought it would be good to run some benchmarks - using Gilles' new
 stuff - against the Harmony implementation in java.util.Random of
 this method. That led me to notice that there are no unit tests for
 BitstreamGenerator. I propose that we add
 0) RandomGeneratorAbstractTest with an abstract makeGenerator
 method including fixed seed tests for all RandomGenerator methods
 1) BitstreamGeneratorTest extending RandomGeneratorAbstractTest
 implementing makeGenerator with a BitStreamGenerator that uses the
 JDK generator for next(int)
 2) Make the test classes for Mersenne and Well generators extend
 RandomGeneratorAbstractTest, moving redundant tests up into the base
 class

 Sound reasonable?

 +1

 Also, any recollection why we are using a
 different implementation in BitStreamGenerator for next(int) than
 Harmony and the JDK use?

 I don't understand what you mean. next(int) is used to generate the raw
 bits and is the heart of each generator. Each generator has its own
 implementation. Replacing next(int) by the JDK generation would imply
 dropping completely Mersenne twister and Well generators.

 I am sorry. I meant nextInt(int). It is that code that seems to be
 slow in BitStreamGenerator and different from the JDK and Harmony.

 Could you point me at some code? There are many pitfalls in nextInt(int)
 if one wants to make sure the generator is uniform, which explains the
 strange implementation, with the mask computation and the loop. By the
 way, even this implementation would benefit from your proposed array
 generation, as the mask could be computed only once.

 I have looked at the implementation for JDK and Harmony and am a little
 puzzled.

 The trick for the power of two (i.e. if ((n & -n) == n)) is not useful
 for the very elaborate generators like Mersenne twister or Well. Both
 are proven to be equidistributed even for the low order bits. They are
 based on linear recurrences but not linear congruences and do not suffer
 from the drawbacks of the latter.

 What puzzles me more is the loop. It is documented as avoiding the
 uneven distributions, but at first glance the modulo operation bothers
 me. As documentation explicitly states it is designed for this, it is
 most probably true, I simply don't understand how yet.

 So our current implementation is slow, then go ahead and change it to
 the one you showed me. I would simply suggest to get rid of the
 ((n & -n) == n) test. I'll try to understand the condition in the while loop
 to understand how it rejects uneven distributions, just out of curiosity
 for myself.

 OK, I finally understood the algorithm and how it rejects the largest
 incomplete numbers from k*n to (2^31)-1 where k*n is the largest multiple of
 n that fits in a positive integer. The trick lies in the addition of (n-1)
 which overflows the integer and wraps the result back to negative values. It
 is smart.

 +1 to use it.

Provided that the algorithm is documented ...

 Luc


 Luc


 Luc



 Phil

 Mersenne twister and Well should be fast for generating large sets, but
 most importantly they have very good and *proven* properties
 (equidistribution
 on large dimensions, null correlation, maximal period ...). These
 properties
 are essential for example in Monte-Carlo 

Re: [math] Improving tests and performance of RandomGenerator implementations

2011-08-03 Thread Phil Steitz
On 8/3/11 9:02 AM, sebb wrote:
 On 3 August 2011 09:06, Luc Maisonobe luc.maison...@free.fr wrote:
 On 03/08/2011 09:38, Luc Maisonobe wrote:
 On 01/08/2011 22:40, Luc Maisonobe wrote:
 Hi Phil,

 On 01/08/2011 20:39, Phil Steitz wrote:
 On 8/1/11 1:31 AM, luc.maison...@free.fr wrote:
 Hi Phil,

 - Mail original -
 In my own applications, I noticed what appears to be poor
 performance in the nextInt(int) method of the Mersenne twister,
 which I was using to *improve* speed. I think that for small n, the
 default implementation in BitStreamGenerator may be running too many
 iterations.
 Mersenne twister uses a quite large pool. It creates pseudo-random bits
 by twisting it and creates large bunches at a time (624 words at a
 time).
 Hence when you ask for large sets, you should have several calls that
 return fast, and one call that takes a longer time to generate another
 large pool.

 So good performances are obtained for generating large sets, not
 small sets.

 Well generators should be faster and are preferred over Mersenne
 twister now,
 which is now an old generator. Well generators also have large pools,
 but they
 don't generate bits in large batches in advance, they do generate a
 few words
 at a time.
 Yeah, I know. Both are faster than the JDK, though, even for just
 32-bit chunks in my tests at least.

 One thing I have been thinking about is exposing nextInt[],
 nextDouble[], nextGaussian[] etc methods that take advantage of the
 pools. So you basically generate a large block of bits use this to
 fill the output arrays.
 Seems a very good idea. Most of the time, people generate only one kind
 of numbers several times, so it really does make sense.

 I am still figuring out how the code works, but I
 thought it would be good to run some benchmarks - using Gilles' new
 stuff - against the Harmony implementation in java.util.Random of
 this method. That led me to notice that there are no unit tests for
 BitstreamGenerator. I propose that we add
 0) RandomGeneratorAbstractTest with an abstract makeGenerator
 method including fixed seed tests for all RandomGenerator methods
 1) BitstreamGeneratorTest extending RandomGeneratorAbstractTest
 implementing makeGenerator with a BitStreamGenerator that uses the
 JDK generator for next(int)
 2) Make the test classes for Mersenne and Well generators extend
 RandomGeneratorAbstractTest, moving redundant tests up into the base
 class

 Sound reasonable?
 +1

 Also, any recollection why we are using a
 different implementation in BitStreamGenerator for next(int) than
 Harmony and the JDK use?
 I don't understand what you mean. next(int) is used to generate the raw
 bits and is the heart of each generator. Each generator has its own
 implementation. Replacing next(int) by the JDK generation would imply
 dropping completely Mersenne twister and Well generators.
 I am sorry. I meant nextInt(int). It is that code that seems to be
 slow in BitStreamGenerator and different from the JDK and Harmony.
 Could you point me at some code? There are many pitfalls in nextInt(int)
 if one wants to make sure the generator is uniform, which explains the
 strange implementation, with the mask computation and the loop. By the
 way, even this implementation would benefit from your proposed array
 generation, as the mask could be computed only once.
 I have looked at the implementation for JDK and Harmony and am a little
 puzzled.

 The trick for the power of two (i.e. if ((n & -n) == n)) is not useful
 for the very elaborate generators like Mersenne twister or Well. Both
 are proven to be equidistributed even for the low order bits. They are
 based on linear recurrences but not linear congruences and do not suffer
 from the drawbacks of the latter.

 What puzzles me more is the loop. It is documented as avoiding the
 uneven distributions, but at first glance the modulo operation bothers
 me. As documentation explicitly states it is designed for this, it is
 most probably true, I simply don't understand how yet.

 So our current implementation is slow, then go ahead and change it to
 the one you showed me. I would simply suggest to get rid of the
 ((n & -n) == n) test. I'll try to understand the condition in the while loop
 to understand how it rejects uneven distributions, just out of curiosity
 for myself.
 OK, I finally understood the algorithm and how it rejects the largest
 incomplete numbers from k*n to (2^31)-1 where k*n is the largest multiple of
 n that fits in a positive integer. The trick lies in the addition of (n-1)
 which overflows the integer and wraps the result back to negative values. It
 is smart.

 +1 to use it.
 Provided that the algorithm is documented ...

Yeah, I was going to try to decipher it (and the current impl) and
provide some doc.  One other thing to consider in this decision is
whether we have to worry about encumbrance.  The Harmony impl looks very
similar to what is described in the JDK javadoc.  I wonder if
Sun/Oracle might have a claim to 

Re: [math] Improving tests and performance of RandomGenerator implementations

2011-08-03 Thread Luc Maisonobe

On 03/08/2011 18:15, Phil Steitz wrote:

On 8/3/11 9:02 AM, sebb wrote:

On 3 August 2011 09:06, Luc Maisonobe luc.maison...@free.fr wrote:

On 03/08/2011 09:38, Luc Maisonobe wrote:

On 01/08/2011 22:40, Luc Maisonobe wrote:

Hi Phil,

On 01/08/2011 20:39, Phil Steitz wrote:

On 8/1/11 1:31 AM, luc.maison...@free.fr wrote:

Hi Phil,

- Mail original -

In my own applications, I noticed what appears to be poor
performance in the nextInt(int) method of the Mersenne twister,
which I was using to *improve* speed. I think that for small n, the
default implementation in BitStreamGenerator may be running too many
iterations.

Mersenne twister uses a quite large pool. It creates pseudo-random bits
by twisting it and creates large bunches at a time (624 words at a
time).
Hence when you ask for large sets, you should have several calls that
return fast, and one call that takes a longer time to generate another
large pool.

So good performances are obtained for generating large sets, not
small sets.

Well generators should be faster and are preferred over Mersenne
twister now,
which is now an old generator. Well generators also have large pools,
but they
don't generate bits in large batches in advance, they do generate a
few words
at a time.

Yeah, I know. Both are faster than the JDK, though, even for just
32-bit chunks in my tests at least.

One thing I have been thinking about is exposing nextInt[],
nextDouble[], nextGaussian[] etc methods that take advantage of the
pools. So you basically generate a large block of bits use this to
fill the output arrays.

Seems a very good idea. Most of the time, people generate only one kind
of numbers several times, so it really does make sense.


I am still figuring out how the code works, but I
thought it would be good to run some benchmarks - using Gilles' new
stuff - against the Harmony implementation in java.util.Random of
this method. That led me to notice that there are no unit tests for
BitstreamGenerator. I propose that we add
0) RandomGeneratorAbstractTest with an abstract makeGenerator
method including fixed seed tests for all RandomGenerator methods
1) BitstreamGeneratorTest extending RandomGeneratorAbstractTest
implementing makeGenerator with a BitStreamGenerator that uses the
JDK generator for next(int)
2) Make the test classes for Mersenne and Well generators extend
RandomGeneratorAbstractTest, moving redundant tests up into the base
class
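A minimal sketch of what such a hierarchy could look like (package, class and method names below are assumptions for illustration, not existing test code):

    import org.apache.commons.math.random.MersenneTwister;   // package name assumed
    import org.apache.commons.math.random.RandomGenerator;
    import org.junit.Assert;
    import org.junit.Test;

    public abstract class RandomGeneratorAbstractTest {

        /** Concrete tests supply the generator under test, created with a fixed seed. */
        protected abstract RandomGenerator makeGenerator();

        @Test
        public void testNextIntBounds() {
            RandomGenerator gen = makeGenerator();
            for (int i = 0; i < 1000; i++) {
                int value = gen.nextInt(10);
                Assert.assertTrue(value >= 0 && value < 10);
            }
        }
    }

    // in its own file:
    class MersenneTwisterTest extends RandomGeneratorAbstractTest {
        @Override
        protected RandomGenerator makeGenerator() {
            return new MersenneTwister(42); // fixed seed keeps the assertions reproducible
        }
    }

Shared assertions would then live once in the base class, which is what 2) proposes.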

Sound reasonable?

+1


Also, any recollection why we are using a
different implementation in BitStreamGenerator for next(int) than
Harmony and the JDK use?

I don't understand what you mean. next(int) is used to generate the raw
bits and is the heart of each generator. Each generator has its own
implementation. Replacing next(int) by the JDK generation would imply
completely dropping the Mersenne twister and Well generators.

I am sorry. I meant nextInt(int). It is that code that seems to be
slow in BitStreamGenerator and different from the JDK and Harmony.

Could you point me at some code? There are many pitfalls in nextInt(int)
if one wants to make sure the generator is uniform, which explains the
strange implementation, with the mask computation and the loop. By the
way, even this implementation would benefit from your proposed array
generation, as the mask could be computed only once.

I have looked at the implementation for JDK and Harmony and am a little
puzzled.

The trick for the power of two (i.e. if ((n & -n) == n)) is not useful
for the very elaborate generators like Mersenne twister or Well. Both
are proven to be equidistributed even for the low order bits. They are
based on linear recurrences but not linear congruences and do not suffer
from the drawbacks of the latter.

What puzzles me more is the loop. It is documented as avoiding the
uneven distributions, but at first glance the modulo operation bothers
me. As the documentation explicitly states it is designed for this, it is
most probably true; I simply don't understand how yet.

So if our current implementation is slow, then go ahead and change it to
the one you showed me. I would simply suggest getting rid of the
((n & -n) == n) test. I'll try to understand the condition in the while loop
to understand how it rejects uneven distributions, just out of curiosity
for myself.

OK, I finally understood the algorithm and how it rejects the largest
incomplete numbers from k*n to (2^31)-1 where k*n is the largest multiple of
n that fits in a positive integer. The trick lies in the addition of (n-1)
which overflows the integer and wraps the result back to negative values. It
is smart.
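For reference, a minimal sketch of that rejection scheme, following the algorithm documented for java.util.Random#nextInt(int); here next(31) is assumed to return 31 uniform pseudo-random bits from the underlying generator:

    public int nextInt(int n) {
        if (n <= 0) {
            throw new IllegalArgumentException("n must be strictly positive");
        }
        if ((n & -n) == n) {
            // n is a power of two: scale the high-order bits
            return (int) ((n * (long) next(31)) >> 31);
        }
        int bits;
        int val;
        do {
            bits = next(31);
            val = bits % n;
            // bits - val + (n - 1) overflows to a negative int exactly when bits
            // falls in the incomplete top range [k*n, 2^31), so such draws are rejected
        } while (bits - val + (n - 1) < 0);
        return val;
    }

Every accepted value below k*n contributes the same number (k) of raw draws to each residue class, which is why the modulo does not skew the distribution.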

+1 to use it.

Provided that the algorithm is documented ...


Yeah, I was going to try to decipher it (and the current impl) and
provide some doc.  One other thing to consider in this decision is
do we have to worry about encumbrance.  The Harmony impl looks very
similar to what is described in the JDK javadoc.  I wonder if
SunOracle might have claim to it.

Where did you get 

Re: [logging] logging vs slf4j

2011-08-03 Thread Ceki Gülcü

Antonio Petrelli wrote:

 However in my experience SLF4J has a big drawback: when used in a
 shared classloader (JBoss Portal anyone?) you need to have the
 same stinky old version of SLF4J in all applications at compile
 time, and the library should be excluded from the package.

Hello Antonio,

Since version 1.0 released about 5 years ago, all versions of SLF4J
are compile-time compatible. For example, you can compile code with
slf4j-api version 1.4.3 and run it just fine with any other version of
slf4j-api, including 1.0.x, 1.1.x, 1.2.x, 1.3.x, 1.4.x, 1.5.x and
1.6.x.

On the other hand, the version of slf4j binding that you select at
runtime needs to match the slf4j-api. For example, if you have
slf4j-api-1.6.1.jar on your classpath and you wish to use log4j, then
you need slf4j-log4j12-1.6.1.jar on your class path,
slf4j-log4j12-1.4.3.jar will not work.
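As a minimal illustration (the class name is a placeholder), application code only touches the stable slf4j-api entry points, which is why the same compiled class keeps working across 1.x api versions as long as the binding jar matches the api jar chosen at runtime:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Example {
        private static final Logger LOG = LoggerFactory.getLogger(Example.class);

        public static void main(String[] args) {
            // the logger implementation behind LOG is supplied by whichever binding
            // (slf4j-log4j12, logback, ...) is on the runtime classpath
            LOG.info("logger implementation: {}", LOG.getClass().getName());
        }
    }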

HTH,
--
QOS.ch, main sponsor of cal10n, logback and slf4j open source projects, 
is looking to hire talented software developers. For further details, 
see http://logback.qos.ch/job.html





Re: [logging] logging vs slf4j

2011-08-03 Thread Antonio Petrelli
Hi Ceki

2011/8/3 Ceki Gülcü c...@qos.ch

 Antonio Petrelli wrote:
 On the other hand, the version of slf4j binding that you select at
 runtime needs to match the slf4j-api. For example, if you have
 slf4j-api-1.6.1.jar on your classpath and you wish to use log4j, then
 you need slf4j-log4j12-1.6.1.jar on your class path,
 slf4j-log4j12-1.4.3.jar will not work.


I think this is the JBoss Portal case, in which slf4j 1.4.3 (both api and
binding) is installed and all the portlets are forced to use it.
So I think it is JBoss and JBoss Portal, or even the portlet
specification itself, that are flawed.
However, the main difference between SLF4J and JCL is that JCL is stuck at
1.1.1, so there is less opportunity for a version mismatch :-D
Thanks, I will reply to my own email in the Commons ML about it.

Antonio


Re: [logging] logging vs slf4j

2011-08-03 Thread Antonio Petrelli
2011/8/3 Antonio Petrelli antonio.petre...@gmail.com

 However in my experience SLF4J has a big drawback: when used in a shared
 classloader (JBoss Portal anyone?) you need to have the same stinky old
 version of SLF4J in all applications at compile time, and the library
 should be excluded from the package.


Correcting myself, this is a flaw in JBoss Portal and/or the portlet
specification, because they need a shared classloader to work; the same
problem would happen using commons-logging.

Antonio


Re: [compress] XZ support and inconsistencies in the existing compressors

2011-08-03 Thread Simone Tripodi
Hi Lasse!
I'd personally like it if you could file an issue on JIRA and submit your
XZ implementation as a patch that fits naturally into the
org.apache.commons.compress package, with you continuing to contribute to
maintaining it - depending on an external package might be more
difficult, since Commons components are generally self-contained and
don't depend on any third-party library - unless it is a Commons component
itself.

Take what I said strictly as a personal suggestion; I'm not involved
in [compress] development, so I'll let the maintainers take the decisions.

Have a nice day, all the best!
Simo

http://people.apache.org/~simonetripodi/
http://www.99soft.org/



On Wed, Aug 3, 2011 at 9:22 PM, Lasse Collin lasse.col...@tukaani.org wrote:
 Hi!

 I have been working on XZ data compression implementation in Java
 http://tukaani.org/xz/java.html. I was told that it could be nice
 to get XZ support into Commons Compress.

 I looked at the APIs and code in Commons Compress to see how XZ
 support could be added. I was especially looking for details where
 one would need to be careful to make different compressors behave
 consistently compared to each other. I found a few possible problems
 in the existing code:

 (1) CompressorOutputStream should have finish(). Now
    BZip2CompressorOutputStream has finish() but
    GzipCompressorOutputStream doesn't. This should be easy to
    fix because java.util.zip.GZIPOutputStream supports finish().

 (2) BZip2CompressorOutputStream.flush() calls out.flush() but it
    doesn't flush data buffered by BZip2CompressorOutputStream.
    Thus not all data written to the Bzip2 stream will be available
    in the underlying output stream after flushing. This kind of
    flush() implementation doesn't seem very useful.

    GzipCompressorOutputStream.flush() is the default version
    from OutputStream and thus does nothing. Adding flush()
    into GzipCompressorOutputStream is hard because
    java.util.zip.GZIPOutputStream and java.util.zip.Deflater don't
    support sync flushing before Java 7. To get Gzip flushing in
    older Java versions one might need a complete reimplementation
    of the Deflate algorithm which isn't necessarily practical.

 (3) BZip2CompressorOutputStream has finalize() that finishes a stream
    that hasn't been explicitly finished or closed. This doesn't seem
    useful. GzipCompressorOutputStream doesn't have an equivalent
    finalize().

 (4) The decompressor streams don't support concatenated .gz and .bz2
    files. This can be OK when compressed data is used inside another
    file format or protocol, but with regular (standalone) .gz and
    .bz2 files it is bad to stop after the first compressed stream
    and silently ignore the remaining compressed data.

    Fixing this in BZip2CompressorInputStream should be relatively
    easy because it stops right after the last byte of the compressed
    stream. Fixing GzipCompressorInputStream is harder because the
    problem is inherited from java.util.zip.GZIPInputStream
    which reads input past the end of the first stream. One
    might need to reimplement .gz container support on top of
    java.util.zip.InflaterInputStream or java.util.zip.Inflater.

 The XZ compressor supports finish() and flush(). The XZ decompressor
 supports concatenated .xz files, but there is also a single-stream
 version that behaves similarly to the current version of
 BZip2CompressorInputStream.
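For illustration of the finish() behaviour raised in point (1) above, here is a minimal usage sketch (file name and payload are placeholders): finish() ends the compressed stream without closing the stream underneath it.

    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import org.apache.commons.compress.compressors.bzip2.BZip2CompressorOutputStream;

    public class FinishExample {
        public static void main(String[] args) throws Exception {
            OutputStream out = new FileOutputStream("payload.bz2");
            BZip2CompressorOutputStream bzOut = new BZip2CompressorOutputStream(out);
            bzOut.write("hello".getBytes("UTF-8"));
            bzOut.finish(); // writes the bzip2 trailer but leaves 'out' open
            // the caller decides what else to write and when the raw stream goes away
            out.close();
        }
    }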

 Assuming that there will be some interest in adding XZ support into
 Commons Compress, is it OK make Commons Compress depend on the XZ
 package org.tukaani.xz, or should the XZ code be modified so that
 it could be included as an internal part in Commons Compress? I
 would prefer depending on org.tukaani.xz because then there is
 just one code base to keep up to date.

 --
 Lasse Collin  |  IRC: Larhzu @ IRCnet & Freenode







Re: Nexus vs rsync-repository releases

2011-08-03 Thread Niall Pemberton
On Wed, Aug 3, 2011 at 6:32 AM, Henri Yandell flame...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 9:30 PM, Phil Steitz phil.ste...@gmail.com wrote:
 On 8/2/11 5:22 PM, Henri Yandell wrote:
 On Tue, Aug 2, 2011 at 7:23 AM, Emmanuel Bourg ebo...@apache.org wrote:
 Are we still going to deploy Maven artifacts by SCPing to
 m2-ibiblio-rsync-repository on people.apache.org?

 I've been discussing the permissions issue for the old groupIds on Nexus
 with the Infra team (i.e. components using the old groupId
 commons-componentname need a specific configuration on the snapshot and
 release repositories).

 Brian Demers kindly proposed to allow the deployment of all commons-* 
 groups
 on Nexus, but it would require blocking the deployment through
 people.apache.org to avoid metadata conflicts.

 Do we all agree on using exclusively Nexus for our releases?

 I would prefer to leave the door open to release more point releases
 of [pool] and [dbcp] 1.x using the working scripts that I have, so
 pls do not nuke those.  The new versions are in the org.apache
 groupId tree, so we will have no choice but to use nexus for those.

 I will agree to use nexus for the maven repo bits, but prefer
 to create the actual release artifacts locally, examine them, call a
 VOTE on them and move the same bits to /dist.  Personally, I would
 rather see us move back the other way (use the Tomcat setup, with
 Ant tasks pushing to p.a.o/rsync), but I can see I am in the
 minority here, so will not stand in the way.

 Well, I'm in the same minority and I'm not aware it's a minority (have
 we taken a vote from those release managing?).

Me too. When maven works, it's great - but when it doesn't it makes you
want to tear your hair out. The only releases I've done using the
release plugin were commons-parent & the build plugin. That was a
while ago, but more often than not there were issues.

Niall


 Nexus, Maven release-plugin, they all fire off my keep-it-simple
 warning flags. They mistake magic for simplicity.

 Hen







Re: Nexus vs rsync-repository releases

2011-08-03 Thread Phil Steitz
On 8/3/11 1:37 PM, Niall Pemberton wrote:
 On Wed, Aug 3, 2011 at 6:32 AM, Henri Yandell flame...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 9:30 PM, Phil Steitz phil.ste...@gmail.com wrote:
 On 8/2/11 5:22 PM, Henri Yandell wrote:
 On Tue, Aug 2, 2011 at 7:23 AM, Emmanuel Bourg ebo...@apache.org wrote:
 Are we still going to deploy Maven artifacts by SCPing to
 m2-ibiblio-rsync-repository on people.apache.org?

 I've been discussing the permissions issue for the old groupIds on Nexus
 with the Infra team (i.e. components using the old groupId
 commons-componentname need a specific configuration on the snapshot and
 release repositories).

 Brian Demers kindly proposed to allow the deployment of all commons-* 
 groups
 on Nexus, but it would require blocking the deployment through
 people.apache.org to avoid metadata conflicts.

 Do we all agree on using exclusively Nexus for our releases?
 I would prefer to leave the door open to release more point releases
 of [pool] and [dbcp] 1.x using the working scripts that I have, so
 pls do not nuke those.  The new versions are in the org.apache
 groupId tree, so we will have no choice but to use nexus for those.

 I will agree to use nexus for the maven repo bits, but prefer
 to create the actual release artifacts locally, examine them, call a
 VOTE on them and move the same bits to /dist.  Personally, I would
 rather see us move back the other way (use the Tomcat setup, with
 Ant tasks pushing to p.a.o/rsync), but I can see I am in the
 minority here, so will not stand in the way.
 Well, I'm in the same minority and I'm not aware it's a minority (have
 we taken a vote from those release managing?).
 Me too. When maven works, it's great - but when it doesn't it makes you
 want to tear your hair out. The only releases I've done using the
 release plugin were commons-parent & the build plugin. That was a
 while ago, but more often than not there were issues.

Hmm ... Based on comments on this thread, looks like I may not be
in the minority on this.

I am willing to do the following:

0) Either a) create an Ant deployment script similar to what Tomcat
uses [1] or b) create a generic bash script that can be run from
p.a.o to deploy from ~rm/foo-x.y.x-RCw
1) Update the web site docs so we have full and working documentation

Either of the options in 0) requires that we move back to publishing
via ibiblio-rsync from p.a.o (and we have to be able to get this
reverted - not sure this is even possible at this point, but we
could ask).  If we decide to start publishing snapshots, we can use
the same approach for these.

Phil




 Niall


 Nexus, Maven release-plugin, they all fire off my keep-it-simple
 warning flags. They mistake magic for simplicity.

 Hen










Re: Nexus vs rsync-repository releases

2011-08-03 Thread Phil Steitz
On 8/3/11 2:14 PM, Phil Steitz wrote:
 On 8/3/11 1:37 PM, Niall Pemberton wrote:
 On Wed, Aug 3, 2011 at 6:32 AM, Henri Yandell flame...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 9:30 PM, Phil Steitz phil.ste...@gmail.com wrote:
 On 8/2/11 5:22 PM, Henri Yandell wrote:
 On Tue, Aug 2, 2011 at 7:23 AM, Emmanuel Bourg ebo...@apache.org wrote:
 Are we still going to deploy Maven artifacts by SCPing to
 m2-ibiblio-rsync-repository on people.apache.org?

 I've been discussing the permissions issue for the old groupIds on Nexus
 with the Infra team (i.e. components using the old groupId
 commons-componentname need a specific configuration on the snapshot and
 release repositories).

 Brian Demers kindly proposed to allow the deployment of all commons-* 
 groups
 on Nexus, but it would require blocking the deployment through
 people.apache.org to avoid metadata conflicts.

 Do we all agree on using exclusively Nexus for our releases?
 I would prefer to leave the door open to release more point releases
 of [pool] and [dbcp] 1.x using the working scripts that I have, so
 pls do not nuke those.  The new versions are in the org.apache
 groupId tree, so we will have no choice but to use nexus for those.

 I will agree to use nexus for the maven repo bits, but prefer
 to create the actual release artifacts locally, examine them, call a
 VOTE on them and move the same bits to /dist.  Personally, I would
 rather see us move back the other way (use the Tomcat setup, with
 Ant tasks pushing to p.a.o/rsync), but I can see I am in the
 minority here, so will not stand in the way.
 Well, I'm in the same minority and I'm not aware it's a minority (have
 we taken a vote from those release managing?).
 Me too. When maven works, it's great - but when it doesn't it makes you
 want to tear your hair out. The only releases I've done using the
 release plugin were commons-parent & the build plugin. That was a
 while ago, but more often than not there were issues.
 Hmm ... Based on comments on this thread, looks like I may not be
 in the minority on this.

 I am willing to do the following:

 0) Either a) create an Ant deployment script similar to what Tomcat
 uses [1] or b) create a generic bash script that can be run from
 p.a.o to deploy from ~rm/foo-x.y.x-RCw
 1) Update the web site docs so we have full and working documentation

 Either of the options in 0) requires that we move back to publishing
 via ibiblio-rsync from p.a.o (and we have to be able to get this
 reverted - not sure this is even possible at this point, but we
 could ask).  If we decide to start publishing snapshots, we can use
 the same approach for these.

Forgot the link:
[1] http://svn.apache.org/repos/asf/tomcat/trunk/res/maven/mvn-pub.xml

 Phil




 Niall


 Nexus, Maven release-plugin, they all fire off my keep-it-simple
 warning flags. They mistake magic for simplicity.

 Hen










Re: [JCS] Long standing update: Switched to JDK 5 and Maven 2

2011-08-03 Thread Rahul Akolkar
On Wed, Jul 27, 2011 at 3:54 PM, Rahul Akolkar rahul.akol...@gmail.com wrote:
 On Wed, Jul 27, 2011 at 2:25 PM, Thomas Vandahl t...@apache.org wrote:
 Hi folks,

 I finished the updates to JDK 5 generics and concurrent and updated the
 maven-2 build. Still some tests fail, others should never have passed. I
 would like to ask for close review because I basically touched
 everything. I tried to fix some obvious problems and typos on the way.


 Separately, would you be able to publish the JCS site here:

  http://commons.apache.org/jcs

 The above currently 404s from the Commons homepage link.

<snip/>

I cp -R'ed the site over, but it should be republished more gracefully.


 Once its posted, I can redirect from the Jakarta site to the Commons
 counterpart (for BCEL as well).

<snap/>

Redirects now in place.

-Rahul




Re: [logging] logging vs slf4j

2011-08-03 Thread Ralph Goers
The flaw would be in JBoss Portal, not the portlet spec. The spec doesn't have 
anything to do with logging.

Ralph

On Aug 3, 2011, at 11:18 AM, Antonio Petrelli wrote:

 2011/8/3 Antonio Petrelli antonio.petre...@gmail.com
 
 However in my experience SLF4J has a big drawback: when used in a shared
 classloader (JBoss Portal anyone?) you need to have the same stinky old
 version of SLF4J in all applications at compile time, and the library
 should be excluded from the package.
 
 
 Correcting myself, this is a flaw in JBoss Portal and/or the portlet
 specification, because they need a shared classloader to work; the same
 problem would happen using commons-logging.
 
 Antonio





[math] Implementation of Conjugate Gradient (MATH-581)

2011-08-03 Thread Sebastien Brisard
Hi,
please review a proposal for the definition of general iterative linear
solvers, as well as the implementation of the conjugate gradient method. This
is file MATH-581-06.zip attached to the JIRA MATH-581 ticket.
Thanks for your comments!

Actually, I *do* have a comment. For the time being,
new AbstractIterativeLinearSolver(a, monitor)
throws a NonSquareMatrixException when a is... not square. However, a is not a
matrix; it is a RealLinearOperator. Should we
1. create a new exception, called NonSquareRealLinearOperatorException?
2. rename NonSquareMatrixException (as this exception does not really need to
be specialized to matrices)?

Also, I see that the current implementation of NonSquareMatrixException does
not allow one to recover the offending matrix/linear operator. This might be
handy.
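As a minimal sketch of option 1 (all names here are hypothetical, not existing Commons Math API), such an exception could keep a reference to the offending operator so callers can recover it:

    import org.apache.commons.math.linear.RealLinearOperator; // package name assumed

    // Hypothetical sketch only; the real exception would follow the Commons Math
    // exception hierarchy rather than extend IllegalArgumentException directly.
    public class NonSquareRealLinearOperatorException extends IllegalArgumentException {

        private final transient RealLinearOperator operator;

        public NonSquareRealLinearOperatorException(RealLinearOperator operator) {
            super("operator is not square: " + operator.getRowDimension()
                  + " x " + operator.getColumnDimension());
            this.operator = operator;
        }

        /** @return the offending (non-square) operator */
        public RealLinearOperator getOperator() {
            return operator;
        }
    }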

Best regards for now,
Sebastien
