Noel J. Bergman wrote:
Stefano Bagnara wrote:
+1 for the dist: that is where the real check has to be enforced.
In other scenarios, forcing it won't help.
Agreed.
+1
The long-running test issue is solved by using a decent CI environment.
We do have nightly builds. And the nightly build pointed out the failed
test.
Well, it runs once a night. Continuum runs every time a commit is made.
I have Continuum monitoring the james repositories every 5 minutes.
I doubt that anyone wants me posting automated results many times daily,
but if that's what people want, I can adjust the cron job accordingly,
although I'd still only upload the builds once per day.
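For reference, a cron job of that kind might look like the sketch below; the
script path and schedule are hypothetical, not the actual setup:

```
# Run the build once per day at 02:00; a tighter schedule (e.g. hourly)
# would post results more often.
0 2 * * * /home/build/james/run-nightly-build.sh
```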
Does the ASF provide infrastructure where we could host our Continuum?
We already have GUMP, remember? And, yes, there is continuum running
somewhere on the infrastructure.
Well, if I remember right, the problem with Gump is that it always uses
the "newest" libraries available. I don't think it's good to have
a failing build caused only by an incompatibility with another library.
ant run-unit-test -Dtest=org.apache.james.smtpserver.SMTPServerTest
I think that, generally speaking, it is not a good idea to complicate the
build.xml with unused targets.
And why is this bad, as opposed to the `mvn test -Dtest=SMTPServerTest`,
which you just suggested? And why would it be unused?
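For the curious, a target of that kind is only a few lines of build.xml. This
is a hedged sketch, not the actual JAMES build file; the target name, the
`test.classpath` reference, and the `compile-tests` dependency are assumptions:

```xml
<!-- Hypothetical sketch: run a single test class, e.g.
     ant run-unit-test -Dtest=org.apache.james.smtpserver.SMTPServerTest -->
<target name="run-unit-test" depends="compile-tests">
  <junit fork="true" haltonfailure="false">
    <classpath refid="test.classpath"/>
    <formatter type="plain" usefile="false"/>
    <!-- ${test} is the fully qualified class name passed on the command line -->
    <test name="${test}"/>
  </junit>
</target>
```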
Sorry, what are you trying to tell us?
It would be interesting to have an overview of the tools every james
committer uses to work on james projects (platform, IDE, compilers,
debuggers, profilers, svn tools, and so on).
ant, emacs, java, javac, svn
ant, maven, eclipse, java, svn
1) It was not a memory-leak: Wikipedia has a simple "Is it a memory
leak?" explanation: http://en.wikipedia.org/wiki/Memory_leak
<<sigh>> From your link:
a memory leak is a particular kind of unintentional memory
consumption by a computer program where the program fails
to release memory when no longer needed
Memory consumed permanently for transient conditions meets that definition.
We simply said that you did not have enough information to say "confirmed
memory leak in james": the facts proved that there was no memory leak,
and that the problem was not in the james server code.
The FACT is that there was an error (using InetAddress for resolution, which
we already knew we could not use) that caused the heap to be consumed
until JAMES crashed due to lack of memory. I would call that a problem.
:-) Fortunately, I always keep my own counsel.
As I observed earlier, we could consider configuring JAMES to use a subclass
of the dnsjava resolver for the JRE that is instrumented to log a warning if
it is called. That would call out offending code, even in third party code.
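A minimal sketch of that idea follows. The `Resolver` interface here is a
hypothetical stand-in (the real dnsjava and JRE resolver APIs differ); the
point is just the wrapper pattern: log a warning with a stack trace on every
call, so the offending caller shows up in the logs even when it lives in
third-party code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical stand-in for the real resolver interface.
interface Resolver {
    String resolve(String hostname);
}

// Wraps a resolver and logs a warning, with a stack trace, every time
// it is invoked, then delegates to the real implementation.
class WarningResolver implements Resolver {
    private static final Logger LOG =
            Logger.getLogger(WarningResolver.class.getName());
    private final Resolver delegate;

    WarningResolver(Resolver delegate) {
        this.delegate = delegate;
    }

    @Override
    public String resolve(String hostname) {
        // The Throwable is created only to capture the call site.
        LOG.log(Level.WARNING, "JRE resolver invoked for " + hostname,
                new Throwable("call site"));
        return delegate.resolve(hostname);
    }
}
```

The wrapper changes no behavior, so it could be left enabled in production
and removed once the warnings stop appearing.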
6) If you had used a real profiler (NetBeans profiler, JProfiler, or another
"real" tool, not just a simple heap dump) it would have been much
easier to find the problem.
The problem was very easy to find once I managed to keep the JVM from
crashing, which ended up just meaning cranking the heap size to 256MB (and
also limiting the dnsjava cache). And it was necessary to do it in a
production environment, not some toy lab testbench.
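Concretely, keeping the JVM alive came down to a standard heap flag; the jar
name below is a placeholder, not the actual JAMES startup command, and the
dnsjava cache limit itself is set in configuration rather than on the command
line:

```shell
# Raise the maximum heap to 256MB so the JVM survives long enough
# for the growth pattern to be observed.
java -Xmx256m -jar james.jar
```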
--- Noel
bye
Norman