Re: [DRLVM] MMTK porting issues, take one

2006-06-02 Thread Rodrigo Kumpera

On 6/2/06, Weldon Washburn [EMAIL PROTECTED] wrote:

All,

Perhaps the following is already covered in documentation.  If this is
the case, please tell me where to find it.  Below are some initial
questions regarding porting MMTK to DRLVM.

A question about the org.vmmagic.pragma.InlinePragma class.  The comments
in the code say, "This pragma indicates that a particular method
should always be inlined by the optimizing compiler."  Just to be
clear, will there be any correctness issues if a non-optimizing
compiler does not do any inlining?


AFAIK the inline pragmas are used to clearly define the allocation fast
path for the optimizing compiler.
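To make the fast-path idea concrete, here is a minimal bump-pointer allocation sketch in the vmmagic style (illustrative only, not MMTK's actual code; the `InlinePragma` stand-in below only mimics the shape of the real pragma, which is an exception type declared on `throws`). The point is that the pragma is a performance hint: a baseline compiler that ignores it still gets correct, just slower, allocation.

```java
// Stand-in for org.vmmagic.pragma.InlinePragma (hypothetical simplification):
// declaring it on a method's throws clause asks the optimizing compiler to
// always inline that method. Correctness must not depend on the hint.
class InlinePragma extends RuntimeException {}

public class Allocator {
    private int bumpPointer = 0;      // next free offset in the current block
    private final int limit = 1024;   // end of the current block

    // Allocation fast path: a simple bump of the pointer, meant to be
    // inlined at every allocation site by the optimizing compiler.
    public int alloc(int bytes) throws InlinePragma {
        int result = bumpPointer;
        if (result + bytes > limit) {
            return allocSlow(bytes);  // rare slow path stays out of line
        }
        bumpPointer = result + bytes;
        return result;
    }

    // Slow-path placeholder: a real collector would grab a new block or GC.
    private int allocSlow(int bytes) {
        return -1;
    }

    public static void main(String[] args) {
        Allocator a = new Allocator();
        System.out.println(a.alloc(16) + "," + a.alloc(16)); // prints "0,16"
    }
}
```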


It looks like we have to get Jitrino.JET to generate funny binaries
for all the classes in the org.vmmagic.unboxed package.  Is this
correct?  Are there any other packages that bend the type safety rules
we need to worry about?


The magic types are how MMTK does pointer operations and unsigned
math; the compiler must emit special code for the methods and statics
of these classes. Actually, it's pretty easy to emit such code from a
non-optimizing compiler.


It looks like org.vmmagic.unboxed.Extent needs the JIT to specifically
emit instructions that do unsigned arithmetic.  Is this correct?
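For what it's worth, unsigned comparison can be built on Java's signed ints by flipping the sign bit before a signed compare, which is roughly the kind of code a JIT could emit for the unsigned magic types (a hedged sketch; the method name is illustrative, not the vmmagic API):

```java
public class UnsignedMath {
    // Unsigned a > b: XOR-ing both operands with Integer.MIN_VALUE flips the
    // sign bit, mapping unsigned order onto signed order, so a plain signed
    // compare then gives the unsigned result.
    static boolean unsignedGreaterThan(int a, int b) {
        return (a ^ Integer.MIN_VALUE) > (b ^ Integer.MIN_VALUE);
    }

    public static void main(String[] args) {
        // 0xFFFFFFFF is 4294967295 unsigned, but -1 signed.
        System.out.println(unsignedGreaterThan(0xFFFFFFFF, 1)); // prints "true"
        System.out.println(0xFFFFFFFF > 1);                     // prints "false"
    }
}
```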

A question on the org.vmmagic.unboxed.ObjectReference class --- there is a
comment that says, "Note: this is a JikesRVM specific extension to
vmmagic."  But a grep of the MMTK source shows hundreds of uses of this
class.  Does the comment mean that MMTK should not use the ObjectReference
class?  Or maybe in the future MMTK will not use the ObjectReference
class?

Should I ignore
/rvm/src/vm/arch/intel/compilers/baseline/writeBarrier/VM_Barriers.java?
 Just guessing that this code is intended to be called by a
not-so-optimizing compiler.  This class has a method called
compilePutfieldBarrier that emits binary code that calls a vm entry
point that apparently does the write barrier.


This is JikesRVM specific code.


Thanks
--
Weldon Washburn
Intel Middleware Products Division

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]







Re: [DRLVM] MMTK bootstrap design issues

2006-06-02 Thread Rodrigo Kumpera

On 6/2/06, Weldon Washburn [EMAIL PROTECTED] wrote:

All,

The conventional approach is to compile MMTK ahead of time into an
executable image, then load this image during JVM initialization.
This means building some sort of AOT infrastructure.  I would like to
avoid this for initial bring-up if at all possible.  Instead, I am
thinking of forcing Jitrino.JET to JIT all MMTK classes during JVM
bootstrap.  My guess is that this will slow down bootup by 1-2
seconds.  In other words, no biggie, because we can always go back and
clean this up once someone else installs the AOT infrastructure.

One gotcha is that there can be no real garbage collection before MMTK
has been JITed.  It's not likely to be a big problem, since
bootstrapping a JVM does not burn up gigabytes of Java heap.

Thoughts?

--
Weldon Washburn
Intel Middleware Products Division





If you're not using MMTK as an allocator, I don't think this would be an issue.




Re: [DRLVM] proposal to port MMTK to drlvm

2006-05-24 Thread Rodrigo Kumpera

Note that read barriers are also needed if you want to implement a GC
like Baker's real time copying collector that uses incremental
forwarding.

Rodrigo


On 5/24/06, Weldon Washburn [EMAIL PROTECTED] wrote:

On 5/24/06, Ivan Volosyuk [EMAIL PROTECTED] wrote:
 I have a patch for drlvm which enables use of write barriers. This
 works in interpreter mode only yet. I can put it on jira if somebody
 is interested.
This is helpful.  Please post the patch.  I will take a look at it
sometime soon.
Thanks

The write barriers are tested with an algorithm which
 does per-slot validation and should work fine.
 --
 Ivan

 2006/5/24, Daniel Feinberg [EMAIL PROTECTED]:
   My understanding of write barriers is as an optimization.
   That fits with my understanding of write barriers also.   I do not
   know for certain but suspect that MMTK can somehow be configured such
   that write barriers are not required for correctness.  Maybe Dan
   Feinberg can tell the mailing list.
 
  So MMTK is a toolkit for building GCs. When doing generational
  collection, the write barrier is used to keep track of pointers that go
  from older generations to younger generations. You must have a way to
  track these objects, because when you do a partial heap collection (aka
  just the nursery, or nursery and old1) you need to build a root set of
  all things that point into that space. Then you trace this root set to
  find all live objects that need to be moved to an older generation. In
  other methods of collecting, the write barrier is not as important.
  Here, unless you can find all of these pointers that point into a space
  from an older space, you must use a write barrier.
 
 
  Daniel
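The scheme Daniel describes can be sketched as follows (illustrative names and a toy address model, not MMTK's actual code): every reference store goes through a barrier that records old-to-young pointers in a remembered set, which later feeds the root set of a nursery-only collection.

```java
import java.util.ArrayList;
import java.util.List;

public class WriteBarrierSketch {
    // Toy heap model: slots are plain ints, addresses >= NURSERY_START are
    // in the young generation (nursery), lower addresses are old space.
    static final int NURSERY_START = 1000;
    static final List<Integer> rememberedSet = new ArrayList<Integer>();

    static boolean inNursery(int addr) {
        return addr >= NURSERY_START;
    }

    // Used in place of a raw store heap[slot] = target: if an old-space slot
    // is made to point into the nursery, remember the slot so a nursery-only
    // collection can treat it as a root.
    static void writeBarrier(int[] heap, int slot, int target) {
        if (!inNursery(slot) && inNursery(target)) {
            rememberedSet.add(slot);
        }
        heap[slot] = target;
    }

    public static void main(String[] args) {
        int[] heap = new int[2000];
        writeBarrier(heap, 42, 1500);   // old -> young: remembered
        writeBarrier(heap, 50, 7);      // old -> old: not remembered
        writeBarrier(heap, 1600, 1700); // young -> young: not remembered
        System.out.println(rememberedSet.size()); // prints "1"
    }
}
```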





--
Weldon Washburn
Intel Middleware Products Division








Re: [jira] Commented: (HARMONY-40) FileChannel assotiated with FileOutputStream not closed after closing output stream

2006-01-25 Thread Rodrigo Kumpera
Aren't channels supposed to hold references to the original streams?

On 1/24/06, Paulex Yang [EMAIL PROTECTED] wrote:
 The patch does work in some cases, but it is not enough.

 First, when the channel is closed, the relevant stream (FileInputStream,
 FileOutputStream, or RandomAccessFile) also needs to be closed. So only
 adding code to FileOutputStream is not enough.

 Second, the FileOutputStream/FileInputStream will close itself in the
 finalize() method (as the Java spec says), and with your patch, the current
 implementation in Harmony will close the channel at that time, too. This
 is very dangerous, because if someone writes code like below, the fos
 may be garbage collected and closed, and the channel will also be closed, so
 that the following operations on the channel will throw unexpected exceptions.
 <code>
 .
 FileChannel channel = null;
 try {
 FileOutputStream fos = new FileOutputStream("somefile.txt", false);
 channel = fos.getChannel();
 } catch (Exception e) {
 }
 /* continue operating on channel */
 .
 </code>

 Third, the native close operation should only be executed once, so
 some synchronization mechanism on the channel and stream should be
 introduced, which should also avoid deadlock when one thread is calling
 fos.close() while the other is calling channel.close().

 As a conclusion, the close issue is yet another reason that the three
 classes in the IO package need to be refactored to be based on the same
 JNI interface as FileChannel. Pls. refer to my work on JIRA issue #42.

 Vladimir Strigun (JIRA)
  [ 
  http://issues.apache.org/jira/browse/HARMONY-40?page=comments#action_12363705
   ]
 
  Vladimir Strigun commented on HARMONY-40:
  -
 
  Forced close of the current file channel in the file output stream can be
  added (diff for current FileOutputStream):
  173a174,177
 
if (channel != null) {
channel.close();
channel = null;
}
 
 
 
  FileChannel assotiated with FileOutputStream not closed after closing 
  output stream
  ---
 
   Key: HARMONY-40
   URL: http://issues.apache.org/jira/browse/HARMONY-40
   Project: Harmony
  Type: Bug
Components: Classlib
  Reporter: Vladimir Strigun
 
 
 
  When I receive a FileChannel from a file output stream, write something to
  the stream and then close it, the channel associated with the stream is still
  open. I'll attach a unit test for the issue.
 
 
 


 --
 Paulex Yang
 China Software Development Lab
 IBM





Re: [jira] Commented: (HARMONY-40) FileChannel assotiated with FileOutputStream not closed after closing output stream

2006-01-25 Thread Rodrigo Kumpera
On 1/25/06, Paulex Yang [EMAIL PROTECTED] wrote:
 Agree, it should, but currently not in Harmony.

 Rodrigo Kumpera wrote:
  Aren't channels supposed to hold references to the original streams?
 

 --
 Paulex Yang
 China Software Development Lab
 IBM





Re: [jira] Commented: (HARMONY-40) FileChannel assotiated with FileOutputStream not closed after closing output stream

2006-01-25 Thread Rodrigo Kumpera
Isn't the idea to have a shared implementation that both FileChannel
and the java.io streams use? Can't we just use some sort of reference
counting to close it on finalize?
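A reference-counting close along those lines might look like this (a sketch under the assumption of one shared descriptor object per file; the class and method names are hypothetical, not Harmony's actual implementation). The stream and the channel each hold one reference; the native close runs exactly once, on the last release, and a finalize() racing an explicit close() is harmless:

```java
// Hypothetical shared file-descriptor wrapper: the stream and its channel
// each hold one reference; the native close runs exactly once, when the
// last holder releases, and extra releases are tolerated via the closed flag.
public class SharedFd {
    private int refCount;
    private boolean closed = false;

    public SharedFd(int initialHolders) {
        refCount = initialHolders;
    }

    public synchronized void release() {
        if (closed) return;          // finalize() may race an explicit close()
        if (--refCount == 0) {
            closed = true;
            nativeClose();           // the real JNI close would go here, once
        }
    }

    public synchronized boolean isClosed() {
        return closed;
    }

    private void nativeClose() { /* placeholder for the JNI call */ }

    public static void main(String[] args) {
        SharedFd fd = new SharedFd(2);     // held by the stream and the channel
        fd.release();                      // stream.close()
        System.out.println(fd.isClosed()); // prints "false": channel still holds it
        fd.release();                      // channel.close() or finalize()
        System.out.println(fd.isClosed()); // prints "true": native close ran once
    }
}
```

Because every state change happens inside one monitor, the fos.close()/channel.close() deadlock Paulex mentions is avoided as long as neither path holds another lock while calling release().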

On 1/25/06, Rodrigo Kumpera [EMAIL PROTECTED] wrote:
 On 1/25/06, Paulex Yang [EMAIL PROTECTED] wrote:
  Agree, it should, but currently not in Harmony.
 
  Rodrigo Kumpera wrote:
   Aren't channels supposed to hold references to the original streams?
  



Re: java.util.concurrent implementation

2006-01-23 Thread Rodrigo Kumpera
Some of the concurrent collections have code from the JDK, but those
parts can just be discarded. The pedigree can be vouched for by Doug Lea
himself; I think he is on the list.

It was the intent of the expert group to allow the reference
implementation to be used by FOSS JVMs.

Rodrigo

On 1/23/06, Davanum Srinivas [EMAIL PROTECTED] wrote:
 http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/readme?annotate=1.1

 On 1/23/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
  The question I'd have is who wrote it - is there any exposure risk?
  public domain doesn't mean clean pedigree...
 
  But yes, certainly worth investigating.  Can you look into it?
 
  Rodrigo Kumpera wrote:
   Can we import the backport of jsr-166 as the starting point for
   implementing this package? It's released as public domain, so there
   should be no license issue AFAIK.
  
   There are only a few things required to make it work, like removing
   references to com.sun.Unsafe.
  
  
 


 --
 Davanum Srinivas : http://wso2.com/blogs/



Re: Test framework

2006-01-16 Thread Rodrigo Kumpera
I think allowing tests to be fully executable in Java (i.e., on a
certified JVM) would be really tricky. Some black magic to rename all
classes would be required, and testing some core functionality would
be really hard - think synchronization and threading.

But for most classes this is perfectly doable. Just rename everything
but some core classes (Object, String and a few more final ones) to be
in the test.* package. For example, java.util.ArrayList would become
test.java.util.ArrayList. I think this could work most of the time,
and allow testing the Harmony classlib inside a JVM.

But then, the real advantage of doing this would be if we could
compare the results between a certified JVM and Harmony and spot
mismatched results. I just can't see many bugs being caught by this
approach alone.

Rodrigo


On 1/16/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 One thing that's popped up on the Test suite layout thread is the
 thought that we need to b0rk the canonical package and naming
 conventions for unit tests in order to be able to run things on the boot
 classpath of the VM.  I think this issue is important enough and
 fundamental enough to warrant its own thread.

 First, do we really need to do this?  I thought that we (Tim and I) had
 informally discussed this at ApacheCon and came to some good conclusion
 where we were able to figure out a trick.

 Second, shouldn't we think about providing a test environment in which
 we can completely control the environment  - we can test the class
 library in a container that can be run in any VM so we have full control
 over security and other issues?

 Of course, I'd like both.  If we do have the trick that we talked
 about, then we can use canonical JUnit (or TestNG) naming and package
 conventions, which I think is important.

 geir




Re: Bootstrapping the classlibrary builds

2005-12-29 Thread Rodrigo Kumpera
One thing that is not clear to me with this modularity thing is where the
SPIs/AWT peers fit in. Will the default implementations live within the
bundle that contains the API, or on their own?

BTW, as these are interfaces published by Sun/JCP, we could use Classpath's.

Rodrigo

On 12/29/05, Tim Ellison [EMAIL PROTECTED] wrote:
 Good stuff Rodrigo.

 We did a similar experiment, then went through the list and made some
 'gut feel' groupings/splits of these dependencies.

 i.e., it probably wouldn't be useful to have multiple components for,
 say, JNDI (would somebody really want an alternative implementation of
 javax.naming.event independently of the remainder of JNDI ?)  and it
 probably would be helpful to split apart, say, Swing and AWT into
 separate components and manage the dependencies between them.

 The set of components we ended up with was proposed a while ago on the
 wiki[1], but it's certainly open for debate.

 [1] http://wiki.apache.org/harmony/ClassLibrary

 Regards,
 Tim


 Rodrigo Kumpera wrote:
  Just for curiosity, I've written a small program that enumerates all
  dependency cycles among packages in Java 1.4 (counting only
  fields, methods and supertypes). This shows that for most packages
  this won't be an issue and a packaging that has no cyclic dependencies
  is possible.
 
  Given the criteria that dependencies are: fields, super class,
  interfaces and method return/exception/parameters, one could have the
  following bundles:
 
  [java.applet]
  [java.awt.color]
  [java.awt.datatransfer]
  [java.awt.im.spi]
  [java.awt.print]
  [java.math]
  [java.nio]
  [java.rmi, java.rmi.registry]
  [java.rmi.activation]
  [java.rmi.dgc]
  [java.rmi.server]
  [java.security.acl]
  [java.sql]
  [java.io, java.lang, java.lang.ref, java.lang.reflect, java.net,
  java.nio.channels, java.nio.channels.spi, java.nio.charset,
  java.nio.charset.spi, java.security, java.security.cert,
  java.security.interfaces, java.security.spec, java.text, java.util,
  java.util.jar, javax.security.auth.x500]
  [java.util.logging]
  [java.util.prefs]
  [java.util.regex]
  [java.util.zip]
  [javax.crypto]
  [javax.crypto.interfaces]
  [javax.crypto.spec]
  [javax.imageio, javax.imageio.event, javax.imageio.metadata, 
  javax.imageio.spi]
  [javax.imageio.plugins.jpeg]
  [javax.imageio.stream]
  [javax.naming]
  [javax.naming.directory]
  [javax.naming.event]
  [javax.naming.ldap]
  [javax.naming.spi]
  [javax.net]
  [javax.net.ssl]
  [javax.print, javax.print.event]
  [javax.print.attribute]
  [javax.print.attribute.standard]
  [javax.rmi]
  [javax.rmi.CORBA]
  [javax.security.auth]
  [javax.security.auth.callback]
  [javax.security.auth.kerberos]
  [javax.security.auth.login]
  [javax.security.auth.spi]
  [javax.security.cert]
  [javax.sound.midi, javax.sound.midi.spi]
  [javax.sound.sampled, javax.sound.sampled.spi]
  [javax.sql]
  [java.awt, java.awt.dnd, java.awt.dnd.peer, java.awt.event,
  java.awt.font, java.awt.geom, java.awt.im, java.awt.image,
  java.awt.image.renderable, java.awt.peer, java.beans,
  java.beans.beancontext, javax.accessibility, javax.swing,
  javax.swing.border, javax.swing.colorchooser, javax.swing.event,
  javax.swing.filechooser, javax.swing.plaf, javax.swing.plaf.basic,
  javax.swing.table, javax.swing.text, javax.swing.tree,
  javax.swing.undo]
  [javax.swing.plaf.metal]
  [javax.swing.plaf.multi]
  [javax.swing.text.html]
  [javax.swing.text.html.parser]
  [javax.swing.text.rtf]
  [javax.transaction]
  [javax.transaction.xa]
  [javax.xml.parsers]
  [javax.xml.transform]
  [javax.xml.transform.dom]
  [javax.xml.transform.sax]
  [javax.xml.transform.stream]
 
 From that we can see that most of the GUI stuff should live in the
  same package and the minimum set of classes for java.lang is not that
  huge.
 
  Rodrigo
 
  On 12/28/05, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 
 
 Geir Magnusson Jr wrote:
 
 
 Tim Ellison wrote:
 
 
 Geir Magnusson Jr wrote:
 
 
 Tim Ellison wrote:
 
 
 Sure, if you don't want the runtime effects of OSGi then you have
 flexibility to package the classes into any shape, including an rt.jar.
 However, if we want to support runtime modularity including component
 versioning etc. then we will have to have a number of discrete bundles.
 If OSGi has the ability to put multiple bundles into a single JAR ...
 
 
 I think you are missing my point.  Sorry.  What I'm saying/asking is
 about OSGi being one [of many possible] delivery packagings of the
 class libraries.
 
 
 
 Can you think of any other runtime modularity systems that we should
 consider supporting?
 
 
 Sadly rt.jar because I hope that other VMs will support our VM/lib
 interface, and thus our classlib, and maybe not yet do OSGi.
 
 
 Clearly I didn't read Tim's question.  Or if I did, I didn't answer it.
  I don't consider rt.jar a runtime modularity system.  I was
 just thinking of packagings of the library...
 
 geir
 
 
 

 --

 Tim Ellison ([EMAIL PROTECTED])
 IBM Java technology centre

Re: Bootstrapping the classlibrary builds

2005-12-29 Thread Rodrigo Kumpera
The source is attached. I've made some changes to how dependencies are
computed, so the result is a bit different.


On 12/28/05, Stefano Mazzocchi [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:
  Just for curiosity, I've written a small program that enumerates all
  dependency cycles among packages in Java 1.4 (counting only
  fields, methods and supertypes). This shows that for most packages
  this won't be an issue and a packaging that has no cyclic dependencies
  is possible.
 

 Nice! awesome job!

 (is the source-code of this program available?)

 --
 Stefano.


import java.io.File;
import java.io.IOException;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;
import java.util.zip.ZipEntry;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

public class Deps {

static class Group implements Comparable {
String name;
Set packages = new TreeSet();
Set deps = new TreeSet();

public Group(Package p) {
this.name = mkGroup();
add(p);
}

public int compareTo(Object o) {
return this.name.compareTo(((Group) o).name);
}

public void remark(Group from) {
for (Iterator i = from.packages.iterator(); i.hasNext();)
add((Package) i.next());
groups.remove(from);
}

public void add(Package p) {
p.group = this;
this.packages.add(p);
}

public String toString() {
StringBuffer buff = new StringBuffer();
buff.append("group [").append(this.name).append("] ")
.append(this.packages).append(" - [ ");
for (Iterator i = this.deps.iterator(); i.hasNext(); ) {
Group g = (Group) i.next();
buff.append(g.name);
if (i.hasNext())
buff.append(", ");
}
buff.append(" ]");
return buff.toString();
}

public void computeDeps() {
this.deps.clear();
for (Iterator i = this.packages.iterator(); i.hasNext();) {
final Package p = (Package) i.next();
for (Iterator j = p.deps.iterator(); j.hasNext();) {
Package dep = (Package) pkg.get(j.next());
if (dep.group != null

Re: JVM spec interpretation questions

2005-12-28 Thread Rodrigo Kumpera
On 12/28/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 All,

 For you gurus out there, I have some questions about
 interpretation of the JVM 2.0 spec.  I am trying to clarify
 certain operational details of array and exception handling.
 I would appreciate the collective wisdom of you lurkers out
 there on The List who know about these things.  Thanks in
 advance for your advice:


 Question 1:
 -

 JVM 2.0 spec, section 2.15 states:

 Arrays are objects, are dynamically created, and may
 be assigned to variables of type Object (§2.4.7). All
 methods on arrays are inherited from class Object
 except the clone method, which arrays override. All
 arrays implement the interfaces Cloneable and
 java.io.Serializable.

 Based on my experience, I find two possible interpretations
 of this statement.  Using an example,

 public class X {};
 public class Y extends X {};

 methodZ() { Y yArray[3]; }

 Does this mean that,

 (A) Array objects of type Y (such as yArray[3]) will inherit everything
 from X, which in turn inherits from java.lang.Object?  And does
 Y need to implement a 'Y.clone()' method, or perhaps is an 'X.clone()'
 method applicable or appropriate?

 (B) Or does this mean that Y effectively inherits directly and ONLY
 from java.lang.Object, and must also implement a 'Y.clone()'
 method?  Or possibly skip 'Y.clone()' if it never intended to be
 used by the code?

 I am debating back and forth as to which way is correct.  Is
 it (A) or (B) or something else?  What have I missed here?


Arrays are classes that implement both Serializable and Cloneable, simple
as that. The inheritance tree for arrays mimics that of the component
types. Take ArrayList as an example:

ArrayList extends AbstractList, which extends AbstractCollection, which
extends Object.
ArrayList implements List, RandomAccess, Cloneable and Serializable.

This  means:

ArrayList[] instanceof AbstractList[]
AbstractList[] instanceof AbstractCollection[]
AbstractCollection[] instanceof Object[]

ArrayList[] instanceof List[]
ArrayList[] instanceof RandomAccess[]
ArrayList[] instanceof Cloneable[]
ArrayList[] instanceof Serializable[]

And this is why there is the ArrayStoreException:

Collection[] col = new ArrayList[1];
col[0] = new HashSet(); //throws ArrayStoreException
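The covariance rules above can be checked directly: because a Collection[] variable may actually hold an ArrayList[], the element type has to be re-checked at store time, which is exactly what ArrayStoreException reports:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

public class ArrayCovariance {
    public static void main(String[] args) {
        // Array types are covariant: ArrayList[] is assignable to Collection[],
        // and instanceof follows the component type's whole hierarchy.
        Collection[] col = new ArrayList[1];
        System.out.println(col instanceof List[]);      // prints "true"
        System.out.println(col instanceof Cloneable[]); // prints "true"

        // The static type allows this store, so the check happens at runtime:
        // the actual component type is ArrayList, and HashSet is not one.
        try {
            col[0] = new HashSet();
            System.out.println("stored");
        } catch (ArrayStoreException e) {
            System.out.println("ArrayStoreException"); // this line prints
        }
    }
}
```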

 Question 2:
 -


 JVM 2.0 spec, section 2.16 states:

 Every exception is represented by an instance of the class
 Throwable or one of its subclasses; such an object can be
 used to carry information from the point at which an exception
 occurs to the handler that catches it

 JVM 2.0 spec, section 2.16.4 states:

 The class Exception is the superclass of all the standard
 exceptions that ordinary programs may wish to recover
 from

 The class Error and its standard subclasses are exceptions
 from which ordinary programs are not ordinarily expected
 to recover. The class Error is a separate subclass of Throwable,
 distinct from Exception in the class hierarchy, in order to allow
 programs to use the idiom

 } catch (Exception e) {

 to catch all exceptions from which recovery may be possible
 without catching errors from which recovery is typically not
 possible.

 Seeing as any Java program can throw a java.lang.Throwable or
 a subclass, in addition to its subclasses java.lang.Exception and
 java.lang.Error, I need some clarification of the spec.  An 'Exception'
 is something that a program should recover from, that is, once it
 is finished processing, the program should continue with its 'finally()'
 clause, if any, and carry on.  On the other hand, an 'Error' is something
 where 'ordinary programs are not ordinarily expected to recover' from
 and which the handler is typically not defined in the application.
 I have some questions here:

 (A)  If an 'Error' is caught by an application handler and does not
 exit() the program or some other drastic action, I presume that
 the application just keeps running.  But if it is not caught, meaning
 that none of the methods in the stack have a handler, then it must
 be handled by the handler provided by the JVM.  At this point, does
 the thread quit, just like an 'Exception' does, or should the JVM
 shut down completely?

 (B)  In the API spec 1.5.0, there are only two known direct subclasses
 of java.lang.Throwable.  Is there any reason at all to EVER expect an
 application to define some class that is a direct subclass of 
 java.lang.Throwable
 that is, not also a subclass of 'Exception' or 'Error'?  If so, then I 
 have two more
 questions, which seems to also arise in java.lang.Object anyway:

 (C)  What about a 'Throwable' that is neither an 'Exception' or an
 'Error'?  This situation arises in 'java.lang.Object.finalize() throws 
 Throwable {}'.
 How is this handled?  From the following statements in the API doc:

 If an uncaught exception is thrown 

Re: ASF has been shipping GPL exception stuff for years and still is ;)

2005-12-05 Thread Rodrigo Kumpera
I wonder whether, if the classpath vm interface classes were public
domain, that issue would be solved. After all, there isn't much value,
I believe, in these classes alone.



On 12/5/05, Davanum Srinivas [EMAIL PROTECTED] wrote:
 Yes, sublicensing. I believe the terms are not clear on how third
 parties can sublicense a composite of ASF-licensed works and GPL
 licensed works. IANAL and I don't understand it fully. But I was told
 that this is a problem and that the problem is mitigated by the fact
 that Classpath is under GPL+Exception and a firewall can be set up by
 standard interfaces. That's why the VM Interface stuff is important.

 But even then, there is no guarantee that people will want to do it
 because they can't make a closed fork if they want to for whatever
 reason. (Which ASL allows and if people wanted to do that, they would
 already be participating in one of the existing VM's in the classpath
 galaxy).  Yes, I do want to enable people to download and use
 Harmony+Classpath together but in my mind that cannot be the only
 choice.

 thanks,
 dims

 On 12/4/05, Dalibor Topic [EMAIL PROTECTED] wrote:
  On Sun, Dec 04, 2005 at 02:13:30PM -0500, Geir Magnusson Jr. wrote:
  
   On Dec 4, 2005, at 12:38 PM, Anthony Green wrote:
  
   On Sun, 2005-12-04 at 11:14 -0500, Geir Magnusson Jr. wrote:
   That said, I think that to be fair, we need to distinguish between
   "using" in the sense of what GCC is doing - a tool outside the scope
   of effort of the project enabling some behavior in a standard and
   non-
   intrusive way (just like we don't care about the license of the OS we
   run on), and "using" in the sense of developers of a project making a
   conscious decision to design and implement software with a
   dependency.
   
   This is wrong thinking.  You aren't simply using the libgcc
   routines,
   as you would OS resources.  You are linking your application to the
   libgcc library and redistributing the resulting combined binary.
   This
   is precisely what the license talks about and enables.
  
   Ok - while it's not exactly the same, the fundamental point I was
   trying to make is sound, I think, in that in writing my program, I am
   not at all thinking "hey, I'll use stuff from libgcc".  I'm just
   writing a C program.  After that, compiling and creating the
   executable is a second independent step - the receiver of the
   software has no burden to switch compilers wrt libgcc.
 
  He is talking about the binary, you're talking about the source. Reread
  what he said with that in mind, and it should become obvious that you
  are both right, since you are talking past him ;) But with respect to
  ASF's (legally fine, just apparently ruffling a few feathers among less
  C-aware members) usage of GPL+linked exception licensed code from gcc,
  Anthony is correct, there is no doubt about it. Check out the gcc
  changelogs, and you will find that he knows very well what he's
  talking about with respect to gcc.
 
  
   The license needs to allow this,  or using it would be a non-starter.
  
   
   Whether or not you make a distinction between this kind of GPL
   +exception
   usage and libstdc++ or GNU Classpath usage hardly matters, since the
   licenses themselves don't make a distinction.
  
   That would only be true if there is a standard interface / component
   model for the classlibrary so that there can be competing
   implementations and users have the ability to switch from one
   implementation to another without significant burden in the event
   they wish to make changes, additions or enhancements, and have the
   freedom to choose what they do with their work.
  
   That's why I think that the our componentization efforts are so
   important.
 
  You seem to have narrowly missed what Anthony said, and went on
  a defensive tangent instead ;)
 
  You don't have to defend the usage of GPL+linking
  exception licensed code by the Apache Software Foundation, all of us
  non-Luddites here agree that the GPL+linking exception works as it
  should and the binaries shipped by the ASF are fine.
 
  This stuff is easy, and pretty obvious to anyone with a disassembler
  and/or insight about C compilers. So let's have the same rules that
  allow httpd, ASF's flagship product after all, to ship binaries
  using/incorporating GPL+linking exception licensed code be officially
  ratified, as they'd allow us to do the same.
 
  Is there something left that would speak against using GNU Classpath
  in Harmony, after we have established as a fact that the ASF is indeed
  happily distributing code using code under the same sort of licenses
  and has been doing so for years?
 
  If not, then let's do it.
 
  cheers,
  dalibor topic
 
  
   geir
  
   --
   Geir Magnusson Jr  +1-203-665-6437
   [EMAIL PROTECTED]
  
  
  
  
   --
   Geir Magnusson Jr  +1-203-665-6437
   [EMAIL PROTECTED]
  
  
 


 --
 Davanum Srinivas : 

Re: Call for Contributions (was Re: 4 Months and...)

2005-11-23 Thread Rodrigo Kumpera
Hi David,

I haven't dropped the idea; my current free time is really small and
there are a few problems that I'm facing right now:

-A Java JVM must have a JITer. I have one working for windows/x86,
except for synchronization and String loading. I haven't even
thought about multi-threading issues.

-Bootstrapping is really hard. I've been studying how both joeq and
JikesRVM work - it's troublesome and fragile. Based on their mailing
list archives, I've dropped their approach of mmapping an image and
decided on a more conventional approach. Right now I'm generating a
COFF object file and linking it as a library; the missing parts are
external method linking and making the JITer calling-convention aware
(Java method invocations are different from C method invocations).

-I want to make the JVM debuggable as a Java application; this
seems to be easier than generating enough debug information to make
gdb happy. Maybe someone with DWARF-2 or COFF knowledge can say the
opposite.

-The JVM needs magic types for raw memory access. I've modeled them
like MMTk's but haven't implemented the magic code generation.

I'm not sure that releasing code that performs just some random parts
is worth the trouble.
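The magic types mentioned above can be sketched roughly as follows. This is a hypothetical Address-like class in the spirit of org.vmmagic.unboxed, not code from MMTk or any real VM: the `long` field and the method bodies are placeholders that a cooperating JIT would recognize and replace with raw machine instructions.

```java
// Hypothetical unboxed magic type, loosely modeled on
// org.vmmagic.unboxed.Address. A cooperating JIT treats instances as
// bare machine words and intrinsifies the methods below.
public final class Address {
    private final long value;  // placeholder representation for this sketch

    private Address(long value) { this.value = value; }

    public static Address fromLong(long v) { return new Address(v); }

    // Pointer arithmetic: the JIT would compile this down to an add.
    public Address plus(int offset) { return new Address(value + offset); }

    public long toLong() { return value; }

    // A real VM would replace this call with a load instruction;
    // here it is deliberately unimplemented.
    public int loadInt() {
        throw new UnsupportedOperationException("must be intrinsified by the JIT");
    }
}
```

Running such classes on a host JVM (for bootstrapping) only works because every magic method either has a fallback body or is never reached before the JIT takes over.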

[]'s
Rodrigo


On 11/21/05, David Tanzer [EMAIL PROTECTED] wrote:
 Hi Rodrigo,

 You wrote the email I'm answering to some time ago on harmony-dev. IMHO
 it would be cool if we had a JVM in Java to compare against BootJVM and
 JCHEVM. Are you still willing to contribute your JVM?

 I reply to you directly because I'm not sure if you still want to
 contribute your JVM. You can also answer to this mail on harmony-dev if
 you want your answer to be public.

 Regards, David.

 On Tue, 2005-09-20 at 11:39 -0300, Rodrigo Kumpera wrote:
   I've written a pet JVM in Java. It includes a very simple JITer, no GC
   (but it is starting to use MMTk magic, so it should be doable to use it),
   no self-hosting and no support for native code. The code has never
   left my machine but I'm willing to donate it if that is desirable.
 
 
  []'s
  Rodrigo
 
 
  On 9/20/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
  
   On Sep 20, 2005, at 8:52 AM, [EMAIL PROTECTED] wrote:
  
This is not likely to actually attract code.  Opening up SVN to
 committership would.  You've described a reverse of how most
 projects work, if you will, such that the barrier is at the initial
 commit rather than lazy veto/etc.
  
   Most projects give committership to people that have offered code and
   patches, don't they?
  
   geir
  
   
-Andy
   
Geir Magnusson Jr. wrote:
   
I'd like to restate that we are always looking for code
contributions.  I do know of some in preparation, but it should
be  clear that if you have anything to offer (hey, Dan!) please
post a  note to dev list to discuss. :)
geir
On Sep 19, 2005, at 5:35 PM, [EMAIL PROTECTED] wrote:
   
Four months and no code.  Open up the repository and let the
willing start committing.  The discussion has gotten so verbose
that there are already people publishing edited digests.  Code
will  reduce the discussion :-)
   
-Andy
   
   
   
   
   
   
  
   --
   Geir Magnusson Jr  +1-203-665-6437
   [EMAIL PROTECTED]
  
  
  
 --
 David Tanzer, Haghofstr. 29, A-3352 St. Peter/Au, Austria/Europe
 http://deltalabs.at -- http://dev.guglhupf.net -- http://guglhupf.net
 My PGP Public Key: http://guglhupf.net/david/david.asc
 --
 Real programmers don't draw flowcharts.  Flowcharts are, after all, the
 illiterate's form of documentation.  Cavemen drew flowcharts; look how
 much good it did them.





Re: half-baked idea? j2me

2005-11-01 Thread Rodrigo Kumpera
On 11/1/05, Robin Garner [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:

 AFAIK IKVM, sablevm and jamvm all run on portable devices.
 
  Developing a j2me jvm is not as easy as it seems; first, the
 footprint and execution performance must be really optimized, so
 expect a LOT of assembly coding.
 
 
 Back to the language wars again :)  This does not necessarily follow.
 Try googling for the 'squawk' VM - they had a poster at OOPSLA last
  week.  This is a java-in-java virtual machine targeted at embedded
 devices.  The core VM runs in 80KB of memory.  Device drivers are all
 written in Java.


Robin,

With a java-in-java VM, even if you don't write assembly directly,
you still need to generate machine code from Java anyway, and that
code will look a lot like asm (the JikesRVM baseline JIT, for example).
With C, by contrast, you can get away with using just an interpreter.
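Why generated code "looks a lot like asm": a baseline JIT written in Java typically appends raw opcode bytes to a buffer, one instruction at a time. A minimal sketch with two x86 encodings; the class and method names are illustrative, not JikesRVM code:

```java
import java.util.Arrays;

// Sketch of a code buffer as used by a java-in-java baseline JIT:
// method bodies are built by appending raw x86 opcode bytes.
public class CodeBuffer {
    private byte[] code = new byte[16];
    private int pos;

    private void emit(int b) {
        if (pos == code.length) code = Arrays.copyOf(code, code.length * 2);
        code[pos++] = (byte) b;
    }

    // mov eax, imm32  (x86 encoding: B8 followed by a little-endian imm32)
    public void movEaxImm(int imm) {
        emit(0xB8);
        emit(imm); emit(imm >> 8); emit(imm >> 16); emit(imm >> 24);
    }

    // ret  (x86 encoding: C3)
    public void ret() { emit(0xC3); }

    public byte[] toByteArray() { return Arrays.copyOf(code, pos); }
}
```

Every bytecode handler in such a JIT reads like an assembly listing spelled out as emit() calls, which is the point being made above.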


Re: half-baked idea? j2me

2005-11-01 Thread Rodrigo Kumpera
On 11/1/05, Robin Garner [EMAIL PROTECTED] wrote:
  On 11/1/05, Robin Garner [EMAIL PROTECTED] wrote:
  Rodrigo Kumpera wrote:
 
  AFAIK IKVM, sablevm and jamvm all run on portable devices.
  
   Developing a j2me jvm is not as easy as it seems; first, the
  footprint and execution performance must be really optimized, so
  expect a LOT of assembly coding.
  
  
  Back to the language wars again :)  This does not necessarily follow.
  Try googling for the 'squawk' VM - they had a poster at OOPSLA last
   week.  This is a java-in-java virtual machine targeted at embedded
  devices.  The core VM runs in 80KB of memory.  Device drivers are all
  written in Java.
 
 
  Robin,
 
   With a java-in-java VM, even if you don't write assembly directly,
   you still need to generate machine code from Java anyway, and that
   code will look a lot like asm (the JikesRVM baseline JIT, for example).
   With C, by contrast, you can get away with using just an interpreter.

 My mistake, obviously.  When you said "performance must be really
 optimized, so expect a LOT of assembly coding", I assumed you were saying
 that large chunks of the VM would need to be written in assembler in order
 to get adequate performance.

 So what _was_ the point you were making?

 cheers



I was just trying to say that a decent j2me VM is not as simple as
David suggested, not that C or Java would be more suited to implement
it. As a matter of fact, I think that java-in-java VMs can be as good
as C/C++ based JVMs, or better.

But one thing is hard to deny: a simple JVM, like BootJVM, is a lot
easier to write in C than in Java (not using an AOT compiler). And
that was my point: C/C++ seems to be the easy path to start with.


Re: half-baked idea? j2me

2005-10-31 Thread Rodrigo Kumpera
AFAIK IKVM, sablevm and jamvm all run on portable devices.

Developing a j2me jvm is not as easy as it seems; first, the
footprint and execution performance must be really optimized, so
expect a LOT of assembly coding.

After that, a jvm that runs on no device is pretty much useless, so
we would need to test on the many devices we would support. Developing
for a smartphone platform, like Symbian, BREW or Nokia Series 60
(Symbian based) would make it easier, but not easy.

But I cannot say that the idea is bad, as the j2me implementations out
there are really bad compared to the j2se offerings. Maybe with a
FOSS jvm that is pretty solid, many vendors would stop bundling buggy
software, and more convergence of optional capabilities and bugs would
happen.

I'm just not sure that such a project would fit the Harmony proposal,
as the idea is to implement a j2se compatible jvm.



On 10/31/05, David N. Welton [EMAIL PROTECTED] wrote:
 Hello,

 I'm interested in having a freely available Java system, which seemed as
 good a reason as any to start lurking on this list lately.

 I've been mulling over what I've seen in the archives here, what I know
 of the free java world, free software, communities, marketing, and
 various and sundry other things and an idea popped into my head.
 Perhaps, like many others, it's a dumb one, but I thought I'd lob it out
 there just the same:

 Why not start out with j2me?

 *) It'd be breaking new ground - something no one has done before in the
 'free' world (to my knowledge at least).  That, to me at least, would
 increase the fun quotient.

 *) It's small, which would make it easier to get running - or at least
 easier to make it complete.

 *) It's simple.  To my knowledge, in order to stay small, most
 implementations are interpreters (possibly assisted in hardware through
 things like Jazelle).

 *) Modulo space saving optimizations, it could then be used as a launch
 pad for bigger and better things should we so desire.

 *) Perhaps there are some financial incentives for corporations to get
 involved.  J2SE is free beer, so the impetus to work on a free version
 mostly comes from a desire to have an open source Java available,
 without the practical incentives of a free beer system that other open
 source projects have had working in their favor.

 Just a thought...
 --
 David N. Welton
 - http://www.dedasys.com/davidw/

 Linux, Open Source Consulting
 - http://www.dedasys.com/



Re: MSVC support, was: Compilers and configuration tools

2005-10-26 Thread Rodrigo Kumpera
+1 for makefiles

On 10/26/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 please, just check them in... don't worry too much about the polish :)

 geir

 On Oct 25, 2005, at 11:42 PM, [EMAIL PROTECTED] wrote:

 
  All,
 
  From his posting below:
 
 
   it will ensure that the project sticks to writing portable
   code as far as possible.
 
 
 
- As for the logistical problems, I believe they will be
   kept to a minimum if we develop keeping multiple compilers
   in mind from the beginning itself.
 
 
  Tanuj has several good points about multiple compiler
  support.  As to the numerous viewpoints being expressed,
  I think we are probably in a bit of a wait and see mode
  as everyone weighs in and as we decide what direction to
  move in.
 
  However, my main purpose in this posting is that several
  people have expressed interest in using a standard build
  tool such as GNU make or Ant or the like.  I have written
  up some small Makefiles for BootJVM that will do full and
  incremental compilations and produce the same exact results
  as the current /bin/sh build scripts.  They were fairly
  simple.  One advantage is that they could be adapted to
  handle multiple compilation environments when and if the
  need arose without the complexity of modifying the current
  scripts  (the long-term price of short-term expediency).
   This would ease the project more into a maintainable position
  before we all got used to using the current scripts.
  (Sorry I didn't think to put the effort into this in
  the first place, as I deemed getting the code base done
  first the more important item.)
 
  Would The List be interested in me replacing these simple
  shell scripts (namely, '*/*.sh', being 'build.sh' and
  'clean.sh' and 'common.sh') with these simple but _much_
  smarter Makefiles (which run GNU make)?  I'd be glad to
  polish up these files and stick them out on SVN if folks
  are interested.  I am pretty sure that Rodrigo Kumpera and
  Robin Garner would be happy if I did so...  ;-)
 
 
  Dan Lydick
 
 
 
  [Original Message]
  From: Tanuj Mathur [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Date: 10/25/05 9:29:49 AM
  Subject: Re: MSVC support, was: Compilers and configuration tools
 
  The Boost project [http://www.boost.org] could probably serve as a
  knowledge source on how difficult it is to support multiple compilers
  for the same codebase.
  For example, this document
 http://www.boost.org/libs/config/config.htm
  describes the configuration options and build process they use to
  support the various compilers.
  Some points I'd like to make:
 -  I believe multiple compiler support is desirable as we look to
  support multiple platforms. First of all, it will ensure that the
  project sticks to writing portable code as far as possible. Secondly,
  it will give users an option to optimize the compiled code in the
  best
  way possible for their platform. For example, while GCC is an
  excellent multiplatform compiler, at least on Windows it is certainly
   not the best optimizing compiler available, and people would
  appreciate it if the project provided them the option of using Intel
  or MSVC to produce a better optimized JVM.
- As for the logistical problems, I believe they will be kept to a
  minimum if we develop keeping multiple compilers in mind from the
  beginning itself. Adding compiler support after the project has a
  sizeable existing codebase would be quite painful.
 
  As Boost shows, multi compiler support is doable with some effort.
  Anyone out there with real life experiences they care to contribute?
 
  - tanuj
 
 
  ...snip...
 
 
 
 

 --
 Geir Magnusson Jr  +1-203-665-6437
 [EMAIL PROTECTED]





Re: MSVC support, was: Compilers and configuration tools

2005-10-24 Thread Rodrigo Kumpera
Supporting many compilers has a few problems; the three I can think
of right now are assembly syntax (Intel vs. AT&T), compiler extensions
(gcc's computed goto can speed up interpreters a lot) and C++ library
nuances (if C++ is used).



On 10/24/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Tanuj,

 Welcome!  Thank you for your observations about compiler support.
 I tried to write my code so as to be as independent of a particular
 operating system and a particular compiler as possible, so I hope
 that compiling for MSVC is a simple matter.  As far as whether we
 should move in that direction, I have stated an opinion that it would
 provide easy access to a large base of developers who are familiar
 with that compiler and its IDE.  Others have stated concerns about
 supporting multiple compilers being a potential source of logistical
 problems.

 I'd like to ask The List for more opinions for weighing this issue.

  (1) Is MSVC support a good move?  Is it necessary?  Is it a problem?
 Is it prudent?

 (2) If we look into supporting several compilers more generally,
 do we widen our horizons as to what platforms we can run Harmony on?
 Do we create logistical problems by doing so?

 How about you folks from projects that have done this sort of thing
 in the past?  What do you say?

  My opinion-- It is rather unusual for me that we should be here
 because _my_ experience has typically been the inverse:  Support
 a single compiler on multiple platforms per architectural
 requirements, not support multiple compilers on one platform.
 This is why I would ask for the collective wisdom of The List.

 Tanuj, thanks for your interest in MSVC support.  Let's see what
 people say concerning strategic issues of supporting MSVC.


 Dan Lydick


  [Original Message]
  From: Tanuj Mathur [EMAIL PROTECTED]
  To: Apache Harmony Bootstrap JVM [EMAIL PROTECTED];
 harmony-dev@incubator.apache.org
  Date: 10/24/05 4:44:32 AM
  Subject: Re: Compilers and configuration tools
 
  Hi,

I'd like to help out with supporting the MSVC compiler on Windows.

  I'm tied up with work this week, but can take a look at the task from

  next Monday.

Geir, regarding your concerns about MSVC's commercial nature being a

  barrier to entry, I am sure that wouldn't be a problem, as the MSVC

  optimizing compiler is available as a free download from Microsoft's

  website:

 
  http://www.microsoft.com/downloads/details.aspx?FamilyID=272be09d-40bb-49fd-9cb0-4bfa122fa91b&displaylang=en


It is only the actual IDE that is commercial, with the Express

  Editions estimated to cost $49 per copy (although the betas are free,

   as Davanum pointed out).

It would probably be wise to focus most of the group's initial

  efforts on maintaining GCC support, while a few interested people can

  work on maintaining  support for other compilers. I believe that the

  feedback from the work done on adding compiler compatibility would be

  of easier to incorporate if we start early,with the smaller/younger

  code base, instead of waiting till later.

 

  - tanuj

 

 

  On 10/22/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:

  

   I'm with Geir on his comments, but evaluating MSVC

   I think is a good idea because there are so many

    folks who use it-- or is it?  Rodrigo's comments about

   confusion with multiple compiler support make a

   compelling argument about going with _one_

   compiler-- and look at the minor diffs we have

   already experienced!  Rodrigo needs '__int64' on

    his Linux box, and Robin is arguing with finding

   the correct 'thread.h' (apparently), and I had no

   problems.  All of us are using GCC.  What does

   this tell us?  The less we deal with mechanical

   issues like compiler invocations, the more real

   work we get done.

  

   Bottom line:  Should we just declare one compiler

   for now and branch out later, once we have all of

   our porting done?

  

   Next observation:  There has been an offer of help

   with 'autotools' and some concern about that tool.

   I've seen GNU autoconf work (part of autotools?)

   nicely, and I'm interested in exploring this avenue

   further.

  

   Dan Lydick

  

  

   -Original Message-

   From: Davanum Srinivas [EMAIL PROTECTED]

   Sent: Oct 21, 2005 10:31 AM

   To: harmony-dev@incubator.apache.org

   Subject: Re: Small problems building under cygwin

  

   I believe Express versions are available for download -

   http://lab.msdn.microsoft.com/express/visualc/default.aspx

  

   -- dims

  

   On 10/21/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:

I'd like to be sure that we don't have a barrier to entry by having

to go get commercial software to  build the project - by this I mean

a MSVC requirement.  I'm happy if windows users can use MSVC if they

want - i.e. if someone supports it - but it can't be the only option.

   

geir

   

   ...snip...

  

  

  

  

  

   Dan 

Re: Small problems building under cygwin

2005-10-21 Thread Rodrigo Kumpera
Dan,

To generate a binary that doesn't depend on the cygwin shared lib, I
use the following combination of flags: -mno-cygwin
-Wl,--add-stdcall-alias. The first one is pretty obvious; the second
is the result of googling for a working solution, as I could not make
it work without this flag.

I'll look in my cygwin installation for the header you mentioned.

One thing I'm sure of is that if we use C++, mixing compilers on the
same platform is a huge can of worms. There is no C++ ABI for Windows,
and on Linux there are always some corner cases.


On 10/20/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:

 Rodrigo,

 An eloquent expression of the sentiments of
 many experienced developers!  I selected GCC just
 for this reason, it is ubiquitously used on many
 platforms.  Can we use code compiled with CygWin's
 GCC compiler on a native Windows platform?  I've
 done Win32 apps with GCC and MSVC both, but I've
 not tried a mix and match between CygWin and
 Windows with MSVC and GCC.  Comments?

 Anyone else have some experience with this issue?

 I still think we should see what everyone thinks
 about MSVC in particular.

 Dan Lydick


 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 20, 2005 12:40 PM
 To: harmony-dev@incubator.apache.org
 Subject: Re: Small problems building under cygwin

 Dan,

 Supporting multiple OS and hardware configurations is going to be a
 PITA; adding compilers to this mix might be overkill. It's true that
 many specialized compilers generate better code than gcc for their
 platform, e.g. ICC, but does that justify the extra effort?

 I mean, there is a LOT of stuff we'll need to handle to support many
 compilers: libraries have different performance problems and bugs;
 compilers have different extensions, standards compliance, assembly
 syntax and bugs. Assembly, for one, is going to be a big issue if we
 start using native threads and need to use memory barriers; we will
 have the exact same x86 code in AT&T and Intel styles.

 It's doable, but will require a LOT of effort. Anyway, I don't see
 much harm in requiring non-linux developers to have installed the gcc
 toolchain and a bourne shell interpreter; that's a lot less than many
 complex projects require for the build environment.

 But even then, I'm biased on this subject, as I cannot survive a
 windows machine without cygwin and don't care much for anything else
 but linux.

 Having said that, I think converting build.sh to a .bat script is not
 necessary, except maybe as a subset that supports only win32/64 on
 MSVC.



 On 10/20/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 
  Rodrigo,
 
  Thanks for your help with these items.  I think that
  it should be a simple matter to have 'config.sh' set
  a 'win32' path.  In fact, there should probably be
  a map function for that include path so that each
  configuration can set that subdirectory name to
  whatever Sun declares it to be for that platform
  instead of depending on the OS platform name.
 
  The '__int64' issue is an interesting one!  That's
  why we're trying out all these porting things.  To
  me, the solution depends partly on a matter of
  build policy, namely, which compilers do we use?
  I think that there is a case to be made for supporting
  MSVC in addition to GCC since it has a large installed
  base, and a Windows version of the build scripts
  should be able to support both.  I suggest that we
  could have the compiler as one of the configuration
  options in 'config.sh' for Windows and CygWin, also
  for the Windows .BAT file equivalent.  What do you
  think?
 
 
  Dan Lydick
 
 
  -Original Message-
  From: Rodrigo Kumpera [EMAIL PROTECTED]
  Sent: Oct 19, 2005 5:42 PM
  To: harmony-dev harmony-dev@incubator.apache.org
  Subject: Small problems building under cygwin
 
  I've found a small issue while building under cygwin.
 
  I'm using j2sdk 1.4 and gcc 3.4.4 (cygwin). The problems are when
  building the jni stuff.
 
  First, gcc's include path contained j2sdk\include\cygwin, but it
  should be j2sdk\include\win32.
 
  Second, when building, the included file jni_md.h breaks everything,
  as it defines jlong as __int64 and not long long.
 
  Fixing both is pretty easy, either edit config/config_opts_always.gcc
  or rename the directory from win32 to cygwin.
 
  For the second, you can either edit jni_md.h and change __int64 to
  long long, or add a define directive, or something like that, in
  config/config_opts_always.gcc.
 
 
  I'm not sure what would be the best way to fix this in build.sh, as
  the first issue is related to the build environment and the second to
  incompatible compilers (__int64 works on MSVC and ICC but not gcc).
 
  []s
  Rodrigo
 
 
 
 
  Dan Lydick
 




 Dan Lydick



Re: Some questions about the architecture

2005-10-21 Thread Rodrigo Kumpera
On 10/21/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:

 Comments below.

 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 20, 2005 1:49 PM
 To: harmony-dev@incubator.apache.org
 Subject: Re: Some questions about the architecture

 ...snip...
 
 
  By IP I mean Instruction Pointer, the EIP register on x86 for
  example. What I meant was something like this:
  void throw_exception(jobject_t *ex) {
      long *ip = (long *)*((long *)ex - 1); /* the return address is after the arguments */
      long *sp = (long *)*((long *)ex - 2); /* the old frame pointer is after the return address */
      jclass_t *cl = ex->vtable->class_obj;

      printf("obj 0x%x ip 0x%x sp 0x%x\n", ex, ip, sp);

      printf("--\n");
      /* this code performs stack unwinding; it misses synchronized methods */
      while (isNotThreadBaseFunction(ip)) {
          printf("trace element ip 0x%x sp 0x%x\n", ip, sp);
          catch_info_t *info = find_catch_info(ip, cl);
          if (info) restore_to(ip, sp, ex, info);
          ip = (long *)*(sp + 1);
          sp = (long *)*sp;
      }
      printf("-\n");
      fflush(stdout);
      /* uncaught exception, must never happen, this is a JVM bug.
         in my vm, at least, uncaught exceptions were handled by the
         implementation of Thread. */
  }
 
  find_catch_info was implemented in Java, but looks something like this
  (don't bother with the linear search for now):

  catch_info_t * find_catch_info(long *ip, jclass_t *ex) {
      if (ip < vm->compiledMethodsStart || ip > vm->compiledMethodsEnd)
          return 0;
      foreach(compiled_method_t * m, vm->compiledMethods)
          if (m->owns(ip)) /* this instruction pointer belongs to this method */
              return m->findCatch(ip, ex); /* find a catch block for the exception */
      return 0;
  }
 
  restore_to is implemented this way:

  static void restore_to(long *ip, long *frame, jobject_t *ex, catch_info_t *info) {
      asm("movl %0, %%eax;"
          "movl %1, %%ebx;"
          "movl %2, %%ecx;"
          "movl %3, %%edx;"
          "movl %%ebx, %%ebp;"
          "movl %%ebp, %%esp;"
          "subl %%edx, %%esp;"
          "pushl %%ecx;"
          "pushl %%eax;"
          "ret;"
          :
          : "m"(ip), "m"(frame), "m"(ex), "m"(info->stackDelta)
            /* stackDelta is local storage + temp storage */
          : "%eax", "%ebx", "%ecx", "%edx");
  }
 
  This stuff works only in a JIT-only environment, but only some minor
  tweaks would be required to work in a hybrid environment.
 
  ---
 
  Thanks for your clarification on the term 'IP address'.  Back to your
  question:
 
   It does, but by stack walking I meant not returning null, but having
   the code analyze the call stack for a proper IP address to use.
 
  In this implementation, unprotected exceptions are handled in
  'jvm/src/opcode.c' by references to thread_throw_exception()
  in 'jvm/src/thread.c'.  Stack printing is available through the
  various utilities (esp. jvmutil_print_stack_common())
  in 'jvm/src/jvmutil.c'.  Protected exceptions are handled by the
  exception list found in the 'jvm_pc' field 'excpatridx'.  When an
  exception is found, this list is queried (by the ATHROW opcode,
  which will be available with 0.0.2) and, if found, JVM thread control
  is transferred to that handler.  If it is _not_ found, 
  thread_throw_exception()
  is called and the thread dies at the end of opcode_run().  This 
  functionality
  looks very similar to your code shown above.
 
  ---
  
  ...snip...
  
   Dan Lydick
  
 
 
 
 
  Dan Lydick
 


 Dan,

 I'm confused by all this classification of types of exceptions (from
 the code you make a distinction between caught/uncaught and
 Exception/Error/Throwable). My view is that these are not an issue for
 the runtime besides the verifier.

 We could have the following code on java.lang.Thread:

 private void doRun() {
     try {
         if (runnable != null)
             runnable.run();
         else
             this.run();
     } catch (Throwable t) {
         this.threadGroup.uncaughtException(t);
     }
     terminate();
 }


 The runtime would only assert that the exception has not fallen out
 and not care about how it would be handled.

 ---

 I agree that the verifier should look into this, but what happens if
 you get a divide by zero error?  Or a null pointer exception?  These
 are not available to the verifier, but are runtime conditions that
 do not arise in advance.  Therefore, they need to be checked at
 run time and exceptions thrown.  These two examples are subclasses
 to java.lang.RuntimeException and may be thrown at any time.

There are no differences, from the exception handling perspective
(defining the catch of an exception), between explicitly thrown
exceptions (no matter the Throwable subclass) and runtime-generated
ones (CCE, NPE, etc). The only differences are on the verifier (verify

Re: Small problems building under cygwin

2005-10-21 Thread Rodrigo Kumpera
Dalibor,

If autotools have really evolved since the last time I checked, at the
start of 2005, then they are a really good alternative. They can do
some really good magic with platform incompatibility issues.

My experience developing with them is really depressing: I need to
have a dozen versions of autoconf installed because all of them are
bloody incompatible and every project requires a unique version.
Debugging autoconf scripts is really hard (maybe I'm missing
something). Every time I had to generate Makefile.in or Makefile I had
a long session of fixing small problems and googling for solutions.
Maybe that's just because I'm bad at using and developing with
autotools; I hope so.

Anyway, configure scripts are really sweet... when they work and don't
screw up too badly.


On 10/21/05, Dalibor Topic [EMAIL PROTECTED] wrote:
 On Fri, Oct 21, 2005 at 01:08:34PM -0200, Rodrigo Kumpera wrote:
  Dan,
 
  To generate a binary that doesn't depend on the cygwin shared lib, I
  use the following combination of flags: -mno-cygwin
  -Wl,--add-stdcall-alias. The first one is pretty obvious; the second
  is the result of googling for a working solution, as I could not make
  it work without this flag.

 Is there something that speaks against using autotools for all that?

 I've written autotools setups that work with gcc, MSVC, etc, across
 CygWin, OS X, various Unix variants, GNU/Linux, etc, and found the
 underlying tools to have evolved nicely for that in the last few years,
 offer some pretty neat features (cross-compiling for windows from
 GNU/Linux for example), and most importantly, take care of the whole
 platform specific linker flag zoo across all sorts of operating system
 and toolchain combinations.

 cheers,
 dalibor topic



Re: Some questions about the architecture

2005-10-21 Thread Rodrigo Kumpera
On 10/21/05, Tom Tromey [EMAIL PROTECTED] wrote:
  Dan == Apache Harmony Bootstrap JVM [EMAIL PROTECTED] writes:

 Dan I agree that the verifier should look into this, but what happens if
 Dan you get a divide by zero error?  Or a null pointer exception?  These
 Dan are not available to the verifier, but are runtime conditions that
 Dan do not arise in advance.

 The bytecode verifier doesn't need to know about exceptions at all.
 Checked exceptions are purely a language thing.  They are not
 checked by the runtime.

 Dan Therefore, they need to be checked at
 Dan run time and exceptions thrown.

 True.

 Dan One question that I have is that in 'jvm/src/opcode.c' there are
 Dan a number of references to thread_throw_exception().  The first
 Dan parameter is the type of event, either a java.lang.Error or a
 Dan java.lang.Exception or a java.lang.Throwable.  Can I get by
 Dan without java.lang.Throwable?

 I read through the exception code a bit.  From my reading, I see a few
 flaws.

 First, there is no need to differentiate between throwable, exception,
 and error in the JVM.  'athrow' merely throws an object whose type is
 a subclass of Throwable.  The catch handlers do the type comparison at
 runtime to see if they should run.

 It isn't clear to me how THREAD_STATUS_THREW_UNCAUGHT is ever set.
 But, it doesn't matter, since I think this isn't needed.  Instead it
 is more typical to set up a base frame in the thread which essentially
 looks like this:

try {
  .. run the thread's code
} catch (Throwable) { // this catches any uncaught exception
  .. forward to ThreadGroup
}

 I.e., you don't need a special flag.

 In thread.h it looks as though the exception being thrown is thrown by
 class name:

 rchar *pThrowableEvent;  /** Exception, Error, or Throwable
   * that was thrown by this thread.
   * @link #rnull [EMAIL PROTECTED]
   * if nothing was thrown.
   */

 Typically it is simpler to unify the exception handling code so that
 internally generated exceptions (e.g., an NPE) are thrown using the
 same mechanism as user-code-generated exceptions.  In other words, I
 think you're going to want an object reference here.

 Tom



I think unless the VM extracts the class name of the exception before
stack unwinding, an object reference is required. One can, for example,
throw an exception using reflection:

  throw (Throwable) Class.forName("java.lang.Exception").newInstance();


Re: Some questions about the architecture

2005-10-21 Thread Rodrigo Kumpera
On 10/21/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:


  -Original Message-
  From: Rodrigo Kumpera [EMAIL PROTECTED]
  Sent: Oct 21, 2005 12:08 PM
  To: harmony-dev@incubator.apache.org
  Subject: Re: Some questions about the architecture
 
 ...snip...
   I agree that the verifier should look into this, but what happens if
   you get a divide by zero error?  Or a null pointer exception?  These
   are not available to the verifier, but are runtime conditions that
   do not arise in advance.  Therefore, they need to be checked at
   run time and exceptions thrown.  These two examples are subclasses
   to java.lang.RuntimeException and may be thrown at any time.
 
  There are no differences, from the exception-handling perspective
  (defining the catch of an exception), between explicitly thrown
  exceptions (no matter the Throwable subclass) and runtime-generated
  ones (CCE, NPE, etc). The only differences are in the verifier (verify
  checked exceptions) and in how they are trapped (NPE could use the segv
  signal).  I don't understand your (un)protected exception classification.
 

 Perhaps I need to use more standard terminology.  In place of
 unprotected, I should say uncaught.  (Sorry about that!  I'll try to
 get my terms straight here.)  This means an exception without
 an explicit handler, which could include a java.lang.RuntimeException
 or a java.lang.Error.  The VM spec sections 2.16, 2.16.2, and 2.17.7
 discuss uncaught exceptions.  This is why I felt I had to make the
 distinction and implement my code accordingly.

The thing is, there's no need to make such a distinction.
What happens when an exception is uncaught is, more or less, the following:

-The exception is passed to the thread's uncaughtExceptionHandler
-The exception is passed to the ThreadGroup's uncaughtException method
-The thread is terminated
-If this was the last non-daemon thread, terminate the JVM.

It's easy to implement this stuff in the class library rather than in
the runtime, using Java and far less code. The class library just
makes sure that no thread will ever end with an uncaught exception of
any kind.

That's my point, at least.
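The uncaught-exception path just described can live entirely in library code. Below is a minimal, runnable sketch of that idea; the Handler interface stands in for ThreadGroup's uncaughtException hook, and all names are illustrative, not any project's actual API:

```java
// Hypothetical sketch: the VM only needs a catch-all frame; everything
// about "uncaught" exceptions is plain library code.
public class UncaughtSketch {
    interface Handler { void uncaughtException(Thread t, Throwable e); }

    // The base frame every thread runs in: catches ANY Throwable that
    // escaped user code and forwards it to the group's handler.
    static String dispatch(Runnable body, Handler groupHandler) {
        try {
            body.run();
        } catch (Throwable t) {
            groupHandler.uncaughtException(Thread.currentThread(), t);
            return "handled:" + t.getClass().getSimpleName();
        }
        return "clean";
    }

    public static void main(String[] args) {
        Handler h = (t, e) -> System.out.println("uncaught: " + e.getMessage());
        System.out.println(dispatch(() -> { throw new RuntimeException("boom"); }, h));
        // prints "uncaught: boom" then "handled:RuntimeException"
        System.out.println(dispatch(() -> {}, h));  // prints "clean"
    }
}
```

The runtime's only obligation is that this base frame exists; no flag such as THREAD_STATUS_THREW_UNCAUGHT and no Error/Exception classification is needed.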


 
   Try taking a look at the JVM specification, section 2.16.  I have tried
   to write my code to implement the functionality described there.  I
   would appreciate you studying section 2.16 and comparing it against
   my implementation of the exception mechanism to see if you find
   any flaws in my logic.  If so, please let me know what you find.  One
   question that I have is that in 'jvm/src/opcode.c' there are a number
   of references to thread_throw_exception().  The first parameter is the
   type of event, either a java.lang.Error or a java.lang.Exception or
   a java.lang.Throwable.  Can I get by without java.lang.Throwable?
   Everything I throw so far is either an Error or an Exception.  I just
   included Throwable in case I had something else because I think I
   remember something in the spec about something that is not an
   Error or an Exception, so I thought I would try to cover all angles.
   Thanks for your help.
  
  
   Dan Lydick
  
   ---
 
  Reading 2.16.2 Handling an Exception, I could not find any difference
  between how Exception and Error exceptions are handled. The section
  only defines how an expression/statement is dynamically enclosed by a
  catch block. And even then, there is no distinction between explicit
  throws, runtime-generated exceptions, and asynchronous exceptions.
 

 VM spec section 2.16.4 makes that distinction, but I'm still wondering if I
 can get by without distinguishing java.lang.Throwable here.



Yes, 2.16.4 makes some distinctions, but I cannot see how they matter
to the runtime except for bytecode verification purposes.


Re: Some questions about the architecture

2005-10-20 Thread Rodrigo Kumpera
On 10/20/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 Robin, Rodrigo,

 Perhaps the two of you could get your heads together
 on GC issues?  I think both of you have been thinking
 along related lines on the structure of GC for this JVM.
 What do you think?


 Further comments follow...

 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 19, 2005 4:49 PM
 To: harmony-dev@incubator.apache.org
 Subject: Re: Some questions about the architecture

 On 10/19/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 
 
  -Original Message-
  From: Rodrigo Kumpera [EMAIL PROTECTED]
  Sent: Oct 19, 2005 1:49 PM
  To: harmony-dev@incubator.apache.org, Apache Harmony Bootstrap JVM [EMAIL 
  PROTECTED]
  Subject: Re: Some questions about the architecture
 
  On 10/19/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
  
 ...snip...
 
  Notice that in 'jvm/src/jvmcfg.h' there is a JVMCFG_GC_THREAD
  that is used in jvm_run() as a regular thread like any other.
  It calls gc_run() on a scheduled basis.  Also, any time an object
  finalize() is done, gc_run() is possible.  Yes, I treat GC as a
  stop-the-world process, but here is the key:  Due to the lack
  of asynchronous native POSIX threads, there are no safe points
  required.  The only thread is the SIGALRM target that sets the
  volatile boolean in timeslice_tick() for use by opcode_run() to
  test.  <b>This is the _only_ formally asynchronous data structure in
  the whole machine.</b>  (Bold if you use an HTML browser, otherwise
  clutter meant for emphasis.)  Objects that contain no references can
  be GC'd since they are merely table entries.  Depending on how the
  GC algorithm is done, gc_run() may or may not even need to look
  at a particular object.
 
  Notice also that classes are treated in the same way by the GC API.
  If a class is no longer referenced by any objects, it may be GC'd also.
  First, its intrinsic class object must be GC'd, then the class itself.  This
  may take more than one pass of gc_run() to make it happen.
 
  ---

 How exactly is the java thread stack scanned for references in this
 scheme? Safepoints are required for 2 reasons: first, to allow native
 threads to pause properly, and second, to make it easier for the
 garbage collector to identify what on the stack is a reference and
 what is not.

 The first one is a non-issue in this case, but the second one is, as
 precise java stack scanning is required for any moving collector
 (e.g. semi-space, train or mark-sweep-compact). The solution for the
 second problem is either to have a tagged stack (tagging each slot of
 the stack as a reference or not) or to generate gc_maps for all
 bytecodes of a method (memory-wise, this is not practical, and with
 JIT'ed code it is even worse).

 ---

 That depends on the GC implementation.  Look at 'jvm/src/gc_stub.c'
 for the stub reference implementation.  To see the mechanics of
 how to fit it into the compile environment, look at the GC and heap
 setup in 'config.sh' and at 'jvm/src/heap.h' for how multiple heap
 implementations get configured in.

 The GC interface API that I defined may or may not be adequate
 for everything.  I basically set it up so that any time an object reference
 was added or deleted, I called a GC function.  The same goes for
 class loading and unloading.  For local variables on the JVM stack for each
 thread, the GC functions are slightly different than for fields in an object,
 but the principle is the same.

 ---


 
  For example, as I understand, JikesRVM implements gc safepoints (the
  points in the bytecode where gc maps are generated) at loop backedges
  and method calls.
 
   The priorities that I set were (1) get the logic working
   without resorting to design changes such as multi-threading,
   then (2) optimize the implementation and evaluate
   improvements and architectural changes, then (3) implement
   improvements and architectural changes.  The same goes
   for the object model using the OBJECT() macro and the
   'robject' structure in 'jvm/src/object.h'.  And the CLASS()
   macro, and the STACK() macro, and other components
   that I have tried to implement in a modular fashion (see 'README'
   for a discussion of this issue).  Let's get it working, then look into
   design changes, even having more than one option available at
   configuration time, compile time, or even run time, such as is
   now the case with the HEAP_xxx() macros and the GC_xxx()
   macros that Robin Garner has been asking about.
  
   As to the 'jvm/src/timeslice.c' code, notice that each
   time that SIGALRM is received, the handler sets a
   volatile boolean that is read by the JVM inner loop
 in 'while ( ... || (rfalse == pjvm->timeslice_expired))'
   in 'jvm/src/opcode.c' to check if it is time to give the
   next thread some time.  I don't expect this to be the
   most efficient check, but it _should_ work properly
   since I have unit tested the time slicing code, both
   the while() test

Re: Some questions about the architecture

2005-10-19 Thread Rodrigo Kumpera
-machine data types
 with 'r' and Java data types with 'j'.  I was confusing myself
 all the time!)  Obviously, there is no such thing as setjmp/longjmp
 in the OO paradigm, but they do have a better method,
 namely, the concept of the exception.  That is effectively
 what I have tried to implement here in the native 'C' code
 on the real platform, to use OO terms.  Did I misunderstand you?


Not exactly. GC must walk the stack to find the root set;
serialization needs to find the last user class loader on the stack
(since it's the one used to look up classes for deserialization);
security needs to walk the stack to perform checks on the code base of
each method on it; and JNI needs this as well, since exceptions are
queued for use by the ExceptionOccurred call.

I did look at opcode.c and thread.c but I could not find the stack
unwinding code; could you point me to where it is located?

 Thanks,


 Dan Lydick

 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 19, 2005 11:54 AM
 To: Apache Harmony Bootstrap JVM [EMAIL PROTECTED]
 Subject: Re: Some questions about the architecture

 Dan,

 Green threads are threads implemented by the JVM itself, as is done
 right now by bootJVM. This model is very tricky when it comes to
 implementing I/O primitives (you don't want all threads to block while
 one I/O operation is waiting to complete); the only advantage is that
 synchronization inside the JVM code is a non-issue.

 Usually it's better to use one native thread for each started java thread.

 I've been reading the code for the timeslice stuff; why do you start
 an extra thread if it doesn't do anything except receive the
 alarm signal? Why not use the interpreter thread for that?

 []'s
 Rodrigo

 On 10/19/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 
  Rodrigo,
 
  I'm not familiar with the term green threads, so could you
  explain?  Does it mean how I implemented the JVM time
  slice timer in 'jvm/src/timeslice.c' or something else?
  Let me digress a bit to make sure I have properly explained
  how I implemented JVM threads.
 
  Notice that I have simply implemented a pair of
  loops, almost _directly_ per the JVM spec, for the JVM threads.
  The outer loop is a while()/for() loop combination, found in
  jvm_run() in 'jvm/src/jvm.c', that monitors the thread table
  for no more user threads, via 'while(rtrue == no_user_threads)'
  and loops through each thread in the thread table via
  'for(CURRENT_THREAD = ...)', and calls opcode_run()
  in 'jvm/src/opcode.c' through a state table macro expansion
  that resolves to the function threadstate_process_running()
  in 'jvm/src/threadstate.c'.  This opcode_run() function is
  where the virtual instructions are executed in the
  'while (THREAD_STATE_RUNNING == ...)' loop.
 
  In this case, I don't use native threads for anything except
  the inner loop timeout condition.  How does this implementation
  fit into your question?
 
  Thanks,
 
  Dan Lydick
 
 
  -Original Message-
  From: Rodrigo Kumpera [EMAIL PROTECTED]
  Sent: Oct 19, 2005 9:39 AM
  To: Apache Harmony Bootstrap JVM [EMAIL PROTECTED]
  Subject: Some questions about the architecture
 
  Hi Dan,
 
  I'm digging into the threading system and I found that bootJVM is using
  green threads; this performs pretty badly, as most platforms have decent
  threading libraries now, and supporting green threads will be a
  nightmare when it comes to implementing I/O primitives.
 
  Ignoring the synchronization requirements, using native threads is
  somewhat simpler as the JVM doesn't need to care about context switches.
 
  Then, looking at how exceptions are thrown, I've got to say that using
  setjmp/longjmp is not the way to go; it's better to have proper stack
  walking code, as this is required by the runtime in many places
  (Serialization, Security, GC and JNI are some examples). Stack walking
  is a non-portable bitch; I know how it works on x86 hardware only.
 
  What I would suggest is to use native threads and the hardware stack
  for parameters, locals and stack stuff. It will be a lot easier to
  integrate with JIT'ed code and GC later.
 
  The gc will need some gc_safepoint() calls in method calls and
  backedges of the methods to allow threads to be stopped for
  stop-the-world collections.
 
  Besides that, I'm really looking forward to your work on Harmony.
 
  []'s
  Rodrigo
 
 




Re: Some questions about the architecture

2005-10-19 Thread Rodrigo Kumpera
On 10/19/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:


 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 19, 2005 1:49 PM
 To: harmony-dev@incubator.apache.org, Apache Harmony Bootstrap JVM [EMAIL 
 PROTECTED]
 Subject: Re: Some questions about the architecture

 On 10/19/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 
  Rodrigo,
 
  At some point, _somebody_ has to wait on I/O.  I agree
  that this is not the most efficient implementation, but one
  of the advantages it has is that it does not need _any_
  gc_safepoint() type calls for read or write barriers.
  I am _definitely_ interested in your suggestions, and
  I think others will agree with you, but let's get the code
  up and running as it stands so we can try other approaches
  and compare what good things they bring to the table
  instead of, or even in addition to, the existing approach.

 I think I have not been clear enough. Safepoints are needed by the
 garbage collector to know when it is safe to stop a given thread (in
 bounded time) for a stop-the-world garbage collection. This has
 nothing to do with read/write barriers.
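A safepoint of this kind is essentially a cheap poll that generated code executes at method calls and loop back-edges, so the collector can park every mutator thread in bounded time. A minimal sketch, assuming a simple flag-and-monitor scheme (all names are hypothetical, not from bootJVM or JikesRVM):

```java
// Hypothetical safepoint polling: mutator threads call poll() at method
// calls and loop back-edges; the collector sets gcRequested, waits for
// everyone to park, collects, then calls resumeAll().
class Safepoint {
    // Set by the collector when it wants all mutator threads parked.
    static volatile boolean gcRequested = false;
    private static final Object lock = new Object();

    // Nearly free when no GC is pending; blocks until resumeAll() otherwise.
    static void poll() {
        if (gcRequested) {
            synchronized (lock) {
                while (gcRequested) {
                    try { lock.wait(); } catch (InterruptedException ignored) { }
                }
            }
        }
    }

    // Called by the collector after the stop-the-world phase finishes.
    static void resumeAll() {
        synchronized (lock) {
            gcRequested = false;
            lock.notifyAll();
        }
    }
}
```

The fast path is a single volatile read, which is why compilers are willing to sprinkle it at every back-edge.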

 ---

 Notice that in 'jvm/src/jvmcfg.h' there is a JVMCFG_GC_THREAD
 that is used in jvm_run() as a regular thread like any other.
 It calls gc_run() on a scheduled basis.  Also, any time an object
 finalize() is done, gc_run() is possible.  Yes, I treat GC as a
 stop-the-world process, but here is the key:  Due to the lack
 of asynchronous native POSIX threads, there are no safe points
 required.  The only thread is the SIGALRM target that sets the
 volatile boolean in timeslice_tick() for use by opcode_run() to
 test.  <b>This is the _only_ formally asynchronous data structure in
 the whole machine.</b>  (Bold if you use an HTML browser, otherwise
 clutter meant for emphasis.)  Objects that contain no references can
 be GC'd since they are merely table entries.  Depending on how the
 GC algorithm is done, gc_run() may or may not even need to look
 at a particular object.

 Notice also that classes are treated in the same way by the GC API.
 If a class is no longer referenced by any objects, it may be GC'd also.
 First, its intrinsic class object must be GC'd, then the class itself.  This
 may take more than one pass of gc_run() to make it happen.

 ---

How exactly is the java thread stack scanned for references in this
scheme? Safepoints are required for 2 reasons: first, to allow native
threads to pause properly, and second, to make it easier for the
garbage collector to identify what on the stack is a reference and
what is not.

The first one is a non-issue in this case, but the second one is, as
precise java stack scanning is required for any moving collector
(e.g. semi-space, train or mark-sweep-compact). The solution for the
second problem is either to have a tagged stack (tagging each slot of
the stack as a reference or not) or to generate gc_maps for all
bytecodes of a method (memory-wise, this is not practical, and with
JIT'ed code it is even worse).
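Of the two solutions, the tagged stack is the simpler to picture: every operand-stack slot carries a flag saying whether it holds a reference, and the collector's root scan visits only the flagged slots. A rough, hypothetical sketch (references are modeled as plain long handles for illustration):

```java
// Hypothetical "tagged" operand stack: each slot records whether it
// holds a reference, so a moving GC can find roots precisely without
// per-bytecode GC maps.
final class TaggedStack {
    private final long[] value;
    private final boolean[] isRef;   // tag: true when the slot is a reference
    private int top;

    TaggedStack(int capacity) {
        value = new long[capacity];
        isRef = new boolean[capacity];
    }

    void pushPrimitive(long v)    { value[top] = v;      isRef[top++] = false; }
    void pushRefSlot(long handle) { value[top] = handle; isRef[top++] = true;  }

    // GC root scan: visit only the slots tagged as references.
    java.util.List<Long> scanRoots() {
        java.util.List<Long> roots = new java.util.ArrayList<>();
        for (int i = 0; i < top; i++)
            if (isRef[i]) roots.add(value[i]);
        return roots;
    }
}
```

The cost is one extra tag per slot on every push; the GC-map alternative trades that runtime cost for per-pc metadata, which is why it is the memory-hungry option mentioned above.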





 For example, as I understand, JikesRVM implements gc safepoints (the
 points in the bytecode where gc maps are generated) at loop backedges
 and method calls.

  The priorities that I set were (1) get the logic working
  without resorting to design changes such as multi-threading,
  then (2) optimize the implementation and evaluate
  improvements and architectural changes, then (3) implement
  improvements and architectural changes.  The same goes
  for the object model using the OBJECT() macro and the
  'robject' structure in 'jvm/src/object.h'.  And the CLASS()
  macro, and the STACK() macro, and other components
  that I have tried to implement in a modular fashion (see 'README'
  for a discussion of this issue).  Let's get it working, then look into
  design changes, even having more than one option available at
  configuration time, compile time, or even run time, such as is
  now the case with the HEAP_xxx() macros and the GC_xxx()
  macros that Robin Garner has been asking about.
 
  As to the 'jvm/src/timeslice.c' code, notice that each
  time that SIGALRM is received, the handler sets a
  volatile boolean that is read by the JVM inner loop
  in 'while ( ... || (rfalse == pjvm->timeslice_expired))'
  in 'jvm/src/opcode.c' to check if it is time to give the
  next thread some time.  I don't expect this to be the
  most efficient check, but it _should_ work properly
  since I have unit tested the time slicing code, both
  the while() test and the setting of the boolean in
  timeslice_tick().  One thing I have heard on this
  list is that one of the implementations, I think it was
  IBM's Jikes (?), was that they chose an interpreter
  over a JIT.  Now that is not directly related to time
  slicing, but it does mean that a mechanism like what I
  implemented does not have to have compile-time
  support.
 
  *** How about you JVM experts out there?  Do you have
any wisdom for me on the subject of time slicing

Re: Changes to bootjvm 0.0.0 coming soon

2005-10-18 Thread Rodrigo Kumpera
This won't help to find the spots that require memory barriers, as
these are only an issue on SMP systems. But your idea should not be
discarded, as it may help with other kinds of problems.


On 10/17/05, Archie Cobbs [EMAIL PROTECTED] wrote:
 Apache Harmony Bootstrap JVM wrote:
  For testing threading, I think the simpler the better.

 If Harmony contains a JIT, then it would probably be easy to
 configure the JIT to insert context switch instruction(s)
 between every two instructions it emits. Then we could write
 tests that attempt to trigger memory model violations and
 run those tests in this debug mode. That would go a long
 way towards gaining confidence in correctness.

 -Archie

 __
 Archie Cobbs  *CTO, Awarix*  http://www.awarix.com



Re: Changes to bootjvm 0.0.0 coming soon

2005-10-18 Thread Rodrigo Kumpera
On 10/18/05, Archie Cobbs [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:
  This won't help to find the spots that require memory barriers, as
  these are only an issue on SMP systems. But your idea should not be
  discarded as it may help with other kinds of problems.

 Good point.. interesting question how you could check that too..
 perhaps for multi-CPU systems you'd want to insert random length
 delay loops (instead of context switches) or something. A context
 switch would probably result in indirect flushing of the CPU state
 anyway, effectively the same as a barrier between every instruction.

 -Archie

 __
 Archie Cobbs  *CTO, Awarix*  http://www.awarix.com




Well, I think this kind of test requires that the thread observing
mutations be busy-waiting for changes, and the test must be repeated a
huge amount of times. For example, the new JMM says that changes to
volatile variables must become visible in the same order as they happen,
so:

volatile int a = 0, b = 0;

//Thread 1 repeats the following:
a = 0;
b = 0;
a = 1;
b = 2;

//Thread 2 must never see b == 2 and a == 0, so it repeats the following:
int ra = a;
int rb = b;
if (ra == 0 && rb == 2) //signal error

Given it's a SMP machine, both threads will be running concurrently.


Re: Changes to bootjvm 0.0.0 coming soon

2005-10-18 Thread Rodrigo Kumpera
On 10/18/05, Zsejki Sorin Miklós [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:

 On 10/18/05, Archie Cobbs [EMAIL PROTECTED] wrote:
 
 
 Rodrigo Kumpera wrote:
 
 
 This won´t help to find the spots that require memory barriers, as
 these are only an issue on SMP systems. But your idea should not be
 discarded as it may help with other kinds of problems.
 
 
 Good point.. interesting question how you could check that too..
 perhaps for multi-CPU systems you'd want to insert random length
 delay loops (instead of context switches) or something. A context
 swith would probably result in indirect flushing of the CPU state
 anwyay, effectively the same as a barrier between every instruction.
 
 -Archie
 
 __
 Archie Cobbs  *CTO, Awarix*  http://www.awarix.com
 
 
 
 
 
 
 Well, I think this kind of test requires that the thread observing
 mutation be busy-waiting for changes and the test must be repeated a
 huge ammout of times. For example, the new JMM says that changes to
 volatile variables must be visible in the same order as they happen,
 so:
 
 volatile int a = 0, b = 0;
 
 //Thread 1 repeats the following:
 a = 0;
 b = 0;
 a = 1;
 b = 2;
 
 //Thread 2 must never see b = 2 and a = 0, so it repeats the following:
 
 
 Why not? After the a = 0; in the first thread, this is the case, isn't it?

 int ra = a;
 int rb = b;
 if (ra == 0 && rb == 2) //signal error
 
 Given it's a SMP machine, both threads will be running concurrently.
 
 
 


You're right, my fault. The proper way to test this is the following:

//Thread 1:
a = 0;
b = 0;
int i = 0;
//repeat this until positive overflow:
++i;
a = i;
b = i;

//Thread 2 keeps testing this until positive overflow:
int rb = b;  //read b first, then a: since 'a' is always written before 'b',
int ra = a;  //a fresh 'b' seen together with a stale 'a' breaks the ordering
if (rb > ra) //signal error: change to 'b' observed before change to 'a'
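A runnable version of that test might look like the sketch below (class and method names are mine, not from any of the projects discussed). The writer stores a before b; the reader loads b before a; under the JMM the reader must therefore never observe a b that is newer than the a it reads afterwards:

```java
public class VolatileOrderTest {
    static volatile int a = 0, b = 0;

    // Runs writer and reader concurrently; returns how many times the
    // reader observed a newer 'b' together with an older 'a'. On a
    // JMM-compliant JVM this must be 0, because volatile writes become
    // visible in program order.
    public static long run(int iterations) throws InterruptedException {
        final long[] violations = {0};
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= iterations; i++) { a = i; b = i; }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                int rb = b;                    // read b first...
                int ra = a;                    // ...then a
                if (rb > ra) violations[0]++;  // would violate the JMM
            }
        });
        writer.start(); reader.start();
        writer.join(); reader.join();
        return violations[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("violations: " + run(200_000));
    }
}
```

As noted above, one run proves little on a uniprocessor; the test only has teeth when both threads genuinely run concurrently on an SMP machine, and even then it must be repeated many times.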


Re: Changes to bootjvm 0.0.0 coming soon

2005-10-17 Thread Rodrigo Kumpera
This is really good news, Dan!

I think we could start writing test code for the runtime functionality:
proper null checks, array bounds, class initialization,
synchronization and such.


On 10/17/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:

 Everyone,

 I've been working hard on changes the bootjvm 0.0.0
 code that I contributed recently.  There have been a
 number of valuable critiques, especially involving
 portability issues.  I have taken a hard look at these
 and found a number of things that need adjustment,
 plus some plain ol' bugs.  Some of these have to do
 with structure packing, some with big-endian versus
 little-endian processing.  Thanks to everyone who
 commented on the list and off on various of these
 issues.  Thanks in particular to Geir for his efforts
 in porting the code to the CygWin environment and
 his continuing effort porting it to the Max OS-X platform.

 When I am done checking in this round of changes, I will
 label it as release 0.0.1.  In addition to Geir's changes, it will
 contain some improvements in the distribution procedures,
 an all-around firming up in the documentation, adjustments
 to the (optional) Eclipse build procedures, a structural change
 to the JVM stack frame layout, portability and architecture
 fixes, improved diagnostic message utilities, more of the
 JVM opcodes are implemented, and some minor bug fixes.

 This will enable me to get back into finishing up the one module
 I was still working on when the initial code was contributed,
 namely the final round of JVM opcodes as found in the source
 'jvm/src/opcode.c'.  When I finish up this module, we will have
 a real, live, working JVM.  It will be labled as release 0.0.2.

 At that point, we will need everyone's help working on regression
 testing, especially with real, live Java class files so we can test
 the strength of the JVM instruction implementation.  We will need
 to add native support to things like exit(), open(), close(), println()
 and other OS calls, plus a number of other items.  A list will be
 forthcoming as to how you can get involved in helping to make
 this JVM into a living, breathing component of Harmony.

 Thanks to everyone who has contributed and commented to date
 on my work.  I look forward to working with all of you!

 Best regards,

 Dan Lydick


 P.S. I'm going to be using this project-specific e-mail address for
 correspondence concerning the bootstrap JVM so that I can hopefully
 organize myself better, keeping BootJVM issues all together and not
 accidentally miss any correspondence.  My SVN commits will have my
 main e-mail address on it so the repository is not aliased, but I hope to
 do all the talking on this one.  Thanks again.




Re: Changes to bootjvm 0.0.0 coming soon

2005-10-17 Thread Rodrigo Kumpera
Testing the Java memory model, and hence threading, is really tricky,
and all I can think of right now is some testing stuff Doug Lea
released for java.util.concurrent. What I do know is that, for a test
to be worth something, it must be run on an SMP machine. HyperThreading
helps a little, but in a UP setup getting the JMM right is a lot
easier.

For the rest I strongly suggest using Mauve; it can test a lot of the
classlib, and it would be nice to have only one FOSS testsuite for
classlibs.


For conformance of the JVM I think something simpler should be used:
just a way to call static methods of specified classes and gather the
results, with no JNI involved. Given that, we could write very small
tests that focus on specific implementation details.

I can write a prototype of such tool for bootjvm.

[]'s
Rodrigo



On 10/17/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:

 Robert,

 By all means!  What do you propose?  I need _everything_
 you have just mentioned.  Which areas are you interested in?

 One of the things that I have mentioned in the initial
 action item list (in file 'README') is that someone who
 knows the ins and outs of threading needs to go in and
 write some threading and synchronization test cases
 and exercise this part of the engine.  The source for
 these pieces is in 'jvm/src/thread*.[ch]' .  When I get
 the SVN repository behaving for me (SSH problems
 right now), I will be checking in the changes for the
 0.0.1 release, which will be enough for someone to
 get started evaluating the code in a fairly final form.
 With the 0.0.2 release, the test cases will be able to
 be exercised.

 Dan Lydick

 -Original Message-
 From: Rodrigo Kumpera [EMAIL PROTECTED]
 Sent: Oct 17, 2005 3:26 PM
 To: harmony-dev@incubator.apache.org
 Subject: Re: Changes to bootjvm 0.0.0 coming soon

 These are really good news Dan!

 I think we could start writing test code for the runtime functionality
 like proper null checks, array bounds, class initialization,
 synchronization and such.


 On 10/17/05, Apache Harmony Bootstrap JVM [EMAIL PROTECTED] wrote:
 
  Everyone,
 
  I've been working hard on changes the bootjvm 0.0.0
 ...snip...
 
 




Re: [arch] Interpreter vs. JIT for Harmony VM

2005-09-21 Thread Rodrigo Kumpera
Having a mixed JITed-interpreted environment makes things harder.
Writing a baseline, single-pass JITer is easy, but there is A LOT
more to a port than just the code execution part.

JikesRVM has a base JITer class that does the bytecode decoding and
one subclass per platform that does the code generation. Porting is a
matter of creating another subclass and implementing a few methods (a
LOT fewer than the 198 opcodes of Java bytecode).

The hard part is not the code of the JITer subclass, but writing an
assembler for it. Take x86, for example: generating all those
addressing modes is a pure and simple PITA. And there are other code
artifacts that need to be generated, like call traps and interface
dispatch functions.
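To make the addressing-mode pain concrete, here is a toy sketch (not taken from any of the VMs discussed) of hand-encoding a single IA-32 form, `mov r32, [base+disp8]`. A real assembler must cover every mod/reg/rm combination, SIB bytes, and the special cases (ESP as a base needs a SIB byte; EBP with mod=00 means a bare disp32):

```java
// Toy IA-32 encoder fragment, for illustration only.
public class MovEncoder {
    // Register numbers as they appear in ModRM fields.
    public static final int EAX = 0, ECX = 1, EDX = 2, EBX = 3;

    /**
     * Encodes "mov reg32, [base32 + disp8]": opcode 0x8B, then a ModRM
     * byte with mod=01 (meaning an 8-bit displacement follows).
     * Caveat: base must not be ESP (needs SIB) for this simple form.
     */
    public static byte[] movLoadDisp8(int reg, int base, int disp8) {
        int modrm = (0b01 << 6) | (reg << 3) | base;  // mod | reg | rm
        return new byte[] { (byte) 0x8B, (byte) modrm, (byte) disp8 };
    }
}
```

For example, `mov eax, [ebx+8]` comes out as the three bytes `8B 43 08`; multiply this by every opcode and operand shape and the "PITA" remark above is easy to believe.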

Then we have some platform-specific issues, like exception handling,
stack walking, scanning and unwinding, native calls, NPE traps and
many more that I'm missing here.

The GC code should suffer from some platform issues due to read/write
barriers, card-marking methods, how to park all threads, etc.

And this is only just to port from one hardware platform to another.
OS porting is another big source of problems.

All in all, I think that using a JIT-only environment is easier.


On 9/21/05, Peter Edworthy [EMAIL PROTECTED] wrote:
 Hello,

  Do we need an interpreter or a fast code generating, zero optimizing JIT?

 I'd vote with a zero optimizing JIT. My reasons are not so much based on
 speed, but on code reuse. The structures required to support this would
 also be used by optimizing JITs. In an interpreter and JIT system the two
 tend not to overlap as nicely. Less code & concepts to understand =
 better, IMHO.

  I can think of advantages to each approach. Assuming a C/C++
  implementation, a traditional interpreter is easy to port to a new
  architecture and we can populate Execution Engines (EE) for different
  platforms rather easily.

 (This is at the edge of my knowledge, I've read about it but never tried it)
 If the JIT is a bottom up pattern matching compiler, which seems to fit
 well with the Java Byte Code format, then populating the 'pattern tables'
 especially if not aiming for much if any optimization would be just as
 easy as setting up EEs.

  On the other hand, a fast code-generating JIT can call runtime helpers and
  native methods without additional glue code whereas an interpreter has to
  have special glue code to make it work in a JIT environment. Needless to
  say, if a method is called more than once, the one time cost of JITing
  without optimization may be lower than the cost of running the interpreter
  loop.

 For Magnus an example to explain the above.

 a = sin (b) compiles to something like

 load b
 push
 call sin
 pop
 mov a

 If the sin function is native then for an interpreter the process would be

 read load b; push
 carry out equivalent operation
 read call sin
 Find sin method is native
 call native call routine
   call sin
   return sin return
 read pop; mov a
 carry out equivalent operation

 so the call to the sin function is actually two jumps, which is bad as
 jumps take time and often invalidate the data cache in the processor.

 If compiled then it would be

 load b; push; call sin; pop; move to a

 There is only one call to get to the sin method

  Our experience is that a fast, zero optimizing JIT can yield low-enough
  response time. So, I think at least Harmony has the option of having a
  decent system without an interpreter. Thoughts?
 Again less is more ;-}

 Thanks,
 Peter Edworthy
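A minimal sketch of the interpreter side of the trade-off discussed above: a switch-dispatched loop for a made-up four-opcode stack machine, where every bytecode pays the dispatch cost that even a zero-optimizing JIT would compile away. The opcodes mirror the `a = sin(b)` example; none of this is real Harmony or JikesRVM code:

```java
// Toy switch-threaded interpreter illustrating per-bytecode dispatch overhead.
public class TinyInterp {
    // Made-up opcodes for the "a = sin(b)" example above.
    public static final int LOAD_B = 0, CALL_SIN = 1, STORE_A = 2, HALT = 3;

    public static double run(int[] code, double b) {
        double[] stack = new double[8];
        int sp = 0;
        double a = 0.0;
        for (int pc = 0; ; pc++) {
            switch (code[pc]) {                // every bytecode pays this dispatch
                case LOAD_B:   stack[sp++] = b; break;
                case CALL_SIN:                 // extra indirection vs. a compiled call
                    stack[sp - 1] = Math.sin(stack[sp - 1]); break;
                case STORE_A:  a = stack[--sp]; break;
                case HALT:     return a;
            }
        }
    }
}
```

A baseline JIT would emit the load, the call to `sin`, and the store back-to-back, with no `switch` in between, which is exactly the "only one call" point made in the quoted mail.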




Re: Call for Contributions (was Re: 4 Months and...)

2005-09-20 Thread Rodrigo Kumpera
I've written a pet JVM in Java. It includes a very simple JITer, no GC
(but it is starting to use MMTk magic, so it should be doable to use one),
no self-hosting and no support for native code. The code has never
left my machine, but I'm willing to donate it if that is desirable.


[]'s
Rodrigo


On 9/20/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 
 On Sep 20, 2005, at 8:52 AM, [EMAIL PROTECTED] wrote:
 
  This is not likely to actually attract code.  Opening up SVN to
  committership would.  You've described a reverse of how most
  projects work if you will such that the barrier is to initial
  commit rather than lazy veto/etc.
 
 Most projects give committership to people that have offered code and
 patches, don't they?
 
 geir
 
 
  -Andy
 
  Geir Magnusson Jr. wrote:
 
  I'd like to restate that we are always looking for code
  contributions.  I do know of some in preparation, but it should
  be  clear that if you have anything to offer (hey, Dan!) please
  post a  note to dev list to discuss. :)
  geir
  On Sep 19, 2005, at 5:35 PM, [EMAIL PROTECTED] wrote:
 
  Four months and no code.  Open up the repository and let the
  willing start committing.  The discussion has gotten so verbose
  that there are already people publishing edited digests.  Code
  will  reduce the discussion :-)
 
  -Andy
 
 
 
 
 
 
 
 --
 Geir Magnusson Jr  +1-203-665-6437
 [EMAIL PROTECTED]
 
 



Re: [arch] voluntary vs. preemptive suspension of Java threads

2005-09-10 Thread Rodrigo Kumpera
I was wondering what similarities on-stack replacement of JITed code
has with suspension via code patching.


On 9/9/05, Xiao-Feng Li [EMAIL PROTECTED] wrote:
 Thanks, Rodrigo and Shudo. ORP had a similar approach as code patching
 previously, which we called IP hijacking. We found, as you observed,
 it had some difficulties in maintenance and portability. I classified
 this approach into the preemptive category.
 
 I suspect a given thread suspension algorithm will have different
 performance characteristics depending on processor architecture, and
 garbage collection algorithm as well. It deserves more study. Since
 the two approaches are not much conflicting, I suggest implementing
 voluntary mechanism at first for its cleanness and portability for the
 first few releases of Harmony.
 
 Thanks,
 xiaofeng
 ==
 Intel Managed Runtime Division
 
 On 9/1/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  From: Xiao-Feng Li [EMAIL PROTECTED]
 
   Thread suspension happens in many situations in JVM, such as for GC,
   for java.lang.Thread.suspend(), etc. There are various techniques to
   suspend a thread. Basically we can classify them into two categories:
   preemptive and voluntary.
 
   The preemptive approach requires the suspender, say a GC thread,
   suspend the execution of a target thread asynchronously with IPC
   mechanism or OS APIs. If the suspended thread happened to be in a
   region of code (Java or native) that could be enumerated, the live
   references were collected. This kind of region is called safe-region,
   and the suspended point is a safe-point. If the suspended point is not
   in safe-region, the thread would be resumed and stopped again until it
   ended up in a safe-region randomly or on purpose.
 
  Sun's HotSpot VMs patch compiled native code to stop threads at
  safe points, at which a stack map is provided. It's smart but prone to
  cause subtle problems, an engineer working on the VM said.
 
Kazuyuki Shudo[EMAIL PROTECTED]  http://www.shudo.net/
 



Re: [arch] voluntary vs. preemptive suspension of Java threads

2005-09-01 Thread Rodrigo Kumpera
Some time ago someone on this list pointed to a paper from Sun folks
about this subject.

From what I remember, they tried the following techniques:

-Polling on back-edges and call/return, using either a boolean flag or
forcing a page-fault at a specific address. They noticed that
page-faulting has a huge latency.

-Dynamic code patching: the thread is stopped and the next reachable
safe-point site is patched to call into the VM instead of following
normal execution. This was shown to be very tricky to implement on
SPARC, but improves runtime performance a lot.

Rodrigo

On 8/31/05, Xiao-Feng Li [EMAIL PROTECTED] wrote:
 Thread suspension happens in many situations in JVM, such as for GC,
 for java.lang.Thread.suspend(), etc. There are various techniques to
 suspend a thread. Basically we can classify them into two categories:
 preemptive and voluntary.
 
 The preemptive approach requires the suspender, say a GC thread,
 suspend the execution of a target thread asynchronously with IPC
 mechanism or OS APIs. If the suspended thread happened to be in a
 region of code (Java or native) that could be enumerated, the live
 references were collected. This kind of region is called safe-region,
 and the suspended point is a safe-point. If the suspended point is not
 in safe-region, the thread would be resumed and stopped again until it
 ended up in a safe-region randomly or on purpose.
 
 In the other approach that we are now considering, JIT will insert
 code that polls a boolean. The boolean can be thread-specific and  is
 set true by GC thread or VM if there is a need to prevent the Java
 thread's forward progress. The JIT will put the polling code in such
 places as back-edges, call sites and method returns. Actually we are
  thinking of this mechanism in a more general sense. For example,
 green-threads can be implemented in this way for Java threads to
 downcall into JVM scheduler.
 
 Does anyone have suggestions/data on better approaches?
 
 Thanks,
 xiaofeng
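The polling scheme described above (a thread-specific flag checked at back-edges, call sites and returns) can be sketched in plain Java. The flag, the `poll()` helper and the spin-park are all illustrative stand-ins for the couple of machine instructions a JIT would actually emit; this is not code from any of the VMs discussed:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of voluntary (polling-based) thread suspension.
public class SafepointDemo {
    static final AtomicBoolean suspendRequested = new AtomicBoolean(false);
    static volatile boolean reachedSafepoint = false;

    // What the JIT-inserted check is morally equivalent to.
    static void poll() {
        if (suspendRequested.get()) {
            reachedSafepoint = true;            // stand-in for: report roots to GC
            while (suspendRequested.get()) {    // stay parked until resumed
                Thread.onSpinWait();
            }
        }
    }

    // "Application" loop: the poll sits on the loop back-edge.
    static void workerLoop() {
        long sum = 0;
        while (!reachedSafepoint) {
            sum += 1;                           // real work would go here
            poll();                             // back-edge poll
        }
    }
}
```

The GC/VM thread simply sets `suspendRequested` and waits for every mutator to report in; no asynchronous signals or IPC are needed, which is the portability argument made above.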



Re: [arch] Modular JVM component diagram

2005-08-30 Thread Rodrigo Kumpera
Using APR doesn't mean Harmony won't be POSIX-compliant, only that it
will be one layer above the POSIX stuff.



On 8/30/05, Xiao-Feng Li [EMAIL PROTECTED] wrote:
 Hi, Ron, I think your concern is valid. We fully understand POSIX has
 been and is being used widely. That's why we want to have a discussion
 here. APR does have some features a JVM may need in all platforms,
  such as atomic operations, which are lacking in POSIX. And APR is
  available on a couple of different platforms. Yes, POSIX is available
 on some non-UNIX systems too, e.g., one can use POSIX on Windows
 through Windows Services for UNIX or Cygwin.
 
 I'd like to hear how other people think. Folks on the mailing list, comments?
 
 Thanks,
 xiaofeng
 
 On 8/29/05, Ron Braithwaite [EMAIL PROTECTED] wrote:
  On Aug 28, 2005, at 6:06 PM, Xiao-Feng Li wrote:
   Posix is popular and widely used across many different platforms.
   Intel had
   implemented ORP basically on top of Posix, and it was easy to use
   Posix to
   wrap Windows APIs.
  
   Now there are more portable API libraries to choose from, such as
   APR, and
   IBM Portability Library. For ease of portability across a broad
   range of
   platforms, I suggest that we use APR as the portability api. Do
   folks on the
   mailing list think that this is a good idea?
  
   Thanks,
   xiaofeng
 
  delurk
 
  As an intensely interested bystander, I'll just kick in my two cents
  here. Posix compliance is a really good idea, since it is so
  frequently specified by so many corporate and governmental policies
  and regulations (e.g., AFIPS, if memory serves).
 
  Yes, there are some portable API libraries out there that are
  superior in certain aspects to Posix. But I think the repercussions
  of not being Posix-compliant are going to cost more than any gains
  from specific gains by a less ubiquitous library.
 
  Just my two cents worth.
 
  Peace,
  -Ron
  /delurk
 
  Ron Braithwaite
  2015 NE 37th Ave
  Portland, OR 97212 USA
  503-267-3250
  [EMAIL PROTECTED]
 
 



Re: AWT/Swing

2005-07-15 Thread Rodrigo Kumpera
On 7/15/05, PJ Cabrera [EMAIL PROTECTED] wrote:
 Sven, Tom,
 
 Thanks for the suggestions. I mentioned SwingWT in the interest of not
 reinventing the wheel. Of course other peers could be written.
 
 Sven, how far along on the Qt peers are you? Which version of Qt are you
 using?
 
 I'm thinking, Qt is definitely a damn fine library, and runs natively on
  Win32 & Win64, Linux and X11 on many platforms (not just Intel), and on
 Mac OS X PPC (and perhaps soon, native on Mac OS X Intel).
 
 That definitely would remove the complexity (for the majority of users
 that just want something that just works) of having to install too
 many things just to run a native distribution of Classpath and a FOSS
 VM. They would only have to install Qt for their platform and the
  VM/library distro. The Qt install could even be bundled with the
 VM/library for ease of installation.
 
 Hmmm  :-)
 
 PJ Cabrera
  pjcabrera at snapplatform dot org 
 
 http://snapplatform.org
 SNAP Platform - The only totally open
 source integrated Java Dev Environment
 
 http://snap.sourceforge.net
 SNAPPIX - Live Linux CD Distro
 with SNAP Platform pre-installed
 


Harmony or Classpath won't be able to use these peers, as Qt is GPL.


Re: Minutes of First Harmony Meeting

2005-07-05 Thread Rodrigo Kumpera
This is insane: if implementing the class library takes even half the
time the GNU folks have already spent on it, then by the time Harmony
completes 1.5 (2010+), it will be history.

Leveraging ASL-compatible software is the only feasible way to make progress.

On 7/5/05, Weldon Washburn [EMAIL PROTECTED] wrote:
 
 Exactly.  Harmony won't bundle non-Apache code with its download.  The
 goal is a complete Apache licensed JVM stack that works right out of
 the box.  This includes a GC.   If Harmony modularization is
 successful, third parties can plug in non-Apache components such as a
 GC from Jikes, Rotor, Sun, etc.



Re: [modules] classloader/jit interface

2005-06-24 Thread Rodrigo Kumpera
If the API is meant to be language-agnostic, it's just a bunch of
constants and a function vector. This should provide enough to build a
better layer for each language.

The API between a Java JITer and a C JVM should be coarse-grained,
since cross-language calls are slow. ORP defines a very nice C++-only
interface, but I doubt it would work well for the C++-to-Java
scenario.

This might be too 80's, but a good interface can be well defined in
terms of functions, structs and constants. It will work just fine for
C/C++ and would not be that painful or slow to use from Java (or
whatever else comes to mind).

Or we can use IDL and let it be object oriented.

Rodrigo

On 6/24/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 
 On Jun 24, 2005, at 5:17 PM, Weldon Washburn wrote:
 
  On 6/23/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 
 
  Where is code we can start examining?  What do the the java-on-java
  implementations do?
 
  geir
 
 
 
  Geir,
  I can't talk about code just yet.
 
 
 I know, but I'm going to taunt you anyway!  Taunt!
 
 Seriously, how about referring us to code in ORP at sourceforge?
 
  But I think we can at least get
  started on the basic, low controversy APIs.  For example, the access
  and property flags defined in the class file format are straight
  forward boolean values.  While the access/property flags may be
  extended, it is unlikely that basic notions such as public/private
  access would ever be removed.  Below is a first stab at the API to
  retrieve the values contained in internal classloader data structures.
   Its understood that the JIT won't use all of the below APIs.  For
  example, some of the access properties issues are dealt with during
  resolution and not by the jit.  I used the java 1.5 version of Chapter
  3 of the Java Virtual Machine Spec.
 
 
  Comment/questions are appreciated.
 
  Thanks
  Weldon
 
  Class access and property modifiers (table 4.1)
  bool Class::is_public();// ACC_PUBLIC
  bool Class::is_final();// ACC_FINAL
  bool Class::is_super();// ACC_SUPER
  bool Class::is_interface();// ACC_INTERFACE
  bool Class::is_abstract();// ACC_ABSTRACT
  bool Class::is_synthetic();// ACC_SYNTHETIC
  bool Class::is_annotation();// ACC_ANNOTATION
  bool Class::is_enum();// ACC_ENUM
 
  Field access and property flags (table 4.4)
  bool Field::is_public();// ACC_PUBLIC
  bool Field::is_private();// ACC_PRIVATE
  bool Field::is_protected();// ACC_PROTECTED
  bool Field::is_static();// ACC_STATIC
  bool Field::is_final();// ACC_FINAL
  bool Field::is_volatile();// ACC_VOLATILE
  bool Field::is_transient();// ACC_TRANSIENT
  bool Field::is_synthetic();// ACC_SYNTHETIC
  bool Field::is_enum();// ACC_ENUM
 
  Method access and property flags (table 4.5)
  bool Method::is_public();// ACC_PUBLIC
  bool Method::is_private();// ACC_PRIVATE
  bool Method::is_protected();// ACC_PROTECTED
  bool Method::is_static();// ACC_STATIC
  bool Method::is_final();// ACC_FINAL
  bool Method::is_synchronized(); // ACC_SYNCHRONIZED
  bool Method::is_bridge();// ACC_BRIDGE
  bool Method::is_varargs();// ACC_VARARGS
  bool Method::is_native();// ACC_NATIVE
  bool Method::is_abstract();// ACC_ABSTRACT
  bool Method::is_strict();// ACC_STRICT
   bool Method::is_synthetic();// ACC_SYNTHETIC
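The accessors listed above boil down to bit tests on the `access_flags` u2 from the class file. A minimal Java sketch (flag values are from the JVM spec tables cited; the class shape itself is invented here, not the proposed Harmony API):

```java
// Sketch: the proposed is_*() accessors as bit tests on access_flags.
public final class AccessFlags {
    // Values from JVM spec tables 4.1 / 4.4 / 4.5.
    public static final int ACC_PUBLIC    = 0x0001;
    public static final int ACC_PRIVATE   = 0x0002;
    public static final int ACC_PROTECTED = 0x0004;
    public static final int ACC_STATIC    = 0x0008;
    public static final int ACC_FINAL     = 0x0010;
    public static final int ACC_NATIVE    = 0x0100;
    public static final int ACC_ABSTRACT  = 0x0400;
    public static final int ACC_SYNTHETIC = 0x1000;

    private final int flags;   // the u2 access_flags from the class file
    public AccessFlags(int flags) { this.flags = flags; }

    public boolean isPublic()    { return (flags & ACC_PUBLIC) != 0; }
    public boolean isPrivate()   { return (flags & ACC_PRIVATE) != 0; }
    public boolean isStatic()    { return (flags & ACC_STATIC) != 0; }
    public boolean isFinal()     { return (flags & ACC_FINAL) != 0; }
    public boolean isNative()    { return (flags & ACC_NATIVE) != 0; }
    public boolean isAbstract()  { return (flags & ACC_ABSTRACT) != 0; }
    public boolean isSynthetic() { return (flags & ACC_SYNTHETIC) != 0; }
}
```

Since each accessor is one AND and a compare, this is exactly the kind of low-controversy API that costs nothing to expose to the JIT.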
 
 
 That's pretty non-controversial :)
 
 So this API is really for the classes Method, Field and Class, rather
 than a bigger C API.  Does this make it harder for other languages to
 use or implement?  (I have to admit it's going to take a few to start
 thinking in C again...)
 
 geir
 
 --
 Geir Magnusson Jr  +1-203-665-6437
 [EMAIL PROTECTED]
 
 



Re: [modules] Packaging Class Libraries

2005-06-10 Thread Rodrigo Kumpera
On 6/10/05, Richard S. Hall [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:
 
  AFAIK the term extension classloader is used for application-created
  classloaders. The application classloader handles the classpath and
  installed extensions (the dreaded /lib/ext directory).
 
 
 
 Well, just writing a simple program that prints the class loaders, I get:
 
loader = [EMAIL PROTECTED]
loader = [EMAIL PROTECTED]
loader = null
 
 Where the null represents the bootstrap class loader, so I still count
 three, but I don't think this adds much to the discussion. :-)
 
  In terms of extensibility, the classlib is mostly a dead-end; an
  end-user would be better off using some SPI. But good modularity keeps
  the software entropy low.
 
 
 
 I am not sure what you are referring to with classlib here.

The j2se lib, eg what classpath is implementing.
 
  I think a build-time good-citizenship test can do just fine, so no
  module messes with others' internals unless it's specified. Having
  export/import controls for the public API (the j2se api) seems almost
  unreasonable.
 
 
 
 Build-time test? What about code compiled by another compiler? This
 doesn't make much sense to me.

If we are talking about Java, then every compiler should produce the
same stuff: class files. My suggestion is to have an after-compilation
step.


  But if the JVM is built all in Java, or big chunks at least, using an
  OSGi-like thing is a very good idea for managing the implementation.
 
 
  From my experience, it makes sense, period.
 
 - richard
 

Well, the Classpath folks seem to be doing just fine without an OSGi-like
container. There are some implications in building such a thing;
the first is the fact that a big part would need to live in a
container-less environment, as it would be hard to read a manifest
file without the java.io package available. ;)

Big parts of the classlib are implemented through service providers;
maybe those should be developed separately from the lib itself.


Re: [modules] Packaging Class Libraries

2005-06-09 Thread Rodrigo Kumpera
OSGi is really nice to work with, but I doubt it can be used within the
class library. Most code expects that a JVM will create at most 2
classloaders: the bootstrap classloader (which can be null) and the
application classloader.



On 6/9/05, Richard S. Hall [EMAIL PROTECTED] wrote:
 Geir Magnusson Jr. wrote:
 
  Heh.  I agree.  I just was too busy in the VM/class library fire-
  fight :)
 
 
 Yes, perhaps you have just signed the death notice for this discussion
 too. ;-)
 
  So, given that my foray into architecture discussion was such a
  stunning success, would you like to start the discussion of such a
  thing might be approached?  (Please change the subject, of course...)
 
 
 We can try.
 
 I have to admit up front that I know nothing about implementing a VM,
 but I do have some knowledge about class loaders, having worked on an
 OSGi framework implementation for a few years.
 
 My view of the OSGi framework is that it really serves as a nice,
 dynamic module layer for Java in general and its service-oriented
 approach is a good way to structure applications. This has been my
 approach to using/evangelizing the OSGi framework since I started with
 it back in 2000. I recognize that the dynamic and service-oriented
 parts of the OSGi framework are probably of little relevance to a JVM
 implementation, so I will try not to discuss them.
 
 However, having said that, I do think that the dynamic aspects could be
 very interesting, because they would allow the JVM to dynamically deploy
 and update modules as needed. That is all I will say about dynamics, I
 promise. :-)
 
 My view on packaging class libraries is rather simplistic, but it has
 worked for many OSGi users. The approach is to package libraries into
 JAR files, where the JAR file is an explicit module boundary. Each
 module has metadata describing the packages it exports and imports (and
 version information). A module JAR file may include classes, resources,
 and native code.
 
 Module JAR files are not directly made available via something like the
 CLASSPATH, instead the module system reads the metadata for each module
 (i.e., its export and import information) and resolves a given module's
 imports to the available exports from other modules. A class loader is
 created for each module and the module class loaders form a graph, where
 edges represent an import-to-export resolution.
 
 All classes of a given module are loaded by its own class loader. Any
 classes from an imported package are loaded from the class loader of the
 module that exports that package and to which the import was resolved.
 In this fashion, the class delegation pattern among modules is a graph,
 not a hierarchy...although the concept of a hierarchy of class loaders
 still exists since it is built into the ClassLoader class.
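The package-wired delegation graph described above can be sketched with a per-module `ClassLoader` that routes classes in imported packages to the exporting module's loader. The class name and the wiring map are illustrative (this is neither OSGi nor a Harmony API), and module-local loading is left unimplemented:

```java
import java.util.Map;

// Sketch: one loader per module; imported packages delegate to the exporter.
public class ModuleClassLoader extends ClassLoader {
    // package name -> loader of the module that exports that package
    private final Map<String, ClassLoader> importWiring;

    public ModuleClassLoader(Map<String, ClassLoader> importWiring) {
        super(null);   // no parent chain: delegation happens via the graph
        this.importWiring = importWiring;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        int dot = name.lastIndexOf('.');
        String pkg = dot < 0 ? "" : name.substring(0, dot);
        ClassLoader exporter = importWiring.get(pkg);
        if (exporter != null) {
            return exporter.loadClass(name);       // follow the import edge
        }
        // Otherwise fall through to bootstrap / module-local lookup
        // (a real module system would search this module's own JAR here).
        return super.loadClass(name, resolve);
    }
}
```

With this shape the set of loaders forms exactly the graph (not hierarchy) the mail describes: resolving an import fixes an edge, and every class in that package is loaded through it.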
 
 Depending on how you wanted to implement the module system, you could
 make the module containing the java.* packages special in the sense
 that its class loader is the parent of all other module class loaders
 and that modules do not need to explicitly import from it (which is the
 approach used by OSGi). However, there is still possibly value in
 requiring that modules do import those packages for version resolution
 purposes.
 
 As an example of all of this, consider this fictitious metadata for
 packaging some libraries as modules (these example will not include all
 of the real packages to keep things short):
 
 Module-Name: Core
 Export-Package: java.lang; exclude:=VM*; version=1.5.0,
java.io; java.util; version=1.5.0
 Native-Code: lib/libfoo.so; // I won't go into any syntax here :-)
 
 This module is the core Java class library. It exports java.lang, but
 excludes exporting any classes from it whose name starts with VM. The
 VM* classes will be visible inside the module, but not to modules that
 import java.lang. Of course, excluding these classes wouldn't be
 necessary if the VM* classes were package private. However, this
 features allows you to make them public if you want. Another approach
 that would be enabled is to just move the VM* classes to a different
 package and make them public and simply not export that package, which
 won't allow other modules to access them either.
 
 [A side note about the syntax above: The syntax differentiates between
 framework directives using the := token, versus importer matching
 attributes using the = token. Matching attributes are used by
 importers to select exporters; see next example.]
 
 Continuing with two more simple module examples:
 
 Module-Name: OMG CORBA
 Export-Package: org.omg.CORBA; org.omg.CORBA.portable; \
version=1.5.0
 
 Module-Name: Javax RMI
 Export-Package: javax.rmi; javax.rmi.CORBA \
version=1.5.0
 Import-Package: org.omg.CORBA; org.omg.CORBA.portable; \
version=[1.5.0,1.6.0)
 
 The first module exports the CORBA packages, whereas the second one
 imports the CORBA packages and exports the javax.rmi packages. The
 import for the CORBA packages specifies a 

Re: [modules] Packaging Class Libraries

2005-06-09 Thread Rodrigo Kumpera
On 6/9/05, Richard S. Hall [EMAIL PROTECTED] wrote:
 Rodrigo Kumpera wrote:
 
  OSGi is really nice to work with, but I doubt it can be used within the
  class library. Most code expects that a JVM will create at most 2
  classloaders: the bootstrap classloader (which can be null) and the
  application classloader.
 
 
 
 I actually thought there were three: Bootstrap, extension, and application.
 
 I am not sure why this is an issue, though. Does the spec require this?
 It sounds like an implementation detail to me.
 
 Also, I am not saying that we directly use OSGi, just something that has
 some of the capabilities that OSGi demonstrates.
 
 - richard
 

AFAIK the term extension classloader is used for application-created
classloaders. The application classloader handles the classpath and
installed extensions (the dreaded /lib/ext directory).

In terms of extensibility, the classlib is mostly a dead-end; an
end-user would be better off using some SPI. But good modularity keeps
the software entropy low.

I think a build-time good-citizenship test can do just fine, so no
module messes with others' internals unless it's specified. Having
export/import controls for the public API (the j2se api) seems almost
unreasonable.

But if the JVM is built all in Java, or big chunks at least, using an
OSGi-like thing is a very good idea for managing the implementation.


Re: [arch] VM/Classlibrary interface

2005-05-28 Thread Rodrigo Kumpera
On 5/28/05, Dalibor Topic [EMAIL PROTECTED] wrote:
 
 Rodrigo Kumpera wrote:
  Last time I checked, no one, neither me nor you, is developing code
  against the TCK, but against a real JVM. And as hard as we may try,
  sometimes we end up with software that depends on unspecified behavior.
  So it's better to try to be bug-compatible too.
 
 If you end up with software that depends on unspecified behaviour, then
 it is either
 
 a) your deliberate decision, then you probably have a very, very good
 reason to tie yourself to the particular revision of the particular
 platform, or
 
 b) an accidental mistake, then you fix the small bug in your code, feel
 better about the quality of your code, and move on.


I agree with you about the first one, but the second one is where the fine
line between pragmatic and rhetorical solutions lies. It's easy to say
'just fix it then', but I hope that Harmony gets more users than a few hackers.

The TCK is not a silver bullet for compatibility. A piece of software I
wrote for 1.4.0 suddenly got broken on 1.4.1 because of, I don't know,
bug fixes or subtle changes in the behavior of java.nio.

My point is, testing against just the TCK is just not enough. Testing
against real applications is where the real value of Harmony can be
asserted. Most free JVMs already do that and nobody seems to be complaining.

Rodrigo


Re: Work items

2005-05-27 Thread Rodrigo Kumpera
Lately I've been playing with a toy JITer written in Java (like the JikesRVM
baseline compiler) for x86 on Windows. It works in a single pass and performs
no optimizations at all, but the generated code is correct.
The parts missing are the hard ones: object allocation and exception
handling.
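For reference, the allocation fast path (the piece MMTk marks with `InlinePragma` so the optimizing compiler inlines it, per the thread at the top of this digest) is conceptually tiny: bump a cursor, compare against a limit, fall back to a slow path. This sketch fakes addresses with longs and is not MMTk code; every name here is illustrative:

```java
// Sketch of a bump-pointer allocation fast path with a slow-path fallback.
public class BumpAllocator {
    private long cursor;        // next free address in this (thread-local) buffer
    private final long limit;   // end of the buffer

    public BumpAllocator(long start, long limit) {
        this.cursor = start;
        this.limit = limit;
    }

    /**
     * Returns the address of a freshly reserved chunk, or -1 to signal the
     * slow path (acquire a new buffer, or trigger a collection).
     */
    public long alloc(int bytes) {
        long result = cursor;
        long newCursor = result + bytes;
        if (newCursor > limit) {
            return -1;          // slow path: out of buffer
        }
        cursor = newCursor;     // fast path: an add, a compare, a store
        return result;
    }
}
```

When inlined at every `new` site, this is a handful of instructions, which is why the pragma matters so much for allocation-heavy code.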

Next time I'm going to implement something in line with JikesRVM's vmmagic
(more like stealing the whole concept) and maybe give self-hosting a try.
Steve, do you have some pointers about how JikesRVM or OVM
does that?


Rodrigo


On 5/27/05, Steve Blackburn [EMAIL PROTECTED] wrote:
 
 Hi all,
 
 . prototype backend generator [prototype]
 
  Explore Ertl & Gregg's work to develop a backend generator which
 leverages the portability of gcj to automatically generate backends
 for a simple JIT. The semantics of Java bytecodes are expressed
 using Java code, and then gcj is used to generate code fragments
  which are then captured for use by a simple JIT (Ertl & Gregg used C
 and gcc, but with vmmagic support, Java and gcj would be nicer).
 See also
 
 http://www.csc.uvic.ca/~csc586a/papers/ertlgregg04.pdf
 
 
 
 http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200505.mbox/[EMAIL
  PROTECTED]
 



Re: [arch] VM/Classlibrary interface

2005-05-27 Thread Rodrigo Kumpera
On 5/27/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 
 
 On May 27, 2005, at 4:34 PM, Rodrigo Kumpera wrote:
 
  We should provide wrappers to classes for the sake of
  compatibility. Are
  there any legal problems with doing so?
 
 Why would we do that? We don't want to encourage such misbehavior...?
 
 geir


There are two kinds of compatibility: by specification, and the TCK buys us
that; and pragmatic, which is having software just work.

Look around at all the FOSS that is made to be an alternative to proprietary
software; it all has kludges to be implementation-compatible with the
non-free counterpart.

Last time I checked, no one, neither me nor you, is developing code against
the TCK, but against a real JVM. And as hard as we may try, sometimes we end
up with software that depends on unspecified behavior. So it's better to try
to be bug-compatible too.


Re: Terminology etc

2005-05-24 Thread Rodrigo Kumpera
There isn't much space for pluggability of components beyond compile/link
time. And I see no reason to have it otherwise.

Rodrigo

On 5/24/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote:
 
 
 On May 24, 2005, at 4:49 AM, Dmitry Serebrennikov wrote:
 
 
  * OS interface is perhaps one place where some code can be
  shared. If C version can benefit from an OS abstraction layer for
  portability, then it seems like this layer could be shared between
  the C and the Java implementations.
 
 I'm really hoping we can shelve the idea of having a C implementation
 and a Java implementation. I think we should try to mix them into
 one solution.
 
 
  Steve, if the spokes were in Java but the hub in C, would we then
  lose all of the aggressive inlining benefits that Java-in-Java
  solution can provide?
 
 I'll preface this and state that I have no idea what I'm talking
 about. That said, this reminds me a little of what was done in the
 Geronimo architecture that makes it differ from component
 architectures that use central mechanisms like JMX to let modules/
 components communicate. With Geronimo, the components are given
 references to their dependencies, rather than having them work
 through the central bus to talk between dependencies. This gives a
 big speedup, as the overhead goes away. (Ok, there are 'smart'
  proxies between the components, but that's a detail we can ignore here.)
 
 I've guessed (and steve confirmed) that the boundaries between the
 modules in a VM aren't flat APIs, but something thicker, with a
 good bit of interdependency. So I'd guess that while a central hub
 would be used to let the modules resolve dependencies (The JIT needs
 to talk to the GC and vice versa), it would be something on the order
 of asking the 'hub' (won't use 'core') for something, and getting an
 object back that could be as sophisticated as needed.
 
 I'd sure like to see some of the existing thinking on this from the
 JikesVM peeps
 
 As an aside to my aside, Ben and I were musing over ways to do things
 like this, because we thought that in a multi-language-capable VM,
 you'd probably define some minimum interface APIs so that parts in
 different languages would have framework support for
 intercommunication, but also provide a discovery mechanism to so
 that like-minded components could talk in a more private, direct way.
 
 For example, suppose we are able to define the boundary between GC
 and JIT in a nice way, we'd want any object claiming to be a JIT to
  support that standard API and be able to provide to the hub an
  implementation of that interface to be given to any other module. So
 you could do any kind of JIT implementation or experimentation and
 plug it in.
 
 However, if I wrote both the JIT and GC, I would want my JIT and GC
 to discover that and give each other a private API that took
 advantage of each other's internals. Something like what we used to
 do with COM and QueryInterface, starting with something basic and
 standard, and grubbing around in it to find something better.
 Loosely speaking :
 
 
interface Discovery {
    Object queryInterface(UUID interfaceID);
}

interface JIT extends Discovery {
    // JIT API here
}
 
So in my GC implementation, an init() call that's invoked when it's
being created :
 
void init(Discovery hubDiscovery) {

    JIT commonJIT = (JIT) hubDiscovery.queryInterface(UUID_FOR_STANDARD_JIT_API);

    // now let's see if this JIT knows the secret handshake

    CustomJIT goodJIT = (CustomJIT) commonJIT.queryInterface(UUID_FOR_JIT_I_ALSO_WROTE);

    if (goodJIT != null) {
        // talk over the richer, private API instead of the standard one
    }
}
 
 
 and in my JIT implementation :
 
Object queryInterface(UUID id) {

    if (UUID_FOR_STANDARD_JIT_API.equals(id)) {
        return this;
    }

    if (UUID_FOR_JIT_I_ALSO_WROTE.equals(id)) {
        return myInternalCustomJIT;
    }

    return null;
}
 
 
 Anyway, this reminded me of what Ben and I were talking about a few
 days ago. Note that a) I'm just making this up, b) all the above is
trying to capture some loose ideas on the subject, and c)
<disclaimer>all the above is done pre-coffee in a strange hotel room
after waking up</disclaimer>
 
 geir
 
 
 
 
 
 
 
  -dmitry
 
  Steve Blackburn wrote:
 
 
  I thought it might be helpful to clarify some terminology and a few
  technical issues. Corrections/improvements/clarifications
  welcome ;-)
 
  VM core
 
  The precise composition of the VM core is open to discussion and
  debate. However, I think a safe, broad definition of it is that
  part of the VM which brings together the major components such as
  JITs, classloaders, scheduler, and GC. It's the hub in the wheel
  and is responsible for the main VM bootstrap (bootstrapping the
  classloader, starting the scheduler, memory manager, compiler etc).
 
  VM bootstrap
 
  The bootstrap of the VM has a number of elements to it, including
  gathering command line arguments, and starting the various
  components (above).
 
 
  In the context of a Java-in-Java VM, the above is all written in
  Java.
 
 
  VM 

Re: Terminology etc

2005-05-24 Thread Rodrigo Kumpera
 I think you didn't quite get how it works with MMTk. There are a few 
magic classes that receive special treatment from the compiler, which 
translates method calls on them into pointer operations. This way MMTk can 
operate under the same conditions as C code. 
 I'm not sure how you related interface calls to MMTk, as this is a 
non-issue. As for memory barriers, that's a problem all languages will have 
to face, be it Java, C or assembly.
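Loosely speaking, the magic types look like ordinary Java to client code. The sketch below is illustrative only: the real org.vmmagic.unboxed.Address has no backing field at all; the compiler treats the object as a raw machine word and turns these method calls into single instructions. Here the word is faked with a plain long just to show the shape of the API.

```java
// Illustrative-only sketch of an MMTk-style "magic" unboxed type.
// In a real VM, org.vmmagic.unboxed.Address has no field: the JIT
// compiles plus()/load()/store() calls down to single pointer/arithmetic
// instructions. The long here is a stand-in for that machine word.
public final class Address {
    private final long word;

    private Address(long w) { word = w; }

    public static Address fromLong(long w) { return new Address(w); }

    // In the real thing this becomes one add instruction, not a call.
    public Address plus(int offset) { return new Address(word + offset); }

    public long toLong() { return word; }

    public static void main(String[] args) {
        Address base = Address.fromLong(0x1000);
        System.out.println(Long.toHexString(base.plus(8).toLong())); // prints 1008
    }
}
```

Extent's unsigned arithmetic would be handled the same way: an ordinary-looking method whose body the compiler replaces with an unsigned compare or add.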
   
But it might still be that C-implemented modules (particularly the
 GC) are faster than their Java-implemented counterparts, because
 they ignore array-index-checking, don't need write barriers and
 interface calls, etc... One could follow the strategy, that
 modules must support Java interfaces but might be implemented in
 C. This strategy would bring us back to a design that doesn't
 mandate which language has to be used to implement a certain
 module.
 
 Uli
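Uli's suggestion could be sketched like this (all names here are hypothetical): the module boundary is a plain Java interface, and an implementation behind it may be pure Java or, via native methods, C. A trivial bump-pointer allocator stands in for the Java side.

```java
// Hypothetical module boundary: any allocator plugs in behind this
// interface. A C-backed variant would simply declare
//     public native int alloc(int bytes);
// and keep the same interface.
public class AllocatorDemo {
    interface Allocator {
        int alloc(int bytes);  // returns an offset into the managed space, -1 if full
    }

    // Pure-Java implementation: a trivial bump-pointer allocation fast path.
    static class BumpAllocator implements Allocator {
        private int cursor = 0;
        private final int limit;
        BumpAllocator(int limit) { this.limit = limit; }
        public int alloc(int bytes) {
            if (cursor + bytes > limit) return -1;  // slow path / GC would go here
            int result = cursor;
            cursor += bytes;
            return result;
        }
    }

    public static void main(String[] args) {
        Allocator heap = new BumpAllocator(16);
        System.out.println(heap.alloc(8));  // 0
        System.out.println(heap.alloc(8));  // 8
        System.out.println(heap.alloc(1));  // -1: out of space
    }
}
```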



Re: [arch] The Third Way : C/C++ and Java and lets go forward

2005-05-23 Thread Rodrigo Kumpera
I still think a Java-in-Java solution is the way to go. All components can 
be tested from a host JVM until the new VM can host itself. The only reason for 
using C/C++ is as a seed VM.
 If the objective is to write a high performance JVM, having a VM with an 
interpreter doesn't help much.
 Starting with Java, it's more probable that we can have a nice 
non-optimizing JIT and runtime by the time the C/C++ effort has a 
working baseline compiler.
 Jikes RVM cannot be licensed under the ASL, but what about MMTk? 
 Rodrigo

 On 5/23/05, Geir Magnusson Jr. [EMAIL PROTECTED] wrote: 
 
 I'd like to formally propose what I think many have been thinking and
 maybe have already suggested - that we'll need to combine both C/C++
 and Java in this effort. After thinking about it, I don't really
 see the upside to having two parallel tracks, when in fact we can and
 should be working together.
 
 So, to do that :
 
 I. [VM Kernel] Start with a core 'kernel' in C/C++ that provides
 intrinsics needed by the other modules.
 
 II. [Modular VM Architecture] Clearly define the other modules,
 what intrinsics they need from the core kernel, and what
 relationships they have to other modules
 
 III. [VM-Class library Interface] Begin seriously examining the GNU
 Classpath VM interface, iteratively producing a Common Classlibrary
 Interface that we can ask GNU Classpath to implement. Right now,
 they have a VM interface that is a great starting point, but I get
 the impression that there is a whole suite of intrinsic VM
 functionality that should be standardized and exposed to the class
 library by the VM.
 
 To do this I'd like to
 
 a) Respectfully petition JamVM for a one-time license grant of the
 codebase under the Apache License that we can start with. We would
 use this as our base kernel, refactoring out the modules that we
 decide on in II above, and working to implement those modules in
 whatever makes sense - Java or C. Robert brought up this issue on
 the list, so I have responded w/ a request on this list.
 
 b) Consider starting a second mail list harmony-arch, for
 modularity discussions, to separate out the traffic from the dev
 traffic.
 
 Let's get moving. Comments?
 
 geir
 
 
 
 --
 Geir Magnusson Jr +1-203-665-6437
 [EMAIL PROTECTED]
 
 



Re: Threading

2005-05-22 Thread Rodrigo Kumpera
Green threads have a lot of problems that are hard to solve. You have to 
deal with blocking functions, interrupts, syscall restarts, blocking native 
code, etc...

Does JikesRVM handle that gracefully? My impression is that everyone is dropping 
the M:N model because of implementation issues. BEA dropped it in JRockit. 
IBM was developing such a system for POSIX threads on Linux, but a simple 1:1 
model that solved some scalability problems in the kernel was chosen instead.





On 5/22/05, Steve Blackburn [EMAIL PROTECTED] wrote:
 
 The Jikes RVM experience is kind of interesting...
 
 From the outset, one of the key goals of the project was to achieve
 much greater levels of scalability than the commercial VMs could deliver
 (BTW, the project was then known as Jalapeno). The design decision
 was to use a multiplexed threading model, where the VM scheduled its own
 green threads on top of posix threads, and multiple posix threads were
 supported. One view of this was that it was pointless to have more than
 one posix thread per physical CPU (since multiple posix threads would
 only have to time slice anyway). Under that world view, the JVM might
 be run on a 64-way SMP with 64 kernel threads onto which the user
 threads were mapped. This resulted in a highly scalable system: one of
 the first big achievements of the project (many years ago now) was
 enormously better scalability than the commercial VMs on very large SMP
 boxes.
 
 I was discussing this recently and the view was put that really this
 level of scalability was probably not worth the various sacrifices
 associated with the approach (our load balancing leaves something to be
 desired, for example). So as far as I know, most VMs these days just
 rely on posix style threads. Of course in that case your scalability
 will largely depend on your underlying kernel threads implementation.
 
 As a side note, I am working on a project with MITRE right now where
 we're implementing coroutine support in Jikes RVM so we can support
 massive numbers of coroutines (they're using this to run large scale
 scale simulations). We've got the system pretty much working and can
 support  10 of these very light weight threads. This has been
 demoed at MITRE and far outscales the commercial VMs. We achieve it
 with a simple variation of cactus stacks. We expect that once
 completed, the MITRE work will be contributed back to Jikes RVM.
 
 Incidentally, this is a good example of where James Gosling misses the
 point a little: MITRE got involved in Jikes RVM not because it is
 better than the Sun VM, but because it was OSS which meant they could
 fix a limitation (and redistribute the fix) that they observed in the
 commercial and non-commercial VMs alike.
 
 --Steve



Re: [arch] VM Candidate : JikesRVM http://jikesrvm.sourceforge.net/ (and some bla bla about compilers and stuff)

2005-05-20 Thread Rodrigo Kumpera
Jikes RVM has 3 compilers: baseline, quick and opt.

baseline is meant to have JIT times comparable to an interpreter's preparation 
time [1].
quick is a replacement for baseline on PPC, as baseline's generated code is too 
slow; it aims to only get the low-hanging fruit. It's a linear, one-pass 
compiler, so it doesn't perform any of the fancy (and time-demanding) 
optimizations you suggest.
opt is the optimizing compiler that goes as far as possible on 
performance.

HotSpot has an interpreter plus client and server compilers. The big difference is 
that in server mode code is recompiled once one of its dependencies is 
resolved (think of incremental whole-program global optimization) and 
speculative optimizations are done (like inlining/devirtualizing 
should-be-virtual methods) [2].

A good JVM needs not only great JITs and GCs, but must work in an adaptive 
way, doing only the most profitable things for the user. E.g., if compiling 
with opt will take more time than the total method execution time in interpreted 
mode, then don't.

Rodrigo

[1] SableVM shows that with some preparation interpreting can be a lot 
faster; see Etienne's PhD thesis for a great explanation of this.
[2] OVM, I think, does some of this via class hierarchy analysis, for example.
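The adaptive rule above can be written as a toy decision function (numbers and names are illustrative; real systems such as Jikes RVM's adaptive optimization system estimate the future execution time from profiling samples):

```java
// Toy version of the adaptive-recompilation cost/benefit test: recompile a
// method at a higher optimization level only when the expected time saved
// exceeds the cost of compiling it. All inputs are illustrative estimates.
public class AdaptiveDecision {
    static boolean shouldRecompile(double futureExecSeconds,
                                   double speedup,
                                   double compileCostSeconds) {
        // time saved = future time at current level minus future time if optimized
        double timeSaved = futureExecSeconds - futureExecSeconds / speedup;
        return timeSaved > compileCostSeconds;
    }

    public static void main(String[] args) {
        // hot method: 10s of future execution, 4x speedup, 0.5s compile cost
        System.out.println(shouldRecompile(10.0, 4.0, 0.5));  // true
        // cold method: 0.1s of future execution is not worth 0.5s of compiling
        System.out.println(shouldRecompile(0.1, 4.0, 0.5));   // false
    }
}
```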

On 5/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 
 
  The JCVM docs talks about possible support for generating ELF binaries
  using just Java code, although this doesn't appear complete at this
  point. See
  http://jcvm.sourceforge.net/share/jc/doc/jc.html#Java%20to%20ELF
 
 
  GCC also has its own internal format (called RTL I think). I don't know
  much (anything) about that, but it is easy to speculate about producing
  RTL from bytecode and feeding it to the compiler. I'd speculate that GCJ
  works something like this.
 
 
 RTL is Register Transfer Language, I believe. RTL is kinda like an
 abstract assembler language; however, it's not exactly multiplatform,
 meaning the RTL generated for each platform may differ.
 
 Writing a GCC front end is actually rather non-trivial, I'm afraid.
 While it does have a traditional AST more or less, it's implemented
 with some rather complicated and undocumented macros (and, from what I
 remember, a few layers of them).
 
 It's unlikely that GCC is going to change license or offer an exemption,
 so more or less taking code from GCC itself is not legally feasible for
 an ASL project. On the other hand, I'm not convinced that GCC is an
 acceptable compiler for a JIT anyhow. Allow me to demonstrate. Find a
 large C project (say 1000 sources). Build the project on a fairly fast
 machine.
 
 Okay, now boot up your favorite Java application server (whichever that
 may be). Imagine if it took that long to start up. Sure, it would be
 fast once it did. However, due to how default Java classloading works
 (lazy loading when a class is asked for), there would be a perceived
 performance impact. We COULD mark some classes hot and the such, but it
 would still be jerky and appear slow. It's possible that you could rip
 pieces of GCC out and construct a more traditional JIT, but it's really
 not what we want.
 
 So there are probably people here who know more about it than me. I'm
 just a simple southern boy who reads compiler books in the bathroom. As
 I understand it, though, a runtime compiler requires a slightly blacker
 belt of compiler writing. You need to not only try to generate
 efficient native code, but also optimize compile time so that no one notices
 that you did it! One approach is to generate code that has been
 through the first few levels of optimization and then do the native
 optimizations on another pass (generally starting up in interpreted
 mode). This is actually kinda smart, because many of the highest-yield
 optimizations are these early ones (variable elimination, local
 and global common subexpression elimination, algebraic simplification,
 copy propagation, constant propagation, loop-invariant optimization,
 etc.). While the native and headier stuff should be done, since the Java
 compiler can optimize and performs some optimizations, it's unlikely to
 be the highest yield most of the time. This does mean that more work
 will have to be done and redone to get to the most optimal code, but the
 pause should be low.
 
 Some of the things we COULD do to get things up and running are very
 cool for that purpose, but ultimately we'll probably have to do the
 heavy lifting of writing a JIT that balances the force while trying to
 slay the complexity dragon.
 
 -Andy
 
 
 Keep in mind I know squat about GCC and friends.
 
 
  Me neither.
 
  Nick
 
 

Re: [arch] VM Candidate : JikesRVM http://jikesrvm.sourceforge.net/

2005-05-19 Thread Rodrigo Kumpera
I think Jikes RVM can be executed on top of another JVM; this should make 
debugging easier.



On 5/19/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
  The problem of Java written JVM/JIT isn't one of performance. You can
  theoretically achieve the same performance (although I'm not 100%
  convinced, I'm partially there)
 
 It is reasonable to model the performance of a Java runtime in several
 aspects, especially throughput and interactivity (start-up time).
 JIT (and JVM) written in Java can achieve the same throughput as one
 written in C/C++/etc. But good start-up time / interactivity are more
 difficult to achieve and have to be elaborated.
 
 Part of a runtime written in Java has to be interpreted, or compiled
 before executed. Throughput is sacrificed when interpreted and
 interactivity is sacrificed when compiled.
 
 Another possible disadvantage, which might not have been discussed, is the
 reflective nature of a Java-written JVM. This has been held up as one of
 the strong points in this list so far, as removing the boundary between
 languages. But we have to consider maintenance and debugging of the 
 runtime.
 A Java-written JIT is eventually compiled by itself. In that case,
 debugging will become pretty hard. Of course, such a runtime will have
 another interpreter or a baseline compiler (written in C/C++?) so the
 Java-written JIT can be debugged exhaustively. But such a reflective
 nature certainly makes debugging harder.
 
 I myself do not have any experience with development of a Java-written JIT,
 so I am not very sure how it makes maintenance and debugging
 harder. There have been a few Java-written JITs, Jikes RVM and OpenJIT,
 and we may be able to get input from their developers if we wish.
 
 Kazuyuki Shudo [EMAIL PROTECTED] http://www.shudo.net/



Re: [arch] VM Candidate : JikesRVM http://jikesrvm.sourceforge.net/

2005-05-19 Thread Rodrigo Kumpera
Sure it does, we would be writing just a front-end. Which in any case is not an 
option for Harmony, since such code must be GPL. 


Does anybody know if GCC allows such a thing?
 
 Keep in mind I know squat about GCC and friends.
 
 --
 Stefano, who should really do his homework some day.
 



Re: Intro to Classpath

2005-05-17 Thread Rodrigo Kumpera
I'm wondering: some parts of the JDK seem to be product features and not a 
standard. For example, should the sound system use aRts, esd or ALSA (I 
believe Sun supports the last 2)? Should the printing system support CUPS, 
LPRng or both? The same goes for the crypto algorithms in the pack.

Rodrigo

On 5/16/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 
 Excellent! This is exactly what I was looking for!
 
 I will send an email to the mailing list tonight, read those links, and 
 maybe even hop onto the IRC channel in the next day or so.
 
 Thanks much!
 
 Stu Statman
 



Re: The topic of the Java Compiler

2005-05-17 Thread Rodrigo Kumpera
Neither supports apt, AFAIK, which seems to be an easier task to do with the 
Eclipse compiler.

Rodrigo

On 5/16/05, Nick Lothian [EMAIL PROTECTED] wrote:
 
 
   Berlin == Berlin Brown [EMAIL PROTECTED] writes:
 
  Berlin The compiler seems to be a non-issue at this time
  with a focus
  Berlin on the JavaVM. What are your thoughts on the different
  Berlin compilers?
 
  For Harmony I would say the leading contender is the java
  compiler that comes with Eclipse. It is written in java and
  already supports all the 1.5 features.
 
  As far as gcj goes, the new 1.5-ish front end I'm writing has
  a standalone part which does all the language processing and
  bytecode generation. It does not depend on the rest of gcc
  at all. It is written in C++.
 
  kjc is also out there and being developed, but I don't know
  much about it.
 
 
 The Eclipse compiler is already being embedded in Tomcat and works well.
 It also has the advantage of a liberal Apache compatible licence.
 
 There is also the Jikes compiler (which isn't the same as the Eclipse
 compiler, nor the Jikes RVM).
 
 Nick
 
 IMPORTANT: This e-mail, including any attachments, may contain private or 
 confidential information. If you think you may not be the intended 
 recipient, or if you have received this e-mail in error, please contact the 
 sender immediately and delete all copies of this e-mail. If you are not the 
 intended recipient, you must not reproduce any part of this e-mail or 
 disclose its contents to any other party.
 This email represents the views of the individual sender, which do not 
 necessarily reflect those of education.au limited except where the sender 
 expressly states otherwise.
 It is your responsibility to scan this email and any files transmitted 
 with it for viruses or any other defects.
 education.au limited will not be liable for any loss, damage or 
 consequence caused directly or indirectly by this email.



Re: Stop this framework/modularity madness!

2005-05-17 Thread Rodrigo Kumpera
Maybe it's this pluggable layer that is not well defined. I think that having 
this as a link-time thing is more than enough. It doesn't mean that only one 
GC algorithm or JIT will be available at runtime, but all the options 
should be defined when building the JVM.
 Refactoring a system to let a framework emerge is a lot easier, more productive 
and produces less clutter. Let's not make generic what doesn't need to be.
 But a better argument is close inspection of this list's archive. Seems like 
nobody truly knows how to properly layer a JVM for good modularity. The 
JikesRVM folks are refactoring to make this possible in their JVM, but right 
now they can't really say how it will look when finished.
  Rodrigo
 On 5/17/05, Royce Ausburn [EMAIL PROTECTED] wrote: 
 
 
 Ack - Sorry about the incomplete mail.
 
 On 17/05/2005, at 3:58 AM, Rodrigo Kumpera wrote:
 
 
  This must not be the focus until required, so no JIT plugable layer
  until someone tries to write another JIT for Harmony (emphasis on
  another).
 
 
 In my experience, delaying the 'modular design' of a system causes
 more work. When you rip out the old part, not only do you have to
 design and implement this pluggable layer, but this layer probably needs
 a lot more design work because you have to consider all the other
 systems that are affected. This also isn't easy because the project
 hasn't been designed in a modular fashion. Then you have to
 reimplement the old part (the JIT bit in your example) - may as well
 design everything to work together in harmony to begin with.
 
 --Royce
 



Re: Developing Harmony

2005-05-17 Thread Rodrigo Kumpera
C++, just C++, is a recipe for trouble. Most projects that use it define a 
subset to make development a less painful task. Usually operator 
overloading, templates and virtual inheritance are discarded.

Rodrigo

On 5/17/05, Ben Laurie [EMAIL PROTECTED] wrote:
 
 Jónas Tryggvi Jóhannsson wrote:
 
  Question to the floor: if it had to be one of C and C++, which would
  you prefer?
 
 
  I can't think of a single reason why C should be preferred over C++,
 as
  C can simply be viewed as a subset of C++.
 
 That sounds like a reason to me.
 
  As Java users, all of us
  appreciate object orientation and understand how it can be used to
  simplify software and make it more readable. Writing C code in an object
  oriented manner simply does not give the same level of abstraction C++
  can provide.
 
 I agree. Many developers don't.
 
  I'm however very fond of the idea of writing the JVM in Java. I'm
  beginning to look into the JikesRVM and I really like the idea,
  especially as it is the language that everyone on this list is familiar
  with.
 
 As I said before, don't assume we're all Java fans here. I'm far more
 familiar with C++ than I am with Java.
 
  It would also maximize the quality of the tools that we will
  provide, as we would be using them to develop our JVM.
 
 Like I said, a framework that allows you to do this is definitely what I
 want to see. But it should also allow me to write the JVM in C.
 
 Is the Jikes licence one we can use?
 
 Cheers,
 
 Ben.
 
 --
 http://www.apache-ssl.org/ben.html http://www.thebunker.net/
 
 There is no limit to what a man can do or how far he can go if he
 doesn't mind who gets the credit. - Robert Woodruff



Re: Developing Harmony

2005-05-17 Thread Rodrigo Kumpera
If C/C++ is going to be used, the reference compiler is gcc. I don't think 
the Pascal frontend of gcc is up to the others. 
  Rodrigo
 On 5/17/05, Bryce Leo [EMAIL PROTECTED] wrote: 
 
 Now don't go too crazy for my suggesting this, but why not pascal? If
 we're considering C as it is this really isn't a terrible suggestion.
 I know it's fallen out of favor with most of you guys but it compiles
 quickly and supports a good number of operating systems and types of
 hardware, like arm and AMD64 (referencing Free Pascal that is). I'm
 betting that you guys won't like it but I think the option should be
 listed.
 
 Opinions?
 
 ~Bryce Leo
 
 --Veritas Vos Liberabis--



Cross platform issues

2005-05-17 Thread Rodrigo Kumpera
A quick look at APR reveals that it doesn't provide all the OS abstraction that a 
JVM needs. There are no functions to mark pages as executable, no access to 
scalable IO facilities (IOCP, epoll, kqueue, etc...) and no workarounds for 
small differences in syscalls or libc implementations.
 I think Harmony should use autotools and some m4 magic to help with that.
 Rodrigo


Re: Java Security for Harmony

2005-05-11 Thread Rodrigo Kumpera
The verifier is part of the class loading process; it's the first
step of linking a class into the runtime. More about that can be found
in [1], but in short the verifier makes sure that a given piece of bytecode
won't mess up the stack or incorrectly use types.

Rodrigo


[1] The Java Virtual Machine Specification, Second Edition


On 5/11/05, FaeLLe [EMAIL PROTECTED] wrote:
 So bytecode verification can occur at runtime and still be a conforming
 implementation?
 
 I think it would make sense, as long as it occurs, because even J2ME pre-
 verifies (as I guess you guys know) and that is accepted at run time as long
 as it remains in that state.
 
 
 On 5/11/05, Ben Laurie [EMAIL PROTECTED] wrote:
 
  Anthony Green wrote:
   On Wed, 2005-05-11 at 10:42 +0100, Ben Laurie wrote:
  
  I'm more interested in the modularity with this question - by the sound
  of it, the verifier is an optional module.
  
  
   No, a conforming implementation requires a bytecode verifier.
 
  I thought we'd established that it didn't? That is, verification must
  occur, but it can occur at runtime.
 
  --
  http://www.apache-ssl.org/ben.html http://www.thebunker.net/
 
  There is no limit to what a man can do or how far he can go if he
  doesn't mind who gets the credit. - Robert Woodruff
 
 
 --
 www.FaeLLe.com http://www.FaeLLe.com