Re: [jchevm] native method API for _jc_new_intern_string -- do you plan to add to _jc_ilib_entry table?

2006-02-14 Thread Oliver Deakin

Hi Weldon,

Weldon Washburn wrote:

Archie,
I am working on kernel_path String.java.  
The only VM specific method in String currently is intern. However, 
inclusion of this method in the kernel forces the VM vendor to implement 
the rest of the String class, which has no further VM specific code.


Can we propose to move String out of kernel and into LUNI, and reroute 
String's intern call through an intern method on the VM class in kernel? 
This was mentioned in a JIRA comment made by Tim a couple of weeks ago:

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200601.mbox/[EMAIL PROTECTED]

So the String implementation will be held in LUNI, and its intern method 
will just make a direct call to VM.intern(String), which is in the 
kernel. This means that a VM vendor only needs to produce the intern 
method (in VM.java), and not the rest of the String class, cutting down 
the kernel size and the resulting workload for the VM vendor.
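
To make that concrete, here is a minimal, self-contained sketch of the proposed 
routing (purely illustrative, not the Harmony source; the KernelVM class name is a 
hypothetical stand-in for the kernel-provided VM class):

    // Minimal sketch (illustrative only, not the Harmony source) of the proposed
    // split: the class library keeps the bulk of String, and only intern is routed
    // through a small kernel-provided VM class. KernelVM is a hypothetical stand-in.
    final class KernelVM {
        static String intern(String s) {
            // A real VM would consult its native intern table here.
            return s.intern();
        }
    }

    public class InternRouting {
        // What String.intern() would reduce to under the proposal.
        static String intern(String s) {
            return KernelVM.intern(s);
        }

        public static void main(String[] args) {
            String interned = intern(new String("harmony"));
            System.out.println(interned == "harmony"); // true: both refer to the interned instance
        }
    }

Under the proposal, only the small VM class is kernel-specific; everything else 
about String stays in LUNI.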



It wants to call a native
method to do the intern work. If you plan to add a native method that
does String intern, I won't spend the time doing interning in Java
code. I think this code is related to the _jc_ilib_entry table. Do you
have thoughts on the best approach?

Thanks

--
Weldon Washburn
Intel Middleware Products Division

  


--
Oliver Deakin
IBM United Kingdom Limited



Re: build fails - what next?

2006-02-14 Thread Oliver Deakin

Hi Karan,

Line 72 of jaasnix.xml is:
<exec executable="${compiler.name}" output="${tmp}/outerr.txt" failonerror="true">


so it's a call to the make command on your system. It also produces 
output from the make into ${tmp}/outerr.txt, which I believe
expands to modules/security2/build/tmp/outerr.txt (at least that is 
where it appears on my machine). Inside that file there
should be error messages from the build which will be more helpful for 
diagnosing the problem.


--
Oliver Deakin
IBM United Kingdom Limited



karan malhi wrote:

I set JAVA_HOME to jdk1.5 and the build still fails

timestamp:
[echo] build-date=20060214
[echo] build-time=20060214_1033)
[echo] on platform=Linux version=2.6.14-1.1656_FC4 arch=i386
[echo] with java home = /home/karan/jdk1.5.0_06/jre VM version = 
1.5.0_06-b05 vendor = Sun Microsystems Inc.


BUILD FAILED
/home/karan/projects/Harmony/make/build.xml:41: The following error 
occurred while executing this line:
/home/karan/projects/Harmony/native-src/build.xml:103: The following 
error occurred while executing this line:
/home/karan/projects/Harmony/modules/security2/make/build.xml:331: The 
following error occurred while executing this line:
/home/karan/projects/Harmony/modules/security2/make/native/linux/jaasnix.xml:72: 
exec returned: 1



Geir Magnusson Jr wrote:


That's the problem.  As far as I know, GCJ isn't quite complete yet...



karan malhi wrote:


[echo] on platform=Linux version=2.6.14-1.1656_FC4 arch=i386
[echo] with java home = /usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre 
VM version = 4.0.2 20051125 (Red Hat 4.0.2-8) vendor = Free Software 
Foundation, Inc.



Alexey Petrenko wrote:


  [javac] 1. ERROR in
/home/karan/projects/Harmony/modules/security2/src/common/javasrc/javax/crypto/SealedObject.java 


  [javac]  (at line 63)
  [javac] encodedParams = (byte []) s.readUnshared();
  [javac] 
  [javac] The method readUnshared() is undefined for the type
ObjectInputStream
  



What JVM do you use to run ant? It seems that there is no
ObjectInputStream.readUnshared method in your JVM...

--
Alexey A. Petrenko
Intel Middleware Products Division
 











Re: Platform dependent code placement (was: Re: repo layout again)

2006-02-17 Thread Oliver Deakin
 and define
directives.



Using the names consistently will definitely help, but choosing whether
to create a separate copy of the file in a platform-specific
sub-directory, or to use #define's within a file in a shared-family
sub-directory will likely come down to a case by case decision.  For
example, 32-bit vs. 64-bit code may be conveniently #ifdef'ed in some .c
files, but a .h file that defines pointer types etc. may need different
versions of the entire file to keep things readable.
  
This is a tricky one. I think in most cases the difference between 
32/64-bit code should be minor and
mostly confined to header defines as Tim suggests. For this, ifdefs will 
be sufficient. I would simply suggest
that we adopt a policy of always marking all #else and #endif directives clearly 
to indicate which condition
they relate to.
However, there may be instances where using ifdefs obfuscates the code. 
I think most of the time this
will be a judgement call on the part of the coder - if you look at a 
piece of code and cannot tell what
the preprocessor is going to give you on a particular platform, you're 
probably looking at a candidate
for code separation.
  

Finally, I'd suggest that the platform-dependent code can be organized
in 3 different ways:

(1) Explicitly, via defining the appropriate file list. For example, an 
Ant xml file may choose one fileset or another, depending on
the current OS and ARCH property values. This approach is most
convenient, for example, whenever third-party code is compiled or
the file names cannot be changed for some reason.



Ant ?!  ;-)  or platform-specific makefile #includes?

  

(2) Via the file path naming convention. This is the preferred
approach and works well whenever distinctive files for different
platforms can be identified.



yep (modulo discussion of filenames vs. dir names to enable vpath)

  

(3) By means of preprocessor directives. This could be convenient
if only a few lines of code need to vary across platforms. However,
preprocessor directives would make the code less readable, hence this
should be used with care.

In terms of the build process, this means that the code has to pass all 3
stages of filtering before it is selected for compilation.



I like it.  Let's just discuss what tools do the selection -- but I
agree with the approach.

  

The point is that components in Harmony could be very different,
especially if we take into account that they may belong both to the Class
Libraries and the VM world.



There will be files that it makes sense to share for sure (like vmi.h
and jni.h etc.) but they should be stable-API types that can be
refreshed across the boundary as necessary.

  

Hence, the most efficient (in terms of code
sharing and readability) code placement would require maximum
flexibility, while preserving some well-defined rules. The scheme
based on file dir/name matching seems to be flexible enough.

How does the above proposal sound?



  
Sounds good :) It makes a lot of sense to organise the code in a way 
that promotes reuse across platforms.

+1 from me


--
Oliver Deakin
IBM United Kingdom Limited




Cool, perhaps we can discuss if it should be gmake + vpath or ant.

Thanks for resurrecting this thread.

Regards,
Tim


  

Maybe in some components we would want to include a window manager
family too, though let's cross that bridge...

I had a quick hunt round for a recognized standard or convention for OS
and CPU family names, but it seems there are enough subtle differences
around that we should just define them for ourselves.



My VM's config script maintains CPU type, OS name, and word size as three
independent values.  These are combined in various ways in the source code
and support scripts depending on the particular need.  The distribution script
names the 'tar' files for the binaries with all three as part of the file name,
with ...-CPU-OS-WORD.tar as the tail end of the file name.  (NB:  I am going
to simplify the distribution scripts shortly into a single script that creates the
various pieces, binaries, source, and documentation.  This will be out soon.)

Does this help?

Dan Lydick

  

Regards,
Tim


--

Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.



Dan Lydick

  


  




Re: java.lang.String.replaceFirst from IBM VM throws NPE

2006-02-21 Thread Oliver Deakin

Hi Alexey,

OK, I've recreated your problem using the latest snapshot and VME. 
Essentially the code at 
modules/kernel/src/main/java/java/lang/String.java is the same as that 
in the VME kernel.jar. Looking in there (these calls can also be seen if 
the test is run within a debugger), we see that the replaceFirst(String, 
String) implementation is:


public String replaceFirst(String expr, String substitute) {
   return Pattern.compile(expr).matcher(this).replaceFirst(substitute);
}

Unfortunately the implementation of Pattern at 
modules/regex/src/main/java/java/util/regex/Pattern.java is only a stub 
(as HARMONY-39 has not yet been accepted into the Harmony SVN 
repository) and as such just returns null. Thus when we try to 
dereference the return from Pattern.compile(expr) we receive a 
NullPointerException. Once the regex in HARMONY-39 is moved into SVN 
this should go away.



As a sideline, I think we should be able to move String.java out of 
kernel entirely anyway. We already have an implementation at 
modules/kernel/src/main/java/java/lang/String.java, and the only VM-specific 
code in String is the intern() method. This method could simply 
be redirected to call intern(String) on the VM class, which lives within kernel, 
and then String.java can be moved into LUNI. It also means that the VM 
writer(s) need not implement the rest of the String class unnecessarily. 
Sound good?



Alexey Petrenko wrote:

We got a problem with Harmony on the IBM VM on Windows.
java.lang.String.replaceFirst throws NPE.

Here is the testcase:
public class Test {
    public static void main(String args[]) {
        String xx = "test";
        xx = xx.replaceFirst("t", "z");
    }
}

Here is the stack trace:
C:\Work\Harmony\Sources\Harmony\deploy\jre\bin>java Test
Exception in thread "main" java.lang.NullPointerException
at java.lang.String.replaceFirst(String.java:1642)
at Test.main(Test.java:4)

Since the IBM VM is not open source I cannot check java.lang.String for
the cause of this problem :(

Can we ask IBM guys somehow to fix this issue?

--
Alexey A. Petrenko
Intel Middleware Products Division
  


--
Oliver Deakin
IBM United Kingdom Limited



Re: java.lang.String.replaceFirst from IBM VM throws NPE

2006-02-22 Thread Oliver Deakin

Sergey Soldatov wrote:

By the way, how does this affect performance? java.lang.String is the most
used class, and even such simple operations will cause loading of the full regex
package.
  


Do you have another solution in mind that would avoid the regex package 
load?
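
For context, here is a purely hypothetical sketch of the kind of literal-pattern 
fast path such a solution might take - it is not something proposed in this 
thread, and it ignores replaceFirst details such as $-group references in the 
replacement string:

    // Purely illustrative sketch: a literal-pattern fast path that avoids loading
    // java.util.regex for simple calls. Patterns containing metacharacters would
    // still fall back to Pattern.compile.
    public class ReplaceFirstFastPath {
        private static final String REGEX_META = "\\[](){}.*+?^$|";

        static boolean isLiteral(String expr) {
            for (int i = 0; i < expr.length(); i++) {
                if (REGEX_META.indexOf(expr.charAt(i)) >= 0) {
                    return false; // contains a metacharacter; use the regex package
                }
            }
            return true;
        }

        static String replaceFirstLiteral(String target, String expr, String substitute) {
            int index = target.indexOf(expr);
            if (index == -1) {
                return target;
            }
            return target.substring(0, index) + substitute
                    + target.substring(index + expr.length());
        }

        public static void main(String[] args) {
            System.out.println(replaceFirstLiteral("test", "t", "z")); // prints "zest"
        }
    }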



On 2/21/06, Oliver Deakin [EMAIL PROTECTED] wrote:
  

Hi Alexey,

 Looking in there (these calls can also be seen if
the test is run within a debugger), we see that the replaceFirst(String,
String) implementation is:

public String replaceFirst(String expr, String substitute) {
   return Pattern.compile(expr).matcher(this).replaceFirst(substitute);
}

Unfortunately the implementation of Pattern at
modules/regex/src/main/java/java/util/regex/Pattern.java is only a stub
(as HARMONY-39 has not yet been accepted into the Harmony SVN
repository) and as such just returns null. Thus when we try to
dereference the return from Pattern.compile(expr) we receive a
NullPointerException. Once the regex in HARMONY-39 is moved into SVN
this should go away.


As a sideline, I think we should be able to move String.java out of
kernel entirely anyway. We already have an implementation at
modules/kernel/src/main/java/java/lang/String.java, and the only VM
specific code in String is the intern() method. This method could simply
be redirected to call VM.intern(String), a class which is within kernel,
and then String.java can be moved into LUNI. It also means that the VM
writer(s) need not implement the rest of the String class unnecessarily.
Sound good?


Alexey Petrenko wrote:


We got a problem with Harmony on the IBM VM on Windows.
java.lang.String.replaceFirst throws NPE.

Here is the testcase:
public class Test {
    public static void main(String args[]) {
        String xx = "test";
        xx = xx.replaceFirst("t", "z");
    }
}

Here is the stack trace:
C:\Work\Harmony\Sources\Harmony\deploy\jre\bin>java Test
Exception in thread "main" java.lang.NullPointerException
at java.lang.String.replaceFirst(String.java:1642)
at Test.main(Test.java:4)

Since the IBM VM is not open source I cannot check java.lang.String for
the cause of this problem :(

Can we ask IBM guys somehow to fix this issue?

--
Alexey A. Petrenko
Intel Middleware Products Division

  

--
Oliver Deakin
IBM United Kingdom Limited






--
Sergey Soldatov
Intel Middleware Products Division

  


--
Oliver Deakin
IBM United Kingdom Limited



Re: classlib build status emails?

2006-02-22 Thread Oliver Deakin

Yup, got to echo Tim's words - very cool :)

It's great for us to be able to see how near/far we are from a full J2SE 
implementation, and provides

a simple way for people to find areas that need work. Thanks!


Stuart Ballard wrote:

Stuart Ballard stuart.a.ballard at gmail.com writes:
  

If you can give me a URL that will always point to the latest jar file(s), I
can set up nightly japi results and mail diffs to this list.



Geir gave me a pointer to the latest snapshots, so the japi results are now 
online:


http://www.kaffe.org/~stuart/japi/htmlout/h-jdk10-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-jdk11-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-jdk12-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-jdk13-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-jdk14-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-jdk15-harmony
http://www.kaffe.org/~stuart/japi/htmlout/h-harmony-jdk15

The last report triggers a recently-discovered bug in japitools that causes some
StringBuffer methods to be incorrectly reported as missing in jdk15 (which
would mean that they are extra methods in harmony). I suggest ignoring the last
report for now, or at least verifying anything it claims against Sun's
documentation before acting on it.

Other than that the reports should give correct information about Harmony's
coverage of the API defined in each JDK version.

Whenever these results change for better or worse, (unless I've screwed
something up), an email will be sent to this list with the differences.

Stuart.


  


--
Oliver Deakin
IBM United Kingdom Limited



Re: Platform dependent code placement (was: Re: repo layout again)

2006-03-09 Thread Oliver Deakin

Time to resurrect this thread again :)

With the work that Mark and I have been doing in HARMONY-183/155/144/171 
we will be at a point soon where all the shared code has been taken out 
of the native-src/win.IA32 and native-src/linux.IA32 directories and 
combined into native-src/shared. Once completed we will be in a good 
position to reorganise the code into whatever layout we choose, and 
refactor the makefiles/scripts to use gmake/ant across both platforms. I 
don't think previous posts on this thread really reached a conclusion, so 
I'll reiterate the previous suggestions:


1) Hierarchy of source - two suggestions put forward so far:
   - Keep architecture and OS names solely confined to directory names. 
So, for example, we could have:

  src\main\native\
 shared\
 unix\
 windows\
 windows_x86\
 solaris_x86\
 All windows_x86-specific code will be contained under that 
directory, any generic windows code will be under windows\, and code 
common to all platforms will be under shared\ (or whatever name).
 So when looking for a source/header file on, for example, windows 
x86 the compiler would first look in windows_x86, then windows, then shared.


   - Alternatively, have directory names as above, but also allow the 
OS and arch to be mixed into file names. To quote Andrey's previous mail [1]:
 Files in the source tree are selected for compilation based on 
the OS or ARCH attribute values which may (or may not) appear in a file 
or directory name.

  Some examples are:
src\main\native\solaris\foo.cpp
means file is applicable for whatever system running Solaris;

   src\main\native\win\foo_ia32.cpp
   file is applicable only for  Windows / IA32;

   src\main\native\foo_ia32_em64t.cpp
   file can be compiled for whatever OS on either IA32 or EM64T 
architecture, but nothing else.
 Files will be selected using a regex expression involving the OS 
and arch descriptors. This is intended to cut down duplication between 
source directories.


Personally I prefer the first system as it is simple to maintain, keeps 
file names consistent and concise and allows developers to easily keep 
track of function location.
For example, as Graeme pointed out in [2], the developer will always 
know that hyfile_open() is defined in hyfile.c.


In addition, I don't believe that the second system will give us much of 
a decrease in the number of duplicated files. For example, if a piece of 
code is unique to only linux
and windows on x86, will the file be named foo_linux_windows_x86.c? How 
will the build scripts be able to determine whether this means all linux 
platforms plus
windows_x86 or windows and linux only on x86? In these cases we will 
either end up duplicating foo_x86.c in the windows and linux directories 
or creating an extra directory
called x86 which contains foo_windows_linux.c. Potentially we will 
either get similar amounts of duplication, or more directories than the 
first method, and because there
is no hard rule on the layout (you can choose directory or filenames to 
include OS/arch) there is no guarantee where a developer will choose to 
put their code in these situations.



2) Build tools - there have been two previous suggestions:
   - Use gmake and VPATH to complement the first layout described 
above. This could lead to platform independent makefiles stored in the 
shared\ directory of each module
 that include platform specifics (such as build file lists, 
compiler flags etc) from a centralised set of resources.


   - Alternatively, use Ant to select the set of files to be compiled 
by employing regex expressions. This sits well with the second layout 
described above (although could also
 be applied to the first) and a regex expression has been described 
by Nikolay in [3].


I prefer the use of gmake here. We can use generic makefiles across 
platforms and pointing the compiler at the right files in the first 
layout above is as easy as setting VPATH to, for example,
windows_x86:windows:shared. I think that complex regex expressions will 
be harder to maintain (and initially understand!).



Opinions? Once we agree on ideas, perhaps we could put together a 
Wiki/website(?) page describing layout, tools and a list of OS/arch 
names to use.


Oliver Deakin
IBM United Kingdom Limited

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED]
[2] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED]
[3] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED]




Re: Subclipse Diff: Are there any way to ignore whitespace when creating patch using Subclipse in Eclipse

2006-03-16 Thread Oliver Deakin

Hi Richard,

If you go to Window > Preferences, then look in General > Compare/Patch; 
there is a tick box that says "Ignore white space". Ticking this may 
solve some of your problems. However, formatting that splits a single 
line into multiple lines (where the line length is determined to be too 
long) will still appear as diffs.



Richard Liang wrote:

Dears,

If you select "Spaces only" as the Eclipse tab policy and you format the 
Harmony source code in Eclipse, then when you create a patch for the source 
code, Subclipse may regard the source code as totally different from 
the source in SVN. Then other developers cannot tell what you have 
changed in the source code. So is there any way to avoid this 
confusion? Thanks a lot.




--
Oliver Deakin
IBM United Kingdom Limited



Re: [vote] Require compiler options that allow partial 5.0 language features

2006-03-17 Thread Oliver Deakin

karan malhi wrote:

+1.

I didn't quite understand why it is an undocumented compiler 
feature. The standard javac compiler does have a -target option, 
which is probably what Eclipse would be using to generate the class 
files for a 1.4 target VM.


It is the jsr14 part of the option that is undocumented rather than 
the -target. If you use the standard -target 1.4, the compiler will 
give you an error message and will not compile your source. You have to 
use -target jsr14 to be able to compile 5.0 source into 1.4 bytecode.
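
For illustration (a sketch, not code from the Harmony build), source like the 
following uses only 5.0 features that erase away - generics and the enhanced 
for loop - which is the kind of code -target jsr14 can turn into 1.4-level 
class files; features that need 5.0 runtime classes, such as enums, are more 
problematic:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: generics and the enhanced for loop erase to pre-5.0 constructs,
    // so a class like this can (in principle) be emitted as 1.4-compatible bytecode.
    public class Jsr14Example {
        static int totalLength(List<String> words) {
            int total = 0;
            for (String word : words) {
                total += word.length();
            }
            return total;
        }

        public static void main(String[] args) {
            List<String> words = new ArrayList<String>();
            words.add("harmony");
            words.add("classlib");
            System.out.println(totalLength(words)); // prints 15
        }
    }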




Tim Ellison wrote:
As discussed on the list, there is a compiler option in the 5.0


compilers we use that allows source code containing a subset of Java 5.0
language features to be compiled into 1.4 compatible class files.

Since this is quite a significant change I'd like to get a vote on
whether the project should make this compiler option a necessity for our
code.

The positive outcome of this is that we can develop APIs that rely on
those 5.0 language features, and run the resulting code on existing
1.4-compatible VMs.

The downside is that we are using an undocumented compiler feature on
the reference implementation (it is supported on the Eclipse compiler).

[ ] +1 - Yes, change the build scripts to compile 5.0 code to 1.4 target
[ ]  0 - I don't care
[ ] -1 - No, don't change the compiler options (please state why)


Regards,
Tim

 





--
Oliver Deakin
IBM United Kingdom Limited



Re: [classlib] ant platform property definitions

2006-03-30 Thread Oliver Deakin

Mark Hindess wrote:

Dan,

Thanks for the helpful comments.

On 3/30/06, bootjvm [EMAIL PROTECTED] wrote:
  

Concerning the ideas for platform names, I think lower case names
like 'linux' and 'windows' and 'solaris' and 'aix' are by far the simplest
method.  It avoids UPPER case errors with the shift key for these
_very_ common key sequences, reducing Inaccurate kEy SEquences
quite a bit.  I have seen this work well for both platform names
and for project names (such as newproj1 instead of NewProj1)
with favorable long-term response from those who type the key
sequences most.



Very good point.  You are absolutely correct.  Sticking with Ant case
may reduce complexity in the ant files but it makes things more
confusing/complex for users.  This would be a bad idea.  The case
mapping can be managed in a single ant file, for classlib anyway,
which should make it manageable.
  


Agreed. Using mixed case just adds potential for errors. All lower case 
is simple and doesn't
require us to remember whether, for example, aix is AIX or Aix or 
whatever.


  

Bottom line: Mixed case just adds one more level of complexity
to the whole situation.  Other comments below

Dan Lydick




[Original Message]
From: Mark Hindess [EMAIL PROTECTED]
To: Harmony Dev harmony-dev@incubator.apache.org
Date: 3/29/06 10:28:41 AM
Subject: [classlib] ant platform property definitions

Currently a number of the classlib ant files normalize operating
system and architecture names.  Unfortunately they don't
really normalize them in the same way.  ;-)

For instance, native-src/build.xml sets target.platform to
linux.IA32 and modules/security/make/build.xml sets platform.name
to lnx.

  

PLEASE, no abbreviations!  Nobody abbreviates the same way, and
even one individual may use more than one abbreviation for a word!



Agreed.
  


Seconded. Again, abbreviations are something that will add complexity 
and opportunity for errors.



--
Oliver Deakin
IBM United Kingdom Limited



Re: [announce] New Apache Harmony Committers : Mikhail Loenko, George Harley, Stepan Mishura

2006-03-31 Thread Oliver Deakin

Congratulations!

Geir Magnusson Jr wrote:
Please join the Apache Harmony PPMC in welcoming the project's 3 newest 
committers:


  Stepan Mishura, Mikhail Loenko and George Harley

These three individuals have shown sustained dedication to the 
project, an ability to work well with others, and share the common 
vision we have for Harmony. We all continue to expect great things 
from them.


Gentlemen, as a first step to test your almighty powers of 
committitude, please update the committers page on the website.  That 
should be a good  (and harmless) exercise to test if everything is 
working.


Things to do :

1) test ssh-ing to the server people.apache.org.
2) Change your login password on the machine
3) Add a public key to .ssh so you can stop using the password
4) Set your SVN password  : just type 'svnpasswd'

At this point, you should be good to go.  Checkout the website from 
svn and update it.  See if you can figure out how.


Also, for your main harmony/enhanced/classlib/trunk please be sure 
that you have checked out via 'https' and not 'http' or you can't 
check in. You can switch using svn switch. (See the manual)


Finally, although you now have the ability to commit, please remember :

1) continue being as transparent and communicative as possible.  You 
earned committer status in part because of your engagement with 
others.  While it was a "have to" situation because you had to submit 
patches and defend them, we believe it is a "want to".  Community 
is the key to any Apache project.


2) We don't want anyone going off and doing lots of work locally, and 
then committing.  Committing is like voting in Chicago - do it early 
and often.  Of course, you don't want to break the build, but keep the 
commit bombs to an absolute minimum, and warn the community if you 
are going to do it - we may suggest it goes into a branch at first.  
Use branches if you need to.


3) Always remember that you can **never** commit code that comes from 
someone else, even a co-worker.  All code from someone else must be 
submitted by the copyright holder (either the author or author's 
employer, depending) as a JIRA, and then follow up with the required 
ACQs and BCC.



Again, thanks for your hard work so far, and welcome.

The Apache Harmony PPMC




--
Oliver Deakin
IBM United Kingdom Limited



New IBM VME

2006-04-04 Thread Oliver Deakin

Hi all,

I'm pleased to announce that a new IBM VME will be made available soon at:
  http://www-128.ibm.com/developerworks/java/jdk/harmony/index.html

The new VME downloads are named Harmony-vme-win.IA32-v2.zip and 
Harmony-vme-linux.IA32-v2.tar.gz. I would like to stress that if you 
download these packages now, they will *not* work with the class library 
code currently in Harmony Subversion. This VME has been created looking 
forward to changes that have been discussed on the list, but have not 
yet been carried out. They are:
- completion of renaming of com.ibm packages, especially in LUNI 
module. The new VME expects only org.apache.harmony package names.
- removal of String from the kernel, and addition of an intern(String) 
method to the org.apache.harmony.kernel.vm.VM class. The new VME does 
*not* contain String in its kernel jars. It does, however, provide an 
intern(String) method in the VM class, as was suggested in [1]. The 
String.intern() method in Harmony will just redirect the call to 
VM.intern(String).


Once these changes are made in the Harmony repository, the new version 
of the VME will be required to run with Harmony classlib. I will send 
out a further mail when this is done confirming that the new VME is 
available and ready to use.


A link to the VME v1 will still be available in the same place, and this 
should still be used until the above changes are made.


Regards,
Oliver


[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED]


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: New IBM VME

2006-04-05 Thread Oliver Deakin

Daniel,

The new VME is still at the 1.4 level, but with updated VM and kernel 
classes. As I have mentioned, two of the main reasons for this VME 
update were to allow package renaming to go ahead, and also to allow 
String to be removed from the kernel, making kernel implementation 
easier for other VM providers. If there is a necessity in the future for 
a 1.5 VME, then one could be created, but currently classlib only 
requires 1.4. I believe that while we are in a transitional phase from 
1.4 to 1.5, a 1.4 VME will suffice, and the solution that is being 
discussed in the '[classlib] Switching to a 5.0 compiler' thread [1] is the way 
to go ahead for the moment.


My hope is that in the not too distant future we may see one or two of 
the Harmony VMs stepping up to 1.5 level, and we can begin to use them 
with our classlib :)


Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200604.mbox/[EMAIL PROTECTED]



Daniel Gandara wrote:

Oliver,
   pleased to hear good news from you!!  I believe this is great for 
the project.
I have one question regarding the version of the VME, is it 1.5?   I'm 
asking this
because we have received contributions of some packages (for example: 
java.math
and java.rmi packages) which are 1.5 (compliant and dependent) but 
cannot be
used with the current VM.

Thanks,

Daniel

- Original Message - From: Oliver Deakin 
[EMAIL PROTECTED]

To: harmony-dev@incubator.apache.org
Sent: Tuesday, April 04, 2006 1:26 PM
Subject: New IBM VME



Hi all,

I'm pleased to announce that a new IBM VME will be made available 
soon at:

  http://www-128.ibm.com/developerworks/java/jdk/harmony/index.html

The new VME downloads are named Harmony-vme-win.IA32-v2.zip and 
Harmony-vme-linux.IA32-v2.tar.gz. I would like to stress that if you 
download these packages now, they will *not* work with the class 
library code currently in Harmony Subversion. This VME has been 
created looking forward to changes that have been discussed on the 
list, but have not yet been carried out. They are:
- completion of renaming of com.ibm packages, especially in LUNI 
module. The new VME expects only org.apache.harmony package names.
- removal of String from the kernel, and addition of an 
intern(String) method to the org.apache.harmony.kernel.vm.VM class. 
The new VME does *not* contain String in its kernel jars. It does, 
however, provide an intern(String) method in the VM class, as was 
suggested in [1]. The String.intern() method in Harmony will just 
redirect the call to VM.intern(String).


Once these changes are made in the Harmony repository, the new 
version of the VME will be required to run with Harmony classlib. I 
will send out a further mail when this is done confirming that the 
new VME is available and ready to use.


A link to the VME v1 will still be available in the same place, and 
this should still be used until the above changes are made.


Regards,
Oliver


[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED] 



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [vmi] JNI spec interpretation?

2006-04-05 Thread Oliver Deakin

Robert Lougher wrote:

The word pragmatic springs to mind.  FWIW, JamVM will print nothing
if no exception is pending.  It didn't do this originally -- it blew
up with a SEGV.  I changed it because a user reported an application
which didn't work with JamVM but it did with Sun's VM (can't remember
which application, it was a long time ago).
  


This sounds right to me. As a user I'd expect a call that prints 
exception output to the screen to just print nothing and return if there 
is no pending exception.



It's all very well bombing out with an assertion failure, but to the
average end-user it's still the VM's fault, especially if it works with
other runtimes (i.e. Sun's).
  


Exactly - isn't this one of those cases of undocumented RI 
behaviour that we should try to match?



Rob.

On 4/5/06, Archie Cobbs [EMAIL PROTECTED] wrote:
  

Tim Ellison wrote:


Understood -- my point is that blowing up and corrupting internal
data structures is not something you would do by design.
  

Agreed. By using assertions you get the best of both worlds.
Assertions are especially useful for detecting badly behaving
JNI native code, which can otherwise result in very hard to
track down errors.

-Archie

__
Archie Cobbs  *CTO, Awarix*  http://www.awarix.com

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [vmi] JNI spec interpretation?

2006-04-05 Thread Oliver Deakin



Archie Cobbs wrote:

Oliver Deakin wrote:

The word pragmatic springs to mind.  FWIW, JamVM will print nothing
if no exception is pending.  It didn't do this originally -- it blew
up with a SEGV.  I changed it because a user reported an application
which didn't work with JamVM but it did with Sun's VM (can't remember
which application, it was a long time ago).


This sounds right to me. As a user I'd expect a call that prints 
exception output to the screen to just print nothing and return if 
there is no pending exception.



It's all very well bombing out with an assertion failure, but to the
average end-user it's still the VM's fault, especially if it works with
other runtimes (i.e. Sun's).


Exactly - isn't this one of those cases of undocumented RI 
behaviour that we should try to match?


There is nothing undocumented about this. The JNI spec says (though
not very clearly) that you should not call this function unless you know
there is a pending exception.


What I mean to say is that the behaviour when the function is called 
without a pending exception is unspecified, and in that case I think it 
makes most sense to match the RI.




However, that's not to say that we shouldn't be pragmatic and
try to handle the situation gracefully.

-Archie

__ 

Archie Cobbs  *CTO, Awarix*  
http://www.awarix.com


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [vmi] JNI spec interpretation?

2006-04-05 Thread Oliver Deakin


Tim Ellison wrote:

Etienne Gagnon wrote:
  

Oliver Deakin wrote:


What I mean to say is that the behaviour when the function is called
without a pending exception is unspecified, and in that case I think it
makes most sense to match the RI.
  

Let's say that I disagree with *imposing* matching of the RI's behavior for
undefined JNI code on Harmony VMs.  In many cases, matching the RI's
behavior on undefined JNI would require reverse engineering way beyond a
point I feel comfortable with.  I definitely think that Harmony's native
code should NEVER rely on undefined JNI behavior.  JIRA reports should
be raised for faulty JNI code.



I agree.
  


Also agree. I think, as we agreed to do with the Java APIs, differences 
with the RI should be examined on a case-by-case basis. To clarify, my 
mail above wasn't intended as a blanket statement for all differences, 
just for this particular one.


  

On the other hand, I think that it would be a nice thing to keep an
explicit list of expected behavior for some widely used (by 3rd
parties) undefined JNI, so that VM implementors are *encouraged* (not
*forced*) to provide such workarounds.  Some workarounds can be
expensive to implement; we had we had to implement a more expensive
approach for badly written JNI code that does not explicitly reserve
enough local native references (only 16 are guaranteed by default in the
spec).



  


Agreed - and recording where we have chosen to match or differ from the 
RI would be useful also.




I agree -- and users will vote with their feet when choosing between VMs
that make different implementation decisions.  That's healthy.

  

So, I'll add the ExceptionDescribe workaround to SableVM permanently,
but I do not wish to feel obligated to do so. :-)



Of course not, that's your prerogative.

Regards,
Tim

  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: New IBM VME

2006-04-06 Thread Oliver Deakin



Daniel Gandara wrote:

Oliver Deakin wrote:

ok, I'll ask there about removing 1.5 dependencies from java.rmi and
compile it to get 1.4 bytecode...


I hope that you would not have to remove too much when compiling to 
1.4 bytecodes - I guess this is something we still need to 
investigate. Have you tried building the rmi code with the -target 
jsr14 option? Id (amongst others, Im sure) be interested to hear what 
results you get, and whether any alterations are necessary.


I'll try building the code with -target jsr14 and see how much has to be
removed; I'll create a new thread with the results;  ok?


Sounds good - thanks






My hope is that in the not too distant future we may see one or two 
of the Harmony VMs stepping up to 1.5 level, and we can begin to 
use them with our classlib :)


hope so too.

Thanks,

Daniel



Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200604.mbox/[EMAIL PROTECTED] 




Daniel Gandara wrote:

Oliver,
   pleased to hear good news from you!!  I believe this is great 
for the project.
I have one question regarding the version of the VME, is it 1.5?   
I'm asking this
because we have received contributions of some packages (for 
example: java.math
and java.rmi packages) which are 1.5 (compliant and dependant) but 
cannot be

used with current VM.

Thanks,

Daniel

- Original Message - From: Oliver Deakin 
[EMAIL PROTECTED]

To: harmony-dev@incubator.apache.org
Sent: Tuesday, April 04, 2006 1:26 PM
Subject: New IBM VME



Hi all,

I'm pleased to announce that a new IBM VME will be made available 
soon at:

  http://www-128.ibm.com/developerworks/java/jdk/harmony/index.html

The new VME downloads are named Harmony-vme-win.IA32-v2.zip and 
Harmony-vme-linux.IA32-v2.tar.gz. I would like to stress that if 
you download these packages now, they will *not* work with the 
class library code currently in Harmony Subversion. This VME has 
been created looking forward to changes that have been discussed 
on the list, but have not yet been carried out. They are:
- completion of renaming of com.ibm packages, especially in LUNI 
module. The new VME expects only org.apache.harmony package names.
- removal of String from the kernel, and addition of an 
intern(String) method to the org.apache.harmony.kernel.vm.VM 
class. The new VME does *not* contain String in its kernel jars. 
It does, however, provide an intern(String) method in the VM 
class, as was suggested in [1]. The String.intern() method in 
Harmony will just redirect the call to VM.intern(String).


Once these changes are made in the Harmony repository, the new 
version of the VME will be required to run with Harmony classlib. 
I will send out a further mail when this is done confirming that 
the new VME is available and ready to use.


A link to the VME v1 will still be available in the same place, 
and this should still be used until the above changes are made.


Regards,
Oliver


[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED] 



--
Oliver Deakin
IBM United Kingdom Limited


- 


Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]






-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]





--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Refactoring package names

2006-04-06 Thread Oliver Deakin

Hi all,

Tim and I plan to complete the refactoring of package names from com.ibm 
to org.apache.harmony over the next few hours. We are going to make a 
backup of the current classlib trunk in a branch before we make our 
changes, so we can revert should anything unexpected occur.


Please be aware that while we are making these changes the build will 
probably be broken, so it may be worth not updating until we complete. 
I'll send a mail to confirm when we are finished.


Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Refactoring package names

2006-04-06 Thread Oliver Deakin

Thanks Richard - this is fixed now

Richard Liang wrote:

Oliver Deakin wrote:
Tim and I have now completed the package renames at repository 
revision 391957. It is now safe to update and start building the 
Harmony classlib again.


If you use the IBM VME in combination with Harmony classlib, you will 
need to download the new version (v2) to continue working with 
classlib after revision 391957. They can be found at:

http://www-128.ibm.com/developerworks/java/jdk/harmony/index.html
and are called Harmony-vme-win.IA32-v2.zip for Windows and 
Harmony-vme-linux.IA32-v2.tar.gz for Linux.
The VME v1 is not compatible with Harmony classlib after revision 
391957.



Awesome! I'm downloading the new VME

BTW, there is only an IBM Development Package for Apache Harmony v1.0 
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_USsource=ahdkS_TACT=105AGX05S_CMP=JDK 
link on the index page. But after you log in (if you already have an IBM 
ID) and follow the link, you will see the new VME v2.

Regards,
Oliver

Tim Ellison wrote:

Ok, the renames are done -- I'll leave this around for a few hours in
case somebody screams.  Otherwise it is a case of checking out the
earlier revision.

Regards,
Tim

Tim Ellison wrote:
 

FYI: Before we do the refactoring I'm making a copy of trunk onto the
branch directory ... I'll remove it once things are settled.

Regards,
Tim

Oliver Deakin wrote:
  

Hi all,

Tim and I plan to complete the refactoring of package names from 
com.ibm

to org.apache.harmony over the next few hours. We are going to make a
backup of the current classlib trunk in a branch before we make our
changes, so we can revert should anything unexpected occur.

Please be aware that while we are making these changes the build will
probably be broken, so it may be worth not updating until we 
complete.

I'll send a mail to confirm when we are finished.

Regards,
Oliver

  


  







--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [rmi] getting 1.4 bytecode

2006-04-10 Thread Oliver Deakin

Daniel Gandara wrote:
Hi, As discussed with Oliver, I built the rmi code with the -target 
jsr14 option and I got 1.4 bytecode for the package. I ran our 
test suite against the package and it seems to work OK.


That's great news!


This is a summary of the experience:
1) the -target jsr14 option only worked with Sun's compiler, I could 
not make it work on Eclipse...


Tim described in [1] that the Eclipse batch compiler unfortunately does 
not support this kind of 1.5 to 1.4 compilation. There has been further 
discussion in that thread as to which solution is best for us - the Sun 
compiler using -target jsr14, and/or the Eclipse compiler at the 1.5 level 
plus a tool called Retroweaver to alter the bytecodes to work on 1.4.

2) I had to make changes to the code, basically I had to change 
enums and change/remove the java.util.concurrent classes we use 
(ConcurrentHashMap and ThreadPoolExecutor)


I got some enum examples working by just adding a basic Enumeration 
class to the java.lang package in luni, but I only trialled fairly 
simple cases. What kind of alterations did you need to make? It will be 
interesting to know some of the limits of this compiler option.
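
As a purely hypothetical illustration (not Daniel's actual change), one common 
rewrite is to turn an enum into a pre-1.5 "typesafe constant" class so that 
nothing depends on java.lang.Enum at runtime:

    // Hypothetical illustration: an enum rewritten as a pre-1.5 "typesafe constant"
    // class, so no 5.0 runtime support is needed.
    public final class CallMode {
        public static final CallMode LOCAL = new CallMode("LOCAL");
        public static final CallMode REMOTE = new CallMode("REMOTE");

        private final String name;

        private CallMode(String name) { // private constructor: only the constants above exist
            this.name = name;
        }

        public String toString() {
            return name;
        }

        public static void main(String[] args) {
            CallMode mode = CallMode.REMOTE;
            System.out.println(mode == CallMode.REMOTE); // identity comparison, like an enum
        }
    }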




note: there is obviously a performance penalty due to 2).

The question I have now is whether I should send this modified code to be 
used in Harmony during this transitional phase or not.  What do you 
think?


I think it's probably a good thing to get the code out there so we can 
start to use it, even if this means making some temporary modifications.


Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200604.mbox/[EMAIL PROTECTED]




I'll be waiting for comments,
Daniel


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Compiler error on linux with IBM 5.0 and Sun 5.0 compilers

2006-04-13 Thread Oliver Deakin

Hi Mark,

I've just found the same thing while building the crypto tests on Windows 
using J9 5.0. All tests up to there are OK. The output I get is similar 
to yours:


compile.tests:
[echo] Compiling CRYPTO tests from 
C:\PROGRA~1\eclipse\workspace\classlib\modules\crypto\src\test\java
   [javac] Compiling 31 source files to 
C:\PROGRA~1\eclipse\workspace\classlib\modules\crypto\bin\test
   [javac] An exception has occurred in the compiler (1.5.0). Please 
file a bug
at the Java Developer Connection 
(http://java.sun.com/webapps/bugreport)  after
checking the Bug Parade for duplicates. Include your program and the 
following

diagnostic in your report.  Thank you.

I'll take a look and see if I can find which class it's compiling at the 
time.


Regards,
Oliver

Mark Hindess wrote:

If I do a clean checkout and run:

ant -f make/depends.xml download
ant -f make/build.xml
ant -f make/build-test.xml compile-support
cd modules/crypto
ant -f make/build.xml test

I get a compiler error.  I've tried using both an IBM 5.0 and Sun 5.0
compiler and get the same error:

  An exception has occurred in the compiler (1.5.0_06). Please file a
bug at the Java Developer Connection
(http://java.sun.com/webapps/bugreport)  after checking the Bug Parade
for duplicates. Include your program and the following diagnostic in
your report.  Thank you.

I assume since everyone else is quietly working on adding 5.0 code
that I am the only one with this problem?

I'm about to try the same test under windows, but thought I'd mention
this here first.

Regards,
 Mark.

--
Mark Hindess [EMAIL PROTECTED]
IBM Java Technology Centre, UK.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Compiler error on linux with IBM 5.0 and Sun 5.0 compilers

2006-04-13 Thread Oliver Deakin

Thanks Mark.
This fixes the compilation problems I saw in ~1.java and ~2.java. I can 
now run the crypto testsuite through on Windows OK with no failures. I 
will run the whole set of testsuites for all modules and post if I see 
anything similar elsewhere.



Mark Hindess wrote:

This fixes the *1.java case:

@@ -391,9 +391,13 @@
 }
 }
 try {
-skF[i].getKeySpec(secKeySpec,
-(defaultAlgorithm.equals(defaultAlgorithm2)
-? DESedeKeySpec.class : DESKeySpec.class));
+Class c;
+if (defaultAlgorithm.equals(defaultAlgorithm2)) {
+c = DESedeKeySpec.class;
+} else {
+c = DESKeySpec.class;
+}
+skF[i].getKeySpec(secKeySpec, c);
 fail(getKeySpec(secKey, Class):
InvalidKeySpecException must be thrown);
 } catch (InvalidKeySpecException e) {
 }

Just looking at the 2.java case and will submit a JIRA with both.

-Mark.

On 4/13/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
  

I think that would be a workaround once we understand the cause.

geir

Mikhail Loenko wrote:


We may compile the tests old way...

Thanks,
Mikhail

2006/4/13, Mark Hindess [EMAIL PROTECTED]:
  

Excluding these two tests:

 javax/crypto/SecretKeyFactoryTest1.java
 javax/crypto/SecretKeyFactoryTest2.java

then the compilation works.

-Mark.

On 4/13/06, Oliver Deakin [EMAIL PROTECTED] wrote:


Hi Mark,

Ive just found the same thing while building the crypto tests in windows
using J9 50. All tests up to there are ok. The output I get is similar
to yours:

compile.tests:
 [echo] Compiling CRYPTO tests from
C:\PROGRA~1\eclipse\workspace\classlib\modules\crypto\src\test\java
[javac] Compiling 31 source files to
C:\PROGRA~1\eclipse\workspace\classlib\modules\crypto\bin\test
[javac] An exception has occurred in the compiler (1.5.0). Please
file a bug
 at the Java Developer Connection
(http://java.sun.com/webapps/bugreport)  after
 checking the Bug Parade for duplicates. Include your program and the
following
diagnostic in your report.  Thank you.

Ill take a look and see if I can find which class it's compiling at the
time.

Regards,
Oliver

Mark Hindess wrote:
  

If I do a clean checkout and run:

ant -f make/depends.xml download
ant -f make/build.xml
ant -f make/build-test.xml compile-support
cd modules/crypto
ant -f make/build.xml test

I get a compiler error.  I've tried using both an IBM 5.0 and Sun 5.0
compiler and get the same error:

  An exception has occurred in the compiler (1.5.0_06). Please file a
bug at the Java Developer Connection
(http://java.sun.com/webapps/bugreport)  after checking the Bug Parade
for duplicates. Include your program and the following diagnostic in
your report.  Thank you.

I assume since everyone else is quietly working on adding 5.0 code
that I am the only one with this problem?

I'm about to try the same test under windows, but thought I'd mention
this hear first.

Regards,
 Mark.

--
Mark Hindess [EMAIL PROTECTED]
IBM Java Technology Centre, UK.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  

--
Mark Hindess [EMAIL PROTECTED]
IBM Java Technology Centre, UK.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






--
Mark Hindess [EMAIL PROTECTED]
IBM Java Technology Centre, UK.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html

Re: [docs] rearranging docs and linking from website

2006-04-19 Thread Oliver Deakin

Geir Magnusson Jr wrote:
Yesterday, I just did a little bit of work and took the docs from 
modules/security and modules/regex and put them in the trunk/docs directory 
under /security and /regex, respectively.  Other than some minor 
changes to reflect that they are for harmony, I made no changes.


I then added to the standard/site/docs directory a /externals/ 
directory, and then used svn:externals property to link across, and 
then added links to our docs page.


So

1) do not modify things in site/docs/externals - go do it in 
enhanced/classlib/trunk/docs/whatever




Hi Geir,

IMHO these files would fit in better if they were placed into, say, 
site/docs/documentation/classlib/regex|security directories
alongside the other documentation that is linked from the same page [1]. 
I think that adding module subdirs into classlib/trunk/docs
containing static (not generated) docs alongside directories containing 
Doxygen configuration and generated docs might cause

confusion. Is there a reason not to have these stored directly under site?


2) What other docs can we add?


Could the docs that are currently at classlib/trunk/doc/kernel_doc and 
classlib/trunk/doc/vm_doc also be moved to, say,

site/docs/documentation/classlib/kernel|natives?
Perhaps then a link to the kernel docs could be added to [1]. The docs 
under kernel_doc are currently out of date, since
String has been removed from kernel and some com.ibm packages have been 
renamed. The patch I have attached to
HARMONY-332 will allow a set of new up-to-date docs to be generated for 
kernel and native code, which could then

be used for the website.
There are currently two links (under Architecture and guides) to files 
in vm_doc at [2] which would need to be updated to point to the new 
locations.


Regards,
Oliver

[1] http://incubator.apache.org/harmony/documentation/documentation.html
[2] 
http://incubator.apache.org/harmony/subcomponents/classlibrary/index.html




My goal here is to make whatever we have available to people, and 
hopefully entice volunteers to start bringing it together into a 
coherent form.  The fact that it isn't is ok - something is better 
than nothing, but docs are going to be critical for this project, and 
I'm hoping someone steps up and moves us to a good start...


geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing

2006-04-26 Thread Oliver Deakin
 
that
the parameter space itself is also at some point defined in software,
which may have bugs of its own. You circumvent that by making humans the
parameter space (don't start about how humans are buggy. We don't want to
get into existentialism or faith systems when talking about unit testing, do
we?). The thing that gump enables is many-monkey QA - a way for thousands
of human beings to concurrently make shared assertions about software
without actually needing all that much human interaction.

More concretely, if harmony can run all known java software, and run it to
the asserted satisfaction of all its developers, you can trust that you have
covered all the /relevant/ parts of the parameter space you describe.


Yes.  And when you can run all known Java software, let me know :)
That's my point about the parameter space being huge.  Even when you
reduce the definition to that of all known Java software, you still
have a huge problem on your hands.

 

You will never get that level of trust when the assertions are made by 
software rather than humans. This is how open source leads to software 
quality.


Quoting myself, 'gump is the most misunderstood piece of software,
ever'.

cheers,

Leo






-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [VOTE] Acceptance of HARMONY-211 : Contribution of java.rmi

2006-05-09 Thread Oliver Deakin

+1

Geir Magnusson Jr wrote:
I have received the ACQs and the BCC for Harmony-211 in paper form and 
have reviewed them, so I can assert that the critical provenance 
paperwork is in order.  It is not in SVN yet, but I wanted to get this 
vote going at the same time as the Intel contribution in the same 
area.  I will get it scanned and into SVN ASAP.


This is the contribution from ITC.

Please vote to accept or reject this codebase into the Apache Harmony 
class library :


[ ] +1 Accept
[ ] -1 Reject  (provide reason below)

Let's let this run 3 days unless a) someone states they need more time 
or b) we get all committer votes before then.


geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [VOTE] Acceptance of HARMONY-199 : Contribution of javax.crypto and java.math

2006-05-09 Thread Oliver Deakin

+1

Geir Magnusson Jr wrote:
I have received the ACQs and the BCC for Harmony-199 in paper form and 
have reviewed them, so I can assert that the critical provenance 
paperwork is in order.  It is not in SVN yet, but I wanted to get this 
vote going at the same time as the other contributions from ITC.

I will get it scanned and into SVN ASAP.

This is the contribution from ITC.  This is just a vote to accept or 
reject the codebase.  What we do with the codebase  - what parts and 
how we integrate - is up for discussion on the -dev list.


Please vote to accept or reject this codebase into the Apache Harmony 
class library :


[ ] +1 Accept
[ ] -1 Reject  (provide reason below)

Let's let this run 3 days unless a) someone states they need more time 
or b) we get all committer votes before then.


geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-12 Thread Oliver Deakin

Geir Magnusson Jr wrote:



Oliver Deakin wrote:

Geir Magnusson Jr wrote:

Mark Hindess wrote:

On 9 May 2006 at 10:32, Geir Magnusson Jr [EMAIL PROTECTED] wrote:

Mark Hindess wrote:

As the Harmony Classlib grows, I think that being able to work on a
single module (or some subset of the modules) will become the
typical way of working for many (perhaps even most) contributors.

Sure, that makes sense.


So I think we need a plan to support this.  I also think that
forcing ourselves to support this mode of working sooner rather than
later will help us keep from accidentally breaking the modularity in
the current build/development process.

Can you not work on a single module now?


Yes.  But only by checking out everything.


Wouldn't you do that anyway?

I guess that's the mystery to me.  I figured that someone would do 
the full checkout, and go from there.


I think it's good if we can cater not only for those developers who 
want to check out the whole of classlib trunk, but also for those who 
want to work only on a single module. When I say "work on a single 
module", I mean only checking that module's directory out of SVN, and 
not the code for all the other modules, thereby saving download time 
and disk space, not to mention the subsequent reduced build times from 
only recompiling a single module.


Eh. :)  We have agreement on the root issue, but not as you describe. 
IOW, I agree that after a full build, you should be able to dive into a 
module and work and re-build in there in isolation.


However, I have little sympathy for the reduced download time or 
disk space argument.


I *do* sympathize with [psychos like] Mark who has 4 builds at once, 
and see that it would be efficient to have multiple independent 'dev 
subtrees' pointing at one or more pre-built trees.


It seems we have agreement that being able to rebuild a single module in 
isolation is a good idea - we have different ideas about how that may 
then be used, but I think both are valid and possible once the proper 
modularisation of code and build scripts exists. I guess once the build 
system is in place, it's up to the developer how they wish to use it :)






Currently a developer cannot work on both the native and Java code 
for a module without checking out the whole of classlib trunk, 
because the native source directories and the Ant build scripts associated 
with them are completely separate from the modules. IMHO it would be 
great if we could get the build scripts and code repository layout 
into a state where a developer can just check out a module, grab an 
hdk from some location and start changing and rebuilding both Java 
and native code (and also running tests against the rebuilt libraries).


This is mixing two issues.

yes, I think we should put the native code into each module, so you 
can just work in one module, both native and Java.


Yes, I see a reason why you would want to have a pre-build snapshot to 
use for multiple dev subtrees, for known good regression testing, etc.


But I have trouble with linking them.


ok, fair enough - I have mixed these two issues together. I think this 
stems from what I said above - we just have different ideas about how 
the modularisation will be used once it's in place. When I think of 
modularisation, I imagine a developer grabbing the source just for the 
module they are interested in and a snapshot build (hdk) and working 
away with it - thus I see the modularisation and the creation of an hdk 
as being linked. As you say, really they are two separate issues.




To do this there are at least three steps needed, as far as I can see:

1) Refactor the native code into the modular structure we currently 
have for Java code and tests. This has been spoken about before at 
[1]. The native code would be located within each module at 
modules/<module name>/src/main/native. As a starting point, can I 
propose that the natives be broken down in the following way:


modules/archive/src/main/native/
   |---archive/
   |---zip/
   +---zlib/

modules/auth/src/main/native/
   +---auth/

modules/luni/src/main/native/
   |common/
   |fdlibm/
   |launcher/
   |luni/
   |pool/
   |port/
   |sig/
   |thread/
   |vmi/
   +---vmls/

modules/prefs/src/main/native/
  +---prefs/

modules/text/src/main/native

Re: Supporting working on a single module?

2006-05-15 Thread Oliver Deakin

Hi Andrey,


Andrey Chernyshev wrote:

On 5/12/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Geir Magnusson Jr wrote:


SNIP



 To do this there are at least three steps needed, as far as I can 
see:


 1) Refactor the native code into the modular structure we currently
 have for Java code and tests. This has been spoken about before at
 [1]. The native code would be located within each module at
 modules/module name/src/main/native. As a starting point, can I
 propose that the natives be broken down in the following way:

 modules/archive/src/main/native/
    |---archive/
    |---zip/
    +---zlib/

 modules/auth/src/main/native/
    +---auth/

 modules/luni/src/main/native/
|common/
|fdlibm/
|launcher/
|luni/
|pool/
|port/
|sig/
|thread/
|vmi/
+---vmls/

 modules/prefs/src/main/native/
   +---prefs/

 modules/text/src/main/native/
|---text/
+--unicode/ (comes from
 the icu4c zips stored currently in depends)

 W/o thinking too hard about it, this works for me just fine.

Great - I am starting to look at how shared includes can be handled
across modules (as Mark alluded to in his earlier post in this thread
[1]), and at what changes will be required to split the natives into
these locations. I will be taking this in small steps, trying to get the
foundation and easy parts done first, and raising a JIRA for each step
rather than making one monolithic change.


Great!  I think splitting the sources by modules at the top level
directory is a good idea.
A few questions before the big source tree reorganization starts:


 modules/archive/src/main/native/
|---archive/
|---zip/


(1) Do we need to keep the 'main' directory in every module? If we
need to have a distinction between tests and sources, maybe just pull
tests one level up and have something like:
archive/
    src/
        native/
        java/
    tests/
        native/
        java/
I wonder if 'main' is an extra level of the directory tree we can
actually avoid (lazy people don't like typing cd too much :))


I think using src/main and src/test to group our implementation
and test code was a convention we agreed on a while back. Personally
I don't have any problem with it, but it's something we can look at again
if people don't like it. I think that's something that would be fairly easy
to alter once the natives are modularised, should we wish to do so.




(2) Why do we need to have 'luni' two times in the path, e.g.
modules/luni/src/main/native/luni/ ? If we need to put additional
stuff like 'port' into the luni module, perhaps it would be enough
to put it into a subdirectory within native, e.g.:
modules/luni/src/native/port/ ?


Maybe I am missing something, but I think what you're suggesting (putting
port etc. directly under the native directory) is the same as I laid out 
above - it's quite likely that my ascii diagram of the directory layout 
hasn't come across as intended, so to clarify, the resulting native 
directories will be:

modules/archive/src/main/native/archive/
modules/archive/src/main/native/zip/
modules/archive/src/main/native/zlib/

modules/luni/src/main/native/common/
modules/luni/src/main/native/fdlibm/
modules/luni/src/main/native/launcher/
modules/luni/src/main/native/luni/
modules/luni/src/main/native/pool/
modules/luni/src/main/native/port/
modules/luni/src/main/native/sig/
modules/luni/src/main/native/thread/
modules/luni/src/main/native/vmi/
modules/luni/src/main/native/vmls/

modules/prefs/src/main/native/prefs/

modules/text/src/main/native/text/
modules/text/src/main/native/unicode/

I think this agrees with what you were saying - please let me know if 
I've misunderstood!





BTW, I've noticed that this proposal is very similar to the DRLVM
source tree organization, which is like:
- vm
   - include  - top level include which contains h files exported by
various VM components;
   - interpreter
   - jitrino
   - vmcore
   ...
   other VM components

The module vmcore, for example, contains both native and java code:
vmcore/src/kernel_classes
  - native
  - javasrc

The building system for DRLVM has been designed in a modular way as well:
There is a building engine

Re: Supporting working on a single module?

2006-05-15 Thread Oliver Deakin



Tim Ellison wrote:

Andrey Chernyshev wrote:
snip

  

(1) Do we need to keep the 'main' directory in every module? If we
need to have a distinction between tests and sources, maybe just pull
tests one level up and have something like:
archive/
    src/
        native/
        java/
    tests/
        native/
        java/
I wonder if 'main' is an extra level of the directory tree we can
actually avoid (lazy people don't like typing cd too much :))



Really lazy people use path completion and don't care ;-)

  

(2) Why do we need to have 'luni' two times in the path, e.g.
modules/luni/src/main/native/luni/ ? If we need to put additional
stuff like 'port' into the luni module, perhaps it would be enough
to put it into a subdirectory within native, e.g.:
modules/luni/src/native/port/ ?



Is it just the name of that path element that you object to?  Seems a
bit cleaner to me if there is a bucket to put that source in.

However, (comment to Oliver as well), I'm left wondering where the
platform-specific vs. common code distinction is made?
  


I was thinking that platform specific directories would be laid out 
underneath each native component directory. So, for example, underneath 
the modules/luni/src/main/native/port directory there would be the 
following structure (avoiding ascii tree diagrams):


modules/luni/src/main/native/port/shared
modules/luni/src/main/native/port/linux
modules/luni/src/main/native/port/windows

with further platform specific directories being added as we expand.


  

BTW, I've noticed that this proposal is very similar to the DRLVM
source tree organization, which is like:



Great minds and all that :-)

  

- vm
   - include  - top level include which contains h files exported by
various VM components;
   - interpreter
   - jitrino
   - vmcore
   ...
   other VM components

The module vmcore, for example, contains both native and java code:
vmcore/src/kernel_classes
  - native
  - javasrc

The building system for DRLVM has been designed in a modular way as well:
there is a building engine part at the build/make and build/make/targets 
directories which is shared by all components. Each VM module has a 
building descriptor which is currently located in the build/make/components 
directory, but can also easily be moved to the component source tree to 
support the idea of a full independent checkout of a specific module.

I think the building system provided with DRLVM can easily be used to
support the source modularization approach; the proposed 'hdk' in that
case would provide the developers, besides the public includes, with
the shared part of the building scripts as well.



We should continue to collaborate on finding the best solution -- it has
worked very well so far!
  


Agreed!


Regards,
Tim


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-15 Thread Oliver Deakin

Mark Hindess wrote:

On 15 May 2006 at 16:14, Andrey Chernyshev [EMAIL PROTECTED] wrote:
  

Hi Oliver,



I think using src/main and src/test to group our implementation
and test code was a convention we agreed on a while back. Personally
I dont have any problem with it, but it's something we can look at again
  

The current layout is just fine with me as well, in general. I just
thought that, once a big movement over the filesystem starts, it could
be a good chance to remove a few extra levels, in case we find them
redundant. If we don't think they are redundant, then let's leave them
as they are.



 modules/text/src/main/native/text/
 modules/text/src/main/native/unicode/

I think this agrees with what you were saying - please let me know if
I've misunderstood!
  

Actually I thought of having the BidiWrapper.c, for example, directly
under the modules/text/src/main/native dir (not considering
various OSes and platforms at this time :)). Since we already have a
'text' directory once at the beginning of the path, it may look a bit
excessive to repeat it again at the end.



From the perspective of that single file/module, what you say might
be reasonable.  But I think it would be nice to have consistency between
modules so that we can share common functionality between build files.
  


Agreed - I think we should keep a consistent layout of the source across 
modules. It also keeps the source directly in a directory that matches 
the name of the library it will be used to build.



Regards,
 Mark.
  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-15 Thread Oliver Deakin

Andrey Chernyshev wrote:

I was thinking that platform specific directories would be laid out
underneath each native
component directory. So, for example, underneath the
modules/luni/src/main/native/port
directory there would be the following structure (avoiding ascii tree
diagrams):

 modules/luni/src/main/native/port/shared
 modules/luni/src/main/native/port/linux
 modules/luni/src/main/native/port/windows

with further platform specific directories being added as we expand.


Yes, I was thinking about that too, but didn't mention it :).
I remember there was a discussion about this sometime in the past [1];
it looked like most people agreed at that time that keeping OS and 
platform names as the directory names is the preferred choice.



Yes, I think you're right. At the moment the layout is quite simple 
since we only have two platforms.

When we start to expand our platform list, I believe the layout that you 
linked in [1] is suitably descriptive and simple to use, with directory 
names incorporating OS as the first level of code specialization and 
architecture the second, separated by an underscore.
I envisage that eventually we might have a layout similar to (hope this 
diagram works - all subdirs under <component> are at the same level):

modules/<module>/src/main/native/<component>/
    |--shared/
    |--aix/
    |--linux/
    |--linux_amd/
    |--linux_ppc/
    |--linux_s390/
    |--linux_x86/
    |--solaris/
    |--solaris_x86/
    |--windows/
    |--windows_amd/
    |--windows_x86/
    |--unix/
    |--zos/
    |--shared_include/
    |--windows_include/
    \--unix_include/
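
To make that concrete, here is a rough Ant sketch of how a build might pick 
up the matching platform directory and fall back to shared code. This is an 
illustration only - the property, path id and directory names below are 
hypothetical, not taken from the current build scripts:

    <!-- map the running OS/arch onto one of the directory names above;
         in Ant the first condition to set the property wins -->
    <condition property="hy.platform" value="windows_x86">
        <os family="windows" arch="x86"/>
    </condition>
    <condition property="hy.platform" value="linux_x86">
        <os name="Linux" arch="i386"/>
    </condition>
    <!-- further platforms would be added here as the list grows -->

    <!-- prefer the platform-specific source, then the shared source -->
    <path id="hy.component.src">
        <pathelement location="src/main/native/port/${hy.platform}"/>
        <pathelement location="src/main/native/port/shared"/>
    </path>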



Regards,
Oliver



Thanks,
Andrey.

[1]
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200603.mbox/[EMAIL PROTECTED] 





--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-16 Thread Oliver Deakin

Tim Ellison wrote:

That layout works for me too.

Patches welcome ;-)
  


hehe, had a feeling that might be coming ;)

I'm actually working on a couple of preliminary patches at the moment for 
this refactoring. One is a minor tidy-up, raised in HARMONY-451. I hope 
to raise another JIRA soon which will alter the build scripts to create 
and populate an hdk directory structure under deploy - which basically 
means moving the deploy/jre and deploy/include directories down a level 
to deploy/jdk/jre and deploy/jdk/include. Once this is done we will be 
in a good position to start moving the native code into modules (which 
I'm also happy to work on and provide patches for).



Regards,
Tim
  



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-17 Thread Oliver Deakin
I've just opened HARMONY-469 which contains a patch to rearrange the 
current layout of the deploy directory into:

deploy/
   \---jdk/
         |---include/
         \---jre/

This is a preliminary step to reorganising the native code and using the 
deploy directory as an HDK (as discussed elsewhere in this topic [1] & [2]).

If/when this patch is accepted and applied, there are a couple of issues 
that may affect us that I thought would be worth bringing to general 
attention:

1) The IBM VME is laid out to unpack with the same directory structure as 
the current deploy directory, i.e. it overlays into deploy/jre/bin/default. 
If there is a new jdk directory added into the structure, then the VME will 
no longer unpack into the right place, which might confuse new developers 
hoping to try out our code. I am currently working on making a VME which 
reflects the new layout available, and I will send a note to the list once 
it is ready (this would be after HARMONY-469 was applied).
Developers that are already using a VME in their workspace will simply need 
to move the deploy/jre/bin/default directory to deploy/jdk/jre/bin/default 
(a small Ant fragment illustrating this follows point 2 below).

2) I was wondering how this change will affect the DRLVM. I notice from 
building the VM locally that it produces a deploy\jre\bin structure, so I 
imagine that some small path changes to build scripts would be necessary 
(similar to the changes I had to make in HARMONY-469).
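
For completeness, the move mentioned in point 1 could be scripted with a 
couple of lines of Ant (purely illustrative - most people will just move 
the directory by hand):

    <!-- relocate an existing VME install into the new jdk-based layout -->
    <move todir="deploy/jdk/jre/bin/default">
        <fileset dir="deploy/jre/bin/default"/>
    </move>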

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 

[2] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-18 Thread Oliver Deakin

Rana Dasgupta wrote:

Hi,
Is there an expectation of a standardised deployment model for 
Harmony-compliant VMs like DRLVM and others? E.g., should they all produce 
binaries that can be unpacked to overlay the Harmony Classlib deployment 
structure, as the IBM VME can? At some point, we will be posting 
distributions for both class libraries and VM on the Harmony site; they 
should be deployable on a user machine in some standard way.



I think it makes sense to have a standard shape for deployed binaries, so 
that for users to get a completely working jdk it is as simple as grabbing 
VME and Classlib distributions and unpacking them in the same place. There 
is a subtle difference here between Harmony VMs and the IBM VME however, in 
that the IBM VME was intended for development use, and so matches the 
expected layout of a developer's deploy directory. An actual customer of the 
Harmony runtime would expect something closer to what they are used to 
with other Java implementations.

It was suggested by Mark [1] that we create snapshots for the proposed hdk, 
the jdk and the jre. The hdk is intended for use by Harmony developers; 
the jdk and jre are more familiar packages and would be the ones 
used by customers. I suggest that any Harmony VM release should match 
the structure of one of the latter two. IMHO if a VM matches the jre 
structure, this provides a lowest common denominator solution that can be 
overlaid onto the jre, jdk and hdk.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]




Thanks,
Rana


On 5/17/06, Andrey Chernyshev [EMAIL PROTECTED] wrote:


 deploy/
   \jdk
   |include
   \jre
...
 2) I was wondering how this change will affect the DRLVM? I notice 
from

 building
 the VM locally that it produces a deploy\jre\bin structure, so I 
imagine

 that some
 small path changes to build scripts would be necessary (similar to the

I think this specific change shouldn't directly affect the DRLVM
building script since it takes the Harmony class libraries in a source
form.

If we, however, want to adjust the JRE image produced by the DRLVM
building script, then I guess the only line that will have to be
changed (in build/make/build.xml) is:

    <!-- product binary deploy location -->
    <property name="build.deploy.dir"
        location="../${build.os.short}_${build.arch}_${build.cxx}_${build.cfg}/deploy/jre"/>

I can include it in the next cumulative patch.

Surely, some more updates will have to be made to the xml files at
build/make/components if the classlibs are split into modules
in future.

Thanks,
Andrey.


On 5/17/06, Oliver Deakin [EMAIL PROTECTED] wrote:
 Ive just opened HARMONY-469 which contains a patch to rearrange the
 current layout
 of the deploy directory into:

 deploy/
   \jdk
   |include
   \jre

 This is a preliminary step to reorganising the native code and 
using the

 deploy directory
 as an HDK (as discussed else where in this topic [1]  [2]).

 If/when this patch is accepted and applied, there are a couple of 
issues

 that may affect
 us that I thought would be worth bringing to general attention:

 1) The IBM VME is laid out to unpack with the same directory structure
as
 the current deploy directory i.e. it overlays into
 deploy/jre/bin/default. If there is a new
 jdk directory added into the structure, then the VME will no longer
 unpack into the
 right place, which might confuse new developers hoping to try out our
 code. I am
 currently working on making a VME which reflects the new layout
 available, and I will
 send a note to the list once it is ready (this would be after
 HARMONY-469 was
 applied).
 Developers that are already using a VME in their workspace will simply
 need to move
 the deploy/jre/bin/default directory to deploy/jdk/jre/bin/default.

 2) I was wondering how this change will affect the DRLVM? I notice 
from

 building
 the VM locally that it produces a deploy\jre\bin structure, so I 
imagine

 that some
 small path changes to build scripts would be necessary (similar to the
 changes I
 had to make in HARMONY-469).

 Regards,
 Oliver

 [1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 



 [2]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 




 --
 Oliver Deakin
 IBM United Kingdom Limited


 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






--
Oliver Deakin
IBM United Kingdom Limited

Re: [DRLVM] build process improvement

2006-05-18 Thread Oliver Deakin

Tim Ellison wrote:

Salikh Zakirov wrote:
  

Vladimir Gorr wrote:


My personal opinion is that we need to improve the existing build system for
the DRLVM contribution.
...
Therefore there is no need for each participant to compile them. It'd be
fine to have these sources pre-compiled (another snapshot?)
  

The idea of having something precompiled looks close
to the idea of the HDK (Harmony Development Kit): to have a 
binary bundle which makes development of Harmony as convenient as possible.

I think having third-party libraries precompiled qualifies as making
a DRLVM developer's life easier, so it is well worth implementing in the HDK.

As far as I remember, the consensus about the HDK was that it would be 
a good thing, but nobody has yet volunteered to do it.



FYI:  Oliver said he was working on it, and has some JIRAs outstanding
to move us towards this goal.

  


Thanks Tim - seems I lost Salikh's original message in my inbox somewhere.

I have raised HARMONY-469 to rearrange the deploy directory under
Classlib into the proposed HDK shape:

deploy/
   \---jdk/
         |---include/
         \---jre/

I'm also going to work on creating a doc for the website that describes 
the proposed layout, in a similar style to the Project Conventions docs 
at [1].

Regards,
Oliver

[1] 
http://incubator.apache.org/harmony/subcomponents/classlibrary/index.html



Regards,
Tim


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [DRLVM] build process improvement

2006-05-18 Thread Oliver Deakin



Mark Hindess wrote:

On 18 May 2006 at 14:36, Alexey Petrenko [EMAIL PROTECTED]
wrote:
  

2006/5/18, Mark Hindess [EMAIL PROTECTED]:


I was assuming that the HDK for the classlib would not contain
any VM-specific artifacts.  If it did, which VM artifacts would
it contain: SableVM, drlvm, etc., or all of them?  (Of course, the
classlib hdk will contain the VM stubs that the classlib build uses
today.)
  

But we need to run the classlib on some machine.



Of course.
  


We would not want to couple the classlib to a particular VM, however. So 
far I have been thinking about the HDK from a classlib perspective, without 
considering the VM used. I imagined that any developer who used a classlib 
HDK would just grab a VM snapshot (or the IBM VME) and overlay that onto 
the classlib HDK in the same way we overlay the VME onto deploy now.

I suggested in the "Supporting working on a single module?" thread 
(sorry, no link - it's not in the archives yet) that a Harmony VM could 
produce releases in a jre or jdk layout. This would allow a Classlib 
developer to grab an HDK and overlay the VM jre bundle onto it, but at 
the same time an actual Harmony runtime user could grab a classlib 
jdk/jre snapshot and combine it with the same VM.

Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [DRLVM] build process improvement

2006-05-18 Thread Oliver Deakin

Andrey Chernyshev wrote:

We would not want to couple the classlib to a particular VM however. So
far I have been thinking
about the HDK from a classlib perspective, without considering the VM
used. I imagined that any
developer who used a classlib HDK would just grab a VM snapshot (or the
IBM VME) and overlay
that onto the classlib HDK in the same way we overlay the VME onto
deploy now.


Well, maybe I'm missing some simple points here, but...
Is there any specific reason not to bundle a Harmony VM (let's
forget about DRLVM for now) into the HDK? Is this just a matter of
choice between SableVM, drlvm, etc.?
I just thought it could be convenient for developers (as well as for
Harmony users) to take a complete workable JRE without an additional
need to combine something together.


I agree it would be useful to have a complete jre available, but I 
wasn't sure how we would pick the VM to use when we potentially have 
3 or 4 VMs (if DRLVM is accepted, and once the classlib adapter is 
completed) capable of running with the Harmony classlib.

Perhaps we could just produce a jre for each VM once they are up and 
working?
I just thought that if we had separate classlib and VM jres which 
both contained a predefined and matching directory structure, then the 
step of unpacking them into the same directory to get them working is 
fairly straightforward.

Regards,
Oliver



Thank you,
Andrey Chernyshev
Intel Middleware Products Division

On 5/18/06, Oliver Deakin [EMAIL PROTECTED] wrote:



Mark Hindess wrote:
 On 18 May 2006 at 14:36, Alexey Petrenko 
[EMAIL PROTECTED]

 wrote:

 2006/5/18, Mark Hindess [EMAIL PROTECTED]:

 I was assuming that the HDK for the classlib would not contain
 any VM specific artifacts.  If it did, which VM artifacts would
 it contain SableVM, drlvm, etc. or all of them?  (Of course, the
 classlib hdk will contain the VM stubs that the classlib build uses
 today.)

 But we need to run the classlib on some machine.


 Of course.


We would not want to couple the classlib to a particular VM however. So
far I have been thinking
about the HDK from a classlib perspective, without considering the VM
used. I imagined that any
developer who used a classlib HDK would just grab a VM snapshot (or the
IBM VME) and overlay
that onto the classlib HDK in the same way we overlay the VME onto
deploy now.

I suggested in the Supporting working on a single module? thread
(sorry, no link - it's not in the
archives yet) that a Harmony VM could produce releases in a jre or jdk
layout. This
would allow a Classlib developer to grab an HDK and overlay the VM jre
bundle onto it,
but at the same time an actual Harmony runtime user could grab a
classlib jdk/jre snapshot and
combine it with the same VM.

Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [DRLVM] build process improvement

2006-05-19 Thread Oliver Deakin

Andrey Chernyshev wrote:

On 5/18/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Andrey Chernyshev wrote:
 We would not want to couple the classlib to a particular VM 
however. So

 far I have been thinking
 about the HDK from a classlib perspective, without considering the VM
 used. I imagined that any
 developer who used a classlib HDK would just grab a VM snapshot 
(or the

 IBM VME) and overlay
 that onto the classlib HDK in the same way we overlay the VME onto
 deploy now.

 Well, may be I'm missing some simple points here, but...
 Is there any specific reason not to bundle a Harmony VM (let's
 forget about DRLVM for now) into the HDK? Is this just a matter of
 choice between SableVM, drlvm, e.t.c.?
 I just thought it could be convenient for developers (as well as for
 Harmony users) to take a complete workable JRE without an additional
 need to combine something together.

I agree it would be useful to have a complete jre available, but I
wasn't sure
how we would pick the VM to use when we potentially have 3/4 VMs
(if DRLVM is accepted, and once the classlib adapter is completed)
capable of
running with Harmony classlib.


As for the DRLVM, it doesn't actually require any specific adapter for
the Harmony class libraries except small patches for MsgHelp and
URLClassLoader, which are already included with JIRA-438.

One additional patch would be required to make it workable with the
most recent classlib version (at the time we were preparing DRLVM, we were
using the classlib version taken at March 13). This isn't big either
- just rename MsgHelp and the former com.ibm.oti.vm.VM.java to the right
locations, and then update a bunch of building files to accommodate
the recent API contributions and migrate to the 1.5 compiler. I can add
this to JIRA as well, but I just thought it may make sense to see if
the DRLVM is accepted first.



Perhaps we could just produce a jre for each VM once they are up and
working?


Yes, I think this could be the choice, if we don't select some 
default one.



I just thought that if we had separate classlib and a VM jre's which
both contained
a predefined and matching directory structure, then the step of
unpacking them
into the same directory to get them working is fairly straightforward.


That should work as well, at least for the class libs. Actually I was
thinking of the HDK containing the pre-compiled binaries for all modules,
not just the ones from the class libraries. VM developers would
probably want to be able to work on a single module as well, e.g. the
JIT or GC. They wouldn't want to compile, for example, log4cxx or APR
each time they want to build the VM, so these binaries are also worth
being part of the HDK.

Either this would result in separate HDKs containing VM and
classlib modules, or we can just combine everything within a single
HDK (which seems to me just simpler).


I personally prefer the idea of separate HDKs for VMs and classlib, although
I don't feel strongly about it - it just makes the most sense to me.
If someone wants to work exclusively on, say, classlib, they probably won't
want to have to download extra hdk libraries for all of the VMs.
In the case where someone works on both the Classlib and a VM (or
more than one VM), if the HDKs have a uniform structure then they
need only download the Classlib HDK and the VM HDK of their
choice and unpack them in the same place.

Eventually I imagine us having HDK, JDK and JRE bundles for
the Classlib and each VM, and the developer/user can just pick and
mix as they wish from that selection.
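
To illustrate that pick-and-mix step (the archive names here are invented, 
not real release names), assembling a runnable jre would amount to little 
more than unpacking the two bundles over each other, for example with Ant:

    <!-- hypothetical bundle names; both archives share the same jre/ layout -->
    <property name="assemble.dir" location="harmony-jre"/>
    <unzip src="harmony-classlib-jre.zip" dest="${assemble.dir}"/>
    <unzip src="harmony-vm-jre.zip" dest="${assemble.dir}"/>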

Regards,
Oliver



Thanks,
Andrey.




Regards,
Oliver


 Thank you,
 Andrey Chernyshev
 Intel Middleware Products Division

 On 5/18/06, Oliver Deakin [EMAIL PROTECTED] wrote:


 Mark Hindess wrote:
  On 18 May 2006 at 14:36, Alexey Petrenko
 [EMAIL PROTECTED]
  wrote:
 
  2006/5/18, Mark Hindess [EMAIL PROTECTED]:
 
  I was assuming that the HDK for the classlib would not contain
  any VM specific artifacts.  If it did, which VM artifacts would
  it contain SableVM, drlvm, etc. or all of them?  (Of course, the
  classlib hdk will contain the VM stubs that the classlib build 
uses

  today.)
 
  But we need to run the classlib on some machine.
 
 
  Of course.
 

 We would not want to couple the classlib to a particular VM 
however. So

 far I have been thinking
 about the HDK from a classlib perspective, without considering the VM
 used. I imagined that any
 developer who used a classlib HDK would just grab a VM snapshot 
(or the

 IBM VME) and overlay
 that onto the classlib HDK in the same way we overlay the VME onto
 deploy now.

 I suggested in the Supporting working on a single module? thread
 (sorry, no link - it's not in the
 archives yet) that a Harmony VM could produce releases in a jre or 
jdk

 layout. This
 would allow a Classlib developer to grab an HDK and overlay the VM 
jre

 bundle onto it,
 but at the same time an actual Harmony runtime user could grab a
 classlib jdk/jre snapshot and
 combine it with the same VM

Re: [announce] Swing/AWT Donation Coming...

2006-05-19 Thread Oliver Deakin
Great news! Thanks Intel, and well done to Geir and Tim for getting this 
together for JavaOne!


Tim Ellison wrote:

Way to go Intel!

During the JavaOne talk we were able to demonstrate the following
applications running on last Friday's snapshot build of Harmony with
the IBM VME:

 - RSSOwl 1.2 atom/rss newsreader (www.rssowl.org)
 - Tomcat 5.5.17, the JSP and Servlet examples
 - Eclipse 3.2 RC4 and how it is used for componentised development

and as Geir already said, his announcement of AWT/Swing allowed us to demo:

 - JEdit, a non-trivial Swing application

We are learning a lot about how to manage large contributions into the
Harmony project, and I think everyone is doing a great job.

Code talks.  This much code talks loudly.

Thanks again
Tim

Geir Magnusson Jr wrote:
  

Today during our JavaOne talk (given by Tim and me) I was proud to
demonstrate JEdit running on Harmony!

That's right, with Swing/AWT code.  The formal contribution is on its
way, and I don't wish to steal any more thunder from the contribution
when it's made, but we (Intel hat on here..) weren't able to make the
donation in time for the talk today because of internal process loose
ends, and I wanted to make a splash for us at JavaOne.

I expect it will be here in the next couple of days.

Harmony - The question of compatible open source isn't whether, but when!

geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-22 Thread Oliver Deakin

Hi all,

I have opened HARMONY-485, which proposes an additional 
doc for the website describing the HDK and its contents.
The layout of the HDK described in the doc matches that 
produced by the build script alterations raised in HARMONY-469.

I hope that eventually (once the natives are modularised
and build scripts are altered to understand/use the HDK) 
the doc will expand into a fuller description of how 
developers can use the HDK to rebuild Java/native code.


Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Repackaged IBM VME

2006-05-22 Thread Oliver Deakin

Hi all,

There is now a repackaged version of the IBM VME available at:

https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_USsource=ahdk

The new VME archives are called Harmony-vme-win.IA32-v3.zip for Windows
and Harmony-vme-linux.IA32-v3.tar.gz for Linux.

Following discussion on the mailing list [1], the IBM VME has been 
reorganised into the following directory layout:

EXTRACT_DIR
  |
  +---jre
  |     |
  |     \---bin
  |           |
  |           \---default
  |
  \---vme_license

The actual VME binaries are still the same as the previous version, but the
new layout means it can be dropped into both the current development
environment and also any future classlib jre/jdk/hdk snapshots that are 
taken.

Since this is purely a directory layout update, those who already have a 
version of the IBM VME in their environment will *not* need to get the new 
packages.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Repackaged IBM VME

2006-05-23 Thread Oliver Deakin

Jimmy, Jing Lv wrote:

Oliver Deakin wrote:

Hi all,

There is now a repackaged version of the IBM VME available at:

https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_USsource=ahdk 



The new VME archives are called Harmony-vme-win.IA32-v3.zip for Windows
and Harmony-vme-linux.IA32-v3.tar.gz for Linux.

Following discussion on the mailing list [1], the IBM VME has been 
reorganised

into the following directory layout:

EXTRACT_DIR
  |
  +---jre
  |     |
  |     \---bin
  |           |
  |           \---default
  |
  \---vme_license

The actual VME binaries are still the same as the previous version, 
but the

new layout means it can be dropped into both the current development
environment and also any future classlib jre/jdk/hdk snapshots that 
are taken.


Since this is purely a directory layout update, those who already 
have a version
of the IBM VME in their environment will *not* need to get the new 
packages.


Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





That's great!


Thanks Jimmy!
I should reiterate in case anyone misinterprets my message - these new 
VME packages contain exactly the same binaries as the previous version, 
i.e. they are at a 1.4 level.
The changes in this VME version are purely cosmetic, and are made in 
response to recent discussions on the list (as described above).

Regards,
Oliver


And we are eagerly awaiting a new Java 5 VM. :)



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Multi-tree HDK config - working directory ( was Re: Supporting working on a single module?)

2006-05-25 Thread Oliver Deakin

Geir Magnusson Jr wrote:
Some stuff that got lost (because I got consumed by J1 and I was the 
only one pushing on it) was the idea of ensuring that


1) the HDK could be anywhere - there was no hard-wired spot.  That 
allowed having multiple simultaneous HDKs (e.g. different snapshot 
versions) at the same time as a full build


Perhaps the HDK would have a default location which could be overridden
by passing a command line option to the build scripts - possibly in a 
similar way to Mark's suggestion for selecting the rmi module location [1].
My modifications to build an HDK from the classlib code (HARMONY-469)
use an Ant property hy.hdk to specify the root directory of the HDK. With
the current patch, this property doesn't quite propagate all the way down
to the native makefiles, but this shouldn't be too hard to extend. Once this
is done, a developer could then override the default HDK location using
a command line similar to:

   ant -Dhy.hdk=/my/hdk/location -f make/build.xml
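
For illustration, the declaration side of this could be as simple as the 
sketch below - only the hy.hdk property name is taken from the HARMONY-469 
patch; the target name and paths are hypothetical:

    <!-- in Ant the first definition of a property wins, so a
         -Dhy.hdk=... on the command line overrides this default -->
    <property name="hy.hdk" location="${basedir}/deploy"/>

    <!-- illustrative consumer of the property -->
    <target name="copy-includes">
        <copy todir="${hy.hdk}/jdk/include">
            <fileset dir="src/main/native/include/shared"/>
        </copy>
    </target>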



The default HDK location would probably depend on what area you are working
on - in classlib the trunk/deploy directory is the default build location 
at the moment, and I think it makes sense to keep this as the default HDK 
directory.




2) the build should ensure that the materials of the HDK never get 
overwritten so that we can always tell a contributor with a question 
"first, before we debug this, do an 'ant hdk-copy'..." or something, to 
easily get them to a known good state.


This to me sounds like we need some kind of working directory and an 
'hdk-copy' target.


The working model then allows freedom of choosing an hdk or a current 
full build as the 'base' to work with...


I imagine that an HDK would come in a zip format, much like the current 
snapshots [2].
If this was the case, then once you have downloaded the HDK zip, you can 
unpack it into your working directory where it will be modified. However, 
you still have the original zip to fall back on if you need to. I'm not 
sure that we need an extra build target for this process - to get back to 
a known good state, you can just unpack the zip again into your working 
directory.

Am I missing something?

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]

[2] http://people.apache.org/dist/incubator/harmony/snapshots/



Does this make any sense to anyone else?

geir


Oliver Deakin wrote:

Hi all,

I have opened HARMONY-485, which proposes an additional doc for the 
website describing the HDK and its contents.
The layout of the HDK described in the doc matches that produced by 
the build script alterations raised in

HARMONY-469.

I hope that eventually (once the natives are modularised
and build scripts are altered to understand/use the HDK) the doc will 
expand into a more full description of how developers can use the HDK 
to rebuild Java/native code.


Regards,
Oliver




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-25 Thread Oliver Deakin

Geir Magnusson Jr wrote:
One thing to think about (which I didn't realize until now) is that the 
HDK should sit above /classlib in the tree, right?  The VMs will need it 
as well.  I imagine:


enhanced/
classlib/
drlvm/
jchevm/
bootvm/

So maybe add a

   enhanced/common

directory in which the HDK sits by default?

I'd like to avoid the structural balkanization of making classlib an 
island.


I had imagined that HDKs would be packaged up as zips and made available
in a similar way to snapshots, rather than having a tree of binaries 
checked into SVN. IMHO keeping it as a zip makes unpacking it where you 
want very simple, allows you to easily keep a known good HDK configuration 
locally (in the form of the original zip) and makes it very easy to get 
previous versions of the HDK (just download an earlier zip).
I'd be interested to hear what general consensus on this matter is, and
how mailing list members would prefer the HDK to be provided.

We discussed [1] that having separate HDKs for each VM and for classlib
makes sense - as long as we keep them in a uniform shape, then they can 
easily be overlaid onto each other to allow a developer to work on the 
classlib and the VM of their choice.
I don't see this separation of classlib and VMs as a bad thing. I think that
having a VMI enabling us to develop the classlib and VMs as distinct units,
developing each using whatever methods and build systems best suit that
component, is one of the cool things about Harmony!

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]




geir

Oliver Deakin wrote:

Hi all,

I have opened HARMONY-485, which proposes an additional doc for the 
website describing the HDK and its contents.
The layout of the HDK described in the doc matches that produced by 
the build script alterations raised in

HARMONY-469.

I hope that eventually (once the natives are modularised
and build scripts are altered to understand/use the HDK) the doc will 
expand into a more full description of how developers can use the HDK 
to rebuild Java/native code.


Regards,
Oliver




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-25 Thread Oliver Deakin

Geir Magnusson Jr wrote:



Oliver Deakin wrote:

Geir Magnusson Jr wrote:
On thing to think about (which I didn't realize until now) is that 
HDK should sit above /classlib in the tree right?  the VMs will need 
it as well.  I imagine :


enhanced/
classlib/
drlvm/
jchevm/
bootvm/

So maybe add a

   enhanced/common

directory in which the HDK sits by default?

I'd like to avoid the structural balkanization of making classlib an 
island.


I had imagined that HDKs would be packaged up as zips and made available
in a similar way to snapshots, rather than having a tree of binaries 
checked into
SVN. 


I think we all did.  I don't know what about the above leads you to 
assume I would have wanted this in SVN.


I think I'm just easily confused ;)
So are you suggesting that the HDK should sit in a concrete location
relative to whichever component (VM/classlib) we are working on, or
that the HDK zips should be stored in SVN? (or something else...?)





IMHO keeping it as a zip makes unpacking it where you want very simple,
allows you to easily keep a known good HDK configuration locally 
(in the
form of the original zip) and makes it very easy to get previous 
versions of

the HDK (just download an earlier zip).
I'd be interested to hear what general consensus on this matter is, and
how mailing list members would prefer the HDK to be provided.


Keeping zips around and all that is great, but would Eclipse break if 
the HDK were up and above the root of the project tree?


I don't see this as a problem.
If you use the Ant scripts to build (which can be used within Eclipse), 
then an hy.hdk property (described in one of my other mails in this thread) 
that can be set to point at an HDK will enable you to use any location.
If you wanted to work on Java code as a Java project under Eclipse,
then you should only have to point Eclipse at the jre under your HDK
and Eclipse will build against it.

Regards,
Oliver



geir



We discussed [1] that having separate HDKs for each VM and for classlib
makes sense - as long as we keep them in a uniform shape, then they 
can easily
overlaid onto each other to allow a developer to work on the classlib 
and the

VM of their choice.
I don't see this separation of classlib and VMs as a bad thing. I 
think that
having a VMI enabling us to develop the classlib and VMs as distinct 
units,
and developing using potentially disparate methods and build systems 
that

most suit that component, is one of the cool things about Harmony!





Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





geir

Oliver Deakin wrote:

Hi all,

I have opened HARMONY-485, which proposes an additional doc for the 
website describing the HDK and its contents.
The layout of the HDK described in the doc matches that produced by 
the build script alterations raised in

HARMONY-469.

I hope that eventually (once the natives are modularised
and build scripts are altered to understand/use the HDK) the doc 
will expand into a more full description of how developers can use 
the HDK to rebuild Java/native code.


Regards,
Oliver




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-05-25 Thread Oliver Deakin

Geir Magnusson Jr wrote:



Oliver Deakin wrote:

Geir Magnusson Jr wrote:



Oliver Deakin wrote:

Geir Magnusson Jr wrote:
On thing to think about (which I didn't realize until now) is that 
HDK should sit above /classlib in the tree right?  the VMs will 
need it as well.  I imagine :


enhanced/
classlib/
drlvm/
jchevm/
bootvm/

So maybe add a

   enhanced/common

directory in which the HDK sits by default?

I'd like to avoid the structural balkanization of making classlib 
an island.


I had imagined that HDKs would be packaged up as zips and made 
available
in a similar way to snapshots, rather than having a tree of 
binaries checked into
SVN. 


I think we all did.  I don't know what about the above leads you to 
assume I would have wanted this in SVN.


I think I'm just easily confused ;)
So are you suggesting that the HDK should sit in a concrete location
relative to whichever component (VM/classlib) we are working on, or
that the HDK zips should be stored in SVN? (or something else...?)


I think that the HDK should be a zip/tar that you download from our 
distro location(s), and drop into


a) a well-known default location

b) anywhere you want so you can set a pointer if you have more than one

and no, they aren't stored in SVN.


Sounds good!









IMHO keeping it as a zip makes unpacking it where you want very 
simple,
allows you to easily keep a known good HDK configuration locally 
(in the
form of the original zip) and makes it very easy to get previous 
versions of

the HDK (just download an earlier zip).
I'd be interested to hear what general consensus on this matter is, 
and

how mailing list members would prefer the HDK to be provided.


Keeping zips around and all that is great, but is it that Eclipse 
would break having it up and above the root of the project tree?


I dont see this as a problem.
If you use the Ant scripts to build (which can be used within 
Eclipse), then
an hy.hdk property (described in one of my other mails in this 
thread) that

can be set to point at an HDK will enable you to use any location.
If you wanted to work on Java code as a Java project under Eclipse,
then you should only have to point Eclipse at the jre under your HDK
and Eclipse will build against it.


Cool



Regards,
Oliver



geir



We discussed [1] that having separate HDKs for each VM and for 
classlib
makes sense - as long as we keep them in a uniform shape, then they 
can easily
overlaid onto each other to allow a developer to work on the 
classlib and the

VM of their choice.
I don't see this separation of classlib and VMs as a bad thing. I 
think that
having a VMI enabling us to develop the classlib and VMs as 
distinct units,
and developing using potentially disparate methods and build 
systems that

most suit that component, is one of the cool things about Harmony!





Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





geir

Oliver Deakin wrote:

Hi all,

I have opened HARMONY-485, which proposes an additional doc for 
the website describing the HDK and its contents.
The layout of the HDK described in the doc matches that produced 
by the build script alterations raised in

HARMONY-469.

I hope that eventually (once the natives are modularised
and build scripts are altered to understand/use the HDK) the doc 
will expand into a more full description of how developers can 
use the HDK to rebuild Java/native code.


Regards,
Oliver




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]







-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Multi-tree HDK config - working directory ( was Re: Supporting working on a single module?)

2006-05-25 Thread Oliver Deakin


Vladimir Gorr wrote:

On 5/25/06, Oliver Deakin [EMAIL PROTECTED] wrote:


Geir Magnusson Jr wrote:
 Some stuff that got lost (because I got consumed by J1 and I was the
 only one pushing on it) was the idea of ensuring that

1) the HDK could be anywhere - there was no hard-wired spot.  That
allowed having multiple simultaneous HDKs (e.g. different snapshot
versions) at the same time as a full build

Perhaps the HDK would have a default location which could be overridden
by passing a command line option to the build scripts - possibly in a
similar way to Mark's suggestion for selection of the rmi module location [1].
My modifications to build an HDK from the classlib code (HARMONY-469)
use an Ant property, hy.hdk, to specify the root directory of the HDK. With
the current patch, this property doesn't quite propagate all the way down
to the native makefiles, but this shouldn't be too hard to extend. Once this
is done, a developer could then override the default HDK location using
a command line similar to:

   ant -Dhy.hdk=/my/hdk/location -f make/build.xml


The default HDK location would probably depend on what area you are
working
on - in classlib the trunk/deploy directory is the default build
location at the
moment, and I think it makes sense to keep this as the default HDK
directory.


 2) the build should ensure that the materials of the HDK never get
 overwritten so that we can always tell a contributor with a question
 first, before we debug this, do an 'ant hdk-copy' or something to
 easily get them to a known good state.

 This to me sounds like we need some kind of working directory and a
 'hdk-copy' target.

 The working model then allows freedom of choosing an hdk or a current
 full build as the 'base' to work with...

I imagine that an HDK would come in a zip format, much like the current
snapshots [2].
If this was the case, then once you have downloaded the HDK zip, you can
unpack it
into your working directory where it will be modified.
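For example, something like this (file and directory names purely
illustrative, since no HDK zips exist yet):

   unzip harmony-hdk-rNNNNNN.zip -d ~/harmony/work
   # keep the original zip around as your known good copy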



Oliver,

does it mean the HDK will contain the sources (src.zip?) as well?
Otherwise I don't understand what can be modified. Could you please
clarify this?


Hi Vladimir,

When you rebuild, the old versions of binaries (dll/so/jar etc.) would 
be overwritten
with the new versions that are built from your altered code, and 
potentially altered

header files will be replaced (e.g. jni.h).
This is similar to the current system, where you can use a snapshot 
build to rebuild
Java code against, and you can then drop your rebuilt jars over those of 
the snapshot

to create an updated jre.

The HDK will contain binaries to build against, and some necessary 
header files, but

no complete src.zip.
I have put up a proposed description of the HDK on the website [1], which
summarises some of the ideas in this thread so far. I hope it helps 
clarify :)


Regards,
Oliver

[1] http://incubator.apache.org/harmony/subcomponents/classlibrary/hdk.html


Thanks,
Vladimir.

However, you still have the

original zip to fall back on if you need to. I'm not sure that we need
an extra build
target for this process - to get back to a known good state, you can
just unpack the
zip again into your working directory.

Am I missing something?

Regards,
Oliver

[1]

http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 


[2] http://people.apache.org/dist/incubator/harmony/snapshots/


 Does this make any sense to anyone else?

 geir


 Oliver Deakin wrote:
 Hi all,

 I have opened HARMONY-485, which proposes an additional doc for the
 website describing the HDK and its contents.
 The layout of the HDK described in the doc matches that produced by
 the build script alterations raised in
 HARMONY-469.

 I hope that eventually (once the natives are modularised
 and build scripts are altered to understand/use the HDK) the doc will
 expand into a more full description of how developers can use the HDK
 to rebuild Java/native code.

 Regards,
 Oliver



 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Multi-tree HDK config - working directory ( was Re: Supporting working on a single module?)

2006-05-25 Thread Oliver Deakin


Geir Magnusson Jr wrote:



Vladimir Gorr wrote:

does it mean the HDK will contain the sources (src.zip?) as well?
Otherwise I don't understand what can be modified. Could you please
clarify this?



I know you addressed this to Oliver, but let me take a whack at it to see
if I grok everything


One of the many motivations for the HDK idea was a refactoring of the
natives into modules, which brought up the interdependency issue.  To
solve it, the idea is to copy native headers at build time into One Big
Pile.


From the work I'm currently doing on attempting to move the
native-src/*/include directory contents into their appropriate modules,
using the modularised natives layout I described previously [1], the One
Big Pile actually doesn't look that bad - for the classlib there are only
13 header files that need to be shared between modules, and we might be
able to reduce these with extra work.



So if you are in module foo, and working on something that modifies
foo.h, foo.h will be copied at build time from module foo into One Big
Pile, thereby overwriting the HDK's copy of foo.h, since the HDK and
One Big Pile are conflated in the current model.


foo.h will only be copied into the include area of the HDK if it is 
required by other
modules. If it is only used by the natives in that module, it will stay 
where it is.




I don't like this, because as I am a forgetful person, I may point 
another project/module at the HDK, and now will be tormented by 
strange things happening because the foo.h has been changed...


That's why I've been suggesting a model (just for everyone's sanity, 
including people posting questions to the dev list), where the HDK is 
never modified, and there's a working area in the individual project 
tree where HDK + ThingsBeingWorkedOn are intermingled for the build 
process local to that individual project.


Isn't that just overwriting a copy of the HDK instead of the original?
What is the original HDK being used for then?

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]




geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [VOTE] Acceptance of HARMONY-438 : DRL Virtual Machine Contribution

2006-05-31 Thread Oliver Deakin

+1

Geir Magnusson Jr wrote:
I have received the ACQs and the BCC for Harmony-438, so I can assert 
that the critical provenance paperwork is in order and in SVN.


Please vote to accept or reject this codebase into the Apache Harmony 
class library :


[ ] + 1 Accept
[ ] -1 Reject  (provide reason below)

Let's let this run a minimum of 3 days unless a) someone states they
need more time or b) we get all committer votes before then.


I think that getting this into SVN and letting people supply patches 
against SVN will be productive...


geir


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] deploy directory reorganised into HDK shape

2006-05-31 Thread Oliver Deakin

Hi Nathan,

Good catch - you will need to point your 'Installed JREs' dialog at the
deploy/jdk/jre directory instead of deploy/jre. No changes to the
plug-in itself should be needed however.
Thanks for spotting that!

Regards,
Oliver

Nathan Beyer wrote:

Does this have any effect on the Eclipse Plug-in for Harmony VM type? Do we
just point at 'deploy/jdk' instead of 'deploy' now?

-Nathan

  

-Original Message-
From: Oliver Deakin [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 31, 2006 3:48 AM
To: harmony-dev@incubator.apache.org
Subject: [classlib] deploy directory reorganised into HDK shape

Hi all,

Tim has just applied the patch for HARMONY-469. This patch changes
the build scripts so that the deploy directory is laid out in the same way
as the hdkbase directory described in [1].

So from SVN revision r410479, when you build the classlib it will create
a directory structure that looks like:

deploy
 \---jdk
  |---jre
  \---include


For those who are already developing in classlib using the IBM VME,
you will need to move your VME directory from deploy/jre/bin/default
to deploy/jdk/jre/bin/default. After that the old deploy/jre directory can
be deleted. You should then be able to run tests and continue
developing as usual.

Anyone who is newly coming into the project who wants to use the
IBM VME should grab the latest versions (Harmony-vme-win.IA32-v3.zip
and Harmony-vme-linux.IA32-v3.tar.gz) at:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=ahdk


These VME packages are organised with the following layout:

EXTRACT_DIR
  |
  +---jre
  ||
  |\---bin
  | |
  | \---default
  |
  \---vme_license

so that they can be unpacked directly into the classlib deploy directory
structure (under deploy/jdk)
Instructions on developing and building the Harmony classlib can
be found at [2].

Regards,
Oliver

[1]
http://incubator.apache.org/harmony/subcomponents/classlibrary/hdk.html
[2]
http://incubator.apache.org/harmony/subcomponents/classlibrary/build_classlib.html

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-06-05 Thread Oliver Deakin

Just thought I'd give a bit of a heads up on HARMONY-561.
The patch attached to that JIRA moves the header files under the
native-src/*/include directories into
/modules/luni|archive/src/main/native. It also updates the build
scripts and makefiles to move the headers into a shared location
(hdk/include, as described at [1]) and compile against their new location.

The next piece of work I intend to look at is getting the windows/linux
makefiles to build their .lib/.a files directly into the hdk/lib directory
(also described in [1]), and getting each native component to link against
the libs at that location. Once this is done the deploy directory should
look like a complete HDK after a global build, i.e. it will contain
everything needed to build each individual native component or Java module
against it.

We should then be able to make the final moves of the classlib native code
into the modules (and start creating some classlib HDK snapshots).

Regards,
Oliver


[1] http://incubator.apache.org/harmony/subcomponents/classlibrary/hdk.html

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [announce] New Apache Harmony Committer : Nathan Beyer

2006-06-06 Thread Oliver Deakin

Congrats Nathan - keep up the good work!

Geir Magnusson Jr wrote:

Please join the Apache Harmony PPMC in welcoming the project's newest
committer, Nathan Beyer.

Nathan has shown sustained dedication to the project, an ability to work
well with others, and share the common vision we have for Harmony. We
all continue to expect great things from him.

Nathan, as a first step to test your almighty powers of committership,
please update the committers page on the website.  That should be a good
 (and harmless) exercise to test if everything is working.

Things to do :

1) test ssh-ing to the server people.apache.org.
2) Change your login password on the machine as per the account email
3) Add a public key to .ssh so you can stop using the password
4) Set your SVN password  : just type 'svnpasswd'

At this point, you should be good to go.  Checkout the website from svn
and update it.  See if you can figure out how.

Also, for your main harmony/enhanced/classlib/trunk please be sure that
you have checked out via 'https' and not 'http' or you can't check in.
You can switch using svn switch. (See the manual)

Finally, although you now have the ability to commit, please remember :

1) continue being as transparent and communicative as possible.  You
earned committer status in part because of your engagement with others.
While it was a 'have to' situation because you had to submit patches
and defend them, we believe it is a 'want to'.  Community is the key
to any Apache project.

2) We don't want anyone going off and doing lots of work locally, and
then committing.  Committing is like voting in Chicago - do it early and
often.  Of course, you don't want to break the build, but keep the
commit bombs to an absolute minimum, and warn the community if you are
going to do it - we may suggest it goes into a branch at first.  Use
branches if you need to.

3) Always remember that you can **never** commit code that comes from
someone else, even a co-worker.  All code from someone else must be
submitted by the copyright holder (either the author or author's
employer, depending) as a JIRA, and then follow up with the required
ACQs and BCC.


Again, thanks for your hard work so far, and welcome.

The Apache Harmony PPMC


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [VOTE] Acceptance of Harmony-528 : AWT, Java2D and Swing

2006-06-06 Thread Oliver Deakin

+1

Geir Magnusson Jr wrote:

I have received the ACQs and the BCC for Harmony-528, so I can assert
that the critical provenance paperwork is in order and in SVN.

Please vote to accept or reject this codebase into the Apache Harmony
class library :

[ ] + 1 Accept
[ ] -1 Reject  (provide reason below)

Lets let this run a minimum of 3 days unless a) someone states they need
more time or b) we get all committer votes before then.

Again, I think that getting this into SVN and letting people supply
patches against SVN will be productive.  Also, there's a lot of
excitement around getting this in and a binary snapshot created...

geir


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Supporting working on a single module?

2006-06-07 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

Just thought Id give a bit of a heads up on HARMONY-561.
The patch attached to that JIRA moves the header files under the
native-src/platform/include directories into
/modules/luni|archive/src/main/native. It also updates the build
scripts and
makefiles to move the headers into a shared location (hdk/include, as
described at [1])  and compile against their new location.



Right, and I really don't like it, have been saying I don't like
overwriting the HDK, gave a reason for why I don't like it, and never
heard a reason why it must be this way.
  


ok, I understand that - perhaps I should have used deploy/include 
instead of hdk/include
in the above description, but I was trying to link the patch with the 
HDK layout described
on the website so it was clear how it would enable us to create and use 
an HDK.


The patches I'm submitting at the moment are *not* intended to overwrite
a base HDK (which I believe is what you did not like), but rather to place
their output in the working deploy directory.
What I'm currently doing is just attempting to modularise the native code
with building against an HDK in mind. This does *not* preclude your
suggestion of having an immutable base HDK - in fact, I hope that the work
I'm doing will enable us to do exactly that. However, before that can
happen there is plenty of work to be done reorganising the natives and
making the build scripts capable of building against an HDK.

In summary, here's what IMHO still needs to be done with the natives to 
get the HDK use

that we have discussed:
1) Reorganise natives into modules (this has been hanging over us for 
too long!)
2) Alter the build scripts to be capable of building in a modular way 
(Java and native code)

against the contents of the deploy (or target) directory.
3) Finally, alter the build scripts to be capable of building against a 
base HDK that
is immutable (i.e. your suggestion) but still putting its produced 
binaries into the deploy

directory (so not overwriting the base HDK content).

Does that sound good?


If you want me to put my money where my mouth is and just patch it, I'm
more than able to do that, but I'd rather reach consensus together on
how to go forward.
  


Agreed - consensus is preferable and I'm glad you brought up the fact
that you were unhappy with what was happening and gave me an opportunity
to explain. I'd like the HDK to be satisfactory and useful to all
participants in the project.

Regards,
Oliver

  

The next piece of work I intend to look at is getting the windows/linux
makefiles to
build their .lib/.a files directly into the hdk/lib directory (also
described in [1]),



See above

  

and getting each native component to link against the libs at that
location. Once
this is done the deploy directory should look like a complete HDK after
a global
build. ie it will contain everything needed to build each individual
native component
or java module against it.



That is a good outcome if it isn't the hdk directory.  If it's the HDK
directory, consider me against it.

  

We should then be able to make the final moves of the classlib native code
into the modules (and start creating some classlib HDK snapshots).



Great.

geir

  

Regards,
Oliver


[1] http://incubator.apache.org/harmony/subcomponents/classlibrary/hdk.html

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] moving to 1.5 for real - discuss

2006-06-08 Thread Oliver Deakin

Tim Ellison wrote:

Thanks to many stellar contributions all round we are pretty much
exhausting the work we can do with the temporary solution we adopted of
source=1.5 target=jsr14|1.4 compiler flags.

How do you feel about moving to 1.5 for real?  It would be simple to
change the Java compiler build options to generate 1.5 class files.
  


Good idea! We've been edging around what we can do with jsr14 for a
while now. That solution was only ever intended to be temporary; maybe now
is the time to fully step up to 5.0.
Altering the build system to create 5.0-level classes is a one-liner
AFAICS, and will allow us to enable some of the code we have that is
currently lying dormant (such as the 5.0 version of itc's rmi, and we can
remove rmi2.1.4).


AIUI there is at least some 1.5 awareness in DRLVM and JCHEVM (and
SableVM?) which would allow the current functionality to work, and IBM
can provide a 1.5 version of the IBM VME to run on.

Thoughts?
  


I think this move is probably overdue - it'd be great to step the
classlib and our VMs up to 5.0; it feels like we're moving closer to our
target!

Regards,
Oliver

Tim

  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [general] revisiting structure of SVN

2006-06-08 Thread Oliver Deakin

Vladimir Gorr wrote:

On 6/7/06, Ivan Volosyuk [EMAIL PROTECTED] wrote:


2006/6/6, Oliver Deakin [EMAIL PROTECTED]:
 Ivan Volosyuk wrote:
  2006/6/6, Oliver Deakin [EMAIL PROTECTED]:
  SNIP
I can see how confusion could be caused by their location, however
IMHO it would cause more confusion to have the kernel stubs
located separately from the rest of the Java code.
Perhaps it would be clearer if the directories were renamed
luni-kernel-stubs and security-kernel-stubs (which would also match
the jar names they generate)?
 
 
A small note.
As the writer of the classlib adapter for jchevm I can say that the stubs
were quite useful for writing the kernel classes specific to jchevm.
There are also some troubles with them: a number of methods fall back
to returning null, where they could throw something like
RuntimeException(not implemented). Some of the classes have better
implementations in drlvm, which could also be used as the default
implementation.
 

 Yes, you're right - not all of the classes in luni-kernel and
 security-kernel
 contain implementation code. Some literally are stub classes that just
 return
 null, or throw an exception from every method. From a quick look, the
 implemented classes are:

 luni-kernel:
java.lang.StackTraceElement
java.lang.System
java.lang.ThreadGroup
java.lang.Throwable
java.lang.ref.SoftReference
java.lang.ref.WeakReference

 security-kernel:
java.security.AccessControlContext
java.security.AccessController

 Ivan, are you proposing that we fill in the gaps with some of the
kernel
 classes from drlvm so that we have a complete set of reference kernel
 classes to help future VM writers?

Well, after a bit of thinking, no.
The stubbed version of the kernel classes should be as clean as possible.
Any implementation in these classes can add false dependencies on other
classes, and those dependencies can be absent in another VM
implementation. As drlvm is already in SVN, the initial implementation
for some classes can be taken directly from there.



I'm not sure this is the right thing to do. The kernel classes use
VM-specific interfaces and have a lot of internal dependencies.
Only the VM writer can define them in accordance with their own design.
Clearly they can look at an existing implementation (as an example), but
no more than that.


Agreed - the kernel classes that are provided in Harmony
(classlib/drlvm/other) shouldn't drive the implementation of the VM.
They're there as examples only, and how the VM writer uses them is their
choice - but they should not feel they have to copy them exactly or
implement any part of their VM to fit in with them. However, developers
are free to take as much inspiration from them as they like :)

The bottom line is, the kernel is intended as the VM-specific Java part of
the whole VMI, and VM developers are free to implement it as they wish
within the API spec.

Regards,
Oliver



Thanks,
Vladimir.


--

Ivan
Intel Middleware Products Division

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [general] More SVN restructuring thoughts... (was Re: Ant build | IOException)

2006-06-08 Thread Oliver Deakin

That's great, thanks Garrett. Sounds like a very sensible, simple way to
approach global builds.
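So, if I've understood correctly, the workflow would be roughly (using
$HARMONY as shorthand for our repository root, and classlib as the example
module):

   svn co $HARMONY/trunk harmony       # contains an empty classlib dir
   cd harmony
   svn switch $HARMONY/classlib/trunk classlib
   # hack on classlib, then from the top level:
   svn diff                            # recurses into the switched dir
   svn commit                          # goes back to classlib/trunk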

Regards,
Oliver

Garrett Rooney wrote:

On 6/8/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Would /classlib and /drlvm be checked out in your local workspace as 
peers

to /trunk or would they be checked out under /trunk/classlib etc. and
linked to
the right bit of SVN from there? i.e. if you wanted to make local
changes to classlib,
say, and be able to incorporate them into your global build under 
/trunk,

but also generate patches against the code in svn under /classlib/trunk
for submission,
where would the code you are making changes to be checked out to in the
above
diagram? (my knowledge of svn switch is somewhat slim...)


The basic idea is that you initially check out a copy of trunk that
has a few empty directories in it.  Then, svn switch says "hey, this
directory in my working copy that used to point to
$HARMONY/trunk/classpath (which is empty) should now point to
$HARMONY/classpath/trunk".  It'll then essentially check out the
contents of $HARMONY/classpath/trunk into your working copy's
classpath directory.  At that point you can do things like run an
update or a diff at the top level of your working copy and it'll
nicely recurse into the switched subdirectories.  If you make changes
in those subdirectories there and commit them they'll be committed to
the $HARMONY/classpath/trunk directory in the repository (or wherever
you had that subdirectory switched to).  Does that make sense?

-garrett

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [drlvm] building...

2006-06-09 Thread Oliver Deakin


Garrett Rooney wrote:

SNIP

Now sometimes you do need to have a totally different implementation
for a new platform, but a lot of the time, there can be some minor
ifdefs (based on availability of FEATURES, not platform) that allow
the same file to work on multiple platforms.

Just splitting up into two implementations (or really N
implementations, since eventually Harmony will run on more than just
Linux and Windows) is a bit of a waste, and ifdefing based on platform
is just as bad, since the stuff that works on FreeBSD or OpenBSD or
Solaris is often the same as the stuff that works on Linux...

What we ended up with in APR is something like this:

There's a base implementation of each file that is designed to work on
any unix system.  These go in unix/ subdirectories.  If it's feasable
to make that file work elsewhere (Netware, Windows, OS/2, BeOS,
whatever) then it's done via ifdefs.  If the ifdefs get out of
control, or the platform is really that different then you start
splitting off into platform specific implementations, but that's a
last resort.


We actually had some discussion about doing exactly that kind of thing
a while back [1] and more recently [2]. The basic ideas were:

- Make the code as shared as possible, by using IFDEFs where it
makes sense. We've made some progress in this area, with a lot of
our code shared, but as you say there is still more to do.
I'm working on moving the native code around at the moment, so
more unification is probably something I will look at at the same
time.
- Where IFDEFs are starting to make the code difficult to read,
split the source up into platform specific files. The kind of layout
that could be used for this is described in [2].

I think this matches what you're describing for APR. Do you
agree?

I'm interested to know what kind of build system is used in APR -
previous suggestions for picking up platform specific code have
included using clever Ant scripts to figure out which file versions
are needed, or using gmake and its vpath variable to pick them
up (they are described more extensively in [1]). What does APR
use to deal with building platform specific versions of files
over shared ones?
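(The gmake/vpath idea mentioned above would be roughly - directory names
purely illustrative:

   vpath %.c $(HY_PLATFORM) shared

so a platform-specific file of a given name is picked up in preference to
the shared one.)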



So in the end, the main things to keep in mind are to make your unix
implemenation try to work across as many systems as possible, ifdefing
based on availability of features as much as you can, not based on
specific platforms, and only as a last resort split out into totally
different platform specific implementations.



IIRC Mark suggested ifdef'ing on features as opposed to platforms before
[3], and it seems like a good idea to me, providing a more obvious
reason for the existence of the ifdef and allowing us to use ifdefs that
describe more than just a ballpark platform distinction.
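i.e. roughly this sort of thing in the natives (the macro name is purely
illustrative):

   #if defined(HAS_EPOLL)
       /* epoll-based implementation */
   #else
       /* portable select()-based fallback */
   #endif

rather than #if defined(LINUX) ... #endif scattered through the code.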

I think in general we have similar ideas to those you describe,
but we're not at a point yet where they have been completely embodied
in the codebase, or even summarised in a single post/page.
Perhaps I will put these ideas together into a page for the classlib
section of the Harmony website, so we have something concrete to talk
about without digging back into the mail archives.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200603.mbox/[EMAIL PROTECTED]
[2] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]
[3] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200602.mbox/[EMAIL PROTECTED]

-garrett

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [drlvm] build -unhook the classlib build and point to the existing in-tree one

2006-06-09 Thread Oliver Deakin

Geir Magnusson Jr wrote:

snip

Denis Sharypov wrote:
  

I would like to give the background on how the build system is designed and
then
give answers for the specific questions.

The build system provided with the VM is designed to be able to build the
whole harmony codebase
no matter how many modules it contains. The intent was to propose it later
as a unified build system
for the Harmony project after receiving the community feedback.



Speaking for myself, I'd personally prefer a much more loosely coupled
build framework, to allow the individual parts of Harmony (drlvm,
jchevm, classlib, tools, etc) the freedom to build in the way best
suited for that part, as determined by the people working on that part.

There has to be some overall federation, such as where generated
artifacts are to be placed when generating a JDK, but I'm hoping that
coupling doesn't have to be tight.

Others agree, disagree?

  


Agreed - IMHO using a build system for each VM or classlib that most 
suits that

particular component makes sense.
Classlib currently has a fairly straightforward build system (which 
could probably be
simplified further with some work), and this is good enough for it - it 
suits all the

requirements of that component and is easy to use.
DRLVM has a more complex build system, but probably with some 
justification due

to the complexity of the project it is compiling.

As long as they produce binaries that can be combined (adhere to the VMI,
produced JDK 'shape' is standard etc.) then I don't think it matters
whether the underlying build systems are the same.

Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[classlib] Modularising the native code - it begins!

2006-06-13 Thread Oliver Deakin

Hi all,

As you have probably noticed, I recently raised HARMONY-596
(which Mark has already kindly applied) to make .lib and .a files generate
straight into the deploy/lib directory.

I think that now we are in a position to finally modularise the classlib 
native

code. I've volunteered to do this, and plan to move the code down into the
modules in the layout described in [1], since I believe there were no
objections.

I will probably warm up with some of the easier modules - prefs/text/auth
- before moving on to archive and luni. I'll raise a separate JIRA for each
set of moves, and let you all know how I progress and if there are any
problems/questions I have.

I also plan to create a doc for the website describing the location of 
the native

code, and summarising how platform specific and shared code is laid out
within each native component.

Please let me know if there are any issues with this - otherwise I will
continue to work on it and submit patches.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-14 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

Hi all,

As you have probably noticed, I recently raised HARMONY-596
(which Mark has already kindly applied) to make .lib and .a files generate
straight into the deploy/lib directory.

I think that now we are in a position to finally modularise the classlib
native
code. I've volunteered to do this, and plan to move the code down into the
modules in the layout described in [1], since I believe there were no
objections.



Other than my outstanding objections to how HDK is being conflated with
the deploy directory, none :)
  


Notice I didn't mention the hdk at all in my previous mail ;)


I know I owe you some responses from last week, and that stuff shouldn't
stand in the way of what you want to do here.
  


Yes, our plans for the HDK are separate from what I'm doing here - this is
purely about putting the native code under the modules directory.

  

I will probably warm up with some of the easier modules - prefs/text/auth
- before moving onto archive and luni. Ill raise a separate JIRA for each
set of moves, and let you all know how I progress and if there are any
problems/questions I have.



I assume this is gradual - that you can do one module first, it can go
into SVN, and all still is fine?
  


That's exactly what I plan to do - small chunks that can be applied to SVN
individually rather than one world-changing event.
I will take one module at a time (probably starting with prefs) and
prepare a patch to move the native code that should be under that module
into the right place, and alter all the necessary build scripts. Once
that's ready, I'll raise a JIRA and attach the patch in the usual way, and
then move on to the next module with a new patch and JIRA.

Regards,
Oliver


  

I also plan to create a doc for the website describing the location of
the native
code, and summarising how platform specific and shared code is laid out
within each native component.



Ooh!  Ah!  THanks!

geir


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-14 Thread Oliver Deakin
I have a couple of questions about the location of makefiles and makefile
includes.


1) Currently the makefiles for the modules are stored underneath the 
platform

and component they relate to. For example, the luni makefiles are situated
at native-src/linux.IA32/luni/makefile and 
native-src/win.IA32/luni/makefile.


Once I move the luni native code into the luni module, its code
layout will look like:

modules/luni/src/main/native/luni/
  |---shared
  |---linux
  \---windows

But where should the platform-specific makefiles go?

I think we have two choices here - put the linux one in the linux dir,
and similarly for the windows version (as it is now), or put them both
under the modules/luni/src/main/native/luni/ directory and rename them
something like makefile.linux and makefile.windows.

Personally I'm happy to go with the first option (sketched below) and keep
each makefile under its corresponding platform directory until we have
reason to change it. Just thought I'd gauge if anyone has any preference.
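So option one would look something like:

modules/luni/src/main/native/luni/
  |---shared
  |---linux      (contains the Linux makefile)
  \---windows    (contains the Windows makefile)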

2) The makefiles for each native component include two files
(defines.mak and rules.mak on windows and makefile.include
and rules.mk on linux) that are generic across all components.

The question is: where should these common files be located once
the natives are moved into the modules?

At the moment, I can't really see an obvious location where all modules
could access them.
The only option I've thought of so far is to have one copy of the files in
each module that contains native code (so that would be one copy in
each of archive, auth, luni, prefs and text). The files would be located 
under

/modules/modulename/src/main/native, and shared by all the
native components under that module.
Any preferences/ideas about this?

Regards,
Oliver


Oliver Deakin wrote:

Hi all,

As you have probably noticed, I recently raised HARMONY-596
(which Mark has already kindly applied) to make .lib and .a files 
generate

straight into the deploy/lib directory.

I think that now we are in a position to finally modularise the 
classlib native
code. I've volunteered to do this, and plan to move the code down into 
the

modules in the layout described in [1], since I believe there were no
objections.

I will probably warm up with some of the easier modules - 
prefs/text/auth

- before moving onto archive and luni. Ill raise a separate JIRA for each
set of moves, and let you all know how I progress and if there are any
problems/questions I have.

I also plan to create a doc for the website describing the location of 
the native

code, and summarising how platform specific and shared code is laid out
within each native component.

Please let me know if there are any issues with this - otherwise I will
continue to work on it and submit patches.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-14 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

I have a couple of questions about location of makefiles and makefile
includes

1) Currently the makefiles for the modules are stored underneath the
platform
and component they relate to. For example, the luni makefiles are situated
at native-src/linux.IA32/luni/makefile and
native-src/win.IA32/luni/makefile.

Once I move the luni native code into the luni module, its code
layout will look like:

modules/luni/src/main/native/luni/
  |---shared
  |---linux
  \---windows

But where should the platform specific makefiles go?

I think we have two choices here - put the linux one in the linux dir,
and similar for the windows version (as it is now), or put them both
under the modules/luni/src/main/native/luni/ directory and rename them
something like makefile.linux and makefile.windows.

Personally Im happy to go with the first option and keep each makefile
under its corresponding platform directory until we have reason
to change it. Just thought Id gauge if anyone has any preference.



I agree.  I'm also wondering how painful it would be to switch to
something other than NMAKE as it seems pretty braindead.
  


There was talk some time ago about possibly switching to gmake on all
platforms, and using some of its capabilities to make picking up
platform-specific and shared code a simpler process. I'm not sure that any
conclusion was ever reached in that thread.
Perhaps it's something that needs to be reexamined in the future.

  

2) The makefiles for each native component include two files
(defines.mak and rules.mak on windows and makefile.include
and rules.mk on linux) that are generic across all components.

The question is: where should these common files be located once
the natives are moved into the modules?

At the moment, I can't really see an obvious location where all modules
could access them.
The only option I've thought of so far is to have one copy of the files in
each module that contains native code (so that would be one copy in
each of archive, auth, luni, prefs and text). The files would be located
under
/modules/modulename/src/main/native, and shared by all the
native components under that module.
Any preferences/ideas about this?



I think that works.  I've been having similar thoughts about this re
drlvm, and have been using the classlib make config as a reference.  I'm
trying to limit the amount of duplicated things because I'm slothful and
lazy and don't want to maintain them.
  


In general with these things I'm also all in favour of sharing as much 
as possible to keep

maintenance down.
I think that in this case, however, the duplication shouldn't have too 
much impact. The two
makefile includes largely define directory locations, compiler flags and 
generic compile lines

for each platform, and these shouldn't change very regularly.

Regards,
Oliver


geir

  

Regards,
Oliver


Oliver Deakin wrote:


Hi all,

As you have probably noticed, I recently raised HARMONY-596
(which Mark has already kindly applied) to make .lib and .a files
generate
straight into the deploy/lib directory.

I think that now we are in a position to finally modularise the
classlib native
code. I've volunteered to do this, and plan to move the code down into
the
modules in the layout described in [1], since I believe there were no
objections.

I will probably warm up with some of the easier modules -
prefs/text/auth
- before moving onto archive and luni. Ill raise a separate JIRA for each
set of moves, and let you all know how I progress and if there are any
problems/questions I have.

I also plan to create a doc for the website describing the location of
the native
code, and summarising how platform specific and shared code is laid out
within each native component.

Please let me know if there are any issues with this - otherwise I will
continue to work on it and submit patches.

Regards,
Oliver

[1]
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL
 PROTECTED]


  



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-14 Thread Oliver Deakin

Mark Hindess wrote:
snip

On 14 June 2006 at 7:24, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
  

Oliver Deakin wrote:


2) The makefiles for each native component include two files
(defines.mak and rules.mak on windows and makefile.include
and rules.mk on linux) that are generic across all components.

The question is: where should these common files be located once
the natives are moved into the modules?

At the moment, I can't really see an obvious location where all modules
could access them.
The only option I've thought of so far is to have one copy of the files in
each module that contains native code (so that would be one copy in
each of archive, auth, luni, prefs and text). The files would be located
under
/modules/modulename/src/main/native, and shared by all the
native components under that module.
Any preferences/ideas about this?
  

I think that works.  I've been having similar thoughts about this re
drlvm, and have been using the classlib make config as a reference.  I'm
trying to limit the amount of duplicated things because I'm slothful and
lazy and don't want to maintain them.



I'd rather not maintain lots of copies.  Could we not keep the shared
parts in the deploy directory (I was tempted to say hdk) somehow?  It might
sound a little crazy, but given that we want modules to be
consistent with other compiled artifacts it's actually quite useful to
have common structure, variable and compile flag settings.
  


I would also prefer if we could find a good central place to keep these 
rather than many copies -

do you have any suggestions?

If we copy them into deploy at build time (similar to how we copy in the 
headers) then we

just need to pick a place for them to live before they are copied.
Putting them under depends didn't quite feel right to me at first - I 
thought the depends directory
was intended to contain external dependencies, but now that I look at it 
I see that the
depends/files dir contains the Harmony properties files, so maybe I'm 
wrong. If that's the case,

then perhaps they could go in a separate directory under depends?
Alternatively, they could go into a subdir of /make, but I like that less.

Thoughts?

Regards,
Oliver

(Aside: The linux kernel used to do something like this with a 
Rules.make file that you included.  Now they do it slightly differently

where you set a variable pointing to your module source and use the
standard kernel Makefile from the built source tree like:

  make -C kernel-source-dir M=$PWD modules modules_install

I quite like this since it ensures consistency.)

Regards,
 Mark.



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-15 Thread Oliver Deakin

Tim Ellison wrote:

Oliver Deakin wrote:
snip
  

but now that I look at it
I see that the
depends/files dir contains the Harmony properties files, so maybe I'm
wrong.



Those are launcher-specific, so don't get too attached to them, I'll
  


I'll try.. ;)
It looks like the .properties files contain some messages for the port 
lib and zip, which I guess

will need to be split out and put into their respective native components.


move them out to the tools/launcher directory where they better belong.
  


So, with the removal of these Harmony internal files from depends, that 
directory becomes external
dependencies only again, meaning we shouldn't put the makefile includes 
there?


Regards,
Oliver


Regards,
Tim


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-15 Thread Oliver Deakin



Mark Hindess wrote:

On 14 June 2006 at 16:45, Oliver Deakin [EMAIL PROTECTED] wrote:
  

Mark Hindess wrote:
snip


On 14 June 2006 at 7:24, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
  
  

Oliver Deakin wrote:


snip
  
  

I think that works.  I've been having similar thoughts about this re
drlvm, and have been using the classlib make config as a reference.  I'm
trying to limit the amount of duplicated things because I'm slothful and
lazy and don't want to maintain them.



I'd rather not maintain lots of copies.  Could we not keep the shared
parts in the deploy (I was tempted to say hdk) somehow?  It's might
sound a little crazy but actually given that we want modules to be
consistent with other compiled artifacts it's actually quite useful to
have common structure, variable and compile flag settings.
  
  
I would also prefer if we could find a good central place to keep these 
rather than many copies -

do you have any suggestions?



In the support sub-directory?  The support tree is already required
by most (all?) modules.  I renamed the originally contributed
'test_support' tree to support since I figured we might have other
support files.  For example Java code for Ant tasks if the build gets too
hard to manage with basic Ant.
  


This idea seemed quite appealing at first, but having had a look at the 
support
directory it still looks very much like a test support bundle, even if 
it isn't named as such.
There is a manifest and src directory in there that gives it the 
appearance of another module,
so adding in a build directory, or something similar, might confuse 
things.


Perhaps initially I will put them in a build subdir of depends, just 
to get these patches
going. Then in the future maybe the support directory will be 
reorganised so it isn't

as test oriented, and these can be moved.

Sound fair?

Regards,
Oliver


-Mark





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] odd problems when running tests under linux...

2006-06-16 Thread Oliver Deakin



[EMAIL PROTECTED] wrote:

I just got my ubuntu box setup, and when running the test suite, I see this :

[junit] Exception in thread main java.lang.NoClassDefFoundError:
org.apache.harmony.luni.www.protocol.jar.JarURLConnection
  


This package was renamed a while back. It's now
org.apache.harmony.luni.internal.net.www.protocol.jar.JarURLConnection 
(!), and the
latest J9 VME kernels have those changes reflected in their code, which 
leads me to

think that you might have an old version of the kernel classes.

Could you tell me whether your jre/bin/default directory contains a single
kernel.jar or two jars, luni-kernel.jar and security-kernel.jar?
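(A quick way to check would be something like

   ls <your jre>/bin/default/*.jar

from wherever you installed the VME.)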


[junit] at com.ibm.oti.vm.VM.shutdown(VM.java:264)
[junit] at java.lang.Runtime.exitImpl(Native Method)
[junit] at java.lang.Runtime.exit(Runtime.java:215)
[junit] at java.lang.System.exit(System.java:525)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:571)


I'm very willing to entertain the idea that I have something misconfigured
or a wrong version of something - I've been throwing things onto the box
as fast as I can...

I'm under the latest ubuntu v6, have Sun's JDK5 on the machine, got the
latest v3 j9 VM installed... what else do I need to tell you?

geir



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-16 Thread Oliver Deakin


Tim Ellison wrote:

Oliver Deakin wrote:
  

Tim Ellison wrote:


Oliver Deakin wrote:
snip
 
  
move them out to the tools/launcher directory where they better belong.
  

So, with the removal of these Harmony internal files from depends, that
directory becomes external
dependencies only again, meaning we shouldn't put the makefile includes
there?



As Geir wrote elsewhere, it makes sense to share all resources at the
appropriate level within the project, so if the makefile rules are to be
used by drlvm as well as classlib then they would find a home in the
/dependencies (or whatever it is called) that is a peer of them.  If it
makes sense that the rules are classlib wide then put them in
classlib/depends (or whatever) and so on.  The actual dir names are open
for debate.
  


Currently these includes are only used by classlib, and I don't know of
any plans to adopt them in any of the Harmony VMs, so I think they should
go under classlib/trunk/depends, in a new build directory.

I'll put together a patch that will move these includes to the
depends/build dir, alter the build system to copy them into a shared place
in deploy (probably deploy/build) and update the makefiles for all
components to pick them up from there.

These files will need to be made part of the HDK when we start producing 
them,
since they will not be stored in any particular module but will be 
required by

all modules with native code.
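So each component makefile on Linux would end up looking something like
this rough sketch ($(HY_HDK) is just a stand-in for however we point the
makefiles at the deploy location - exact names still to be decided):

   # shared variable definitions, copied into deploy/build at build time
   include $(HY_HDK)/build/makefile.include

   # ...component-specific objects, libraries and targets...

   # shared pattern rules
   include $(HY_HDK)/build/rules.mk

with the Windows nmake files doing the equivalent with defines.mak and
rules.mak.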

Regards,
Oliver


Regards,
Tim

  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Simplifying the ant files?

2006-06-16 Thread Oliver Deakin
:

  hy.module-name.blah.blah

Is anyone really attached to xml properties files?  Personally I find
it much easier to read text properties like:

  hy.luni.src.main.java=src/main/java
  hy.luni.bin.main=bin/main

than:

  <hy>
    <luni>
      <src>
        <main>
          <java location="src/main/java" />
        </main>
      </src>
      <bin>
        <main location="bin/main" />
      </bin>
    </luni>
  </hy>

[Aside: hy.luni.bin.main isn't used (correctly) any more anyway.]
  


These files can be a little bit unreadable - are you suggesting that we 
still have
a separate properties file but it contains plain one-line properties 
instead of xml,
or get rid of the separate properties file altogether and incorporate 
all the

required properties into whichever build script they are appropriate to?



6) Mikhail (IIRC?) modified a few of the module build files to use
macros.  I like this idea (in fact one of my earlier abandoned JIRAs
took it quite a bit further) because it helps to hide the less
important details of what is going on away from the important ones -
in the case of the modules leaving only differences between modules.


7) The init targets in each module build.xml don't really contribute
anything to the build.  Does anyone really read the output they
generate?
  


I actually found myself removing the init echoes from a module I was
building the other day

because it was bugging me - I wouldn't miss these :)



8) Running ant -f make/build.xml from a module sub-directory doesn't
actually clean the main compiled classes.  (I think this is pretty
important to getting consistent expected behaviour so I'm looking at
this right now and will fix it shortly unless anyone shouts.)
  


Do you mean the classes that are built under the classlib_trunk/build
directory?
Since I've been doing work on modularisation recently, I've been thinking
about where modules should build their classes when you run the
module-level build script.

I was thinking about a developer who only has, say, luni checked out and
an HDK to build against. They go into the luni module and run the main
build.xml, but when this builds the Java code it tries to put the classes
into module_dir/../../build, which doesn't actually exist! I was wondering
if maybe the default output dir for a modular Java build should be
module_dir/build, while for a global build it would be
classlib_trunk/build?


So can I suggest that the modular build.xml defaults to building and 
cleaning its
classes in a module_dir/build directory, but allow the location of the 
build

directory to be overridden?



Well, these are some of the things that are bothering me.  I suspect
others have other issues?
  


I also notice that the modular build.xml looks to open the properties file
in the classlib/trunk/make directory by using a relative path from the
location of the module, i.e. module_dir/../../make/properties.xml.
If a developer only has that module checked out, then this relative location
will not work - this is something that I guess can be addressed if you
rework the properties files (5).

Regards,
Oliver



Regards,
 Mark.



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] odd problems when running tests under linux...

2006-06-16 Thread Oliver Deakin


Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

[EMAIL PROTECTED] wrote:


I just got my ubuntu box setup, and when running the test suite, I see
this :

[junit] Exception in thread main java.lang.NoClassDefFoundError:
org.apache.harmony.luni.www.protocol.jar.JarURLConnection
  
  

This package was renamed a while back. It's now
org.apache.harmony.luni.internal.net.www.protocol.jar.JarURLConnection (!),
and the latest J9 VME kernels have those changes reflected in their code,
which leads me to think that you might have an old version of the kernel
classes.



  

Could you tell me if in your jre/bin/default directory there is a single
kernel.jar
or two jars, luni-kernel.jar and security-kernel.jar?



two.

This is a brand new machine (it was a T42 running Windows until about
4pm yesterday), and it got v3 of the j9 vm yesterday... so I don't think
I crossed the beams - this is the only j9vm it's ever had.
  


Right, I think I've found the cause - looks like I mistakenly included a
version of com.ibm.oti.vm.VM.class in the kernel on Linux that had that old
package name in it (don't ask me how - must have been a Friday
afternoon :( )

I have replaced the VME v3 for Linux on DeveloperWorks with one that
contains the updated kernel.
Please try this one and let me know how you get on - sorry for the hassle
caused.



Doesn't anyone else see this?
  


I guess everyone else was still using the v2 and just moved the files over -
v2 had the correct package names in it. Also, the Windows v3 contains the
right package references.


Thanks for finding this!

Regards,
Oliver


(thx)

geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib]native codes layout question(was Re: [classlib][NIO|VMI]JNI 1.4 enhancement on ByteBuffer)

2006-06-20 Thread Oliver Deakin

Paulex Yang wrote:
Seems no one objects to this proposal :), so I'm going to implement the
JNI 1.4 enhancement in the nio module, i.e. provide a patch to Harmony-578.
Because this implementation requires some native code, I will probably need
to reintroduce hynio.dll(.so), but I have some questions. (Excuse my
ignorance about the native layout evolution.)


At first, it seems the native code will be separated into modules (I guess
Oli is working on this?), so should I assume my native code will be put
directly into the nio module, or still in the native-src/win.IA32/nio
directory? Because I'm used to providing a shell script to move/svn add new
files in the patch, it will be easier for me to know how others think about
it.


It depends on whether you want to wait for what I'm doing or not :)
If you want to get the code out now, then you can temporarily put it 
under native-src/win.IA32/nio and I will move it later as part of the 
natives modularisation.
However, if you don't mind waiting a day or so I should be able to 
submit my first patch to move the prefs natives. This ought to be enough 
of an example for you to put your native code directly into 
modules/nio/src/main/native.




And second, the native code probably needs the portlib, so the portlib's
header file must be accessible, say, portsock.h, but now it has been moved
into luni/src/main/native/blabla. Should I include a copy in my patch so
that the nio module can have one, or should the header file itself be put in
some other well-known directory (deploy/build/include I guess)?


At build time, the copy.native.includes target in luni/make/build.xml 
is called - it copies a selection of the header files in 
luni/src/main/native/include that need to be shared between modules into 
the deploy/include directory. This is done with an explicit fileset in 
luni/make/build.xml - if you need to have portsock.h added to the list 
of shared header files, then this is the place to make that change. Just 
add its filename to the list, and next time you build it will appear in 
the deploy/include directory. Your nio code should include the headers 
from the deploy/include dir, and *not* directly from the 
luni/src/main/native/include dir.


I hope this makes more sense now - if it doesn't, please let me know. I 
am in the process of writing up some documentation for the website on 
the natives layout and where headers should go (and also how modules 
should build against the HDK) - once that is complete it should all be a 
lot clearer.


Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib]native codes layout question(was Re: [classlib][NIO|VMI]JNI 1.4 enhancement on ByteBuffer)

2006-06-21 Thread Oliver Deakin
Hi Jimmy,

LvJimmy,Jing wrote:
 Hi Oliver:

 I've seen the modularisation of the natives - that's great! :) But I have a
 question here.
 As I worked on the natives before, I would usually build the whole native
 code once, and then remake separate modules alone, e.g. luni or nio. It was
 easy to enter the luni directory and force a build with make to create a
 new DLL/so file.
 In this way it is easy to write code and debug. However, after the
 modularisation, since there are environment variables in the makefiles
 which are defined in the build.xml, I have no choice but to build the whole
 native tree with ant, even after changing a single line.
 So I'm asking for help: is there an easy way to build a single module
 alone? Thanks!

If you want to rebuild all the natives in one go, you can still (currently)
go to the native-src directory and run ant. This will rebuild all the native
components.

Rebuilding a single component can also be done. For example, to rebuild
the hyluni.dll you would:
1. cd to native-src/platform/luni
2. set the HY_HDK environment variable to point to a directory where
you have a complete prebuilt HDK (which could be the deploy dir if you
have previously run a global build).
3. Run make/nmake. The hyluni.dll will be built against the libs already
in HY_HDK, and the generated dll will be placed into the
native-src/platform
directory, where you can then copy it wherever you want

Once the natives are all modularised (so native-src no longer exists) you
will be able to just go to the module you want and run ant build.native
(or some similarly named target) and the natives will be incrementally
rebuilt and automatically placed into your target directory.

Hope this helps,
Oliver


 at 06-6-20,Oliver Deakin [EMAIL PROTECTED] wrote:

 Paulex Yang wrote:
  Seems no one objects this proposal:), so I'm going to implement the
  JNI1.4 enhancement in nio module, i.e, provide patch to Harmony-578,
  Because this implementation requires some native codes, so I probably
  need to reintroduce hynio.dll(.so), but I have some questions.(Excuse
  me about my ignorance on the native layout evolution).
 
  At first, seems native codes will be separated into modules(I guess
  Oli is working on?), so should I assume my native codes will be
  directly put into nio modules, or still in native-src/win.IA32/nio
  directory? because I'm used to provide a shell to move/svn add new
  files in the patch, so it will be easier for me to know how others
  think about it.

 It depends on whether you want to wait for what I'm doing or not :)
 If you want to get the code out now, then you can temporarily put it
 under native-src/win.IA32/nio and I will move it later as part of the
 natives modularisation.
 However, if you don't mind waiting a day or so I should be able to
 submit my first patch to move the prefs natives. This ought to be enough
 of an example for you to put your native code directly into
 modules/nio/src/main/native.

 
  And second, the native codes probably need portlib, so the portlib's
  header file must be accessible, say, portsock.h, but now it has been
  moved into luni/src/main/native/blabla, should I include one in my
  patch so that nio module can have a copy? or the header file itself
  should be put some other well known directory(deploy/build/include I
  guess)?

 At build time, the copy.native.includes target in luni/make/build.xml
 is called - it copies a selection of the header files in
 luni/src/main/native/include that need to be shared between modules into
 the deploy/include directory. This is done with an explicit fileset in
 luni/make/build.xml - if you need to have portsock.h added to the list
 of shared header files, then this is the place to make that change. Just
 add its filename to the list, and next time you build it will appear in
 the deploy/include directory. Your nio code should include the headers
 from the deploy/include dir, and *not* directly from the
 luni/src/main/native/include dir.

 I hope this makes more sense now - if it doesn't, please let me know. I
 am in the process of writing up some documentation for the website on
 the natives layout and where headers should go (and also how modules
 should build against the HDK) - once that is complete it should all be a
 lot clearer.

 Regards,
 Oliver

 -- 
 Oliver Deakin
 IBM United Kingdom Limited


 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]





-- 
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib]native codes layout question(was Re: [classlib][NIO|VMI]JNI 1.4 enhancement on ByteBuffer)

2006-06-21 Thread Oliver Deakin

Tim Ellison wrote:

Oliver Deakin wrote:
  

Rebuilding a single component can also be done. For example, to rebuild
the hyluni.dll you would:
1. cd to native-src/platform/luni
2. set the HY_HDK environment variable to point to a directory where
you have a complete prebuilt HDK (which could be the deploy dir if you
have previously run a global build).



Can we have it set to the deploy dir by default?
  


This is only a temporary step while the natives are still in native-src.
Once all the native code is moved into the modules the default will be the
deploy directory. In fact, if you go to the prefs module and type in
"ant build.native" it will build the prefs native code and place the output
libs into the deploy directory. (OK, that's not strictly true - you need to
use "ant -Dmake.command=nmake build.native" because the modular scripts
don't pick up this variable from /make/properties.xml yet, but this will be
fixed in the future.)



  

3. Run make/nmake. The hyluni.dll will be built against the libs already
in HY_HDK, and the generated dll will be placed into the
native-src/platform
directory, where you can then copy it wherever you want

Once the natives are all modularised (so native-src no longer exists) you
will be able to just go to the module you want and run ant build.native
(or some similarly name target) and the natives will be incrementally
rebuilt and automatically placed into your target directory.



This is the mode of working that people should get used to, so that
if/when we have ./configure steps too they will still build the natives
the same way (i.e. rather than just typing (n)make).
  


Agreed

Regards,
Oliver


Regards,
Tim

  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[drlvm] Help building on Windows!

2006-06-21 Thread Oliver Deakin

Hi all,
I'm trying to build drlvm, and with a small amount of effort I've got its
prereqs and got it to start building, which is great. However, I hit a snag
which stops the build from completing (8 mins in, sigh):

build.native.init:
[echo] ## Building native of 'vm.vmi'

build.native.c:
  [cc] 0 total files to be compiled.

build.native.cpp:
  [cc] 2 total files to be compiled.
  [cc] j9vmls.cpp
  [cc] C:\Program 
Files\eclipse.3.2.RC1a\workspace\drlvm\trunk\vm\vmi\src\j9
vmls.cpp(20) : fatal error C1083: Cannot open include file: 'hyvmls.h': 
No such

file or directory
  [cc] vmi.cpp
  [cc] C:\Program 
Files\eclipse.3.2.RC1a\workspace\drlvm\trunk\vm\vmi\src\vm
i.cpp(25) : fatal error C1083: Cannot open include file: 'zipsup.h': No 
such fil

e or directory
  [cc] Generating Code...

BUILD FAILED
C:\PROGRA~1\eclipse.3.2.RC1a\workspace\drlvm\trunk\build\make\build.xml:373: 
The

following error occurred while executing this line:
C:\PROGRA~1\eclipse.3.2.RC1a\workspace\drlvm\trunk\build\make\build.xml:380: 
The

following error occurred while executing this line:
C:\PROGRA~1\eclipse.3.2.RC1a\workspace\drlvm\trunk\build\make\build_component.xml
:72: The following error occurred while executing this line:
C:\PROGRA~1\eclipse.3.2.RC1a\workspace\drlvm\trunk\build\win_ia32_msvc_release\se
mis\build\targets\build.native.xml:75: cl failed with return code 2

Total time: 7 minutes 50 seconds


It looks like the build cannot find the classlib header files it needs.
Unfortunately the error messages are unhelpful in that they don't tell me
anything about the include paths used. I have set the CLASSLIB_HOME variable
in make/win.properties to point to a prebuilt classlib/trunk checkout. Is
there something else I need to set?

Thanks,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [drlvm] Help building on Windows!

2006-06-21 Thread Oliver Deakin

Thanks Geir - setting external.dep.CLASSLIB did the job.

So, in summary for those interested, to build drlvm against a prebuilt
classlib/trunk in any location you need to change:
- the value of CLASSLIB_HOME in make/win.properties
- the value of external.dep.CLASSLIB in build/make/build.xml
so they both point to the root dir of the classlib/trunk checkout.

Regards,
Oliver

Geir Magnusson Jr wrote:

I just did a svn update, build.bat clean, and build.bat and it ran to
completion.  Tests worked too.

svn stat from /trunk shows nothing not checked in..

Are you building against classlib as it is in SVN, or something that
you've been working on?

Also, it assumes you already build classlib.

maybe... - right now, it assumes that drlvm and classlib are in parallel
directories.  You can adjust via the external.deps.CLASSLIB property in
build.xml

geir


Magnusson, Geir wrote:
  

Since I'm responsible for current config, I'll take a look.

Maybe osmething changed because of the classlib structure changes

geir




-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [jira] Created: (HARMONY-638) [classlib] Tests hang on in luni : tests.api.java.io.FileTest / test_setReadOnly on Ubunti 6.06

2006-06-21 Thread Oliver Deakin

I remember seeing problems like this before, in a non-Harmony jre, where a
Runtime.exec() would never return. I hunted around and found an interesting
page on JavaWorld:

 http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html

I found the "Why Runtime.exec() hangs" section very useful, and it solved my
problem at the time. I'm not saying that what you're seeing is definitely
the same thing that I encountered, but it may help. I notice that in this
test stdout and stderr for the process are not read - is it possible that it
is producing some output, and since it isn't being read the process just
waits, leading to the exec never exiting?

Regards,
Oliver


Geir Magnusson Jr (JIRA) wrote:

[classlib] Tests hang on in luni : tests.api.java.io.FileTest / 
test_setReadOnly on Ubunti 6.06
---

 Key: HARMONY-638
 URL: http://issues.apache.org/jira/browse/HARMONY-638
 Project: Harmony
Type: Bug

  Components: Classlib  
Reporter: Geir Magnusson Jr



it seems that exec() doesn't work - it never returns.  Bug in IBM j9 kernel 
classes?

Mark Hindess is trying to duplicate to reconfirm.

ubuntu 6.06
gcc 3.4
invoked by Sun Java 1.5.0_07
IBM J9 for linux, v3 mk II



  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [jira] Created: (HARMONY-638) [classlib] Tests hang on in luni : tests.api.java.io.FileTest / test_setReadOnly on Ubunti 6.06

2006-06-21 Thread Oliver Deakin

The example returns once they have fixed it. If you look at the example
just before the "Why Runtime.exec() hangs" heading (BadExecJava2.java)
you will see a situation that is almost exactly the same as in the test
case, i.e. an exec followed by a waitFor(), without any reading of streams.
In this case the exec never returns.

The later example is how to fix it.
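
For reference, the general shape of that fix - drain both of the child
process's streams on their own threads, then call waitFor() - is roughly as
below. This is only a sketch of the pattern from the article, not the actual
FileTest code; the class name and the command being exec'd are made up:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ExecStreamDrainer {

    // Read a stream on its own thread so the child cannot block on a
    // full stdout/stderr pipe buffer while the parent sits in waitFor().
    private static Thread drain(final InputStream in) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    BufferedReader reader =
                        new BufferedReader(new InputStreamReader(in));
                    while (reader.readLine() != null) {
                        // discard (or log) the child's output
                    }
                } catch (IOException e) {
                    // child has gone away - nothing to do
                }
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec(new String[] { "ls", "-l" });
        Thread out = drain(p.getInputStream());
        Thread err = drain(p.getErrorStream());
        int rc = p.waitFor();   // safe once both pipes are being drained
        out.join();
        err.join();
        System.out.println("child exited with " + rc);
    }
}

Whether that is actually what is biting us on the Ubuntu box is another
question, of course.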

Regards,
Oliver


Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

I remember seeing problems like this before, in a non-Harmony jre, where a
Runtime.exec() would never return. I hunted around and found an
interesting page
on JavaWorld:

 http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html

I found the Why Runtime.exec() hangs section very useful, and it
solved my problem
at the time. I'm not saying that what you're seeing is definitely the
same thing that I encountered,
but it may help. I notice that in this test stdout and stderr for the
process are not read - is it
possible that it is producing some output, and since it isn't being read
the process just waits
leading to the exec never exiting?



I think the phrasing in the article is misleading, because we're not
seeing exec() *return*, which is a different issue than the case that
they are talking about, namely the subsequent waitFor() not returning.

So I think this doesn't apply.  Interesting reading, though.

geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [jira] Created: (HARMONY-638) [classlib] Tests hang on in luni : tests.api.java.io.FileTest / test_setReadOnly on Ubunti 6.06

2006-06-21 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

The example returns once they have fixed it. If you look at the example
just before the Why Runtime.exec() hangs heading (BadExecJava2.java)
you will see a situation that is almost exactly the same as in the test
case.
i.e. an exec followed by a waitFor(), without any reading of streams.
In this case the exec never returns.

The later example is how to fix it.



I don't think so.

When they are saying the exec never returns, they are meaning that
from the POV of the programmer, the spawned thing never returns.

That's why you can do

exec()
do IO
waitFor()
done

However, in our case, when I say exec never returns I mean exec()
never returns.

so you'd never actually get to do IO above, because you are hanging in
exec().

Big difference, right?
  


Certainly - do you have a stack trace that gives more info on what each of
the threads is doing? Could you attach it to the JIRA if you do, please -
I've just tried running a cut-down version of the test on my SLES9 machine
and it exited fine, so it would be useful to have a trace from your machine
to see what's happening.

Thanks,
Oliver


geir


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [jira] Created: (HARMONY-638) [classlib] Tests hang on in luni : tests.api.java.io.FileTest / test_setReadOnly on Ubunti 6.06

2006-06-22 Thread Oliver Deakin



Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

Geir Magnusson Jr wrote:


However, in our case, when I say exec never returns I mean exec()
never returns.

so you'd never actually get to do IO above, because you are hanging in
exec().

Big difference, right?
  
  
Certainly 



So I assume you agree?  (Just want to make sure that I'm not missing
something terribly important here...)
  


Yes, that is a different case to what I was imagining (and what was on 
that page I linked).

Seems that *I* was missing something important ;)


- do you have a stack trace that gives more info for what each
  

of the threads are
doing? Could you attach it to the JIRA if you do please - Ive just tried
running a cut down
version of the test on my SLES9 machine and it exited fine, so it would
be useful to have
a trace from your machine to see what's happening.




Done
  


Thanks

Regards,
Oliver


geir


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib]native codes layout question(was Re: [classlib][NIO|VMI]JNI 1.4 enhancement on ByteBuffer)

2006-06-22 Thread Oliver Deakin



Paulex Yang wrote:

Oliver Deakin wrote:

Paulex Yang wrote:
Seems no one objects this proposal:), so I'm going to implement the 
JNI1.4 enhancement in nio module, i.e, provide patch to Harmony-578, 
Because this implementation requires some native codes, so I 
probably need to reintroduce hynio.dll(.so), but I have some 
questions.(Excuse me about my ignorance on the native layout 
evolution).


At first, seems native codes will be separated into modules(I guess 
Oli is working on?), so should I assume my native codes will be 
directly put into nio modules, or still in native-src/win.IA32/nio 
directory? because I'm used to provide a shell to move/svn add new 
files in the patch, so it will be easier for me to know how others 
think about it.


It depends on whether you want to wait for what I'm doing or not :)
If you want to get the code out now, then you can temporarily put it 
under native-src/win.IA32/nio and I will move it later as part of the 
natives modularisation.
However, if you don't mind waiting a day or so I should be able to 
submit my first patch to move the prefs natives. This ought to be 
enough of an example for you to put your native code directly into 
modules/nio/src/main/native.




And second, the native codes probably need portlib, so the portlib's 
header file must be accessible, say, portsock.h, but now it has been 
moved into luni/src/main/native/blabla, should I include one in my 
patch so that nio module can have a copy? or the header file itself 
should be put some other well known directory(deploy/build/include I 
guess)?


At build time, the copy.native.includes target in 
luni/make/build.xml is called - it copies a selection of the header 
files in luni/src/main/native/include that need to be shared between 
modules into the deploy/include directory. This is done with an 
explicit fileset in luni/make/build.xml - if you need to have 
portsock.h added to the list of shared header files, then this is the 
place to make that change. Just add its filename to the list, and 
next time you build it will appear in the deploy/include directory. 
Your nio code should include the headers from the deploy/include dir, 
and *not* directly from the luni/src/main/native/include dir.
Oli, I tried to modify luni/make/build.xml, and it successfully copied
portsock.h, but I found I still cannot build my native code. So I looked
inside portsock.h and found that its whole content is just an include of
another file, ../port/hysock.h. My native code in modules/nio/src/main/native
cannot find ../port/hysock.h, so it fails. I guess the reason why the luni
natives can still build is that all of LUNI's native code is still located
in native-src/luni, so it can find hysock.h in native-src/port.


It seems portsock.h is useless and confusing, so I suggest the steps below
to fix this problem:

1. svn delete portsock.h in luni
2. svn move hysock.h from native-src to luni
3. update all reference to portsock.h to hysock.h
4. rebuild


Yes, that sounds reasonable. From what I can see portsock.h is basically
pointless, since it just includes hysock.h. Your plan to replace
portsock.h with hysock.h sounds like the right thing to do here.



If no one objects, I'll raise a separated JIRA and provide patch


Thanks!

Regards,
Oliver




I hope this makes more sense now - if it doesn't, please let me know. 
I am in the process of writing up some documentation for the website 
on the natives layout and where headers should go (and also how 
modules should build against the HDK) - once that is complete it 
should all be a lot clearer.


Regards,
Oliver






--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] mem

2006-06-26 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Jimmy, Jing Lv wrote:
  

Geir Magnusson Jr wrote:


Why would I use

portLib->mem_allocate_memory(portLib)

over just calling

hymem_allocate_memory(portlib, )

  

Hi Geir:

   Not sure if the latter is hymem_allocate_memory(int size)?



Well, the one I'm looking at is

hymem.c

void *VMCALL
hymem_allocate_memory (struct HyPortLibrary *portLibrary,
UDATA byteAmount)


If so,
  

they are the same indeed, and the latter is a macro. Every time before
we use the macro, we call PORT_ACCESS_FROM_ENV (env)
   e.g.:
   somemethod(JNIEnv * env, ...){
PORT_ACCESS_FROM_ENV (env);
...
hymem_allocate_memory(sizeof(something));
...
   }

   And they are defined in hyport.h. :)



I've heard this rumor before, but I don't ever see how.
PORT_ACCESS_FROM_* is a simple little macro that just returns *portLib.
  


yeah, that's right - the PORT_ACCESS_FROM_* macro (defined in hyport.h)
sets the pointer privatePortLibrary to address the relevant port library.

Also defined in hyport.h is hymem_allocate_memory(param1) (the single-param
function Jimmy was talking about), which is a macro defined to be:
privatePortLibrary->mem_allocate_memory(privatePortLibrary, param1).

So the call to PORT_ACCESS_FROM_* sets up a port library pointer, and the
macro versions of the hymem calls automatically add it in as the first
parameter to the relevant function call.

Regards,
Oliver


geir

  

geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-06-26 Thread Oliver Deakin

Hi all,

Here's another heads up to let you all know how it's going.

As you have probably noticed, I raised HARMONY-628 last week moving
the prefs code into its module. This patch has already been applied to 
SVN head.

Earlier today I raised HARMONY-668 to move the launcher and vmls source
into the luni module.

Just in case anyone was wondering, the ordering of the native code I am
moving isn't (entirely) random. I'm working my way through the targets
in native-src/platform/makefile in the reverse order they need to be 
built.

This provides me with gradual steps where I know all dependencies
will be built by the time my changes are reached.

I plan to move text, archive and auth next (into the text, archive and 
auth modules,
respectively and somewhat unsurprisingly!). I will post again if I hit 
any problems.


Regards,
Oliver


Oliver Deakin wrote:

Hi all,

As you have probably noticed, I recently raised HARMONY-596
(which Mark has already kindly applied) to make .lib and .a files generate
straight into the deploy/lib directory.

I think that now we are in a position to finally modularise the classlib
native code. I've volunteered to do this, and plan to move the code down
into the modules in the layout described in [1], since I believe there were
no objections.

I will probably warm up with some of the easier modules - prefs/text/auth -
before moving on to archive and luni. I'll raise a separate JIRA for each
set of moves, and let you all know how I progress and if there are any
problems/questions I have.

I also plan to create a doc for the website describing the location of the
native code, and summarising how platform-specific and shared code is laid
out within each native component.

Please let me know if there are any issues with this - otherwise I will
continue to work on it and submit patches.

Regards,
Oliver

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED] 





--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [continuum] BUILD FAILURE: Classlib/linux.ia32 Build/Test

2006-06-26 Thread Oliver Deakin
Ok, this is my fault (gulp!) - I ran the build on both my Windows and 
Linux machines, and they both built and passed the tests. However, I 
think the Linux machine may have somehow picked up an old artifact (not 
sure how this would happen since the link error is against a lib not 
moved by my changes, but it's the only reason I can think of for it 
working) from another build and linked against that. My patch was 
missing an alteration to the Linux executable link line that pointed it 
to the deploy/jdk/jre/bin directory.


Mark has applied a fix to the link line and the Linux builds should be 
back in working order now. Sorry for the break - guess it's my turn at 
the bar...


Regards,
Oliver


Apache Harmony Build wrote:

Online report : 
http://ibmonly.hursley.ibm.com/continuum/linux.ia32/servlet/continuum/target/ProjectBuild.vm/view/ProjectBuild/id/6/buildId/3134
Build statistics:
  State: Failed
  Previous State: Ok
  Started at: Mon, 26 Jun 2006 19:09:18 +0100
  Finished at: Mon, 26 Jun 2006 19:11:49 +0100
  Total time: 2m 31s
  Build Trigger: Schedule
  Exit code: 1
  Building machine hostname: hy2
  Operating system : Linux(unknown)
  Java version : 1.5.0_06(Sun Microsystems Inc.)

SNIP!
 [exec]  [exec] cc -O1 -march=pentium3 -DLINUX -D_REENTRANT 
-DIPv6_FUNCTION_SUPPORT -DHYX86  
-I/home/hybld/continuum-working-directory/6/classlib/deploy/include 
-I/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/include -I. 
-I../shared/   -c -o ../shared/launcher_copyright.o 
../shared/launcher_copyright.c
 [exec]  [exec] cc -O1 -march=pentium3 -DLINUX -D_REENTRANT 
-DIPv6_FUNCTION_SUPPORT -DHYX86  
-I/home/hybld/continuum-working-directory/6/classlib/deploy/include 
-I/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/include -I. 
-I../shared/   -c -o ../shared/strbuf.o ../shared/strbuf.c
 [exec]  [exec] cc -O1 -march=pentium3 -DLINUX -D_REENTRANT 
-DIPv6_FUNCTION_SUPPORT -DHYX86  
-I/home/hybld/continuum-working-directory/6/classlib/deploy/include 
-I/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/include -I. 
-I../shared/   -c -o ../shared/libhlp.o ../shared/libhlp.c
 [exec]  [exec] cc  \
 [exec]  [exec] ../shared/main.o ../shared/cmain.o 
../shared/launcher_copyright.o ../shared/strbuf.o ../shared/libhlp.o   \
 [exec]  [exec] -Xlinker --start-group 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so
 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhythr.so
 -Xlinker --end-group \
 [exec]  [exec] -o ../java -lm -lpthread -lc -ldl \
 [exec]  [exec] -Xlinker -z -Xlinker origin \
 [exec]  [exec] -Xlinker -rpath -Xlinker \$ORIGIN/ \
 [exec]  [exec] -Xlinker -rpath-link -Xlinker ..
 [exec]  [exec] /usr/bin/ld: warning: libhysig.so, needed by 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so,
 not found (try using -rpath or -rpath-link)
 [exec]  [exec] 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so:
 undefined reference to [EMAIL PROTECTED]'
 [exec]  [exec] 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so:
 undefined reference to [EMAIL PROTECTED]'
 [exec]  [exec] 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so:
 undefined reference to [EMAIL PROTECTED]'
 [exec]  [exec] 
/home/hybld/continuum-working-directory/6/classlib/deploy/jdk/jre/bin/libhyprt.so:
 undefined reference to [EMAIL PROTECTED]'
 [exec]  [exec] collect2: ld returned 1 exit status
 [exec]  [exec] make: *** [../java] Error 1

 [exec] BUILD FAILED
 [exec] /home/hybld/continuum-working-directory/6/classlib/build.xml:90: 
The following error occurred while executing this line:
 [exec] 
/home/hybld/continuum-working-directory/6/classlib/native-src/build.xml:136: 
The following error occurred while executing this line:
 [exec] 
/home/hybld/continuum-working-directory/6/classlib/modules/luni/build.xml:97: 
exec returned: 2

 [exec] Total time: 2 minutes 25 seconds

BUILD FAILED
/home/hybld/continuum-working-directory/6/build.xml:118: exec returned: 1

Total time: 2 minutes 29 seconds





  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Using Visual Studio C++ Express to compile classlib - fails.

2006-06-27 Thread Oliver Deakin



Thorbjørn Ravn Andersen wrote:

Mark Hindess skrev  den 26-06-2006 21:57:

What would a feasible path be from here?
  

Looks like one of the variables set in one of those empty included 
makefiles should be CC set to cl ?
  
I forgot to state that I somewhat arbitrarily tried doing exactly 
that, but without any success in locating the right one.   I therefore 
hoped to hear from a person who knows more about this build system.


BTW Do you know if there is any particular reason that files are 
copied over to the deploy directory instead of being read directly 
from their original location and the compiled output written to deploy?


Do you mean the header files in deploy/include? If so, the reason they are
copied there is so that they are in a shared location for all modules. (In
fact it's the same reason that libs are built into deploy/lib and makefile
includes are copied into deploy/build/make).

A little while ago there were a couple of threads where we discussed a
Harmony Development Kit (HDK) [1] which would allow a developer to build any
individual module without necessarily having the other modules present in
their workspace.
To this end we have set up the deploy directory to contain all the resources
required to rebuild a single module standalone - that means any shared
header files, any libs used to link against and any makefile includes. The
header files you see in deploy/include are those required to be shared
between multiple modules, and (loosely) define a native API between modules.

The idea is that eventually we will package HDKs up and make them available
as a downloadable bundle (in a similar way to the current classlib
snapshots). Then a developer can just download the HDK, unpack it somewhere
and rebuild any individual module against it that they are interested in
working on.

As a consequence, they could also check out *only* the module they are
interested in, rather than the whole of classlib/trunk, and still be able to
rebuild their altered code.

Regards,
Oliver

[1] http://incubator.apache.org/harmony/subcomponents/classlibrary/hdk.html



--
  Thorbjørn



--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Can't build native on winXP

2006-06-28 Thread Oliver Deakin

Gregory Shimansky wrote:

On Wednesday 28 June 2006 02:08 Matt Benson wrote:
  

nmake seems to choke looking for a ntwin32.mak file.
I don't care too much about the native stuff actually,
but I wanted to play with the build system, so I want
to make sure I don't break anything.  Does anyone have
any advice?



The file ntwin32.mak and win32.mak can be found in Microsoft Platform SDK. 
When you install it, it will have SetEnv.Cmd script which sets up INCLUDE, 
LIB and PATH variables to have necessary paths which nmake also uses 
(INCLUDE) to find the necessary files such as ntwin32.mak.


You can also take a look at this [1] discussion, a very similar question was 
asked just recently.


Unfortunately the mail-archives.apache.org have a very inconvenient interface 
to read a thread of replies (is there a good way to do it with a web 
interface?).
  


There are archives of the mailing list on gmane [1] - you can get a single
thread view by clicking on the comments link at the bottom of the post you
are interested in. I haven't really used this much before, just stumbled
across it once by chance.

Regards,
Oliver

[1] http://blog.gmane.org/gmane.comp.java.harmony.devel

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Can't build native on winXP

2006-06-28 Thread Oliver Deakin

Never seen that before - thanks!

Regards,
Oliver

Dmitry M. Kononov wrote:

There is another way to use gmane:

http://news.gmane.org/gmane.comp.java.harmony.devel

On 6/28/06, Oliver Deakin [EMAIL PROTECTED] wrote:

There are archives of the mailing list on gmane [1] - you can get a
single thread
view by clicking on the comments link at the bottom of the post you 
are

interested in. I havnt really used this much before, just stumbled
across it once
by chance.

Regards,
Oliver

[1] http://blog.gmane.org/gmane.comp.java.harmony.devel


Thanks.


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Modularising the native code - it begins!

2006-07-04 Thread Oliver Deakin

Hi all,

I have just raised HARMONY-744 which moves the vmi and luni native
code into the luni module, and the zip and zlib code into archive.
AFAICS, there is only one more set of moves to do before native
modularisation is complete (phew!).

With the last set of changes, I plan to move the native-src/build.xml into
the top-level make dir and call it build-native.xml (to fit in with
build-java.xml and build-test.xml).

If anyone has any objections, please let me know.

Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [announce] New Apache Harmony Committer : Weldon Washburn

2006-07-05 Thread Oliver Deakin

Congratulations Weldon :)

Geir Magnusson Jr wrote:

Please join the Apache Harmony PPMC in welcoming the project's newest
committer, Weldon Washburn.

Weldon was one of the initial committers listed on the original Harmony
proposal.  Incubator tradition is such that listed committers be granted
committer status immediately, but we've altered that process a little
and look for continued commitment, participation, alignment and
contribution.  Weldon certainly has participated beyond the initial
proposal in both our community discussions as well as code
contributions, most significantly in the classpath library adapter,
which we hope this committer status will make it easier to finish :)

We all expect continued participation and contribution.

Weldon, as a first step to test your almighty powers of committership,
please update the committers page on the website.  That should be a good
 (and harmless) exercise to test if everything is working.

Things to do :

1) test ssh-ing to the server people.apache.org.
2) Change your login password on the machine as per the account email
3) Add a public key to .ssh so you can stop using the password
4) Change your svn password as described in the account email

At this point, you should be good to go.  Checkout the website from svn
and update it.  See if you can figure out how.

Also, for your main harmony/enhanced/classlib/trunk please be sure that
you have checked out via 'https' and not 'http' or you can't check in.
You can switch using svn switch. (See the manual)

Finally, although you now have the ability to commit, please remember :

1) continue being as transparent and communicative as possible.  You
earned committer status in part because of your engagement with others.
 While it was a "have to" situation because you had to submit patches
and defend them, we believe it is a "want to".  Community is the key
to any Apache project.

2)We don't want anyone going off and doing lots of work locally, and
then committing.  Committing is like voting in Chicago - do it early and
often.  Of course, you don't want to break the build, but keep the
commit bombs to an absolute minimum, and warn the community if you are
going to do it - we may suggest it goes into a branch at first.  Use
branches if you need to.

3) Always remember that you can **never** commit code that comes from
someone else, even a co-worker.  All code from someone else must be
submitted by the copyright holder (either the author or author's
employer, depending) as a JIRA, and then follow up with the required
ACQs and BCC.


Again, thanks for your hard work so far, and welcome.

The Apache Harmony PPMC


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Is it OK for VM kernel class to call internal classlib API?

2006-07-07 Thread Oliver Deakin

Andrey Chernyshev wrote:

I was trying to compile the kernel classes set from DRLVM independently
from the classlib and found it difficult because kernel classes set
currently have a dependence on the internal classlib API, e.g.
org.apache.harmony.luni.internal.net.www.protocol.jar.JarURLConnection
and org.apache.harmony.luni.util.DeleteOnExit classes.

I've found the spec for kernel class org.apache.harmony.kernel.vm.VM
(l'm looking at
classlib\trunk\modules\luni-kernel\src\main\java\org\apache\harmony\kernel\vm\VM.java) 

assumes that the VM explicitly calls JarURLConnection.closeCachedFiles()
and DeleteOnExit.deleteOnExit() during VM shutdown.

On the other hand, there is standard API in J2SE called
java.lang.Runtime.addShutdownHook(Thread) which can be used to specify
the
tasks that have to be done during VM shutdown.
BTW, this is exactly how DRLVM implements these methods in it's
VM.java. For example, for VM.closeJars() it currently does like:

public static void closeJars() {
    class CloseJarsHook implements Runnable {
        public void run() {
            JarURLConnection.closeCachedFiles();
        }
    }
    ...
    Runtime.getRuntime().addShutdownHook(new Thread(new CloseJarsHook()));
}

Are there any problems with this approach, should the DRLVM (or other
Vm's) implement these methods differently?


There is a potential problem with this approach. The code that is run by the
shutdown hooks is not restricted in its behaviour and the order that the
shutdown hooks are executed in is not guaranteed. If the deleteOnExit()
and/or closeCachedFiles are not the last hooks to be executed, it is quite
possible that a subsequent shutdown hook could try to access files that
have already been deleted. The only way to guarantee that this will
never happen is to make sure that deleteOnExit() and closeCachedFiles()
are called after all shutdown hooks have completed.

Of course, how this is implemented is down to the VM developer - but
I would strongly recommend not using shutdown hooks for this purpose.



May be it makes sense just to move VM.closeJars() and
VM.deleteOnExit() methods and their drlvm implementation to the luni
classlib component, under assumption that they do not really contain
any VM-specific code?


The version of VM.java currently under luni-kernel does not contain
any VM specific code either, as all its methods simply return null :)
It does, however, give hints as to how to implement the methods
in the javadoc comments (perhaps it should clarify the reason
for not using shutdown hooks).

As described above, I think there is a problem with this implementation,
so I would not like to see it used as an example for other VM
developers.


I have noticed that the Javadoc comments for the DRLVM implementation
of deleteOnExit() and closeJars() both say:

 /**
* 1) This is temporary implementation
* 2) We've proposed another approach to perform shutdown actions.
*/

Have I missed the proposal of this other approach, or are the comments
inaccurate?


I guess it would simplify the org.apache.harmony.kernel.vm.VM interface a
bit, as well as avoid extra dependencies between the VM and classlib.



IMHO there is no problem for kernel classes to have dependencies on
classlib - they are, after all, just another part of the class library that
happens to get developed separately due to their nature.

Regards,
Oliver

--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Is it OK for VM kernel class to call internal classlib API?

2006-07-10 Thread Oliver Deakin

Andrey Chernyshev wrote:

On 7/7/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Andrey Chernyshev wrote:
 I was trying to compile the kernel classes set from DRLVM 
independently

 from the classlib and found it difficult because kernel classes set
 currently have a dependence on the internal classlib API, e.g.
 org.apache.harmony.luni.internal.net.www.protocol.jar.JarURLConnection
 and org.apache.harmony.luni.util.DeleteOnExit classes.

 I've found the spec for kernel class org.apache.harmony.kernel.vm.VM
 (l'm looking at
 
classlib\trunk\modules\luni-kernel\src\main\java\org\apache\harmony\kernel\vm\VM.java) 



 assumes that VM is calling excplicitly
 JarURLConnection.closeCachedFiles()
 and DeleteOnExit.deleteOnExit() during VM shutdown.

 On the other hand, there is standard API in J2SE called
 java.lang.Runtime.addShutdownHook(Thread) which can be used to specify
 the
 tasks that have to be done during VM shutdown.
 BTW, this is exactly how DRLVM implements these methods in it's
 VM.java. For example, for VM.closeJars() it currently does like:

 public static void closeJars() {
 class CloseJarsHook implements Runnable {
 public void run() {
 JarURLConnection.closeCachedFiles();
 }
 }
...
Runtime.getRuntime().addShutdownHook(new Thread(new
 CloseJarsHook()));
 }

 Are there any problems with this approach, should the DRLVM (or other
 Vm's) implement these methods differently?

There is a potential problem with this approach. The code that is run 
by the

shutdown hooks is not restricted in its behaviour and the order that the
shutdown hooks are executed in is not guaranteed. If the deleteOnExit()
and/or closeCachedFiles are not the last hooks to be executed, it is 
quite

possible that a subsequent shutdown hook could try to access files that
have already been deleted. The only way to guarantee that this will
never happen is to make sure that deleteOnExit() and closeCachedFiles()
are called after all shutdown hooks have completed.

Of course, how this is implemented is down to the VM developer - but
I would strongly recommend not using shutdown hooks for this purpose.


Thanks Oliver, this explanation sounds reasonable.
If the issue is just in the shutdown hook order, then maybe it still makes
sense to add a VM.addClasslibShutdownHook(Runnable) method or something
like that, which can be:
- used by the classlib to do whatever resource cleanup / shutdown
work it needs;
- guaranteed by the VM to always be executed after Runtime's shutdown
hooks are done (this can be specified in the contract)?
This approach looks more universal than having the two specific methods
closeJars() and deleteOnExit(). It would also allow the VM to not call
internal classlib API explicitly.


I think the same problem exists with this approach. You still need to
guarantee that closeCachedFiles() and deleteOnExit() are the last classlib
shutdown hooks that are called, otherwise they may cause problems
with any classlib shutdown hooks called after them. Once you put
a restriction on the classlib shutdown hooks that closeCachedFiles() and
deleteOnExit() will be that last ones called, you are basically
in a similar situation to just calling the methods explicitly after all
shutdown hooks complete.

I think it would be fine to add an addClasslibShutdownHook() method
to VM if it was needed, but at the moment I don't believe it is. You are
always going to be in the position of requiring closeCachedFiles()
and deleteOnExit() to run last, and adding an extra shutdown hook
mechanism will not change that.

If you feel uncomfortable with the vm directly calling
JarURLConnection.closeCachedFiles() and
DeleteOnExit.deleteOnExit(), then perhaps those calls could
be moved into VM.java? So, for example, you could add a private
method to DRLVM's VM.java which is called by the vm after running
all shutdown hooks, and would be responsible for calling closeCachedFiles()
and deleteOnExit(). This way the vm is only calling methods within the VM
class, and not directly into classlib internals.
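
Just to make that concrete, a rough sketch of what I mean is below. The
method name classlibShutdown() and the idea that the VM invokes it itself
after the last shutdown hook are purely illustrative - this is not DRLVM's
actual code - and it assumes the Harmony luni classes are on the
bootclasspath:

package org.apache.harmony.kernel.vm;

import org.apache.harmony.luni.internal.net.www.protocol.jar.JarURLConnection;
import org.apache.harmony.luni.util.DeleteOnExit;

public class VM {

    // ... existing kernel methods elided ...

    /*
     * Called by the VM itself after every Runtime shutdown hook has
     * completed, just before the process exits. Routing the cleanup
     * through here means only VM.java knows about the classlib
     * internals - the VM never references them directly.
     */
    private static void classlibShutdown() {
        JarURLConnection.closeCachedFiles();
        DeleteOnExit.deleteOnExit();
    }
}

The point is simply that the only class which knows about JarURLConnection
and DeleteOnExit is VM.java, and the VM guarantees to call into it only
after the last Runtime shutdown hook has finished.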







 May be it makes sense just to move VM.closeJars() and
 VM.deleteOnExit() methods and their drlvm implementation to the luni
 classlib component, under assumption that they do not really contain
 any VM-specific code?

The version of VM.java currently under luni-kernel does not contain
any VM specific code either, as all its methods simply return null :)
It does, however, give hints as to how to implement the methods
in the javadoc comments (perhaps it should clarify the reason
for not using shutdown hooks).

As described above, I think there is a problem with this implementation,
so I would not like to see it used as an example for other VM
developers.


I have noticed that the Javadoc comments for the DRLVM implementation
of deleteOnExit() and closeJars() both say:

 /**
* 1) This is temporary implementation
* 2) We've proposed another approach to perform shutdown actions.
*/

Have I missed the proposal of this other

Re: [classlib] Testing conventions - a proposal

2006-07-10 Thread Oliver Deakin

Geir Magnusson Jr wrote:

Oliver Deakin wrote:
  

George Harley wrote:


Hi,

Just seen Tim's note on test support classes and it really caught my
attention as I have been mulling over this issue for a little while
now. I think that it is a good time for us to return to the topic of
class library test layouts.

The current proposal [1] sets out to segment our different types of
test by placing them in different file locations. 
  

ok - on closer reading of this document, I have a few gripes...

First, what happened to the Maven layout we agreed on a while ago?



Maven layout?  We were doing that layout in Jakarta projects long
before maven
  


Interesting - I hadn't realised that was the case. However, it still
doesn't explain the missing java directory ;)


This is a fun thread.  I plan to read it from end to end later today and
comment.

Initial thoughts are that I've been wanting to use TestNG for months
(hence my resistance to any JUnit deps more than we needed to) and
second, annotations won't solve our problems.  More later :)
  


No, annotations will not solve *all* our problems - but, as you probably
already know, they may solve some of those recently discussed on this
list when used in conjunction with TestNG (such as platform specific
tests, test exclusions etc.).

Regards,
Oliver



geir

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Using Visual Studio C++ Express to compile classlib - fails.

2006-07-10 Thread Oliver Deakin
Sorry I haven't replied sooner Thorbjørn - I have been off list the last few
days or so, and am still wading through all the mail. I think Mark's
response summed up what I was thinking pretty well, but I'll go through and
check...

Mark Hindess wrote:

On 5 July 2006 at 13:39, =?ISO-8859-1?Q?Thorbj=F8rn_Ravn_Andersen?= [EMAIL 
PROTECTED] wrote:
  

Oliver Deakin skrev  den 27-06-2006 12:25:


Do you mean the header files in deploy/include? If so, the reason
they are copied there is so that they are in a shared location for
all modules. (In fact it's the same reason that libs are built into
deploy/lib and makefile includes are copied into deploy/build/make).
  
So it is basically a platform agnostic symbolic link? 



No.  It is intended that modules can build against the resulting deploy
tree without having to know about the structure/location of the other
modules.
  


Exactly - the deploy directory will contain all build dependencies for
modules in Harmony classlib. There is no direct internal dependency at build
time from one module to another (i.e. for header file locations etc.), only
a dependency on the contents of the deploy directory. A module can then just
add the deploy/include and deploy/jdk/include directories to its include
path, and expect all its dependencies to be satisfied.

  

Personally I do not like doing it so, would it be possible to do it
with -I's instead so we do not have redundant copies lying around?



You could use -I flags but then each module would need to know the
location of the header files of the other modules.  It would also
mean we'd need to separate the header that describe internal APIs and
external APIs within modules making the include paths within modules
more complicated.  Now it is quite clear what represents the external
API - only the headers in deploy.
  


Yup - the headers in deploy create a native interface layer between the 
modules.


  

As a consequence, they could also *only* checkout the module they
are interested in, rather than the whole of classlib/trunk, and
still be able to rebuild their altered code.
  

I have heard this discussion, but I am not convinced that having lots
of different building enviroments is a good idea.



But the key here is that even when you check out everything you are
still building against deploy, so in fact the build environment from the
perspective of a single module is actually identical _not_ different.
(That is, no matter how a module is built it's always building against
what is in deploy and never peeks directly into other modules.)  I think
this is important to ensure that modules are always a well-defined unit
that can be replaced.
  


Agreed - the idea was to make the modules depend only on the
shape/content of the deploy directory, and not the other modules. So if a
module changes shape, disappears or gets added, as long as it puts the
right stuff into deploy then all the other modules are happy.

Regards,
Oliver

  

I can however also appreciate tolerable build times :)



Agreed, but I don't think that is the main motivation.

Regards,
 Mark.





-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Is it OK for VM kernel class to call internal classlib API?

2006-07-11 Thread Oliver Deakin

Andrey Chernyshev wrote:

On 7/10/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Andrey Chernyshev wrote:
snip

 Thanks Oliver, this explanation sounds reasonable.
 If the issue is just in the shutdown hooks order, then still maybe
 it makes sense to add a VM.addClasslibShutdownHook(Runnable) method or
 something like that, which can be:
 - used by the classlib to do whatever resource cleanup / shutdown
 work they need;
 - guaranteed by the VM to be executed always after Runtime's shutdown
 hooks are done (can be specified in the contract)?
 This approach looks more universal than having two specific methods
 closeJars() and deleteOnExit(). It will also allow the VM to avoid
 calling internal classlib API explicitly.

I think the same problem exists with this approach. You still need to
guarantee that closeCachedFiles() and deleteOnExit() are the last classlib
shutdown hooks that are called, otherwise they may cause problems


I think class lib could add just one shutdown hook which is guaranteed
by VM to be run after all other (e.g. application's) shutdown hooks
are completed.


How will the VM know which shutdown hook is the right one?


Within this shutdown hook, classlib are free to choose whatever order
they like, e.g. run all others hooks (whatever may appear in classlib
in the future) and then finally call closeCachedFiles() and
deleteOnExit().

My point was that, if classlib do really care about the shutdown
sequence of its internal resources, it should be classlib who defines
the shutdown hooks execution order, not the VM. IMHO there is no need to
bring this up to the classlib-VM contract.


Isn't adding a special shutdown hook that the classlib requires and
the VM runs last another classlib-VM contract?
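
Just so we are comparing the same thing, the single-hook arrangement, as I
understand it, would look roughly like the sketch below (the class and
method names are invented, not an agreed API):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Sketch only: classlib registers its own hooks internally and hands
    // the VM exactly one Runnable, which the VM promises to run after all
    // application shutdown hooks have completed.
    final class ClasslibShutdownHooks {

        private static final List hooks = new ArrayList();

        /* classlib-internal registration point (hypothetical) */
        static synchronized void addHook(Runnable hook) {
            hooks.add(hook);
        }

        /* the one Runnable the VM is asked to run last of all */
        static final Runnable CLASSLIB_HOOK = new Runnable() {
            public void run() {
                synchronized (ClasslibShutdownHooks.class) {
                    // run whatever hooks classlib registered, in classlib's
                    // chosen order...
                    for (Iterator i = hooks.iterator(); i.hasNext();) {
                        ((Runnable) i.next()).run();
                    }
                }
                // ...and finish with the two calls discussed in this thread
                // (the real entry points are classlib-internal, omitted here).
            }
        };
    }

The VM's only promise in that sketch is to run CLASSLIB_HOOK after all
application shutdown hooks; everything else is classlib's own ordering.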




with any classlib shutdown hooks called after them. Once you put
a restriction on the classlib shutdown hooks that closeCachedFiles() and
deleteOnExit() will be the last ones called, you are basically
in a similar situation to just calling the methods explicitly after all
shutdown hooks complete.


I agree the situation is exactly the same in terms of the code being
executed, the only question is where this code is located - VM.java
(which is a part of VM, right?) or classlib. I think the good
interface between VM and classlib would minimize VM's knowledge about
classlib and vice versa. In the ideal situation, I would limit it to a
subset of J2SE API.


But remember that the kernel classes are the Java part of the interface
between the VM and classlib, not wholly a part of the VM. They are
implemented by the VM vendor because they require internal knowledge of
the VM (or vice versa) - this does not mean that they cannot have internal
knowledge of the class libraries also.





I think it would be fine to add an addClasslibShutdownHook() method
to VM if it was needed, but at the moment I don't believe it is. You are


well, to be precise, I'm suggesting to replace the existing VM.closeJars()
and VM.deleteOnExit() with VM.addClasslibShutdownHook(), not just to
add it :)


always going to be in the position of requiring closeCachedFiles()
and deleteOnExit() to run last, and adding an extra shutdown hook
mechanism will not change that.


I thought of:
- VM could guarantee that the classlib shutdown hook is run last;
- Classlib could guarantee that closeCachedFiles() and deleteOnExit()
are called last within the classlib shutdown hook.


This seems to me like saying the VM will only know of one special
method to run instead of two. The difference is small IMHO, and
I cannot see much advantage. I can understand what you are trying to
do, but I don't think the effort is necessary.

Also, these are not the only instances of the kernel classes
having knowledge of classlib internals (Runtime class uses
org.apache.harmony.luni.internal.process.SystemProcess.create()
for example), so making this change will not resolve all the
VM-classlib contracts.

Regards,
Oliver





If you feel uncomfortable with the vm directly calling
JarURLConnection.closeCachedFiles() and
DeleteOnExit.deleteOnExit(), then perhaps those calls could
be moved into VM.java? So, for example, you could add a private
method to DRLVM's VM.java which is called by the vm after running
all shutdown hooks, and would be responsible for calling
closeCachedFiles() and deleteOnExit(). This way the vm is only calling
methods within the VM class, and not directly into classlib internals.
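
Something along these lines, for example (a sketch only - the method name
is invented, and the two calls are the ones already mentioned above, with
their imports/packages omitted since they are classlib-internal details):

    // Sketch of the suggestion (not an agreed API): VM.java keeps the two
    // classlib-internal calls in one private method, and the VM's native
    // shutdown code invokes it once all Runtime shutdown hooks have run.
    final class VM {

        private static void shutdownClasslib() {
            // Both calls are classlib-internal; imports are omitted here.
            JarURLConnection.closeCachedFiles();
            DeleteOnExit.deleteOnExit();
        }
    }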


Yes, of course the dependencies could be localized within VM java.
However, I don't think it really matters where these dependencies are
coming from - VM.java or any other VM file, we either have them
between VM and classlib or not :)






 
  Maybe it makes sense just to move the VM.closeJars() and
  VM.deleteOnExit() methods and their drlvm implementation to the luni
  classlib component, under the assumption that they do not really contain
  any VM-specific code?

 The version of VM.java currently under luni-kernel does not contain
 any VM specific code either, as all its

Re: [general] milestones and roadmap (round 1 summary)

2006-07-11 Thread Oliver Deakin

Geir Magnusson Jr wrote:

I think this captures the input so far w/ a minimum of editorializing on
my part for now :)  let me know if anything was left off, or if there
are new things to be added
  


Thanks! A useful reminder :)

One thing I can think of to be added is improved test coverage - I take it
the coverage information at http://wiki.apache.org/harmony/Coverage_information
is still being updated regularly? If so, it's a good indicator of where
people can pick up jobs.

Also I seem to remember us discussing splitting snapshots into JRE, JDK
and HDK flavours. I think it received general agreement at the time - is
this something worth adding to the snapshots category?

Regards,
Oliver


General
===
- switch to java 5

- get some of the principal JDK tools in place

- use system libraries, dynamically where appropriate - libz,
  libpng, libjpeg, liblcms, libicu*, etc.

- modularity
  -- DRLVM - refine and document the internal interfaces and API
 such that one can substitute parts of VM (ex, can the MMTk
 activities be a first step?)
  -- classlib - is there opportunity for refactoring classlib
 natives to be more modular WRT portlib?

Build/Test Framework

- regular schedule for snapshots.  Maybe every two weeks for now?
  -- classlib
  -- classlib + DRLVM
  -- classlib + classlib adapter + jchevm

- Build/CI/test framework - Mark/IBM get us booted?
  -- make it easy for anyone to setup the CI infrastructure
 and report back to a website here

- Performance
  -- measure baseline
  -- start looking for hotspots

- Stability and reliability
  -- stress testing

- application-driven advancement
  -- which apps work
  -- tool for users to generate missing classes reports for us

- JCK
  -- acquire
  -- integrate

- federated build
  -- agreement between parts on things like debug/release flag,
 structure of artifacts (model after classlib for now),
 common dependency pool where possible, etc

Classlib

- concurrency : integration of Doug Lea's java.util.concurrency package.
  -- Nathan is looking at it
  -- need support from DRLVM + JCHEVM

- CORBA - yoko?

- JMX
  -- Mark has MX4J in place
  -- need to see if we can host MX4J as separate distributable
 of the Harmony project

- package completion roadmap

Ports
=
- em64t platform support
- ipf platform support
- amd64
- linux/ppc64
- osx/intel
- osx/ppc

Community
=
- make things accessible to users

- increase committer pool

- get out of incubator
  -- I think it's too early now, and we aren't suffering
 being in here, so I'd prefer to drop this one...


(p.s. I just got an osx/intel box, so I'm really hoping that port is easy...)

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] debug compilation as default

2006-07-11 Thread Oliver Deakin

Mark Hindess wrote:

On 11 July 2006 at 16:42, Ivan Volosyuk [EMAIL PROTECTED] wrote:
  

[snip]

Working on it. Not sure I like the way make is called from ant build.
Here is an example:

(from modules/luni/build.xml)

<target name="clean.native" if="is.windows">
    <exec failonerror="true"
          executable="${make.command}"
          dir="${hy.luni.src.main.native}/vmi/${hy.os}">
        <env key="HY_HDK" value="${hy.hdk}" />
        <arg line="clean" />
    </exec>
    <exec failonerror="true"
          executable="${make.command}"
          dir="${hy.luni.src.main.native}/luni/${hy.os}">
        <env key="HY_HDK" value="${hy.hdk}" />
        <arg line="clean" />
    </exec>
    <exec failonerror="true"
          executable="${make.command}"
          dir="${hy.luni.src.main.native}/launcher/${hy.os}">
        <env key="HY_HDK" value="${hy.hdk}" />
        <arg line="clean" />
    </exec>
    <exec failonerror="true"
          executable="${make.command}"
          dir="${hy.luni.src.main.native}/vmls/${hy.os}">
        <env key="HY_HDK" value="${hy.hdk}" />
        <arg line="clean" />
    </exec>
</target>

This means that I should copy paste the environment variable from ant
variable conversion code in dozen of places. BTW, why the clean up is
just windows specific? What about Linux?



Good question.  I look forward to Oliver's answer. ;-)

  


Thanks Mark ;) This is just a mistake I've made in the Ant script - the
if="is.windows" bit definitely shouldn't be there.

Shall I open a JIRA for its removal, or are you happy to just take the if
test out Mark?


I'm going to create some kind of macro command which will include all
common settings for make execution. (/me is reading manuals)



Excellent idea.  Something like (untested):

  <make dir="${hy.luni.src.main.native}/vmls/${hy.os}" target="clean" />

and:

  <macrodef name="make">
    <attribute name="dir" />
    <attribute name="target" default="" />
    <sequential>
      <exec failonerror="true"
            executable="${make.command}"
            dir="@{dir}">
        <env key="HY_HDK" value="${hy.hdk}" />
        <arg line="@{target}" />
      </exec>
    </sequential>
  </macrodef>

(You might have to make the default for target all.)

At the moment the only common file you could put this in is
properties.xml which isn't elegant but it might be okay for now.  We
really need to have a common file that gets moved to the deploy tree -
like the make fragments.
  


That is definitely a good idea. I did it the long way because I don't really
know a great deal about Ant macros - I was more interested in getting it
working first, and was hoping that someone would come along and simplify
it later.

Regards,
Oliver


Regards,
 Mark.



-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [drlvm] using the harmony launcher

2006-07-13 Thread Oliver Deakin
 the fact that an exception already exists
on entering DestroyJavaVM, and clear it before trying to resolve
the VMStart class.



(3)
CreateJavaVM can only be called once for now – many internal data
structures in DRLVM are kept as global variables (jni_env, java_vm,
Global_Env etc.). Therefore, it will be hard to organize multiple
instances of JavaVM unless all these things are encapsulated
somewhere (into JNIEnv?).

(4)
Launcher wants the vm dll in the default directory unless the option
is specified. Should we realign the drlvm build output and move all
dll's into the default subdir?


yup, or put it into a deploy/jdk/jre/bin/drlvm directory and use the
launcher
options to specify its location.



(5)
What to do with the _org.apache.harmony.vmi.portlib option that
launcher is offering to VM?


This passes the classlib port library function table to the VM. I think
this just makes the classlib port library available to the VM for it to
use/query if it wishes. I think it is fine for the drlvm to ignore this
option
if it wants to.

Regards,
Oliver



Most likely there are more issues that I'm overlooking at the moment.
Please consider the suggested patch a workaround to make things
work; I'm wondering if there is a more graceful way to do this.

Thanks,
Andrey.


On 7/11/06, Andrey Chernyshev [EMAIL PROTECTED] wrote:

OK, so I'm going to add CreateJavaVM into vm\vmcore\src\jni\jni.cpp
and also add implementation into DestroyVM (stub is already seem to be
present there) based on destroy_vm(). Then we'll see how it works with
the launcher.

Thanks,
Andrey.


On 7/11/06, Geir Magnusson Jr [EMAIL PROTECTED] wrote:
 This has been my thinking - even if not perfect, lets get it working
 using the launcher and then fix as required. It's arguable if that
 brokenness matters at this point, and I think that there's plenty to
 be gained from having it work via the launcher.

 geir

 Rana Dasgupta wrote:
  create_vm() looks quite close/complete to being a complete
prototype for
  CreateJavaVM,
  but I think more work is needed in DestroyVM which prototypes
DestroyJavaVM
  for functional completeness. It is non waiting on user threads,
it does not
  send the corresponding JVMTI shutdown events, I also don't know
if it
  handles shutdown hooks cleanly ( but these may not be critical
right now
  for hooking up to the launcher ). What do you think?
 
  When I ran a non trivial test.. upto 32 threads instantiating a
very large
  number of objects with -XcleanupOnExit which uses
DestroyVM, it
  exited cleanly. Maybe OK to hookup and fix bugs as we go.
 
  Thanks,
  Rana
 
 
  On 7/10/06, Andrey Chernyshev [EMAIL PROTECTED] wrote:
 
  Yes, it seems like the launcher will need at least
JNI_CreateJavaVM
  and DestroyJavaVM functions.
 
  I couldn't find implementation for CreateJavaVM in drlvm codebase.
  Perhaps create_vm() function in vm\vmcore\src\init\vm_main.cpp
can be
  adopted for that purpose?
  Is there are any tricks and caveats one should be aware of before
  trying to produce CreateJavaVM from it?
 
  I've also seen a prototype for DestroyJavaVM in
  vm\vmcore\src\init\vm.cpp - comment says it needs to be
improved to
  wait till all Java threads are completed.
 
  Any more ideas what needs to be done to implement those?
 
  Thanks,
  Andrey.
 
 
 
-
  Terms of use : http://incubator.apache.org/harmony/mailing.html
  To unsubscribe, e-mail:
[EMAIL PROTECTED]
  For additional commands, e-mail:
[EMAIL PROTECTED]
 
 
 

 -
 Terms of use : http://incubator.apache.org/harmony/mailing.html
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




--
Andrey Chernyshev
Intel Middleware Products Division






--
Oliver Deakin
IBM United Kingdom Limited


Re: [announce] New Apache Harmony Committer : Paulex Yang

2006-07-17 Thread Oliver Deakin

Congratulations Paulex! :)

Geir Magnusson Jr wrote:

Please join the Apache Harmony PPMC in welcoming the project's newest
committer, Paulex Yang.

Mark has demonstrated the elements that help build a healthy community,
namely his ability to work together with others, continued dedication to
the project, an understanding of our overall goals of the project, and
a really unnatural and probably unhealthy focus on nio :)

We all continue to expect great things from him.

Mark, as a first step to test your almighty powers of committership,
please update the committers page on the website.  That should be a good
(and harmless) exercise to test if everything is working.

Things to do :

1) test ssh-ing to the server people.apache.org.
2) Change your login password on the machine as per the account email
3) Add a public key to .ssh so you can stop using the password
4) Change your svn password as described in the account email

At this point, you should be good to go.  Checkout the website from svn
and update it.  See if you can figure out how.

Also, for your main harmony/enhanced/classlib/trunk please be sure that
you have checked out via 'https' and not 'http' or you can't check in.
You can switch using svn switch. (See the manual)

Finally, although you now have the ability to commit, please remember :

1) continue being as transparent and communicative as possible.  You
earned committer status in part because of your engagement with others.
 While it was a "have to" situation because you had to submit patches
and defend them, we believe it is a "want to".  Community is the key
to any Apache project.

2) We don't want anyone going off and doing lots of work locally, and
then committing.  Committing is like voting in Chicago - do it early and
often.  Of course, you don't want to break the build, but keep the
commit bombs to an absolute minimum, and warn the community if you are
going to do it - we may suggest it goes into a branch at first.  Use
branches if you need to.

3) Always remember that you can **never** commit code that comes from
someone else, even a co-worker.  All code from someone else must be
submitted by the copyright holder (either the author or author's
employer, depending) as a JIRA, and then follow up with the required
ACQs and BCC.

Again, thanks for your hard work so far, and welcome.

The Apache Harmony PPMC

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


  


--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [drlvm] using the harmony launcher

2006-07-17 Thread Oliver Deakin

Andrey Chernyshev wrote:

On 7/13/06, Oliver Deakin [EMAIL PROTECTED] wrote:

Andrey Chernyshev wrote:
SNIP!


or:
launcher calls CreateJavaVM()
CreateJavaVM() passes call to create_vm()
create_vm() makes its usual calls and returns. A flag is set to
indicate that VMStart still needs to be run.
create_vm() and CreateJavaVM() both exit, returning control to the
launcher
launcher makes a call to run some Java code (possibly main method)
drlvm picks up flag indicating VMStart needs to be executed and runs
it before the specified class

The first approach seems better to me, since it gets all of the vm
initialization
completed within the CreateJavaVM() call.


I also like the first approach, it would be strange behavior for JNI
CallXMethod to run the calls in a separate thread if some special flag
is found.




Does this answer your question? Any other ideas?


I don't think there is an issue with not running of certain parts of
VMStart: one can divide it into 3 different parts:
- initialization;
- running main() method of the user app (in a separate thread)
- shutdown.

The code for VMStart's init() and shutdown() is equally run regardless
of whether drlvm is started with its own or the classlib launcher. The
difference is only in running the user app main() method. The classlib
launcher calls it directly, through JNI, while drlvm's launcher
wraps it into the run() method and always runs it in a separate thread.
I suspect there could be some difference in the classloaders used for
the user app code; perhaps the drlvm classloading experts could give
some more details.


If we separate initialisation and execution of the main() method in VMStart
so that CreateJavaVM works, is there actually a need to keep the
main() method execution in there? I can't think of a reason to launch
the main() method in a separate thread (perhaps, as you suggest, some
drlvm experts can help with this one) - if there is no reason to do
this, then I would probably suggest either altering the drlvm launcher
to use a more conventional launcher approach (CreateJavaVM followed
by CallStaticMethod) or abandoning the drlvm launcher in favour
of the classlib one. Either way, I think the start() method in VMStart
could probably be removed.



Thanks,
Andrey.





 (2)
 If I pass a wrong app class name to the classlib launcher, drlvm
 reports class not found exception and then is crashed. This happens
 because the classlib launcher, once it fails to run the app class,
 reports an exception (with ExceptionDescribe) but doesn't clear it
 (doesn't call ExceptionClear). Then it immediately goes with
 DestroyJavaVM those current implementation in drlvm doesn't expect
 that there is some pending exception object in the TLS.
 Eventually, destroy_vm fails with assert in the class loading code
 while resolving VMStart class (VMStart holds the Java part of the
 shutdown method), because it mistakenly picks up the ClassNotFound
 exception object. It is remaining from unsuccessful attempt of
 classlib launcher to run the app's class main method.

 The question is, who's responsibility should be to clear the exception
 object in that case? I tend to think that classlib launcher should be
 doing this once it takes the responsibility to process the possible
 exceptions while running the app main class.

Although the classlib launcher should probably tidy up after itself and
call ExceptionClear, I don't believe that there is a spec requirement to
clear pending exceptions before calling DestroyJavaVM. Therefore
any launcher could call DestroyJavaVM with an exception pending,
and drlvm would throw a ClassNotFound.
IMHO drlvm should handle the fact that an exception already exists
on entering DestroyJavaVM, and clear it before trying to resolve
the VMStart class.


Yes, this sounds reasonable. Then, what should be the expected
behavior for DestroyVM in case it finds a pending exception - should it
silently ignore it, or report a warning, or what? The JNI spec doesn't seem
to specify these details.


Agreed, the JNI spec isn't too clear on this matter.
In JNI, handling of uncaught exceptions is expected to be done
by the developer (ie using ExceptionOccurred, ExceptionDescribe,
ExceptionClear) rather than the VM. This leads me to think that if
any exceptions have not been cleared by the JNI code before
DestroyJavaVM() is called, then the developer will expect the VM
to just silently ignore them and carry on shutting down.

So, IMHO one of the first things that should be done after entering
DestroyJavaVM is to check for pending exceptions, and clear them
if they're going to interfere with the shutdown sequence.

I've just created a simple launcher that:
 - creates a Java VM
 - invokes the main method of a class that throws a RuntimeException
 - calls DestroyJavaVM without clearing the pending exception
Both the RI and IBM VMs exit without any complaints, so I think that
drlvm should match this behaviour also.
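
(For reference, the Java side of that experiment is trivial - something
like the class below, with the launcher then calling DestroyJavaVM without
touching the pending exception; the class name here is arbitrary.)

    // Minimal test class whose main() throws, leaving a pending exception
    // in the launcher's JNI environment when DestroyJavaVM is called.
    public class ThrowFromMain {
        public static void main(String[] args) {
            throw new RuntimeException("left pending for DestroyJavaVM");
        }
    }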

Regards,
Oliver






 (3)
 CreateJavaVM can only be called once

Re: [drlvm] using the harmony launcher

2006-07-17 Thread Oliver Deakin
 which uses
 DestroyVM, it
   exited cleanly. Maybe OK to hookup and fix bugs as we go.
  
   Thanks,
   Rana
  
  
   On 7/10/06, Andrey Chernyshev [EMAIL PROTECTED] wrote:
  
   Yes, it seems like the launcher will need at least 
JNI_CreateJavaVM

   and DestroyJavaVM functions.
  
   I couldn't find implementation for CreateJavaVM in drlvm 
codebase.

   Perhaps create_vm() function in vm\vmcore\src\init\vm_main.cpp
 can be
   adopted for that purpose?
   Is there are any tricks and caveats one should be aware of 
before

   trying to produce CreateJavaVM from it?
  
   I've also seen a prototype for DestroyJavaVM in
   vm\vmcore\src\init\vm.cpp - comment says it needs to be 
improved to

   wait till all Java threads are completed.
  
   Any more ideas what needs to be done to implement those?
  
   Thanks,
   Andrey.
  
  
  
 -
   Terms of use : http://incubator.apache.org/harmony/mailing.html
   To unsubscribe, e-mail: 
[EMAIL PROTECTED]

   For additional commands, e-mail:
 [EMAIL PROTECTED]
  
  
  
 
  
-

  Terms of use : http://incubator.apache.org/harmony/mailing.html
  To unsubscribe, e-mail: 
[EMAIL PROTECTED]
  For additional commands, e-mail: 
[EMAIL PROTECTED]

 
 


 --
 Andrey Chernyshev
 Intel Middleware Products Division




--

Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.

-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]







--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [classlib] Testing conventions - a proposal

2006-07-18 Thread Oliver Deakin
 different kinds of tests and permit the exclusion of 
individual tests/groups of tests [3]. I would like to strongly 
propose that we consider using TestNG as a means of providing the 
different test configurations required by Harmony. Using a 
combination of annotations and XML to capture the kinds of 
sophisticated test configurations that people need, and that allows 
us to specify down to the individual method, has got to be more 
scalable and flexible than where we are headed now.


Thanks for reading this far.

Best regards,
George


[1] 
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html 


[2] http://testng.org
[3] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED] 







-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Oliver Deakin
IBM United Kingdom Limited


-
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


