Re: Docs now use Daisy-wiki defined URLspace

2005-10-29 Thread Thor Heinrichs-Wolpert
Not sure if this is what you wanted, but I made the blue background  
transparent.

Cheers,
Thor HW

On 29-Oct-05, at 11:31 AM, Ross Gardler wrote:


Pier Fumagalli wrote:


On 29 Oct 2005, at 17:18, hepabolu wrote:

On my brand new Powerbook, the background color of the Apache Cocoon
logo (left, with feather) has a different shade of blue as its
background, while I can't remember having seen that on my Windows
machine.


AFAICT colors are identical. Anyone else seeing this?

Yes... I'm seeing it, but I think it's because of the color   
management done on the Mac...




I'm no graphics person, can anyone create a version with a  
transparent background?


http://svn.apache.org/repos/asf/cocoon/whiteboard/daisy-to-docs/src/documentation/content/xdocs/images/cocoon-logo.gif


Ross






osgi in trunk - runtime failure

2005-08-15 Thread Thor Heinrichs-Wolpert
I see knopflerfish in the trunk, but to compile it I have to copy the  
knopflerfish jars from core to /lib/osgi/knopflerfish.


I have built it using
  build osgi
  cocoon osgi or ./cocoon.sh osgi
as described in the README.osgi

When I try to run it, I get this error:

  Error: Command -install build/osgi/org.apache.cocoon_1.0.0.jar failed,
  Failed to install bundle: java.util.zip.ZipException: No such file or directory


The file is there, and the template right beside it loads fine, so
I'm assuming there is a missing dependency somewhere.


Any hints?

Cheers,
Thor HW




oscar or knopflerfish

2005-08-15 Thread Thor Heinrichs-Wolpert
Is Cocoon going to use Knopflerfish or Oscar for its default OSGi
container?


Cheers,
Thor HW


Re: [RT] Micro kernel based Cocoon

2005-05-20 Thread Thor Heinrichs-Wolpert
Daniel:
Check out RIO, which is a QoS-oriented system based upon Jini.  It
has either completed its relicensing to the Apache 2.0 license, or
soon will have.  This is the infrastructure used in Sun's RFID
initiative and in their Formula 1 race car monitoring system.

I can poorly describe RIO as a framework for service beans, their  
dependencies and the QoS the components expect.

Cheers,
Thor HW
On 20-May-05, at 6:38 AM, Daniel Fagerstrom wrote:
Sylvain proposed [1] to base blocks on the OSGi service platform [2] 
[3]. After having studied it in more detail I'm completely  
convinced that it is the way to go.

OSGi

The OSGi service platform is a standardized, component-oriented
computing environment for networked services. It handles bundles,
which from a deployment perspective can be any jar with some extra
meta info in the manifest. Each bundle declares its dependencies on
other bundles and what packages it exposes. The framework takes
care of classloader isolation, and there is also support for hot
deployment (which requires more work in writing the bundles).

There is lifecycle support for dynamic installation, start, stop,
update and uninstallation of bundles. There is a service layer with
registration and lookup of services. A number of APIs for standard
services have also been defined [1], e.g. log, configuration, user
admin and http services.
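
To make the lifecycle and service layer concrete, here is a rough
sketch of a bundle activator (the Runnable service is just a
stand-in, not a proposal for an actual Cocoon interface):

  import java.util.Hashtable;
  import org.osgi.framework.BundleActivator;
  import org.osgi.framework.BundleContext;
  import org.osgi.framework.ServiceRegistration;

  // Named in the bundle manifest via the Bundle-Activator header.
  public class ExampleActivator implements BundleActivator {

      private ServiceRegistration registration;

      // Called by the framework when the bundle is started.
      public void start(BundleContext context) {
          registration = context.registerService(
              Runnable.class.getName(),
              new Runnable() {
                  public void run() { System.out.println("hello from the bundle"); }
              },
              new Hashtable());  // service properties
      }

      // Called by the framework when the bundle is stopped.
      public void stop(BundleContext context) {
          registration.unregister();
      }
  }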

The OSGi specification is currently at its 3rd release. It is used
as the kernel for Eclipse (since 3.0), where each plugin is a
bundle. It is also used for embedded applications, e.g. BMW's 5
series, mobile phones etc.

There are 12 compliant implementations and at least 3 with
friendly licenses: the Eclipse kernel [4] (release 3+),
Knopflerfish [5] (release 3) and Oscar [6] (release 3-). There is
also a bundle repository [7]. The Eclipse OSGi contains some extra
functionality that probably will be part of OSGi release 4.
Knopflerfish is more lightweight and has a minimal framework
distribution of only 200kB.

Alternatives

So what would be the alternatives to using OSGi? We have Pier's
kernel, Metro [8] and Geronimo's GBeans [9]. Pier's kernel, AFAICS,
solves a subset of what OSGi does, and IMO we shouldn't base
something as important as the Cocoon kernel on a one-man show if we
can avoid it. Using Metro will just not happen due to community
reasons.

So GBeans seem like the only serious alternative. I don't know
enough about GBeans to be able to evaluate it. But it's much earlier
in its development, it is not a standard and there is only one
implementation, so it should IMO have a considerable technical
advantage over OSGi to be used instead. Also, I would assume that
the fact that Eclipse is based on OSGi means it would be much
easier to write various Cocoon tools if we base Cocoon on OSGi. We
already have people in the community (Sylvain, maybe other Eclipse
developers?) with previous experience in using it.

Getting it done
===
As just about every component framework ever conceived has already
been proposed to solve the real blocks problem, and we still don't
have blocks, I would like to be a little bit more specific about how
we could actually get there with OSGi. My main design criterion
(except for solving the problem ;) ) is that we should have an
incremental, evolutionary approach. No next generation, no new SVN
branches, no major rewrites.

The rest will by necessity be more technical.
Cocoon bundles
--
The first step is to make Cocoon OSGi compliant by packaging the
core and the blocks as (passive library) bundles. It just means
that we add some meta info to the manifest files of the jars for
core and blocks. Most of the info is already available in gump.xml
and can be automatically added to the manifests by the build
system. For each block and core we need to decide what it exposes,
which initially could be everything, and its dependency on other
blocks and libraries. It would also be an advantage to package all
the jars that are used by more than one block as a bundle.
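
As a rough sketch (the block name, packages and version below are
made up), the build step could stamp headers like these into each
jar with something as small as:

  import java.io.FileOutputStream;
  import java.util.jar.Attributes;
  import java.util.jar.Manifest;

  // Writes the OSGi meta info for one (hypothetical) block; a real build
  // would derive the values from gump.xml instead of hard-coding them.
  public class ManifestStamper {
      public static void main(String[] args) throws Exception {
          Manifest mf = new Manifest();
          Attributes main = mf.getMainAttributes();
          main.put(Attributes.Name.MANIFEST_VERSION, "1.0");
          main.putValue("Bundle-Name", "cocoon-block-example");
          main.putValue("Bundle-Version", "1.0.0");
          // What the block exposes (initially this could be everything).
          main.putValue("Export-Package", "org.example.block.api");
          // Its dependencies on core, other blocks and shared libraries.
          main.putValue("Import-Package", "org.example.core, org.example.util");
          mf.write(new FileOutputStream("MANIFEST.MF"));
      }
  }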

This step doesn't affect Cocoon as we know it at all. We just make
it possible to load core and blocks into the OSGi environment (or
Eclipse). OTOH, nothing will happen yet when we use OSGi only this
far.

The main sitemap

It should be as simple as possible for a user to add a webapp, so I
think that a basic webapp bundle should be as simple as a directory
with a sitemap in it and a WEB-INF with the basic configuration
files. In the bundle scenario we don't need to put the core and
block jars in WEB-INF, as these are managed within the OSGi
framework. What is needed is some meta info that says that the
bundle contains the main sitemap, so that the (soon to be
described) Cocoon service can search for the bundle, and a list of
the blocks (bundles) it depends on, so that the Cocoon service can
dynamically load all its

[OT] Re: [RT] Cocoonlet

2005-05-01 Thread Thor Heinrichs-Wolpert

Actually, the idea of OSGi has been running in my head for a long
time. I discovered OSGi when working on the embedded Cocoon, as we had
to make an OSGi bundle of it so that it could be added to an
OSGi-powered system in a car. OSGi is widely used in embedded systems,
especially automotive and intelligent gateways.

Then came the interesting convergence between embedded systems and
normal systems when the Eclipse folks decided to trash their
proprietary kernel in 3.0 in favor of OSGi. The resulting kernel has
two layers: OSGi takes care of all the classloading stuff, whereas the
Eclipse plugin system manages extension points and the associated
plumbing.

When learning to write Eclipse plugins a while ago, I found some 
interesting similarities between what Eclipse provides and the Avalon 
semantics we're used to. Writing plugins is amazingly easy. Firstly 
because Eclipse provides an incredible PDE (plugin development 
environment) that guides you through the various tasks needed to write 
a plugin. And secondly because, with each plugin isolated in its own
classloader, a lot of issues simply go away. For example, using static
attributes is no longer a problem!

Off topic, but it made me think of Sun's F1 work:
http://www.jini.org/nonav/meetings/eighth/J8abstracts.html#Cars
There's also work on Jini and OSGi going on ... which may get an
interesting boost now that Jini is being relicensed under the Apache
2.0 license.

Cheers,
Thor HW


Re: [RT] On building on stone

2004-03-29 Thread Thor Heinrichs-Wolpert
I wasn't saying to trash it.  We had started to talk about a clean 
slate.

Since you can mark any and all items as view-only in JMX, that removes
your immutability problem.

I've never been a fan of recreating something just for the sake of
recreating it, so I would've started with MX4J and then only removed
it if it gave me problems.  Then I can focus on my problems and not on
ones that have mostly been solved for me, in a way that many others
are already used to working with.  I guess this comes from my
perspective of working on fixed-ceiling projects that must have a
delivery (or I don't get paid) and as a contract researcher where I'm
mostly interested in solving my problems rather than building
everything from scratch all of the time.

I'd still be interested in seeing what the Geronimo folks are doing to 
see if we have cross community interest in ways that Apache projects 
will use MX4J.

I'm just describing my approach, had we started from a clean slate
and agreed on the things we would want to try out to see which is
better.  I would have proposed that we leverage MX4J, have it load in
the tree processor and some other blocks, and see how everyone likes
that.  If not, we do something different.  If there is already
something written, then maybe we just use that, or if the interfaces
and implementation model are easy enough we could always replace the
core and see what we think.  From the discussions here I didn't think
the swapping and load/unload were solved, based upon the questions
coming from Pier and the discussions on how to do that.

If I'm wrong, great.  Then let's have a look at the new kernel and see
what we want to view in it.

Cheers,
Thor HW
On 29-Mar-04, at 7:40 AM, Stefano Mazzocchi wrote:

Thor Heinrichs-Wolpert wrote:

The ability to swap in/out components and have the other components 
in the system use it without much in the way of hiccups is a more 
interesting design problem than the load/unload and config.
Yes, totally. And we already have it working and solved with this new 
container. Why would we trash it to move to JMX? what would that buy 
us?

--
Stefano, seriously curious.



Re: [RT] On building on stone

2004-03-29 Thread Thor Heinrichs-Wolpert
It could be we're talking about much the same things.
JMX provides instrumentation capabilities and standard services; one
of the standard services is the loading/unloading of archives using
the MLet service.

There is a strong rumour you've already finished the kernel, Pier ...
if so, share it; I'm dying to see it.

If not, then please have a look at the MLet capabilities for loading
and unloading.  I think loading an MD5Url using the MLet loader would
be a good thing, and since it is already built for us in MX4J as a
standard service ... why recreate it, unless you find problems with
the concept as defined by JMX?  JMX is a bit more than just an SNMP
agent of sorts.
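
Just to make the MLet idea concrete, here's a minimal sketch (the URL
and object name are placeholders; the m-let text file it points at
would list the MBeans and their jars, and that codebase URL is where
something like an MD5Url would slot in):

  import java.util.Set;
  import javax.management.MBeanServer;
  import javax.management.MBeanServerFactory;
  import javax.management.ObjectName;
  import javax.management.loading.MLet;

  public class MLetLoaderExample {
      public static void main(String[] args) throws Exception {
          MBeanServer server = MBeanServerFactory.createMBeanServer();

          // The MLet is itself an MBean (and a classloader).
          MLet loader = new MLet();
          server.registerMBean(loader, new ObjectName("loaders:type=mlet"));

          // Each MLET tag in the file names an MBean class and the jar(s) to pull
          // it from; the MBeans are instantiated and registered through this
          // loader, so they can later be unloaded along with it.
          Set loaded = loader.getMBeansFromURL("http://example.org/components/components.mlet");
          System.out.println(loaded);
      }
  }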

Cheers,
Thor HW
On 29-Mar-04, at 9:26 AM, Pier Fumagalli wrote:

On 29 Mar 2004, at 17:20, Hamilton Verissimo de Oliveira (Engenharia - 
SPO) wrote:
De: Stefano Mazzocchi [mailto:[EMAIL PROTECTED]

Yes, totally. And we already have it working and solved with this new
container. Why would we trash it to move to JMX? what would that buy 
us?
Would you put an instrumentation layer on it? If yes, then consider 
JMX.
I believe that we all agreed that the core should be instrumentable by 
JMX. I believe there's some confusion here between instrumenting the 
Cocoon kernel with JMX and using JMX _as_ the core kernel of Cocoon.

I still believe that for Cocoon the first solution is the optimal one, 
using a Cocoon-tailored kernel, instrumentable with JMX...

	Pier



Re: [RT] On building on stone

2004-03-29 Thread Thor Heinrichs-Wolpert
I was wondering what the Geronimo folks were up to.  The 14 seconds
I've spent on it so far suggest they are using JMX to load in all of
the services (a hint from the boot.mlet).  I'll try to get a better
look ASAP and see how they are managing messaging between components.

Cheers,
Thor HW
On 29-Mar-04, at 7:14 PM, Davanum Srinivas wrote:

Since you guys are talking about JMX based stuff, please take a look
at the GBean/Kernel stuff in Geronimo. It's very rich, comprehensive
and based on JMX.

-- dims

--- Thor Heinrichs-Wolpert [EMAIL PROTECTED] wrote:
It could be we're talking about much the same things.
JMX provides instrumentation capabilities, and standard services, one
of the standard services is the loading/un-load of archives using the
MLet service.
There is a strong rumour you've already finished the kernel Pier ...
if so share I'm dying to see it.

If not, then please have a look at the MLet capabilities for loading
and unloading.  I think loading and MD5Url using the MLet loader would
be a good thing and since it is already built for us in MX4J as a
standard service ... why recreate it unless you find problems with the
concept as defined by JMX.  JMX is a bit more than just an SNMP agent
of sorts.
Cheers,
Thor HW
On 29-Mar-04, at 9:26 AM, Pier Fumagalli wrote:

On 29 Mar 2004, at 17:20, Hamilton Verissimo de Oliveira (Engenharia -
SPO) wrote:
De: Stefano Mazzocchi [mailto:[EMAIL PROTECTED]

Yes, totally. And we already have it working and solved with this new
container. Why would we trash it to move to JMX? what would that buy
us?
Would you put an instrumentation layer on it? If yes, then consider
JMX.
I believe that we all agreed that the core should be instrumentable by
JMX. I believe there's some confusion here between instrumenting the
Cocoon kernel with JMX and using JMX _as_ the core kernel of Cocoon.

I still believe that for Cocoon the first solution is the optimal one,
using a Cocoon-tailored kernel, instrumentable with JMX...

	Pier



=
Davanum Srinivas - http://webservices.apache.org/~dims/



Re: [RT] On building on stone

2004-03-28 Thread Thor Heinrichs-Wolpert
I'll have to say yes, I do know about sys admin work (if we're going
to count years, mine is 22 years now).  Matter of fact, one of the JMX
products I mentioned is an IT management tool (monitoring SNMP,
services, even SSH-accessible shell scripts and the internals of our
Oracle, MySQL and other databases, Cisco routers, Linux routers, and
even slot machines *g*).

I know there is no love lost between the Apache and JBoss communities,
but have a look at their kernel.  It is not entirely based upon JMX,
but JMX forms an integral part of their kernel and is pretty good,
all things considered.  WebOS uses it as well, a little differently,
but the concepts are similar, although they use Jini protocols for
inter-component messaging.  Sun's network monitoring product works
with JMX and Jini, although their OSS replacement framework is all
Jini and a custom kernel, while JMX is being reviewed to be brought
back in.

I'll leave out the OOP discussions, as even the father of Java still
hasn't made up his mind about inheritance versus interfaces, or what
the correct balance should really be.  I think there's an easy Ph.D.
in the works for anyone that can give a definitive answer there.

In the products I've built using JMX, like I stated earlier, I use JMX
to load/unload components and to manage their config.  I think the
last one we did was better in that the global configuration is held in
a centralized database and the framework itself messaged the new
configuration to the servers (nodes); the JMX system was used to get
the new component (via Jini calls, but JMX has no idea how it gets
there) and then load it in with its IoC config and swap out the old
component, which is then GC'd, or can be dropped by kicking the
classloader.
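
The JMX end of that swap really boils down to something like the
following sketch (the CacheMBean interface and object name are made up
for illustration; the fetch of the new jar would happen before this,
over Jini in our case):

  import javax.management.MBeanServer;
  import javax.management.MBeanServerFactory;
  import javax.management.ObjectName;

  public class ComponentSwapper {

      // Standard MBean pair: the interface name must be the class name + "MBean".
      public interface CacheMBean { int getSize(); }
      public static class Cache implements CacheMBean {
          public int getSize() { return 0; }
      }

      public static void main(String[] args) throws Exception {
          MBeanServer server = MBeanServerFactory.createMBeanServer();
          ObjectName name = new ObjectName("example:type=cache");

          server.registerMBean(new Cache(), name);   // the old component

          // Later, once the new version has been fetched and configured:
          if (server.isRegistered(name)) {
              server.unregisterMBean(name);          // old instance can now be GC'd
          }
          server.registerMBean(new Cache(), name);   // swap in the replacement
          System.out.println(server.getAttribute(name, "Size"));
      }
  }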

Have a quick look at those systems for ideas.  When you swap in a 
component, you also have to consider how all of the other components 
are talking to it.  Almost every system out there that attempts this 
uses the concept of a messaging bus between components, so that you can 
actually swap out a component, which becomes difficult and language 
dependent when using direct references.

The ability to swap in/out components and have the other components in 
the system use it without much in the way of hiccups is a more 
interesting design problem than the load/unload and config.

Thoughts?

Thor HW (who is really enjoying these discussions!!!)

On 28-Mar-04, at 10:19 AM, Gianugo Rabellino wrote:

Thor Heinrichs-Wolpert wrote:

I think a big point is being missed (and that may be from never having
used JMX).  When I was saying JMX and its style form part of a good
kernel candidate, you have to look at how JMX is used.  It uses a
standard reflection mechanism to talk to components.  Just to say it
supports an MBean interface is missing quite a bit.  The main things
it does is load, unload, start, stop and manage the config of
components.  It does this all by reflection, which isn't a big deal,
other than that the method calls are standardized.  There are some
basic lifecycle states that can determine a component's current state.
Well, you have been missing a point in my reply, which was about ease
of configuration not necessarily being a good thing. Anyway, I'm
catching up on the whole JMX thing, reading specs and books. So far, I
must confess that I'm *horrified* by the tangled web of reflection
hacks that make up the spec, not to mention the xml support done
*wrong* and the horrible procedural and non-OOP approach that is
forced by MBean interfaces (which quickly become huge switchboards
made of if/else statements): no wonder this stuff came from EJB
people.

But in any case, for now, I'll refrain from commenting further: I'm
curious to hear what your plan is for implementing/integrating JMX as
a core part of the container, since the more I know about this stuff,
the more I get convinced that we need to support it (given the goods
it would buy us and given that it makes no sense to use another
instrument manager) but as an external thing.

I'm not sure that there is an other-side-of-the-fence where you don't
need to make things work in real life, if you are actually making a
long-term living in this industry.  I'm going to assume that my JMX
projects, others that are successful and the JBoss kernel argue the
opposite of your other side of the fence comment.
I'm wondering if you've ever been on the other side of the fence, 
which is made of sys/network admins who don't quite give a damn about 
languages, OOP abstractions and design stuff (I've been there for 
almost 7 years, so let me state that I know my kids): they have to 
work with a web of multi-architectural, multi-language, 
multi-environment stuff which has the growing tendency of working 
badly together, and they don't want to know what a garbage collector 
is.

They need tools to give them an overview on how their IT system is 
working, and they need standard and language neutral stuff to make

Re: [RT] On building on stone

2004-03-28 Thread Thor Heinrichs-Wolpert
I know there is no love lost between the Apache and JBoss 
communities, but have a look at their kernel.  It is not entirely 
based upon JMX, but JMX forms a very integral part of their kernel 
and is pretty good, all things considered.
Well, you see, I don't want to do that because of licensing issues: 
it's very easy to become tainted, and I definitely don't want to do 
that with a GPL product (and yes, I know that sucks big time, but we 
have to face it).
Well, you can get a good glimpse of how things are done from Juha
Lindfors' book.  There are only so many ways you can call and use the
MLet services ... so there shouldn't be any tainting there (ISBN
0-672-32288-9).  It might be a bit dated, but I found it to be a
decent reference when I started using JMX.

In the products I've built using JMX, like I stated earlier, I use
JMX to load/unload components and to manage their config.  I think the
last one we did was better in that the global configuration is held in
a centralized database and the framework itself messaged the new
configuration to the servers (nodes); the JMX system was used to get
the new component (via Jini calls, but JMX has no idea how it gets
there) and then load it in with its IoC config and swap out the old
component, which is then GC'd, or can be dropped by kicking the
classloader.
Have a quick look at those systems for ideas.  When you swap in a 
component, you also have to consider how all of the other components 
are talking to it.  Almost every system out there that attempts this 
uses the concept of a messaging bus between components, so that you 
can actually swap out a component, which becomes difficult and 
language dependent when using direct references.
The ability to swap in/out components and have the other components 
in the system use it without much in the way of hiccups is a more 
interesting design problem than the load/unload and config.
Definitely. I'm still wondering how you might deal with this nightmare
with JMX. It's pretty clear to me that JMX relations are not enough.
Care to tell us more about how you solved, or plan to solve, this kind
of issue?
I agree there.  I only use the MLet services to dynamically load and
unload my services.  I pass in the configs a la IoC style, as that
seems the most flexible and was easier for me to comprehend (and to
dump configurations for debugging).  I'm going to be moving towards
the MD5Url for getting my component services, as this provides some
security and version management for the archives I'm using.

To manage load dependencies I've opted for service strings.  I read
about them in various implementations at CERN, but I think the only
accessible example is the Rio project at Jini.org, if you want to see
an implementation that they have tried.  They have an attached QoS
that is better suited for distributed services.  My service string
isn't so clever, in that I don't manage QoS, but I do tag in version
ranges (e.g. so one component can say it is known to work with
versions 1.x - 2.6 of the network alarm agent).  All components look
up the service they want from the JMX server, check that the current
version is compatible (if not they can request an update) and then
call the service via a Proxy that is built when the service is
loaded.  I haven't tried to totally encapsulate all of the indirect
calls in a smart proxy, but I've been told that is a better way to go
by Rickard Oberg.

If there was an XDoclet-like tool with the correct license, I'd say we
should look at that for helping with the JMX work ... but for our
limited requirements we could probably use a transformer of some sort
in Ant to create either the classes, or, if we want totally described
ones, a configuration doc with a set of MBeans that get instantiated
from an XML descriptor file.  We lose some of the compile-time
checking, but I've found it easier to keep the required JMX info in my
interfaces so that the sources are close together.
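
To give a feel for it, the version-range part of the service string is
about as dumb as this sketch (the names and numbers are illustrative,
not the real product code):

  public class ServiceString {

      private final String name;  // e.g. "network-alarm-agent"
      private final int minMajor, minMinor, maxMajor, maxMinor;

      public ServiceString(String name, int minMajor, int minMinor,
                           int maxMajor, int maxMinor) {
          this.name = name;
          this.minMajor = minMajor; this.minMinor = minMinor;
          this.maxMajor = maxMajor; this.maxMinor = maxMinor;
      }

      // True if a deployed service version (major.minor) falls inside the range.
      public boolean accepts(int major, int minor) {
          boolean aboveMin = major > minMajor || (major == minMajor && minor >= minMinor);
          boolean belowMax = major < maxMajor || (major == maxMajor && minor <= maxMinor);
          return aboveMin && belowMax;
      }

      public static void main(String[] args) {
          // "Known to work with versions 1.x - 2.6 of the network alarm agent."
          ServiceString dep = new ServiceString("network-alarm-agent", 1, 0, 2, 6);
          System.out.println(dep.accepts(2, 4));  // true  -> call through the proxy
          System.out.println(dep.accepts(2, 7));  // false -> request an update first
      }
  }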

Does anyone here know how they are using MX4J in the Geronimo container?

Cheers,
Thor HW


Re: [RT] On building on stone

2004-03-27 Thread Thor Heinrichs-Wolpert
I think a big point is being missed (and that may be from never having
used JMX).  When I was saying JMX and its style form part of a good
kernel candidate, you have to look at how JMX is used.  It uses a
standard reflection mechanism to talk to components.  Just to say it
supports an MBean interface is missing quite a bit.  The main things
it does is load, unload, start, stop and manage the config of
components.  It does this all by reflection, which isn't a big deal,
other than that the method calls are standardized.  There are some
basic lifecycle states that can determine a component's current state.

You can do reflection calls in many, many styles.  It is sometimes
nice to pick up and use one that is already well understood, and one
that, because of dynamic proxies, can have extra behaviours added in.
You can't actually use JMX as a backplane for communication between
components, but you can follow the same style it uses throughout
(consistency is sometimes a nice thing when there is no reason to
alter it).

I'm not sure that there is an other-side-of-the-fence where you don't
need to make things work in real life, if you are actually making a
long-term living in this industry.  I'm going to assume that my JMX
projects, others that are successful and the JBoss kernel argue the
opposite of your other side of the fence comment.

These projects all focus on using standard interfaces that any
compatible, swappable component must implement.  They use
reflection-style calls to keep weak references between components so
that the internal JVMs won't cache a method binding between
components.  There is a small first-time lag when the JVM has to get
the lookup loaded for the reflection calls, but there are ways of
speeding that up (by using something like BCEL) and after that the
speed difference should be negligible.
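
For what it's worth, the reflection-behind-a-proxy style I mean looks
roughly like this (the AlarmService and registry interfaces are
hypothetical; in the real thing the lookup would go through the JMX
server):

  import java.lang.reflect.InvocationHandler;
  import java.lang.reflect.Method;
  import java.lang.reflect.Proxy;

  public class ServiceProxyFactory {

      // A made-up component interface that callers compile against.
      public interface AlarmService {
          void raise(String message);
      }

      // Stand-in for the JMX/Jini lookup that can return a new implementation
      // after a swap.
      public interface ServiceRegistry {
          Object lookup(String serviceName);
      }

      public static AlarmService proxyFor(final ServiceRegistry registry) {
          InvocationHandler handler = new InvocationHandler() {
              public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                  Object current = registry.lookup("alarm-service");  // re-resolved on every call
                  return method.invoke(current, args);                // reflective dispatch, no cached binding
              }
          };
          return (AlarmService) Proxy.newProxyInstance(
                  AlarmService.class.getClassLoader(),
                  new Class[] { AlarmService.class },
                  handler);
      }
  }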

Cheers,
Thor HW
On 26-Mar-04, at 7:13 AM, Gianugo Rabellino wrote:

Pier Fumagalli wrote:

On 25 Mar 2004, at 19:40, Thor Heinrichs-Wolpert wrote:
Hmmm ... I've never used JMX for remote loading as the security just
isn't there for my tastes and there are other mechanisms that work so
much better.  It does do a fine job of loading/unloading components
though.
Gianugo and I spent an hour on the phone (he paid the international 
rate :-) talking exactly about it...
He has a lot more practical experience on JMX than I have, and I 
believe that we got down to a pretty good rationale on how it can all 
work...
Who, me? Actually I have very little real life JMX experience, but 
indeed I've been for quite some time on the other side of the fence, 
where you have to really make things work (which is your situation as 
well, I reckon). Since I come from a network background, I consider 
myself an SNMP diehard, and I think there is a very good analogy here.

SNMP has three basic functionalities:

1. gather information about a device (read an SNMP value);
2. configure a device (write an SNMP value);
3. handle anomalies and alarms (SNMP traps).
Of such functionalities, #2 is really never used in real life, and 
this is because of separation of concerns: SNMP is an _invaluable_ 
tool for people who need to keep things running, but such people 
aren't the ones configuring such things. Skills of monitoring are 
horizontal, while configuration skills are much more vertical. This is 
basically why the Cisco guy and the Apache guy operate on a CLI much 
better than on an OpenView console.

I think that the same applies to JMX, which really should be nothing 
more than an object oriented and Java aware SNMP (oh yes, it can do 
more than that, but it looks like forcing the paradigm to me).

Now, there _might_ be some goodies that are best managed via JMX, but 
overall I think that generic configuration tools are not the way you 
want to go since they'll give you a tool to, say, set an integer value 
but they won't tell you *why* and *how* you should do that, which 
makes it even more dangerous. A configuration system should be as 
complex as the needs it's solving: it should be usable and 
comfortable, but not necessarily easy per se.

So, bottom line, I'd say that JMX should be used for health monitoring
and alarms. Activation/deactivation of components might be another
issue that fits in JMX, but not much more than that.

This said, the real issue is where to put JMX. There are three 
candidates:

1. The container itself

2. The block itself, as a whole;

3. The components inside a block.

I have very little doubt that the container should expose a JMX
interface. I would like to see even blocks expose some kind of JMX
behaviour for resource management (something like a generic health
status monitor, the number of times this block has been called, and
the like). As for components, I used to think that it would have been
great to design them as MBeans, but Pier convinced me that there are a
number of cases where this doesn't make sense and might just bring
overcomplications.

Pier, did I summarize

Re: [RT] On building on stone

2004-03-26 Thread Thor Heinrichs-Wolpert
Cool, I'll wait to see the write-up.  I've used JMX (and its 
communication style) now on several small cores for products where I 
wanted to have flexibility of implementations, but wanted fixed 
descriptions of a particular service.

And yes, we did use Jini to manage the components and their security
across nodes.  There are a few services that use Jini to manage cache,
state, and messaging across JVMs, even if they are on the same
physical server.  The nice thing there was that by using smart proxies
we could have a service, or a cache, shared between a pool of JVMs on
a single host (because they can and will disappear) with the same
effort and management as across multiple servers.

The MD5Url for loading in versions of components that I mentioned 
earlier comes from the newer releases of Jini.

There's some pretty cool stuff out there to look at and borrow ideas
from.

Cheers,
Thor HW
On 25-Mar-04, at 3:23 PM, Pier Fumagalli wrote:

On 25 Mar 2004, at 19:40, Thor Heinrichs-Wolpert wrote:

Hmmm ... I've never used JMX for remote loading as the security just
isn't there for my tastes and there are other mechanisms that work so
much better.  It does do a fine job of loading/unloading components
though.
Gianugo and I spent an hour on the phone (he paid the international 
rate :-) talking exactly about it...

He has a lot more practical experience on JMX than I have, and I 
believe that we got down to a pretty good rationale on how it can all 
work...

Gianugo, would you do the honors (as you are more experienced than me 
in this field) to summarize what we chatted about this evening?

	Pier



Re: [RT] On building on stone

2004-03-25 Thread Thor Heinrichs-Wolpert
Possibly, but JMX is also used to load and unload components.  If the
APIs match, I can replace any component at runtime with a simple
config change.

JMX forms the kernel of JBoss, handling all of its inter-core loading,
and can act as a runtime message-plane for communicating between
components.

It has more to it than just the ability to see into objects or 
components through an MBean.

In the product I was referring to, we just update the config of the
server in its datastore (JMX allows us to use flat files, XML files
and databases, as long as the API is consistent for the component) and
then message the backplane to reload, change, alter, or download new
components in jars, etc., all at runtime.  The one thing we are adding
in is MD5Urls, which are a new thing in the Jini API allowing us to
distinguish between different versions of similarly named jars.

So I think JMX can be a bolt-on or an underpinning, depending on how
you use it.  I'd be surprised if it didn't meet the core of what you
described blocks needing.  If a block has a consistent API (Interface)
then any block that supports that API could be loaded/unloaded and
messaged from other components using the JMX services and usage
guidelines.
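
In sketch form (the BlockMBean interface, object name and reload
operation are made up, not actual Cocoon API), the messaging side is
just:

  import javax.management.MBeanServer;
  import javax.management.MBeanServerFactory;
  import javax.management.ObjectName;

  public class BackplaneExample {

      // Standard MBean pair; the interface name must be the class name + "MBean".
      public interface BlockMBean { void reload(String jarUrl); }
      public static class Block implements BlockMBean {
          public void reload(String jarUrl) {
              System.out.println("reloading from " + jarUrl);
          }
      }

      public static void main(String[] args) throws Exception {
          MBeanServer server = MBeanServerFactory.createMBeanServer();
          ObjectName name = new ObjectName("cocoon:type=block,name=forms");
          server.registerMBean(new Block(), name);

          // Any component (or a remote connector) can message the block through
          // the server without holding a direct reference to it.
          server.invoke(name, "reload",
                        new Object[] { "http://config-host/blocks/forms.jar" },
                        new String[] { "java.lang.String" });
      }
  }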

Thoughts?
Thor HW
On 25-Mar-04, at 7:16 AM, Stefano Mazzocchi wrote:

Thor Heinrichs-Wolpert wrote:

Sounds good ... as you may remember when you started to talk about 
Blocks in Ghent we started talking about JMX then as well.  It may 
not have everything you want or need, but it does have some good 
points and a basic lifecycle.  I believe that MX4J 
(http://mx4j.sourceforge.net/) is released under an Apache license 
and is also being used in Geronimo.  I think it would be worth 
examining it as a piece of the Core needed for blocks.
I can say that in a previous commercial endeavour I used JMX as the 
core to manage all of the component loading, monitoring and 
distribution.
I'm willing to pitch in here ... and after March 29th able to as well.
Thor,

there is a great deal of value in having a JMX wrapper around our 
blocks, but just to let JMX do what JMX is supposed to: remote 
management and configuration.

At the same time, I think JMX is not capable of doing what we need for
cocoon blocks, so it cannot be used as the core and, even more
important, it's not something we control.

The important thing is both stability of our foundations and the 
ability to evolve them (without passing thru a massive world-impacting 
political JSR).

--
Stefano.



Re: [RT] On building on stone

2004-03-25 Thread Thor Heinrichs-Wolpert
Hmmm ... I've never used JMX for remote loading as the security just
isn't there for my tastes and there are other mechanisms that work so
much better.  It does do a fine job of loading/unloading components
though.

Thor HW
On 25-Mar-04, at 9:17 AM, Pier Fumagalli wrote:
On 25 Mar 2004, at 16:58, Hamilton Verissimo de Oliveira (Engenharia - 
SPO) wrote:

-Mensagem original-
De: Thor Heinrichs-Wolpert [mailto:[EMAIL PROTECTED]
So I think JMX can be a bolt on, or an underpinning depending on how
you use it.  I'd be surprised if it didn't meet the core of what you
described blocks needed.  If a block has a consistent API (Interface)
then any block that supports that API could be loaded/unloaded and
messaged to from other components using the JMX services and usage
guidelines.
JMX really helps to develop an extremely loosely-coupled application.
For JBoss it was a necessity; for Cocoon I don't know. And my personal
feeling is that it can increase complexity.
That's why it should be an instrumentation layer on top of the kernel 
core, and not kernel core in itself.

Cocoon components are tightly coupled, reside in the same VM, on the 
same host (they're not remote) BUT they can be reloaded, so they need 
to have a certain degree of separation.

JMX allows you to componentize distributed applications, but that's 
not our focus, Cocoon (at the end of the day) is a servlet... You need 
something from elsewhere? go and fetch it using the 
HttpProxyGenerator.

From the perception of JMX, what one should allow (if they want to
write it) is to instrument the process of deploying, reloading and
reconfiguring (remotely) live running blocks, but the first one who
writes a network remote transformer is eligible for a severe
beating...

We shouldn't over-indulge in generalization (I personally made that 
mistake already a while back).

	Pier



Re: Instrumentation, anyone?

2004-03-04 Thread Thor Heinrichs-Wolpert
XDoclet is a good generator, but the license is wrong for Apache.
Using straight Introspection can and will expose things that you do not 
wish to allow users to alter on the fly.

Unfortunately I'm completely snowed under until March 29th.  After that 
date I will get back to working on JMX for cocoon.  I suggested using 
the MX4J libraries as they are under an Apache license already, but 
haven't heard any thoughts here.

Aside:
We used some of the AdventNet products and they all seemed to work very 
well.  I only used their SNMP libraries, so I can't comment on their 
other tools.  I used XDoclet to generate MBeans instead.

We could use XML descriptors to create dynamic MBeans.  As for
generation, we could generate MBeans directly, or generate XML
descriptors from comments in the code (a la XDoclet); the downside is
that this requires having access to (and modifying) the source code.
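
As a rough sketch of the descriptor-driven route (the descriptor
parsing is elided; the attributes map stands in for whatever the XML
file declares, and "CacheSize" is just an illustrative name):

  import java.util.HashMap;
  import java.util.Iterator;
  import java.util.Map;
  import javax.management.Attribute;
  import javax.management.AttributeList;
  import javax.management.DynamicMBean;
  import javax.management.MBeanAttributeInfo;
  import javax.management.MBeanInfo;

  public class DescriptorMBean implements DynamicMBean {

      // name -> current value, seeded from the parsed XML descriptor.
      private final Map attributes = new HashMap();

      public DescriptorMBean() {
          attributes.put("CacheSize", new Integer(100));
      }

      public Object getAttribute(String name) {
          return attributes.get(name);
      }

      public void setAttribute(Attribute attribute) {
          attributes.put(attribute.getName(), attribute.getValue());
      }

      public AttributeList getAttributes(String[] names) {
          AttributeList list = new AttributeList();
          for (int i = 0; i < names.length; i++) {
              list.add(new Attribute(names[i], attributes.get(names[i])));
          }
          return list;
      }

      public AttributeList setAttributes(AttributeList list) {
          for (Iterator it = list.iterator(); it.hasNext();) {
              setAttribute((Attribute) it.next());
          }
          return list;
      }

      public Object invoke(String action, Object[] params, String[] signature) {
          return null;  // no operations declared in this sketch
      }

      public MBeanInfo getMBeanInfo() {
          MBeanAttributeInfo[] infos = new MBeanAttributeInfo[attributes.size()];
          int i = 0;
          for (Iterator it = attributes.keySet().iterator(); it.hasNext(); i++) {
              String name = (String) it.next();
              infos[i] = new MBeanAttributeInfo(name,
                      attributes.get(name).getClass().getName(),
                      "declared in the XML descriptor", true, true, false);
          }
          return new MBeanInfo(getClass().getName(), "descriptor-driven MBean",
                               infos, null, null, null);
      }
  }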

Carsten, if you're offering to answer questions on Avalon, that would
be way cool.

Let me know which way you think will fit best for the Cocoon'ers and
I can start on that in April.

Cheers,
Thor HW
On 4-Mar-04, at 1:42 PM, Stefano Mazzocchi wrote:

Joerg Heinicke wrote:

We also had no problems with it. As Hamilton said you just must not 
commit generated code, but I think that's obvious. Do you have 
something else in mind? What does distributed workgroup change on 
the issue?
that there is more chance of people making mistakes ;-)

we *did* have problems with this in the past and we moved away from it.

not that I'm stopping people, just making sure they know there is a 
problem there.

--
Stefano.



Re: [OT] Mac Laptops

2003-12-07 Thread Thor Heinrichs-Wolpert
As a contractor, my main bread & butter client is the BC Government in
Canada.
To save money, the BC Gov't has standardized on M$ Windows for the
desktop.  They've never actually looked to see if they have saved any
money from this decision, but I can tell you about the thousands of
work days lost due to security issues with that ... and that, after
spending millions trying to convert off of Macs, the legislative
support crew is using Macs and saving time.

The standard mailer that comes with OSX can use Exchange as one of its
servers.  You can also purchase M$ Entourage for OSX, which is M$'s
new Exchange client.  The only thing I haven't found a free, nice and
easy tool for is Exchange calendaring.  Although, if you're not using
some ancient version of Exchange, meetings come in as little
double-clickable attachments that put themselves into your OSX
calendar.

I do 90% of my work against an Oracle DB, that I run locally on my 
laptop.

All of my applications are targeted either for Solaris, HP-UX or AIX.
My current project, and the next one we're starting, are the first
Linux-based apps to be used in gov't (they have Linux for mail, DNS,
webMethods, etc.) ... so, following suit, Gov't has standardized on
RedHat Linux.

In the 2 years that I have used my laptop as my 100% client, I have
not found 1 issue that was not easily handled.  In my current project,
as people refresh their own workstations they are now opting for OSX
machines and no one has had any problems.  On my last commercial
product, even the CEO switched over to a Mac, as did the rest of the
developer employees, because everything just works.

All of the *nix tools you want are already in place, so mounting or
sharing SMB is there.  The built-in VPN is okay and ssh is stable,
along with the Java side of things.  I also use the Cisco VPN client
for OSX and it's been much better than any of the Windows versions of
the VPN that the rest of the office is using.

The rumour mill puts the G5 laptops out for next summer, given that 
they can solve the heat problems.

Have fun,
Thor HW
On 7-Dec-03, at 10:07 AM, Jorg Heymans wrote:

I would've gotten one of these a few months ago, but decided on a
winxp instead because I wasn't sure about its standard office
compatibility. I mean, can you easily link up to an exchange server?
See windows file servers, mount windows directories, open/edit/save
m$-office word and excel files (and all this without recompiling the
kernel, configuring samba, editing config files or *purchasing*
add-on software)?

Any real-life experience to be shared? Can one survive with these 
things when getting dumped in an all microsoft office environment? I 
don't want to beg for pdfs all the time when word-docs are being sent 
around.

Taking this even further off-topic, sorry.
Jorg
Alain Javier Guarnieri del Gesu wrote:

* John Morrison [EMAIL PROTECTED] [2003-12-07 11:12]:
Hi All,

Sorry for the off topic posting, but I've been thinking about
upgrading my laptop and you guys have been saying how good
the modern Macs are :)
I was wondering if you'd tell me which Mac you have and
whether you'd buy one again :)
Thanks,

J.

PS, I've never owned a Mac before.
PPS, if you want to email me about this off list; that's
fine too :)
Must chime in. I switched a couple months ago to a G4 Powerbook.
What strikes me most is my new-found ability to pick up where I left
off. I've had laptops that knew how to go to sleep before, but never
one that knew how to wake up. Just open the screen and there's
everything just as I left it. I leave a line of code half-written
and pick it up first thing in the morning.
The G4 makes me realize the importance of aesthetics, especially the
aesthetics of something I am going to stare at for 14 hours a day.
That it is pleasant to look at also makes it easy to resume, to find
myself in that space where I forget the computer is there, where I
am just looking at my thoughts.
And it's Unix. I'm in vim, in mutt, in bash, in ssh, in bash, in
screen, right now. There are no Windows applications for me to miss.
I use Eclipse for Java. The JDK ships with OS X.
OS X politely patches itself from time to time. System maintenance is
nothing I concern myself with.
The "it just works" factor is the trump. You will come to see your
Macintosh as a shrewd investment. You will see the commodity-priced
computing alternatives as false economy. When you buy a Mac you are
literally buying time.
Sorry, I know this is off topic.




Re: Load Balancing web applications with mod_proxy...

2003-12-07 Thread Thor Heinrichs-Wolpert
On 7-Dec-03, at 11:54 AM, Pier Fumagalli wrote:
You're talking about failover... Yeah, that's the problem... I've seen 
too many AAARRRGGGHHH when I said the word Perl, so, the solution 
proposed by Jules and Greg (Jetty) of doing it in a module might be 
better...
Is there any way to catch the "The proxy server received an invalid
response from an upstream server." error from mod_proxy and redirect
back into the rewrite for another JVM?

Cheers,
Thor HW


Re: [OT] Mac Laptops

2003-12-07 Thread Thor Heinrichs-Wolpert
On 7-Dec-03, at 4:08 PM, Antonio Gallardo wrote:

BTW, Mac OS X is a Linux based distribution for the Mac processors 
with a
price included. They use KDE as the desktop environment.
Sorry, I haven't responded to some of the other incorrect stuff, but
OSX is GNU Mach from NeXT with a FreeBSD personality (the other OSS
Unix).  I hadn't heard that Cocoa was based on KDE before, though, so
I'll look, but based on the rest of the sentence the info comes from,
I'd hazard a guess that's wrong as well.

It's a nice environment that runs Unix; for a laptop, being able to
close the lid, dash off and have everything work when you pop open the
lid again is great.  Not having to dual boot just because a client
sends you an Excel spreadsheet or Word doc with OLE embedding is a
nice time saver too.

If you become an Apple developer then you get a discount, and I
haven't seen the price difference be too big of an issue, as in most
cases it's pretty small.  Most people I know can account for a great
deal of time saved in using OSX over Linux.  Since I still have to use
M$ products, dual booting is not an option since it takes too long;
better to always run M$ and run Linux under a virtual PC ... if you
need to have all of those tools at the same time ... or just go OSX.

Cheers,
Thor HW


Re: [OT] Mac Laptops

2003-12-07 Thread Thor Heinrichs-Wolpert
You're both wrong, guys... Mac OS/X is not based on Linux, nor on BSD. It's based on a derivative of CMU's original Mach kernel, which was a direct response to an architectural problem of the BSD kernel...

Getting closer ... Mach was the name chosen because MOOS sounded wrong and one of the other students had a thick accent and called it "machhh".
Mach was worked on because you could only get OS research funding from the US gov't if it was based on problems with Unix.  The part that looks like Unix is an emulated BSD derivative.

Or at least that's the story that the creator of Mach, Avie Tevanian, used to tell us when we worked at NeXT.

Cheers,
Thor HW


Re: [OT] Mac Laptops

2003-12-07 Thread Thor Heinrichs-Wolpert
On 7-Dec-03, at 7:00 PM, Antonio Gallardo wrote:
snip/snip
I'm glad you like your Linux-based laptop; the fellow asked if we
liked our OSX boxes ... which we do.

But for the record, I never stop my OSX laptop, so I don't have to
restart it (okay, there's your 1 minute).  Then I don't have to
restart my database, app server, IDE, open files, ssh tunnels, etc.,
etc., etc.

My clients are mostly M$ based and I have to work with them, so dual
booting as you suggest would be more like a 30-minute switch ... which
I'd have to do several times a day, not just when I show up ... but
while working on code, then reviewing docs in Word, etc.  I have run
Linux laptops and they didn't work for me, for the reasons already
stated.

So for me it is a better fit, for you Linux is a good fit.

For me, my Intel laptops have always cost close to what my current OSX
laptop cost, by the time I get all of the RAM, a decent drive size,
and external connections like Bluetooth, Wi-Fi, FireWire, etc.  So
each person has a different ROI based upon the work that they do.

For the fellow that asked ... he can use the info provided by both 
sides in determining his own path.

Cheers,
Thor HW


Re: [OT] Mac Laptops

2003-12-07 Thread Thor Heinrichs-Wolpert
On 7-Dec-03, at 8:37 PM, Antonio Gallardo wrote:

Antonio Gallardo dijo:
Why do Apple fans always try to forget the fact that Intel or AMD
craps are faster, in the same space and time, than Gx processors? And
try to hide this fact behind the OS arena? I know, in hardware, it is
not an MHz issue. But Intel and AMD craps are in fact faster and
cheaper. In short, you get more for your bucks. :-D
To reinforce the above, check this:

http://barefeats.com/g5c.html

Best Regards,

Antonio Gallardo
Thanks ... according to this and related resellers, the cost of their
laptops was the same as mine for a slightly slower machine.  If I were
to buy today, the Xeon 2.4 is the best I could get in a laptop (and
not all of the sw I need is available), but as we all know (or
should), hw cpu advantages don't last long.  In 6 months maybe a
faster Xeon will be available, but so might a faster G5.

Anyways ... I'm off this thread.



Re: Profiling Pipeline [was Re: [RT] Unit testing and CocoonUnit]

2003-11-04 Thread Thor Heinrichs-Wolpert
If you try to profile on a Unix environment based on timings, you have
to do multiple passes and get a statistical average for the numbers to
be meaningful.  The timer functionality in the kernel or user space is
at such a low priority that effectively it jumps between values ...
that's why, when tuning using tools like sar, you have to set a decent
sample size.  This is pretty much the case for any current
non-realtime OS.  For long-running tasks the time is probably OK; for
small times (< 10 ms or even 0.1 s timings) on a busy system the times
are just erroneous.
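
In other words, something along these lines (a rough sketch; the pass
count and doWork() are placeholders):

  public class TimingSample {

      // Stand-in for the pipeline step being measured.
      static void doWork() { }

      public static void main(String[] args) {
          int passes = 10000;
          long start = System.currentTimeMillis();
          for (int i = 0; i < passes; i++) {
              doWork();
          }
          long elapsed = System.currentTimeMillis() - start;
          // Report the mean per pass; a single pass would often just read as 0 ms.
          System.out.println("mean per pass: " + ((double) elapsed / passes) + " ms");
      }
  }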

Cheers,
Thor HW
On 3-Nov-03, at 9:36 AM, Nicola Ken Barozzi wrote:

Berin Loritsch wrote:
...
I have a feeling the timings are useless largely because of the
granularity of the System.currentTimeMillis() method.  On Windows the
granularity is 10 ms.  If the timing is less than that it registers as
0.  Add enough zeros and it will always be less than the total time
for the timing.
There is a way of getting around this by using JNI and calling the
correct stuff. If you Google around you will surely find something.
Probably there is something at www.javaworld.com.

--
Nicola Ken Barozzi   [EMAIL PROTECTED]
- verba volant, scripta manent -
   (discussions get forgotten, just code remains)
-





jmx libs and wrappers

2003-10-13 Thread Thor Heinrichs-Wolpert
<intro>
JMX is a standardized framework to uniformly instrument disparate 
chunks of Java code in a JVM.  JMX can be used to load, start, manage, 
monitor and stop software components in a standardized way.  There are 
more and more J2EE containers that continue to adopt JMX as the default 
instrumentation layer.  Several systems use the JMX environment to 
communicate between components and by doing so can load and unload 
components at runtime where the Bean interface is the same, but the 
implementations differ.
</intro>

<background>
For basic JMX functionality the interfaces are well described and are 
therefore somewhat portable across implementations.

What is not well described nor portable are the JMX Connectors.  There
is an emerging RI for remote management, but it is not released and
therefore does not meet the standard for cocoon of only using
production-released components.  Implementing any JMX infrastructure
now will require some customized code to utilize the specific
connectors provided to access its JMX implementation.  The impact can
be mitigated by implementing a layer that mimics the JMX Remote API,
which is looking as if it is somewhat stable.
</background>

What are the restrictions for the libs we can use within cocoon?

Can I use Sun libs, or Sun RI libs, for basic JMX functionality?  JMX
libs are also available from IBM and JBoss (although the JBoss ones
are GPL'ed).

Does it matter if I use an ANT get task to download the lib from 
ibiblio.org?

My first pass will probably be an XML descriptor that can be used to
describe the properties that the MBean (Managed Bean) should expose.
I won't add in any of the auto-loading, distribution or module
collaboration via a JMX back-plane until I see more of the
cocoon-blocks.
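
For a feel of what such a descriptor would describe, here's a minimal
sketch of the standard MBean pattern (the component and its properties
are hypothetical, not existing Cocoon classes):

  import javax.management.MBeanServer;
  import javax.management.MBeanServerFactory;
  import javax.management.ObjectName;

  public class MBeanSketch {

      // Standard MBean pair; the interface name must be the class name + "MBean".
      public interface PipelineMonitorMBean {
          int getRequestCount();                  // read-only property
          boolean isCacheEnabled();               // read/write property
          void setCacheEnabled(boolean enabled);
      }

      public static class PipelineMonitor implements PipelineMonitorMBean {
          private boolean cacheEnabled = true;
          public int getRequestCount() { return 0; }
          public boolean isCacheEnabled() { return cacheEnabled; }
          public void setCacheEnabled(boolean enabled) { this.cacheEnabled = enabled; }
      }

      public static void main(String[] args) throws Exception {
          MBeanServer server = MBeanServerFactory.createMBeanServer();
          ObjectName name = new ObjectName("cocoon:type=pipeline-monitor");
          server.registerMBean(new PipelineMonitor(), name);
          System.out.println(server.getAttribute(name, "RequestCount"));
      }
  }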

Cheers all,
Thor HW


Re: jmx libs and wrappers

2003-10-13 Thread Thor Heinrichs-Wolpert
oh, and one library I really like is MX4J which I've just realized is 
distributed under an Apache license.  So unless anyone objects I'll use 
the MX4J as the core JMX services.

Now should it be an ANT get from sourceforge, ibiblio or just a local 
lib?

Cheers,
Thor HW
On Monday, October 13, 2003, at 12:02  PM, Thor Heinrichs-Wolpert wrote:

<intro>
JMX is a standardized framework to uniformly instrument disparate 
chunks of Java code in a JVM.  JMX can be used to load, start, manage, 
monitor and stop software components in a standardized way.  There are 
more and more J2EE containers that continue to adopt JMX as the 
default instrumentation layer.  Several systems use the JMX 
environment to communicate between components and by doing so can load 
and unload components at runtime where the Bean interface is the same, 
but the implementations differ.
</intro>

<background>
For basic JMX functionality the interfaces are well described and are 
therefore somewhat portable across implementations.

What is not well described nor portable are the JMX Connectors.  There 
is an emerging RI for remote management, but it is not released and 
therefore does not meet with standard for cocoon to only use 
production released components.  Implementing any JMX infrastructure 
now will require some customized code to utilize the specific 
connectors provided to access its JMX implementation.  The impacts can 
be mitigated by implementing a layer that mimics the JMX Remote API, 
which is looking as if it is somewhat stable.
</background>

What are the restrictions for the libs we can use within cocoon?

Can I use Sun libs, or Sun RI libs for basic jmx functionality?  JMX 
libs are also available from IBM and jboss (although the jboss ones 
are gpl'ed).

Does it matter if I use an ANT get task to download the lib from 
ibiblio.org?

My first pass will probably be an XML descriptor that can be used to 
describe the properties that the MBean (Managed Bean) should expose.  
I wont add in any of the auto-loading, distribution or module 
collaboration via a jmx back-plane until I see more of the 
cocoon-blocks.

Cheers all,
Thor HW




Re: [GT2003] Thank you

2003-10-09 Thread Thor Heinrichs-Wolpert
Thanks for the great sessions and hospitality.
Everyone involved put in a lot of work to make this a great event.  I
sure appreciated it.

I'm looking forward to next year's!

Cheers all,
Thor HW