Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

1. Yes, you can buffer the whole object graph as long as it is small enough.
2. In the end threads are just an abstraction on top of an inherently serial
machine that produces asynchronous events (a CPU with interrupts),
providing a nice programming model in languages that do not support monads.
It might be worth investigating http://www.paralleluniverse.co/quasar/ 
for River.
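Point 1 above can be sketched in plain java.io/java.nio terms: serialize the whole graph into an in-memory buffer first, so the ObjectOutputStream never touches a socket, and only the finished bytes are handed to a non-blocking channel. A minimal sketch (the class name is invented, and checked exceptions are wrapped for brevity):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

class BufferedGraph {
    // Serialize the whole object graph into memory. Nothing here can block
    // on the network, because ObjectOutputStream only writes to a byte array.
    static ByteBuffer marshal(Serializable graph) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(graph);
            }
            // The wrapped buffer can then be drained by a Selector-driven,
            // non-blocking SocketChannel write loop.
            return ByteBuffer.wrap(bos.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Inverse direction: once a non-blocking read loop has accumulated the
    // complete message, deserialization is again purely in-memory.
    static Object unmarshal(ByteBuffer buf) {
        try {
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

This is exactly the caveat in point 1: the approach only works while the whole graph fits comfortably in memory.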


OTOH I have also read papers about handling the C10M problem. These guys are 
serious :) .
The general conclusion is that any "general" abstraction (such as 
threads) breaks. Context switches are a no-no.
So you implement the full event- (interrupt-) driven network stack in user 
space - an architecture somewhat similar to an exokernel.

See https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4

But we are diverging...
Cheers,
Michal

Niclas Hedhman wrote:

Ok, but assuming that you are not talking about GB-sized object graphs, it
is more an issue with RMI than Serialization, because you can create
non-blocking I/O "on top", just like Jetty has non-blocking I/O "on top" of
the equally blocking Servlet API. Right? I assume that there is a similar
thing in Tomcat, because AFAIK Google AppEngine runs on Tomcat.
It is not required (even if it often is the case) that the ObjectOutputStream
is directly connected to the underlying OS file descriptor. I am pretty sure
that it would be a mistake to try to re-design all software that writes to a
stream to have a non-blocking design.

Additionally, while looking into this, I came across
https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren_html/index.html,
which might be of interest to you. Not totally relevant, but still an
interesting read.

Cheers

On Sun, Feb 5, 2017 at 2:04 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


You do not have to do any IO in readObject/writeObject.

The fact that you have readObject/writeObject methods means that you are
forced to do blocking IO.
It is simple:

readObject(...) {
   ois.defaultReadObject();
   //the above line MUST be blocking because
   verifyMyState();
   //this line expects the data to be read
}

Similarly:

writeObject(ObjectOutputStream oos) {
   oos.writeInt(whateverField);
   //buffers full? You need to block, sorry
   oos.writeObject(...)
}

Thanks,
Michal

Niclas Hedhman wrote:

I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"  wrote:


Don't know about other serialization uses but my issue with it is that it
precludes using it in non-blocking IO.
Sorry if I haven't been clear enough.


Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"  
  wrote:


It is not possible to do non-blocking as in "non-blocking IO" - meaning
threads do not block on IO operations.
Just google the "C10K problem".
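For context, the non-blocking style at stake here is the readiness-based model from java.nio (the C10K-era answer): one thread parks in select() over many channels and only reads a channel once data is already available. A tiny self-contained illustration, using a Pipe as a stand-in for a socket (the method name is invented):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

class SelectDemo {
    // Returns the number of bytes read once the selector reports readiness.
    static int readWhenReady(byte[] payload) {
        try (Selector selector = Selector.open()) {
            Pipe pipe = Pipe.open();                      // stand-in for a socket
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);

            pipe.sink().write(ByteBuffer.wrap(payload));  // "peer" sends data

            selector.select();                            // parks until data is ready
            ByteBuffer buf = ByteBuffer.allocate(1024);
            int n = pipe.source().read(buf);              // guaranteed not to block now
            pipe.sink().close();
            pipe.source().close();
            return n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The friction being discussed in the thread is that ObjectInputStream's read-until-satisfied contract does not fit this event-driven model directly.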

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What do readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)"  

  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter  

  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.


Re: OSGi

2017-02-04 Thread Niclas Hedhman
Cool about the ModBus - I originated in that space, have done some smaller
work on ModBus itself, and know exactly what you are talking about. (Once I
had the ability to read 8O1 serial data from digital oscilloscopes (ones
that could store a waveform)... bit 0 to the right, 1 is low... without much
trouble.)

And I agree with what you say... Inexperience is damaging, and combined
with "no constraints" it is devastating. But my view is that the main
problems when it comes to adoption are two-fold:

a) Multi-platform is a big deal, except for highly specialized shops, you
can't escape this, and it is fueled by every opinion-maker out there.

b) Designing for distributed sharing is difficult, and when code and data
is "the same thing" (unlike your ModBus example) the "difficult" part
escalates quickly. When Java code implementation details (say order of
ooi.readObject() in one type) becomes the only "documented" form of the
protocol (face it, that is what happened almost always), I can't blame
architects and other decision-makers when they say "Nah, don't think so.".
And, most people speak of these things in "abstracts" instead of concrete
use-cases, which makes it impossible to argue in favor of something that
has been berated by opinion-makers (back to point a)

So, what do I do nowadays? Well, I am currently working on larger systems
and there I am firmly in the HATEOAS camp, trying to stay true to
Fielding's intents (constraints, resources, ...) when he described REST
nearly two decades ago. Even here the "script kiddies" are complaining that
it is too hard, and go with RPC-over-HTTP-with-JSON instead.

Cheers
Niclas

On Sun, Feb 5, 2017 at 2:35 AM, Gregg Wonderly  wrote:

>
> > On Feb 4, 2017, at 3:37 AM, Niclas Hedhman  wrote:
> >
> > Gregg,
> > I know that you can manage to "evolve" the binary format if you are
> > incredibly careful and do not make mistakes. BUT, that seems really hard,
> > since EVEN Sun/Oracle state that using Serialization for "long-lived
> > objects" is highly discouraged. THAT is a sign that it is not nearly as
> > easy as you make it sound, and it is definitely different from XML/JSON,
> > as once the working codebase is lost (i.e. either literally lost (yes, I
> > have been involved in trying to restore that), or modified so much that
> > compatibility broke, which happens when serialization is not the primary
> > focus of a project) then you are pretty much screwed forever, unlike
> > XML/JSON.
>
> I think that there are some realistic issues as you describe here.
> Certainly if the XML or JSON can be “read”, you can get some of the data
> out of it.  Java Serialization or any binary structure requires more
> knowledge to extract the “data” from.  I am not going to really argue that
> point other than to say that for sure, you have to understand the
> implications of this failure mode and do the right things up front so that
> you do have documentation, a documented serial version id plan etc.  Not
> impossible, but indeed additional “work”.
>
> >
> > Now, you may say, that is for "long lived serialized states" but we are
> > dealing with "short lived" ones. However, in today's architectures and
> > platforms, almost no organization manages to keep all parts of a system
> > synchronized when it comes to versioning. Different parts of a system are
> > upgraded at different rates. And this is essentially the same as "long
> > lived objects" ---  "uh this was serialized using LibA 1.1, LibB 2.3 and
> > JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8", do you see the
> > similarity? If not, then I will not be able to convince you. If you do,
> > then ask "why did Sun/Oracle state that long-lived objects with Java
> > Serialization were a bad idea?", or were they also clueless on how to do
> > it right, which seems to be your actual argument.
>
> My actual argument is that “data” is “data”.  It doesn’t matter how it’s
> “structured”.  The only thing that JSON or XML has on “binary” is that you
> can “look” at it with your eyes and feel more comfortable with what you
> see.  If I typed the following two sets of byte sequences at you, what
> could you tell me about them?
>
> 00 01 00 00 00 06 01 03 00 00 00 02
> 00 01 00 00 00 06 01 04 42 28 00 00
>
> In the right context, you could tell me that this is a ModbusTCP request
> for two holding registers, 40001 and 40002.  Further, you’d look at the
> reply packet and say it looks like the returned two registers are a
> floating-point number because the first byte is 42.  Further, you could
> tell me that the floating-point number itself is actually the value 42.0.
>
> My point is that there is always context (a ModbusTCP conversation log),
> knowledge (I know Modbus like the back of my hand) and experience (I know
> what the general structure of IEEE floating point is, and because I have
> stared at these byte streams when I knew there were floating-point numbers
> involved, I can recognize this).
>
> Would I be faster to know what I was looking

Re: Serialization issues

2017-02-04 Thread Niclas Hedhman
Ok, but assuming that you are not talking about GB-sized object graphs, it
is more an issue with RMI than Serialization, because you can create
non-blocking I/O "on top", just like Jetty has non-blocking I/O "on top" of
the equally blocking Servlet API. Right? I assume that there is a similar
thing in Tomcat, because AFAIK Google AppEngine runs on Tomcat.
It is not required (even if it often is the case) that the ObjectOutputStream
is directly connected to the underlying OS file descriptor. I am pretty sure
that it would be a mistake to try to re-design all software that writes to a
stream to have a non-blocking design.

Additionally, while looking into this, I came across
https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren_html/index.html,
which might be of interest to you. Not totally relevant, but still an
interesting read.

Cheers

On Sun, Feb 5, 2017 at 2:04 AM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> You do not have to do any IO in readObject/writeObject.
>
> The fact that you have readObject/writeObject methods means that you are
> forced to do blocking IO.
> It is simple:
>
> readObject(...) {
>   ois.defaultReadObject();
>   //the above line MUST be blocking because
>   verifyMyState();
>   //this line expects the data to be read
> }
>
> Similarly:
>
> writeObject(ObjectOutputStream oos) {
>   oos.writeInt(whateverField);
>   //buffers full? You need to block, sorry
>   oos.writeObject(...)
>
> }
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> I am asking what Network I/O you are doing in the readObject/writeObject
> methods. Because to me I can't figure out any use-case where that is a
> problem...
>
> On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  wrote:
>
>
> Don't know about other serialization uses but my issue with it is that it
> precludes using it in non-blocking IO.
> Sorry if I haven't been clear enough.
>
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> And what I/O (network I/O I presume) are you doing during the serialization
> (without RMI)?
>
> On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  
>  wrote:
>
>
> It is not possible to do non-blocking as in "non-blocking IO" - meaning
> threads do not block on IO operations.
> Just google the "C10K problem".
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> I don't follow. What do readObject/writeObject have to do with blocking or
> not? You could spin up executors to do the work in parallel if you so wish.
> And why is "something else" less blocking? And what are you doing that is
> "blocking" since the "work" is (or should be) CPU only, there is limited
> (still) opportunity to do that non-blocking (whatever that would mean in
> CPU-only circumstance). Feel free to elaborate... I am curious.
>
>
>
> On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>
> 
>  wrote:
>
>
> Unfortunately due to "writeObject" and "readObject" methods that have to
> be handled (to comply with the spec) - it is not possible to
> serialize/deserialize in a non-blocking fashion.
> So yes... - it is serialization per se.
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> Oh, well that is not "Serialization" per se... No wonder I didn't get it.
>
> On Sat, Feb 4, 2017 at 7:20 PM, Peter   
> 
> 
> 
>   wrote:
>
>
> On 4/02/2017 9:09 PM, Niclas Hedhman wrote:
>
>
> but rather with the APIs - it is inherently blocking by design.
>
> I am not sure I understand what you mean by that.
>
>
>
> He means the client thread that makes the remote call blocks waiting for
> the remote end to process the request and respond.
>
> Cheers,
>
> Peter.
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Fwd: Fwd: IoT at the ASF -- ApacheCon and Project DOAPs [was: Does your project play in the IoT space?]

2017-02-04 Thread Patricia Shanahan

Does River have/need a position on the general Apache IoT push?


 Forwarded Message 
Subject: Fwd: IoT at the ASF -- ApacheCon and Project DOAPs [was: Does 
your project play in the IoT space?]

Date: Sat, 4 Feb 2017 14:31:31 -0600
From: Trevor Grant 
Reply-To: d...@community.apache.org
To: us...@apex.apache.org, u...@beam.incubator.apache.org, 
d...@brooklyn.apache.org, u...@cassandra.apache.org, 
d...@community.apache.org, d...@edgent.apache.org, u...@geode.apache.org, 
us...@iota.incubator.apache.org, us...@kafka.apache.org, 
u...@mahout.apache.org, d...@mynewt.incubator.apache.org, 
us...@nifi.apache.org, us...@tomcat.apache.org, u...@zeppelin.apache.org


The first Apache IoT mini-con is happening this year at ApacheCon, Miami!!

http://us.apacheiot.org/

The following is a snippet from Roman Shaposhnik on the dev@community
list, which perfectly describes the spirit of the mini-con:

"The whole premise of the track will be "Not your gramps IoT" which means
that unlike IoT events that grew out of the embedded industry we're talking
a very holistic, system view on IoT. Our hope is that Apache IoT will be a
meeting place for next generation IoT 2.0 built by developer, for
developers under the Apache Way governance model.

ASF's breadth starts making a lot of sense when you consider what kind of
technology is needed to build an end-to-end user experience in IoT 2.0: you
start at the edge, you consider the gateways, go to a data center and end
up on a client mobile device. All technology providers are now realizing
that the key to success is allowing developers unprecedented ease of
management and deployment of their business logic all throughout these
layers. Just look at what Amazon is doing with Lambda on the edge
(Amazon's Greengrass)!

The good news is that at ASF we've got all the building blocks available to
us in various communities. So regardless of whether you're an Apache
Mynewt (incubating) developer working on the far fringes of the edge, or
you are an Apache Brooklyn developer automating microservices provisioning
or you're plumbing data streams with Kafka, NiFi or Geode or
you're analyzing that data with Hadoop or you're a Tomcat or httpd
guru facilitating the end-user experience -- we all have pieces to
contribute to the IoT 2.0 puzzle."

I apologize for the mass email (and if you got this multiple times)
however, the Call for Submissions closes February 11th.  All of the
communities copied have something of significance to contribute to IoT, and
the list is not exhaustive - please feel free to forward to anyone who might be
interested.

Thanks and see you in Miami!


Trevor Grant
Data Scientist
https://github.com/rawkintrevo
http://stackexchange.com/users/3002022/rawkintrevo
http://trevorgrant.org

*"Fortunate is he, who is able to know the causes of things."  -Virgil*



Re: OSGi

2017-02-04 Thread Gregg Wonderly

> On Feb 4, 2017, at 5:09 AM, Niclas Hedhman  wrote:
> 
> see below
> 
> On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
> michal.klec...@xpro.biz> wrote:
>> In the end all of the arguments against Java Object Serialization boil
> down to:
>> "It is easy to use but if not used carefully it will bite you - so it is
> too easy to use"
> 
> Well, kind of...
> If you ever need to deserialize a serialVersionUid=1 with a codebase where
> it is now serialVersionUid != 1, I wouldn't call it "easy to use" anymore.
> Back in the days when I used this stuff heavily, I ended up never changing
> serialVersionUid. If I needed to refactor enough to lose compatibility,
> I would create a new class and make an adapter.

And this is one of the patterns that you had to learn.  I often never change 
serialVersionUid beyond 1, as you suggest here.  Instead, I use an internal, 
private version number in a class field to help me know how to evolve the data. 
 For each version, I know which “data” will not be initialized.  I can have a 
plan for a version 10 object to know how to initialize data introduced in 
versions 4, 7 and 8, which will be null references or otherwise unusable.  The 
readObject() can initialize, manufacture or otherwise evolve that object 
correctly.
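A minimal sketch of this pattern: serialVersionUID pinned at 1, with a private data-version field driving evolution in readObject(). All class and field names here are invented for illustration, not taken from any real River class:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// serialVersionUID never changes; an internal field records which "layout"
// of the class wrote the stream, and readObject() upgrades older state.
class Reading implements Serializable {
    private static final long serialVersionUID = 1L;

    private int dataVersion = 3;  // bumped whenever fields are added
    private double value;         // since version 1
    private String units;         // added in version 2; null in older streams
    private double scale;         // added in version 3; 0.0 in older streams

    Reading(double value, String units, double scale) {
        this.value = value;
        this.units = units;
        this.scale = scale;
    }

    private void readObject(ObjectInputStream ois)
            throws IOException, ClassNotFoundException {
        ois.defaultReadObject();
        // Evolve state written by older versions instead of failing on it.
        if (dataVersion < 2) units = "unknown";
        if (dataVersion < 3) scale = 1.0;
        dataVersion = 3;          // fully upgraded in memory
    }

    String units() { return units; }
    double scaledValue() { return value * scale; }

    // In-memory round-trip helper; wraps checked exceptions for brevity.
    static Reading roundtrip(Reading r) {
        try {
            java.io.ByteArrayOutputStream bos = new java.io.ByteArrayOutputStream();
            new java.io.ObjectOutputStream(bos).writeObject(r);
            return (Reading) new java.io.ObjectInputStream(
                    new java.io.ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The point of the pattern is that streams written by any older layout still deserialize, and readObject() fills in the fields that layout could not have written.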

Gregg

Re: OSGi

2017-02-04 Thread Gregg Wonderly

> On Feb 4, 2017, at 4:21 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Once you transfer the code with your data - the issue of code version 
> synchronization disappears, doesn't it?
> It also makes the wire data format irrelevant. At least for "short lived 
> serialized states".
> 
> I fail to understand how JSON or XML changes anything here.
> 
> In the end all of the arguments against Java Object Serialization boil down 
> to:
> "It is easy to use but if not used carefully it will bite you - so it is too 
> easy to use"
> 
> What I do not like about Java Object Serialization has nothing to do with the 
> format of persistent data
> but rather with the APIs - it is inherently blocking by design.

Yes, it is computationally involved and that can cause some problems for the 
thread of execution that encounters it.  My work on delayed unmarshalling 
and the notion of never-preferred classes precisely targets this issue, so 
that you can encounter that “blocking” only at the moment you have to.

Gregg

> 
> Thanks,
> Michal
> 
> Niclas Hedhman wrote:
>> Gregg,
>> I know that you can manage to "evolve" the binary format if you are
>> incredibly careful and do not make mistakes. BUT, that seems really hard,
>> since EVEN Sun/Oracle state that using Serialization for "long-lived objects"
>> is highly discouraged. THAT is a sign that it is not nearly as easy as you
>> make it sound, and it is definitely different from XML/JSON, as once
>> the working codebase is lost (i.e. either literally lost (yes, I have been
>> involved in trying to restore that), or modified so much that compatibility
>> broke, which happens when serialization is not the primary focus of a
>> project) then you are pretty much screwed forever, unlike XML/JSON.
>> 
>> Now, you may say, that is for "long lived serialized states" but we are
>> dealing with "short lived" ones. However, in today's architectures and
>> platforms, almost no organization manages to keep all parts of a system
>> synchronized when it comes to versioning. Different parts of a system are
>> upgraded at different rates. And this is essentially the same as "long
>> lived objects" ---  "uh this was serialized using LibA 1.1, LibB 2.3 and
>> JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8", do you see the
>> similarity? If not, then I will not be able to convince you. If you do,
>> then ask "why did Sun/Oracle state that long-lived objects with Java
>> Serialization were a bad idea?", or were they also clueless on how to do it
>> right, which seems to be your actual argument.
>> 
>> And I think (purely speculatively) that many people saw exactly this problem
>> quite early on, whereas I was at the time mostly in relatively small,
>> confined and controlled environments, where up-to-date was managed. And it
>> took me much longer to realize the downsides that are inherent.
>> 
>> Cheers
>> Niclas
>> 
>> 
> 



Re: OSGi

2017-02-04 Thread Gregg Wonderly

> On Feb 4, 2017, at 3:37 AM, Niclas Hedhman  wrote:
> 
> Gregg,
> I know that you can manage to "evolve" the binary format if you are
> incredibly careful and do not make mistakes. BUT, that seems really hard,
> since EVEN Sun/Oracle state that using Serialization for "long-lived objects"
> is highly discouraged. THAT is a sign that it is not nearly as easy as you
> make it sound, and it is definitely different from XML/JSON, as once
> the working codebase is lost (i.e. either literally lost (yes, I have been
> involved in trying to restore that), or modified so much that compatibility
> broke, which happens when serialization is not the primary focus of a
> project) then you are pretty much screwed forever, unlike XML/JSON.

I think that there are some realistic issues as you describe here.  Certainly if 
the XML or JSON can be “read”, you can get some of the data out of it.  Java 
Serialization or any binary structure requires more knowledge to extract the 
“data” from.  I am not going to really argue that point other than to say that 
for sure, you have to understand the implications of this failure mode and do 
the right things up front so that you do have documentation, a documented 
serial version id plan etc.  Not impossible, but indeed additional “work”.

> 
> Now, you may say, that is for "long lived serialized states" but we are
> dealing with "short lived" ones. However, in today's architectures and
> platforms, almost no organization manages to keep all parts of a system
> synchronized when it comes to versioning. Different parts of a system are
> upgraded at different rates. And this is essentially the same as "long
> lived objects" ---  "uh this was serialized using LibA 1.1, LibB 2.3 and
> JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8", do you see the
> similarity? If not, then I will not be able to convince you. If you do,
> then ask "why did Sun/Oracle state that long-lived objects with Java
> Serialization were a bad idea?", or were they also clueless on how to do it
> right, which seems to be your actual argument.

My actual argument is that “data” is “data”.  It doesn’t matter how it’s 
“structured”.  The only thing that JSON or XML has on “binary” is that you can 
“look” at it with your eyes and feel more comfortable with what you see.  If I 
typed the following two sets of byte sequences at you, what could you tell me 
about them?

00 01 00 00 00 06 01 03 00 00 00 02
00 01 00 00 00 06 01 04 42 28 00 00

In the right context, you could tell me that this is a ModbusTCP request for 
two holding registers, 40001 and 40002.  Further, you’d look at the reply 
packet and say it looks like the returned two registers are a floating-point 
number because the first byte is 42.  Further, you could tell me that the 
floating-point number itself is actually the value 42.0.

My point is that there is always context (a ModbusTCP conversation log), 
knowledge (I know Modbus like the back of my hand) and experience (I know what 
the general structure of IEEE floating point is, and because I have stared at 
these byte streams when I knew there were floating-point numbers involved, I 
can recognize this).
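The decoding described above can be checked mechanically: the four data bytes of the reply, 42 28 00 00, are a big-endian IEEE 754 single-precision float (sign 0, exponent 0x84 giving 2^5, mantissa 1.3125, so 32 × 1.3125 = 42.0). A short sketch:

```java
// Decode four big-endian bytes as an IEEE 754 single-precision float,
// as in the ModbusTCP reply bytes discussed above: 42 28 00 00.
class ModbusFloatDemo {
    static float decode(int b0, int b1, int b2, int b3) {
        int bits = (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
        return Float.intBitsToFloat(bits);
    }

    public static void main(String[] args) {
        System.out.println(decode(0x42, 0x28, 0x00, 0x00)); // prints 42.0
    }
}
```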

Would I be faster to know what I was looking at if I saw

{ “downhole_temp” : 42 }

instead?  Sure.  But that “costs” bandwidth across my cellular modem link, and 
will further decrease the total number of requests I can send across that fixed 
bandwidth, if I send JSON instead of binary data.

My point is that it’s just data, but it satisfies another need I have: 
reducing bandwidth between the source of the data and the user of the data 
improves system performance.
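To put a rough number on that trade-off, using the example field name from above (illustrative only, not a real protocol): the same reading costs 4 bytes as a raw IEEE 754 float versus about 24 bytes as the JSON text.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class PayloadSize {
    public static void main(String[] args) {
        float downholeTemp = 42.0f;

        // Raw binary encoding: exactly 4 bytes on the wire.
        byte[] binary = ByteBuffer.allocate(4).putFloat(downholeTemp).array();

        // The JSON rendering of the same value, as in the example above.
        byte[] json = "{ \"downhole_temp\" : 42 }"
                .getBytes(StandardCharsets.US_ASCII);

        System.out.println("binary: " + binary.length + " bytes"); // 4
        System.out.println("json:   " + json.length + " bytes");   // 24
    }
}
```

On a fixed-bandwidth link that is roughly a 6x difference per reading, before any compression.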

Additionally, it is not “free” to marshal and unmarshal JSON or XML for use by 
an application.  I use large (100KB or more) XML documents to “describe” the 
details of devices that use Modbus communications.  I do that because I can 
then use XSLT to transform them into HTML documents for human consumption, to 
review these technical descriptions visually where it is easier to depict the 
details.  Thus 42 28 00 00 becomes something like { “downhole_temp” : 42 } to 
ease consumption for those who don’t have the training, experience and 
knowledge I have.

Java Serialization has adequate control points to manage evolution of the data 
in ways that are “evolution”.  You do have to understand precisely what the 
effect of your changes to the “data” in your object is, and how code 
referencing that “data” either directly or “functionally” can cope with what 
is going on.

It is an important detail.  It does require training and experience.  You do 
have to understand some basic patterns for data evolution which will allow you 
to be successful rather than frustrated by your inexperience or lack of 
knowledge leading to failure.

A large majority of the web development and evolution happened because 
inexperienced people were left in charge of the new platform.  All the other 
software developers were already working on other things where th

Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

You do not have to do any IO in readObject/writeObject.

The fact that you have readObject/writeObject methods means that you are 
forced to do blocking IO.

It is simple:

readObject(...) {
  ois.defaultReadObject();
  //the above line MUST be blocking because
  verifyMyState();
  //this line expects the data to be read
}

Similarly:

writeObject(ObjectOutputStream oos) {
  oos.writeInt(whateverField);
  //buffers full? You need to block, sorry
  oos.writeObject(...)
}

Thanks,
Michal

Niclas Hedhman wrote:

I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Don't know about other serialization uses but my issue with it is that it
precludes using it in non-blocking IO.
Sorry if I haven't been clear enough.


Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"  wrote:


It is not possible to do non-blocking as in "non-blocking IO" - meaning
threads do not block on IO operations.
Just google the "C10K problem".

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What do readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. 
o.)"  
  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter  

  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.



Re: Serialization Formats, Previously: OSGi

2017-02-04 Thread Niclas Hedhman
Thumbs Up! Agree...

On Sun, Feb 5, 2017 at 1:33 AM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> I cannot disagree with rants about software industry state.
>
> Let's get back to technical solutions to non-technical problems. I am
> interested in providing tools - whether will be used... is a different
> story.
>
> That said...
> IMHO Jini - in all its greatness - DID NOT solve the problem of Java code
> mobility in any way.
> As has been discussed on this list several times, the way it "solved" it
> is:
> - inherently insecure (because object validation is done _after_ code
> execution)
> - not capable of transferring complicated object graphs - hence it
> cannot be used in many different interesting scenarios.
>
> Partial solutions are worse than no solutions - they confuse users
> (in our case programmers) and in the end people lose interest.
>
> I am not a big fan of Java containers - be it JEE or any other (OSGi
> included).
> The industry seems to understand they are a dead end - especially in the
> age of Docker etc - and is moving away from them (not that it is moving in
> a very meaningful direction :) ).
>
> I have worked with OSGi for several years and it was a difficult
> relationship :)
> Today I prefer simpler solutions: "java -jar 
> my-very-complicated-and-important-service.jar"
> is the way to go.
>
> Thanks,
> Michal
>
>
>
> Niclas Hedhman wrote:
>
> (I think wrong thread, so to please Peter, I copied it all into here)
>
> Correct, it is not different. But you are missing the point: CONSTRAINTS.
> And without constraints, developers are capable of doing these good deeds
> (such as your example) and many very poor ones. The knife cuts your meat or
> butchers your neighbor... It is all about the constraints, something that
> few developers are willing to admit makes our work better.
>
> As for the "leasable and you have..."; The problem is that you are probably
> wrong on that front too, like the OSGi community have learned the hard way.
> There are too many ways software entangle classloading. All kinds of shit
> "registers itself" in the bowels of the runtime, such as with the
> DriverManager, Loggers, URLHandlers or PermGenSpace (might be gone in Java
> 8). Then add 100s of common libraries that also does a poor job in
> releasing "permanent" resources/instances/classes... The stain sticks, but
> the smell is weak, so often we can't tell other than memory leak during
> class updates.
> And why do we have all this mess? IMHO: lack of constraints, lack of
> lifecycle management in "everything Java" (and most languages) and lack of
> discipline (something Gregg has, and I plus 5 million other Java devs don't
> have). OSGi is not as successful as it "should" be (SpringSource gave up)
> because it makes the "stain" stink really badly. OSGi introduces
> constraints and fails spectacularly when we try to break or circumvent them.
>
> River as it currently stands has "solved" Java code mobility, Java leases,
> dynamic service registry with query capabilities and much more. But as with
> a lot of good technology, the masses don't buy it. The ignorant masses are
> now in Peter Deutsch's Fallacies of Distributed Computing territory,
> thinking that microservices on JAX-RS are going to save the day (they
> aren't - I am rescuing a project out of it now).
> Distributed OSGi tried to solve this problem, and AFAICT has serious
> problems working reliably in production environments. What do I learn? This
> is hard, but every 5 years we double in numbers, so half the developer
> population is inexperienced and repeats the same mistakes again and again.
>
> Sorry for highlighting problems, mostly psychological/philosophical rather
> than technological. I don't have the answers, other than: Without
> Constraints, Technology Fails. And the better the constraints are defined,
> the better the likelihood that it can succeed.
>
>
>
>
> On Sat, Feb 4, 2017 at 8:59 PM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  wrote:
>
>
> Comments below.
>
> Niclas Hedhman wrote:
>
> see below
>
> On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>   wrote:
>
> Once you transfer the code with your data - the issue of code version
>
> synchronization disappears, doesn't it?
>
> It also makes the wire data format irrelevant. At least for "short lived
>
> serialized states".
>
> Only works if you have no exchange with the environment it is executing in.
> And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
> web browser they try to define it as DOM + a handful of other well-defined
> objects. In the case of Java Serialization, it is all classes reachable from
> the loading classloader. And I think Gregg is trying to argue that if one
> is very prudent, one needs to manage this well.
>
>
> But how is "exchange with the environment it is executing"
> actually different when installing code on demand from installing it in
> advance???
>
> The whole point IMHO is to shift thinking from "moving data" to "exchange
> configured software" - think Java specific Docker on steroids.

Re: Serialization issues

2017-02-04 Thread Niclas Hedhman
I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> Don't know about other serialization uses but my issue with it is that it
> precludes using it in non-blocking IO.
> Sorry if I haven't been clear enough.
>
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> And what I/O (network I/O I presume) are you doing during the serialization
> (without RMI)?
>
> On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  wrote:
>
>
> It is not possible to do non-blocking as in "non blocking IO" - meaning -
> threads do not block on IO operations.
> Just google "C10K problem"
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> I don't follow. What does readObject/writeObject have to do with blocking or
> not? You could spin up executors to do the work in parallel if you so wish.
> And why is "something else" less blocking? And what are you doing that is
> "blocking" since the "work" is (or should be) CPU only, there is limited
> (still) opportunity to do that non-blocking (whatever that would mean in
> CPU-only circumstance). Feel free to elaborate... I am curious.
>
>
>
> On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)" wrote:
>
>
> Unfortunately due to "writeObject" and "readObject" methods that have to
> be handled (to comply with the spec) - it is not possible to
> serialize/deserialize in a non-blocking fashion.
> So yes... - it is serialization per se.
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> Oh, well that is not "Serialization" per se... No wonder I didn't get it.
>
> On Sat, Feb 4, 2017 at 7:20 PM, Peter wrote:
>
>
> On 4/02/2017 9:09 PM, Niclas Hedhman wrote:
>
>
> but rather with the APIs - it is inherently blocking by design.
>
> I am not sure I understand what you mean by that.
>
>
>
> He means the client thread that makes the remote call blocks waiting for
> the remote end to process the request and respond.
>
> Cheers,
>
> Peter.
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: OSGi

2017-02-04 Thread Niclas Hedhman
On Sun, Feb 5, 2017 at 1:23 AM, Gregg Wonderly  wrote:
> Okay, rant completed.

Gregg, you are probably right about the web: endless problems and sheer
will-power have made it apparently work.


> What other details of Jini vs the Web are problem areas
> to you?  What makes the “web” better than Jini as a mobile
> code platform?  The “browser” is a platform.  The notions of
> serviceUI were developed to create a “browser” like platform
> as a starting point for services to export a complete client
> experience as a web page is today.  Are you using
> ServiceUI to take advantage of that, or do your clients
> have to create their own page or task to talk to your services?

If you are asking me personally; I have not used Jini in any meaningful way
for ~10 years now. I still think it is a fascinating concept, and have a
lot of fond memories of "impossible" achievements we did back then
(2000-2005). And no, we were not using anything beyond Reggie, and had our
own system for enabling functionality as services came up or went down. And
once we figured out all the small traps, it was smooth sailing. BUT, our
usecase didn't actually need it. And I never managed to sell Jini to anyone
else later, primarily because it demands Java clients.

But Voyager was another of those incredibly fascinating technologies that I
experimented with in 1997/98 somewhere. Again, couldn't find use-cases to
use it for real, though.

In much broader terms, data is king nowadays, and technology
platforms are subservient and somewhat uninteresting to decision-makers.
Sad indeed, but for a majority of us, that is what we need to deal with.

Cheers
--
Niclas Hedhman, Software Developer
http://polygene.apache.org - New Energy for Java


Draft of February 2017 report

2017-02-04 Thread Patricia Shanahan

## Description:

 - Apache River software provides a standards-compliant Jini service.

## Issues:

 - There are no issues requiring board attention at this time.

## Activity:

 - Continued discussion of River's future direction, this quarter 
mainly on OSGi and serialization.


 - Zsolt Kúti reworked the River web site.

## Health report:

 - The future directions discussion continues.

 - Attracting new developers will remain difficult until the future 
direction is firmed up and made visible. We did add one committer this 
quarter.


## PMC changes:

 - Currently 11 PMC members.
 - No new PMC members added in the last 3 months
 - Last PMC addition was Bryan Thompson on Sun Aug 30 2015

## Committer base changes:

 - Currently 14 committers.
 - Zsolt Kúti was added as a committer on Wed Dec 07 2016

## Releases:

 - River-3.0.0 was released on Wed Oct 05 2016

## Mailing list activity:

 - There has been a recent increase in dev@ activity due to a lively 
technical discussion of OSGi and serialization in the context of River. 
That discussion is on-going.


 - dev@river.apache.org:
- 95 subscribers (up 0 in the last 3 months):
- 201 emails sent to list (114 in previous quarter)

 - u...@river.apache.org:
- 96 subscribers (up 0 in the last 3 months):
- 2 emails sent to list (3 in previous quarter)


## JIRA activity:

 - 2 JIRA tickets created in the last 3 months
 - 1 JIRA ticket closed/resolved in the last 3 months


Re: Serialization Formats, Previously: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

I cannot disagree with rants about the state of the software industry.

Let's get back to technical solutions to non-technical problems. I am 
interested in providing tools - whether they will be used... is a different 
story.


That said...
IMHO Jini - in all its greatness - DID NOT solve the problem of Java 
code mobility in any way.
As has been discussed on this list several times, the way it "solved" 
it is:
- inherently insecure (because object validation is done _after_ code 
execution)
- not capable of transferring complicated object graphs - hence it 
cannot be used in many different interesting scenarios.


Partial solutions are worse than no solutions - they confuse users 
(in our case programmers) and in the end people lose interest.


I am not a big fan of Java containers - be it JEE or any other (OSGi 
included).
The industry seems to understand they are a dead end - especially in 
the age of Docker etc - and is moving away from them (not that it is 
moving in a very meaningful direction :) ).


I have worked with OSGI for several years and it was a difficult 
relationship :)
Today I prefer simpler solutions: "java -jar 
my-very-complicated-and-important-service.jar" is the way to go.


Thanks,
Michal


Niclas Hedhman wrote:

(I think wrong thread, so to please Peter, I copied it all into here)

Correct, it is not different. But you are missing the point; CONSTRAINTS.
And without constraints, developers are capable of doing these good deeds
(such as your example) and many very poor ones. The knife cuts your meat or
butchers your neighbor... It is all about the constraints, something that
few developers are willing to admit makes our work better.

As for the "leasable and you have..."; The problem is that you are probably
wrong on that front too, like the OSGi community have learned the hard way.
There are too many ways software entangles classloading. All kinds of shit
"registers itself" in the bowels of the runtime, such as with the
DriverManager, Loggers, URLHandlers or PermGenSpace (might be gone in Java
8). Then add 100s of common libraries that also do a poor job in
releasing "permanent" resources/instances/classes... The stain sticks, but
the smell is weak, so often we can't tell, other than a memory leak during
class updates.
And why do we have all this mess? IMHO; Lack of constraints, lack of
lifecycle management in "everything Java" (and most languages) and lack of
discipline (something Gregg has, and I plus 5 million other Java devs don't
have). OSGi is not as successful as it "should" (SpringSource gave up)
because it makes the "stain" stink really badly. OSGi introduces
constraints and fails spectacularly when we try to break or circumvent them.

River as it currently stands has "solved" Java code mobility, Java leases,
dynamic service registry with query capabilities and much more. But as with
a lot of good technology, the masses don't buy it. The ignorant masses are
now in Peter Deutsch's Fallacies of Distributed Computing territory,
thinking that microservices on JAX-RS is going to save the day (it isn't, I
am rescuing a project out of it now).
Distributed OSGi tried to solve this problem, and AFAICT has serious
problems to work reliably in production environments. What do I learn? This
is hard, but every 5 years we double in numbers, so half the developer
population is inexperienced and repeats the same mistakes again and again.

Sorry for highlighting problems, mostly psychological/philosophical rather
than technological. I don't have the answers, other than; Without
Constraints Technology Fails. And the better the constraints are defined,
the better likelihood that it can succeed.




On Sat, Feb 4, 2017 at 8:59 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Comments below.

Niclas Hedhman wrote:

see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" wrote:

Once you transfer the code with your data - the issue of code version

synchronization disappears, doesn't it?

It also makes the wire data format irrelevant. At least for "short lived

serialized states".

Only works if you have no exchange with the environment it is executing in.
And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it as DOM + a handful of other well-defined
objects. In the case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.


But how is "exchange with the environment it is executing"
actually different when installing code on demand from installing it in
advance???

The whole point IMHO is to shift thinking from "moving data" to "exchange
configured software" -
think Java specific Docker on steroids.

Transferable objects allow you for example to do things like
downloading your JDBC driver automagically without the fuss of installing
it and managing upgrades.
Just publish a DataSource object in your ServiceRegistrar and you are done.

Re: OSGi

2017-02-04 Thread Gregg Wonderly
“The web” is, in fact, an example of a mobile code platform different from 
Jini.  It doesn’t work better and in many cases I find it worse than Jini.  It 
has the same problems set we have.  The JSON or XML or whatever “data” you send 
must be in sync with the Javascript running in the browser.  Browser caches 
create problems with old code and new data.  You have to put versioning in your 
service calls to cleanly evolve your services so that they force the client to 
reload and get the new code so that it is in sync with the new data.

With Jini, you have to get a new service to get “different” data, and thus you 
will get a new proxy that can use the new data correctly.

 Different browsers are like different Java platforms, each has some 
interesting and often frustrating nuances of what the “platform” provides with 
correct implementation of Javascript accessible browser services.

The idea of Ajax for RPC out of the application is exactly in line with using a 
multi-threaded Java application to use Jini services, but it doesn’t work if 
that service can’t force the browser to get new code in the browser that knows 
how to use the new data coming from the service after it was restarted.

There is a wide range of “packages” of massive Javascript which try to “fix” 
problems with different ways that inexperienced developers have tried to create 
web applications.  Large web platforms like Facebook have their own platform of 
massive Javascript that their application is based on.  Everything is different 
in each of these platforms.  It’s really all, completely broken from a 
“developer” perspective because wherever you go, you will likely experience a 
completely different platform all running as Javascript.

I agree that we, as users tend to have a fairly good experience with all the 
mess, but realistically, there is still so much churn in “how” to do web apps 
correctly, that I am not convinced, at all, that there is success as a concept 
in the Javascript platform.  It’s entirely too flexible, too primitive and 
precisely inefficient at directing development to create reusable, portable 
code between these platforms.  Every “page”, “panel” or “form” has huge amounts 
of wiring into the “platform” that was used.  You cannot simply take a page 
from Facebook and reuse it on your own.  Single page applications like that are 
just mammoth collections of tightly coupled code.

The only reusable code items are things that are simple blocks of code which 
use local data items in my experience…

Okay, rant completed.

But still, the difference between the web and Jini is the concept of the Jini 
Proxy where the correct code for the service endpoint is always running in the 
client.  That’s a dramatic simplifying detail compared to what  you have to 
worry about on the web.  On the web, the problem is knowing what the client is 
running compared to what data the service is providing.

Endpoints using non-fixed ports make the Proxy stop working when the service 
is restarted so that the client can be forced to rediscover the service and 
reconnect with a working service and get the proxy for that working service so 
that the client doesn’t have the wrong data for a new version of the service.  
The service can have a new readObject() for that class which can migrate the 
old data format to the new data format needed by the service.
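The readObject() migration Gregg describes can be sketched with a plain Serializable class. Everything below is illustrative - a hypothetical record type, not River code - but the mechanism is the standard one: a custom readObject() that tolerates and defaults fields missing from an old stream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical class: old serialized forms lack the "version" field, and a
// custom readObject() defaults it during deserialization.
class ServiceRecord implements Serializable {
    private static final long serialVersionUID = 1L; // kept stable across versions

    String name;
    int version; // added in the new format; old streams do not carry it

    ServiceRecord(String name, int version) {
        this.name = name;
        this.version = version;
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        // readFields() tolerates missing fields, unlike defaultReadObject().
        ObjectInputStream.GetField f = in.readFields();
        name = (String) f.get("name", "unknown");
        version = f.get("version", 1); // migrate old streams to a default
    }

    // Demo helper: serialize and deserialize in memory.
    static ServiceRecord roundTrip(ServiceRecord rec) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(rec);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                return (ServiceRecord) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

ObjectInputStream.GetField is what makes the migration possible: it supplies a default value whenever a field is absent from the incoming (old) serialized form.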

What other details of Jini vs the Web are problem areas to you?  What makes the 
“web” better than Jini as a mobile code platform?  The “browser” is a platform. 
 The notions of serviceUI were developed to create a “browser” like platform as 
a starting point for services to export a complete client experience as a web 
page is today.  Are you using ServiceUI to take advantage of that, or do your 
clients have to create their own page or task to talk to your services?

Gregg

> On Feb 4, 2017, at 3:14 AM, Niclas Hedhman  wrote:
> 
> The latter...
> 
> It works rather well for JavaScript in web browsers. I think that is the
> most interesting "mobile code" platform to review as a starting point.
> 
> On Sat, Feb 4, 2017 at 2:54 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
> michal.klec...@xpro.biz> wrote:
> 
>> Are you opposing the whole idea of sending data and code (or instructions
>> how to download it) bundled together? (the spec)
>> Or just the way how it is done in Java today. (the impl)
>> 
>> If it is the first - we are in an absolute disagreement.
>> If the second - I agree wholeheartedly.
>> 
>> Thanks,
>> Michal
>> 
>> Niclas Hedhman wrote:
>> 
>>> FYI in case you didn't know; Jackson ObjectMapper takes a POJO structure
>>> and creates a (for instance) JSON document, or the other way around. It is
>>> not meant for "any object to binary and back".
>>> My point was, Java Serialization (and by extension JERI) has a scope that
>>> is possibly wrongly defined in the first place. More constraints back then
>>> might have been a good thing...
>>> 
>>> 
>>> 
>> 
> 
> 
> -- 
>

Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Don't know about other serialization uses but my issue with it is that 
it precludes using it in non-blocking IO.

Sorry if I haven't been clear enough.

Thanks,
Michal
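One way to square the blocking ObjectOutputStream/ObjectInputStream API with non-blocking transports is the buffering approach mentioned in this thread: serialize the entire graph into memory and let NIO move the bytes. A sketch, with illustrative class and method names, assuming the graph fits comfortably in memory:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

final class BufferedGraph {

    // Serialize the whole graph into memory. The resulting byte[] (plus a
    // length prefix) can be handed to a non-blocking transport, so no thread
    // sits in a blocking stream write.
    static byte[] toBytes(Serializable graph) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(graph);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize only once the complete frame has arrived: readObject()
    // consumes a memory buffer and therefore never blocks on the network.
    static Object fromBytes(byte[] frame) {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(frame))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The trade-off is exactly the one raised in this thread: this only works while the object graph is small enough to buffer whole.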

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


It is not possible to do non-blocking as in "non blocking IO" - meaning -
threads do not block on IO operations.
Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What does readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)" wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.
















Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

For those not following my emails on this list :) :

My "codebase annotations" are actually objects of (sub)classes of:

abstract class CodeBase implements Serializable {
...
  abstract Class<?> loadClass(String name, ClassLoader defaultLoader)
  throws IOException, ClassNotFoundException;
...
}

The interface is actually different for several reasons but the idea is 
the same.


So my AnnotatedInputStream is something like:

class AnnotatedInputStream extends ObjectInputStream {

  protected Class<?> resolveClass(...) {
    return ((CodeBase) readObject()).loadClass(...);
  }

}

Simply speaking I allow the _service_ to provide an object that can 
download the code.


Peter proposed to provide serialized CodeBase instances as Base64 
encoded strings (or something similar) - to maintain the assumption that 
codebase annotation is String.
But I do not see it as important at this moment - if needed, it might be 
implemented.
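Peter's suggestion - carrying a serialized CodeBase as a Base64 string so the codebase annotation stays a String - is straightforward to sketch. Below, any Serializable stands in for CodeBase; the class name is illustrative, not River code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.Base64;

final class AnnotationCodec {

    // Encode a serialized object as a String, so it can travel wherever a
    // codebase annotation (historically a String) is expected.
    static String encode(Serializable obj) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(obj);
            }
            return Base64.getEncoder().encodeToString(buf.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Decode the annotation back into the object it carried.
    static Object decode(String annotation) {
        byte[] bytes = Base64.getDecoder().decode(annotation);
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```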


Thanks,
Michal

Gregg Wonderly wrote:

Okay, then I think you should investigate my replacement of the 
RMIClassLoaderSPI implementation with a pluggable mechanism.

public interface CodebaseClassAccess {
    public Class loadClass(String codebase,
                           String name) throws IOException, ClassNotFoundException;
    public Class loadClass(String codebase,
                           String name,
                           ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public Class loadProxyClass(String codebase,
                                String[] interfaceNames,
                                ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public String getClassAnnotation(Class cls);
    public ClassLoader getClassLoader(String codebase) throws IOException;
    public ClassLoader createClassLoader(URL[] urls,
                                         ClassLoader parent,
                                         boolean requireDlPerm,
                                         AccessControlContext ctx);
    /**
     * This should return the class loader that represents the system
     * environment.  This might often be the same as
     * {@link #getSystemContextClassLoader()} but may not be in certain
     * circumstances where container mechanisms isolate certain parts of the
     * classpath between various contexts.
     * @return
     */
    public ClassLoader getParentContextClassLoader();
    /**
     * This should return the class loader that represents the local system
     * environment that is associated with never-preferred classes
     * @return
     */
    public ClassLoader getSystemContextClassLoader(ClassLoader defaultLoader);
}

I have forgotten what Peter renamed it to.  But this base interface is what all 
of the Jini codebase uses to load classes.  The annotation is in the “codebase” 
parameter.  From this you can explore how the annotation can move from being a 
URL, which you could recognize and still use, but substitute your own indicator 
for another platform such as a maven or OSGi targeted codebase.

Thus, you can still use the annotation, but use it to specify the type of 
stream instead of what to download via HTTP.

Gregg



On Feb 4, 2017, at 2:02 AM, Michał Kłeczek (XPro Sp. z o. o.) wrote:

My annotated streams replace codebase resolution with object based one (ie - 
not using RMIClassLoader).

Michal

Gregg Wonderly wrote:

What specific things do you want your AnnotatedStream to provide?

Gregg









Re: OSGi

2017-02-04 Thread Gregg Wonderly
Okay, then I think you should investigate my replacement of the 
RMIClassLoaderSPI implementation with a pluggable mechanism.  

public interface CodebaseClassAccess {
    public Class loadClass(String codebase,
                           String name) throws IOException, ClassNotFoundException;
    public Class loadClass(String codebase,
                           String name,
                           ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public Class loadProxyClass(String codebase,
                                String[] interfaceNames,
                                ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public String getClassAnnotation(Class cls);
    public ClassLoader getClassLoader(String codebase) throws IOException;
    public ClassLoader createClassLoader(URL[] urls,
                                         ClassLoader parent,
                                         boolean requireDlPerm,
                                         AccessControlContext ctx);
    /**
     * This should return the class loader that represents the system
     * environment.  This might often be the same as
     * {@link #getSystemContextClassLoader()} but may not be in certain
     * circumstances where container mechanisms isolate certain parts of the
     * classpath between various contexts.
     * @return
     */
    public ClassLoader getParentContextClassLoader();
    /**
     * This should return the class loader that represents the local system
     * environment that is associated with never-preferred classes
     * @return
     */
    public ClassLoader getSystemContextClassLoader(ClassLoader defaultLoader);
}

I have forgotten what Peter renamed it to.  But this base interface is what all 
of the Jini codebase uses to load classes.  The annotation is in the “codebase” 
parameter.  From this you can explore how the annotation can move from being a 
URL, which you could recognize and still use, but substitute your own indicator 
for another platform such as a maven or OSGi targeted codebase.

Thus, you can still use the annotation, but use it to specify the type of 
stream instead of what to download via HTTP.
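To make that concrete, a hypothetical dispatcher could inspect the annotation and pick a loading mechanism by scheme. All names below - the class, the strategy labels, and the `mvn:`/`osgi:` schemes - are made up for illustration:

```java
import java.net.URI;

final class CodebaseDispatch {

    // Hypothetical: map a codebase annotation's scheme to a loading strategy.
    // The scheme names and strategy labels are illustrative, not River APIs.
    static String strategyFor(String codebase) {
        // Annotations may be space-separated lists; inspect the first entry.
        String first = codebase.split(" ")[0];
        String scheme = URI.create(first).getScheme();
        if (scheme == null) {
            return "unknown";
        }
        switch (scheme) {
            case "http":
            case "https": return "url-classloader"; // classic HTTP codebase
            case "mvn":   return "maven-resolver";  // e.g. mvn:group/artifact/1.0
            case "osgi":  return "osgi-bundle";     // an OSGi-targeted codebase
            default:      return "unknown";
        }
    }
}
```

A real implementation would return a ClassLoader rather than a label, but the dispatch point is the same: the annotation selects the mechanism instead of naming a download URL.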

Gregg


> On Feb 4, 2017, at 2:02 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> My annotated streams replace codebase resolution with object based one (ie - 
> not using RMIClassLoader).
> 
> Michal
> 
> Gregg Wonderly wrote:
>> What specific things do you want your AnnotatedStream to provide?
>> 
>> Gregg
>> 
>> 
> 



Re: Serialization issues

2017-02-04 Thread Niclas Hedhman
And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> It is not possible to do non-blocking as in "non blocking IO" - meaning -
> threads do not block on IO operations.
> Just google "C10K problem"
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> I don't follow. What does readObject/writeObject have to do with blocking or
> not? You could spin up executors to do the work in parallel if you so wish.
> And why is "something else" less blocking? And what are you doing that is
> "blocking" since the "work" is (or should be) CPU only, there is limited
> (still) opportunity to do that non-blocking (whatever that would mean in
> CPU-only circumstance). Feel free to elaborate... I am curious.
>
>
>
> On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  wrote:
>
>
> Unfortunately due to "writeObject" and "readObject" methods that have to
> be handled (to comply with the spec) - it is not possible to
> serialize/deserialize in a non-blocking fashion.
> So yes... - it is serialization per se.
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> Oh, well that is not "Serialization" per se... No wonder I didn't get it.
>
> On Sat, Feb 4, 2017 at 7:20 PM, Peter wrote:
>
>
> On 4/02/2017 9:09 PM, Niclas Hedhman wrote:
>
>
> but rather with the APIs - it is inherently blocking by design.
>
> I am not sure I understand what you mean by that.
>
>
>
> He means the client thread that makes the remote call blocks waiting for
> the remote end to process the request and respond.
>
> Cheers,
>
> Peter.
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
It is not possible to do non-blocking as in "non blocking IO" - meaning 
- threads do not block on IO operations.

Just google "C10K problem"

Thanks,
Michal
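The C10K point in brief: a blocking design pins one thread per connection, whereas an NIO selector lets a single thread service many channels. A minimal, self-contained sketch (the class name is illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

final class EventLoopSketch {

    // One selector thread can service many channels, so no thread is pinned
    // per connection. Returns the ephemeral port it bound, for illustration.
    static int demo() {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);                    // required before register()
            server.register(selector, SelectionKey.OP_ACCEPT);
            // A real event loop (elided): selector.select(), then iterate
            // selectedKeys() and accept/read/write whichever channels are ready.
            return ((InetSocketAddress) server.getLocalAddress()).getPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The friction with ObjectInputStream is that its readObject() pulls bytes synchronously from the underlying stream, which does not fit the "only touch a channel when the selector says it is ready" model without buffering whole frames first.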

Niclas Hedhman wrote:

I don't follow. What does readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.












Re: Serialization issues

2017-02-04 Thread Niclas Hedhman
I don't follow. What does readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> Unfortunately due to "writeObject" and "readObject" methods that have to
> be handled (to comply with the spec) - it is not possible to
> serialize/deserialize in a non-blocking fashion.
> So yes... - it is serialization per se.
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
> Oh, well that is not "Serialization" per se... No wonder I didn't get it.
>
> On Sat, Feb 4, 2017 at 7:20 PM, Peter wrote:
>
>
> On 4/02/2017 9:09 PM, Niclas Hedhman wrote:
>
>
> but rather with the APIs - it is inherently blocking by design.
>
> I am not sure I understand what you mean by that.
>
>
>
> He means the client thread that makes the remote call blocks waiting for
> the remote end to process the request and respond.
>
> Cheers,
>
> Peter.
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: Serialization Formats, Previously: OSGi

2017-02-04 Thread Niclas Hedhman
(I think wrong thread, so to please Peter, I copied it all into here)

Correct, it is not different. But you are missing the point; CONSTRAINTS.
And without constraints, developers are capable of doing these good deeds
(such as your example) and many very poor ones. The knife cuts your meat or
butchers your neighbor... It is all about the constraints, something that
few developers are willing to admit makes our work better.

As for the "leasable and you have..."; The problem is that you are probably
wrong on that front too, like the OSGi community have learned the hard way.
There are too many ways software entangles classloading. All kinds of shit
"registers itself" in the bowels of the runtime, such as with the
DriverManager, Loggers, URLHandlers or PermGenSpace (might be gone in Java
8). Then add 100s of common libraries that also do a poor job in
releasing "permanent" resources/instances/classes... The stain sticks, but
the smell is weak, so often we can't tell, other than a memory leak during
class updates.
And why do we have all this mess? IMHO; Lack of constraints, lack of
lifecycle management in "everything Java" (and most languages) and lack of
discipline (something Gregg has, and I plus 5 million other Java devs don't
have). OSGi is not as successful as it "should" (SpringSource gave up)
because it makes the "stain" stink really badly. OSGi introduces
constraints and fails spectacularly when we try to break or circumvent them.

River as it currently stands has "solved" Java code mobility, Java leases,
dynamic service registry with query capabilities and much more. But as with
a lot of good technology, the masses don't buy it. The ignorant masses are
now in Peter Deutsch's Fallacies of Distributed Computing territory,
thinking that microservices on JAX-RS is going to save the day (it isn't, I
am rescuing a project out of it now).
Distributed OSGi tried to solve this problem, and AFAICT has serious
problems to work reliably in production environments. What do I learn? This
is hard, but every 5 years we double in numbers, so half the developer
population is inexperienced and repeats the same mistakes again and again.

Sorry for highlighting problems, mostly psychological/philosophical rather
than technological. I don't have the answers, other than; Without
Constraints Technology Fails. And the better the constraints are defined,
the better likelihood that it can succeed.




On Sat, Feb 4, 2017 at 8:59 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> Comments below.
>
> Niclas Hedhman wrote:
>
> see below
>
> On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" 
>  wrote:
>
> Once you transfer the code with your data - the issue of code version
>
> synchronization disappears, doesn't it?
>
> It also makes the wire data format irrelevant. At least for "short lived
>
> serialized states".
>
> Only works if you have no exchange with the environment it is executing in.
> And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
> web browser they try to define it as the DOM plus a handful of other well-defined
> objects. In the case of Java Serialization, it is all classes reachable from
> the loading classloader. And I think Gregg is trying to argue that if one
> is very prudent, one needs to manage this well.
>
>
> But how is "exchange with the environment it is executing in"
> actually different when installing code on demand versus installing it in
> advance?
>
> The whole point IMHO is to shift thinking from "moving data" to "exchange
> configured software" -
> think Java specific Docker on steroids.
>
> Transferable objects allow you for example to do things like
> downloading your JDBC driver automagically without the fuss of installing
> it and managing upgrades.
> Just publish a DataSource object in your ServiceRegistrar and you are done.
> Make it leasable and you have automatic upgrades and/or reconfiguration.
>
> Thanks,
> Michal
>



-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

Comments below.

Niclas Hedhman wrote:

see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:

Once you transfer the code with your data - the issue of code version

synchronization disappears, doesn't it?

It also makes the wire data format irrelevant. At least for "short lived

serialized states".

Only works if you have no exchange with the environment it is executing in.
And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it as the DOM plus a handful of other well-defined
objects. In the case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.


But how is "exchange with the environment it is executing in"
actually different when installing code on demand versus installing it in
advance?


The whole point IMHO is to shift thinking from "moving data" to 
"exchange configured software" -

think Java specific Docker on steroids.

Transferable objects allow you for example to do things like
downloading your JDBC driver automagically without the fuss of 
installing it and managing upgrades.

Just publish a DataSource object in your ServiceRegistrar and you are done.
Make it leasable and you have automatic upgrades and/or reconfiguration.

Thanks,
Michal


Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Unfortunately, due to the "writeObject" and "readObject" methods that have to
be handled (to comply with the spec), it is not possible to
serialize/deserialize in a non-blocking fashion.
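A minimal sketch of the point being made (class and field names are illustrative, not from River): the readObject/writeObject hooks hand user code a stream and expect it to complete synchronously, so there is no way to suspend a half-finished read and resume when more bytes arrive.

```java
import java.io.*;

// Illustrative class: the serialization hooks below run synchronously.
class Point implements Serializable {
    private static final long serialVersionUID = 1L;
    transient int x, y; // written by hand in writeObject

    Point(int x, int y) { this.x = x; this.y = y; }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        // These calls write synchronously; if the underlying channel is
        // not ready, the calling thread blocks right here.
        out.writeInt(x);
        out.writeInt(y);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Likewise, readInt() must block until 4 bytes are available;
        // the hook cannot yield and be resumed later.
        x = in.readInt();
        y = in.readInt();
    }
}

public class BlockingDemo {
    static Point roundTrip(Point p) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(p);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Point) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point q = roundTrip(new Point(3, 4));
        System.out.println(q.x + "," + q.y);
    }
}
```

With an in-memory buffer the blocking is invisible, but the same hooks run against a socket-backed stream, which is where the thread stalls.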

So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:

> > but rather with the APIs - it is inherently blocking by design.
> I am not sure I understand what you mean by that.

He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.









Serialization Formats, Previously: OSGi

2017-02-04 Thread Peter
I think we're getting off topic, can we create a new thread, as we're no 
longer discussing OSGi, but the virtues of various serialization formats?


Cheers,

Peter.

On 4/02/2017 9:09 PM, Niclas Hedhman wrote:

see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:

> Once you transfer the code with your data - the issue of code version
> synchronization disappears, doesn't it?
> It also makes the wire data format irrelevant. At least for "short lived
> serialized states".

Only works if you have no exchange with the environment it is executing in.
And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it as the DOM plus a handful of other well-defined
objects. In the case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.


I fail to understand how JSON or XML changes anything here.

It doesn't. It makes the issues more visible, and easier to recover from when
something goes wrong. Java Serialization is sold as "too generic" and hence
abused by everyone (myself included), and we (except Gregg and other gurus)
get burned badly somewhere along the way.
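The "more visible" point can be sketched with stdlib Java only (no Jackson assumed; the tiny hand-rolled JSON string is a stand-in for a real mapper): a text encoding carries its field names with the data and stays inspectable even if the writing class is lost, while the binary serialization stream pins the exact class identity.

```java
import java.io.*;

// Illustrative contrast: opaque binary serialization vs. a
// self-describing text encoding (stand-in for JSON, no library needed).
public class FormatVisibility {
    static class Event implements Serializable {
        private static final long serialVersionUID = 1L;
        String kind = "login";
        int count = 3;
    }

    // Java Serialization: the bytes embed the class name and
    // serialVersionUID, so reading them back requires a compatible class.
    static byte[] javaSerialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Text encoding: field names travel with the data, and any human (or
    // any language) can still read it if Event.class is gone.
    static String toJson(Event e) {
        return "{\"kind\":\"" + e.kind + "\",\"count\":" + e.count + "}";
    }

    public static void main(String[] args) throws IOException {
        Event e = new Event();
        System.out.println("binary length: " + javaSerialize(e).length);
        System.out.println(toJson(e));
    }
}
```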


> In the end all of the arguments against Java Object Serialization boil
> down to:
> "It is easy to use but if not used carefully it will bite you - so it is
> too easy to use"

Well, kind of...
If you ever need to deserialize a serialVersionUID=1 instance with a codebase
where it is now serialVersionUID != 1, I wouldn't call it "easy to use" anymore.
Back in the days when I used this stuff heavily, I ended up never changing
serialVersionUID. If I needed to refactor enough to lose compatibility,
I would create a new class and make an adapter.
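The "never change serialVersionUID" approach can be sketched like this (names are illustrative): the UID stays pinned at 1 forever, and fields added in later revisions are read defensively so that streams written by older revisions still deserialize.

```java
import java.io.*;

// Illustrative sketch: serialVersionUID is pinned and never changed.
// "email" was added in a later revision; GetField tolerates its absence
// from streams written by the old revision (it just comes back null).
class Contact implements Serializable {
    private static final long serialVersionUID = 1L; // never changed

    String name;  // present since revision 1
    String email; // added in revision 2

    Contact(String name, String email) {
        this.name = name;
        this.email = email;
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        // readFields() does not fail on fields missing from the stream.
        ObjectInputStream.GetField f = in.readFields();
        name = (String) f.get("name", null);
        email = (String) f.get("email", null); // null for rev-1 streams
    }
}

public class VersionTolerantDemo {
    static Contact roundTrip(Contact c) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(c);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Contact) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Contact c = roundTrip(new Contact("Ann", "ann@example.org"));
        System.out.println(c.name + " / " + c.email);
    }
}
```

When compatibility genuinely cannot be kept, the adapter approach mentioned above (a new class that knows how to build itself from the old one) avoids fighting the UID mechanism at all.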


> What I do not like about Java Object Serialization has nothing to do with
> the format of persistent data
> but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.

Cheers
--
Niclas Hedhman, Software Developer
http://polygene.apache.org - New Energy for Java





Re: OSGi

2017-02-04 Thread Niclas Hedhman
Oh, well that is not "Serialization" per se... No wonder I didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter  wrote:

> On 4/02/2017 9:09 PM, Niclas Hedhman wrote:
>
>>
>> but rather with the APIs - it is inherently blocking by design.
>>>
>> I am not sure I understand what you mean by that.
>>
>>
> He means the client thread that makes the remote call blocks waiting for
> the remote end to process the request and respond.
>
> Cheers,
>
> Peter.
>
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: OSGi

2017-02-04 Thread Peter

On 4/02/2017 9:09 PM, Niclas Hedhman wrote:

> > but rather with the APIs - it is inherently blocking by design.
> I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for 
the remote end to process the request and respond.
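As discussed elsewhere in the thread, one can still build non-blocking behavior "on top" of a blocking call by dispatching it to another thread. A hedged stdlib-only sketch (the `remoteCall` below is a placeholder for an RMI/JERI proxy method, not a real one):

```java
import java.util.concurrent.*;

// Sketch: wrapping a blocking (remote-style) call so the caller's thread
// is free while the call is in flight. The wrapping pattern is the point;
// remoteCall() merely simulates a slow remote response.
public class AsyncOnTop {
    static String remoteCall() {
        try {
            Thread.sleep(50); // simulated network round trip
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
        return "result"; // placeholder for the remote response
    }

    static CompletableFuture<String> callAsync(ExecutorService pool) {
        // The blocking wait moves onto a pool thread; the caller gets a
        // future immediately and can continue doing other work.
        return CompletableFuture.supplyAsync(AsyncOnTop::remoteCall, pool);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> f = callAsync(pool);
        // caller is free here while the call runs elsewhere
        System.out.println(f.get());
        pool.shutdown();
    }
}
```

Note this only relocates the blocking, it does not remove it; a pool thread still sits waiting, which is exactly the objection raised against the API being blocking by design.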


Cheers,

Peter.



Re: OSGi

2017-02-04 Thread Niclas Hedhman
see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:
> Once you transfer the code with your data - the issue of code version
synchronization disappears, doesn't it?
> It also makes the wire data format irrelevant. At least for "short lived
serialized states".

Only works if you have no exchange with the environment it is executing in.
And this is where the "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it as the DOM plus a handful of other well-defined
objects. In the case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.

> I fail to understand how JSON or XML changes anything here.

It doesn't. It makes the issues more visible, and easier to recover from when
something goes wrong. Java Serialization is sold as "too generic" and hence
abused by everyone (myself included), and we (except Gregg and other gurus)
get burned badly somewhere along the way.

> In the end all of the arguments against Java Object Serialization boil
down to:
> "It is easy to use but if not used carefully it will bite you - so it is
too easy to use"

Well, kind of...
If you ever need to deserialize a serialVersionUID=1 instance with a codebase
where it is now serialVersionUID != 1, I wouldn't call it "easy to use" anymore.
Back in the days when I used this stuff heavily, I ended up never changing
serialVersionUID. If I needed to refactor enough to lose compatibility,
I would create a new class and make an adapter.

> What I do not like about Java Object Serialization has nothing to do with
the format of persistent data
> but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.

Cheers
--
Niclas Hedhman, Software Developer
http://polygene.apache.org - New Energy for Java


Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Once you transfer the code with your data - the issue of code version 
synchronization disappears, doesn't it?
It also makes the wire data format irrelevant. At least for "short lived 
serialized states".


I fail to understand how JSON or XML changes anything here.

In the end all of the arguments against Java Object Serialization boil 
down to:
"It is easy to use but if not used carefully it will bite you - so it is 
too easy to use"


What I do not like about Java Object Serialization has nothing to do 
with the format of persistent data

but rather with the APIs - it is inherently blocking by design.

Thanks,
Michal

Niclas Hedhman wrote:

Gregg,
I know that you can manage to "evolve" the binary format if you are
incredibly careful and make no mistakes. BUT, that seems really hard,
since EVEN Sun/Oracle state that using Serialization for "long-lived objects"
is highly discouraged. THAT is a sign that it is not nearly as easy as you
make it sound, and it is definitely different from XML/JSON, as once
the working codebase is lost (i.e. either literally lost (yes, I have been
involved in trying to restore that), or modified so much that compatibility
broke, which happens when serialization is not the primary focus of a
project) then you are pretty much screwed forever, unlike with XML/JSON.

Now, you may say, that is for "long-lived serialized states" but we are
dealing with "short-lived" ones. However, in today's architectures and
platforms, almost no organization manages to keep all parts of a system
synchronized when it comes to versioning. Different parts of a system are
upgraded at different rates. And this is essentially the same as "long-lived
objects" --- "uh, this was serialized using LibA 1.1, LibB 2.3 and
JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8" --- do you see the
similarity? If not, then I will not be able to convince you. If you do,
then ask "why did Sun/Oracle state that long-lived objects with Java
Serialization were a bad idea?", or were they also clueless about how to do it
right, which seems to be your actual argument.

And I think (purely speculatively) that many people saw exactly this problem
quite early on, whereas I was at the time mostly in relatively small,
confined and controlled environments, where staying up-to-date was managed. It
took me much longer to realize the downsides that are inherent.

Cheers
Niclas






Re: OSGi

2017-02-04 Thread Niclas Hedhman
Gregg,
I know that you can manage to "evolve" the binary format if you are
incredibly careful and make no mistakes. BUT, that seems really hard,
since EVEN Sun/Oracle state that using Serialization for "long-lived objects"
is highly discouraged. THAT is a sign that it is not nearly as easy as you
make it sound, and it is definitely different from XML/JSON, as once
the working codebase is lost (i.e. either literally lost (yes, I have been
involved in trying to restore that), or modified so much that compatibility
broke, which happens when serialization is not the primary focus of a
project) then you are pretty much screwed forever, unlike with XML/JSON.

Now, you may say, that is for "long-lived serialized states" but we are
dealing with "short-lived" ones. However, in today's architectures and
platforms, almost no organization manages to keep all parts of a system
synchronized when it comes to versioning. Different parts of a system are
upgraded at different rates. And this is essentially the same as "long-lived
objects" --- "uh, this was serialized using LibA 1.1, LibB 2.3 and
JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8" --- do you see the
similarity? If not, then I will not be able to convince you. If you do,
then ask "why did Sun/Oracle state that long-lived objects with Java
Serialization were a bad idea?", or were they also clueless about how to do it
right, which seems to be your actual argument.

And I think (purely speculatively) that many people saw exactly this problem
quite early on, whereas I was at the time mostly in relatively small,
confined and controlled environments, where staying up-to-date was managed. It
took me much longer to realize the downsides that are inherent.

Cheers
Niclas

On Sat, Feb 4, 2017 at 3:35 PM, Gregg Wonderly  wrote:

>
> > On Feb 3, 2017, at 8:43 PM, Niclas Hedhman  wrote:
> >
> > On Fri, Feb 3, 2017 at 12:23 PM, Peter  wrote:
> >
> >>
> >> No serialization or Remote method invocation framework currently
> supports
> >> OSGi very well, one that works well and can provide security might gain
> a
> >> lot of new interest from that user base.
> >
> >
> > What do you mean by this? Jackson's ObjectMapper doesn't have problems on
> > OSGi. You are formulating the problem wrongly, and if formulated
> correctly,
> > perhaps one realizes why Java Serialization fell out of fashion rather
> > quickly 10-12 years ago, when people realized that code mobility (as done
> > in Java serialization/RMI) caused a lot of problems.
>
> I’ve seen and heard of many poorly designed pieces of software.  But, the
> serialization for Java has some very easily managed details which can
> trivially allow you to be 100% successful with the use of Serialization.
> I’ve never encountered problems with serialization.  I learned early on
> about using explicit versioning for any serialization format, and then
> providing evolution based changes instead of replacement based changes.  It
> takes some experience and thought for sure.  But, in the end, it’s really
> no different from using JSON, XML or anything else.  The format of what you
> send has to be able to change, the content which must remain in a
> compatible way has to remain accessible in the same way.  I really am
> saddened by the thought that so many people never learn about binary
> structured data in their classes or through materials they might read to
> learn about such things.
>
> What generally happens is that people forget to design extensibility into
> their data systems, and then end up with all kinds of problems.   Here’s
> some of the rules I always try to follow.
>
> 1. Remote interfaces should almost always pass non native type objects
> that wrap the data needed.  This will make sure you can seamlessly add more
> data without changing method signatures.
> 2. Always put a serial version id on your serialized classes.  Start with
> 1, and increment it by more than just '1' as you make changes, leaving holes
> for later insertions.
> 3. When you are going to add a new value, think about how you can make
> that independent of existing serialized data.  For example, when you
> override readObject or writeObject methods, how will you make sure that
> those methods can cast the data for “this” version of the data without
> breaking past or future versions of the object.
> 4. Data values inside of serialized classes should be carefully designed
> so that there is a “not present” value that is in line with a “not
> initialized” value so that you can always insert a new format in between
> those two (see rule 2 above about leaving holes in the versions).
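The rules above can be sketched as follows (class and field names are illustrative, not Gregg's code): an explicit version number is written first, readObject branches on it, and a value absent from older streams falls back to a deliberate "not present" sentinel.

```java
import java.io.*;

// Illustrative sketch of rules 2-4: explicit stream version with holes
// left between version numbers, and a "not present" default for fields
// that older streams did not carry.
class Account implements Serializable {
    private static final long serialVersionUID = 1L;
    // Version history: 10 = original, 20 = added "limit" (holes left so a
    // new format can be inserted between them later).
    private static final int STREAM_VERSION = 20;
    private static final int NOT_PRESENT = -1;

    transient String owner;                // since version 10
    transient int limit = NOT_PRESENT;     // since version 20

    Account(String owner, int limit) {
        this.owner = owner;
        this.limit = limit;
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(STREAM_VERSION); // version tag goes first
        out.writeUTF(owner);
        out.writeInt(limit); // only meaningful to readers of version >= 20
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        int v = in.readInt();
        owner = in.readUTF();
        if (v >= 20) {
            limit = in.readInt();
        } else {
            limit = NOT_PRESENT; // field did not exist in older streams
        }
    }
}

public class EvolutionDemo {
    static Account roundTrip(Account a) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(a);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Account) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Account a = roundTrip(new Account("gregg", 250));
        System.out.println(a.owner + " " + a.limit);
    }
}
```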
>
> The purpose of serializing objects is so that you can also send the
> correct code.  If you can’t send the correct code (you are just sending
> JSON), and instead have to figure out how to make your new data compatible
> with code that can’t change, how is that any less complex than designing
> readObject and writeObject implementations that must do the same thing when
> you load an old serialization of an object into

Re: OSGi

2017-02-04 Thread Niclas Hedhman
The latter...

It works rather well for JavaScript in web browsers. I think that is the
most interesting "mobile code" platform to review as a starting point.

On Sat, Feb 4, 2017 at 2:54 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
michal.klec...@xpro.biz> wrote:

> Are you opposing the whole idea of sending data and code (or instructions
> how to download it) bundled together? (the spec)
> Or just the way how it is done in Java today. (the impl)
>
> If it is the first - we are in an absolute disagreement.
> If the second - I agree wholeheartedly.
>
> Thanks,
> Michal
>
> Niclas Hedhman wrote:
>
>> FYI in case you didn't know; Jackson ObjectMapper takes a POJO structure
>> and creates a (for instance) JSON document, or the other way around. It is
>> not meant for "any object to binary and back".
>> My point was, Java Serialization (and by extension JERI) has a scope that
>> is possibly wrongly defined in the first place. More constraints back then
>> might have been a good thing...
>>
>>
>>
>


-- 
Niclas Hedhman, Software Developer
http://polygene.apache.org  - New Energy for Java


Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
My annotated streams replace codebase resolution with an object-based one
(i.e. not using RMIClassLoader).


Michal

Gregg Wonderly wrote:

What specific things do you want your AnnotatedStream to provide?

Gregg