Re: Security Documentation / Information

2017-10-10 Thread Peter

Hi Leon,

There are two discovery protocols, v1 and v2.  V1 is deprecated.  
ProxyTrust is in the process of being deprecated.


There are multiple discovery providers with various protocols:

Kerberos
TLS
https
X500

Some are not designed to be secure:
http, tcp and udp.

Multicast discovery is performed first, followed by unicast discovery 
(multiple providers for each).


Security was a contraversial topic in the past, which has unfortunately 
resulted in neglect of River's secure discovery protocols, we have 
support nowadays to address security issues.


Security issues we're aware of in TLS, Https and X500 providers 
(Kerberos pending) have been addressed in an external project fork along 
with support for IPv6 and atomic input validation for deserialization, 
this code is in the process of being donated back to River, but before 
that can happen, River must be made modular, in order to allow the code 
to be integrated in reviewable chunks, module by module.


It's a lot easier to understand the discovery protocols in the modular 
build; since there's less code to digest and dependencies are easier to 
understand.   Best start with the package.html files, then work your way 
though the code.  The code is well documented.


https://github.com/pfirmstone/JGDMS/tree/trunk/JGDMS/jgdms-discovery-providers/src/main/java/org/apache/river/discovery
https://github.com/pfirmstone/JGDMS/tree/trunk/JGDMS/jgdms-platform/src/main/java/org/apache/river/discovery
https://github.com/pfirmstone/JGDMS/tree/trunk/JGDMS/jgdms-platform/src/main/java/net/jini/discovery
https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/org/apache/river/api/io/AtomicMarshalInputStream.java

If you'd like to assist reviewing code or participating in River, jump 
in.  We'd certainly welcome third party review.


Cheers,

Peter.

On 10/10/2017 6:20 PM, Lion Hellstern wrote:

Hello,

I was not able to find out much information about the security of the service 
discovery process in the apache river project.
Did I miss them and can you provide me links for them?


More Information and the background:
I study the security of service discovery protocols and right now I want to 
have a look at apache river.
I read the documentation about the service discovery but there weren’t much 
information. It says that you can protect the Standart Discovery Format Data 
with encryption and a mac. What I am missing is a general security concept and 
not just some encryption formats.


Best,
Lion




Security Documentation / Information

2017-10-10 Thread Lion Hellstern
Hello,

I was not able to find out much information about the security of the service 
discovery process in the apache river project.
Did I miss them and can you provide me links for them?


More Information and the background:
I study the security of service discovery protocols and right now I want to 
have a look at apache river.
I read the documentation about the service discovery but there weren’t much 
information. It says that you can protect the Standart Discovery Format Data 
with encryption and a mac. What I am missing is a general security concept and 
not just some encryption formats.


Best,
Lion

Re: [DISCUSS] [vote] should we fix security flaws?

2016-04-08 Thread Patricia Shanahan
Thank you. After the release, during the future direction discussion, 
I'll support discussing this issue to try to at least get mutual 
understanding, if not consensus.


On 4/8/2016 9:59 AM, Peter wrote:

you're right no need for this to happen now, consider it postponed.

Sent from my Samsung device.

   Include original message
 Original message 
From: Patricia Shanahan <p...@acm.org>
Sent: 08/04/2016 11:15:44 pm
To: dev@river.apache.org
Subject: Re: [DISCUSS] [vote] should we fix security flaws?

I am curious not so much about why a vote as about why a vote at this
particular time

I thought we had a consensus in favor of a future direction discussion
after the River 3.0 release. I was thinking about how to facilitate
constructive communication with a view to reaching a consensus wherever
possible. That should include everyone listening to your security
concerns, and considering them in the light of actual use-cases for River.

Even though you have time available now that cannot be applied to River
3.0, I am not at all sure that is true for everyone. I attributed the
lack of release progress to people being too busy.

Is there any way you could consider delaying this vote until the end of
the post-release future direction discussion, and then only holding it
if we fail to reach consensus?

On 4/8/2016 12:29 AM, Peter wrote:

  To provide some context on why I've put this to  a vote:

  Previous arguments against fixing security have suggested it's not relevant 
to local networks where River is deployed.

  But I've received some mixed messages regarding security recently.

  Although we can never fully guarantee complete security, we can address known 
issues if we choose to.

  Having this vote will help clarify whether security is important or not to 
the community.

  Once that is determined it will be easier to guage whether the time and 
effort in creating proofs for the existance of vulnerabilities is worthwhile.

  Regards,

  Peter.

  Sent from my Samsung device.

 Include original message
   Original message 
  From: Peter <j...@zeus.net.au>
  Sent: 08/04/2016 11:38:40 am
  To: dev@river.apache.org <d...@riverapache.org>
  Subject: Re: [DISCUSS] [vote] should we fix security flaws?


  I don't think we should delay the release to fix security.

  You have your reasons for not voting and I respect that.

    Fixing security isn't technically difficult and I  have fixes available, 
I'm hoping for collaborative development, so they receive peer review / 
modification / alternate solutions / suggestions / feedback / rejection etc.

I haven't been successful communicating / discussing security and I think 
that will take some time to sort out.

  The ability to take down servers using dos is annoying and easily 
demonstrated (I've started writing some code to do so), however Gadget attacks 
allow an attacker to take over systems, steal data etc, but are less easily 
demonstrated.  While there are existing known gadget attacks, the ones I'm 
aware of have fixes, so I'll be looking for a zero day to demonstrate.  While 
whack a mole is one approach to fixes, it would be better to provide an api to 
support input validation.

  http://frohoff.github.io/appseccali-marshalling-pickles/

  Gadget attacks create object graphs using existing local classes to create 
execution paths that perform malicious actions during deserialization, this is 
a relatively recent development.  Security advisories recommend against 
deserializing from untrusted sources.

  The intent of the vote request is to determine whether fixing security issues 
is an option in future.

  If the result is no, it's my intention is to focus on getting River off svn 
into git, so it's easier to maintain my own branch while sharing and 
contributing to a common code base.

  If yes then I'll work on improving my communication skills for discussing  
security related issue's.

  Discussing this won't hold up a release as the time windows available for me 
to work on producing a release are weekends only.  I'm going to have to create 
the release artifacts on MSWindows, so need to check the scripts work properly 
and understand recent build changes.

  I also have other goals, I'll be ready to set up a public service registrar, 
discoverable over ipv6 in the near future.

  If the no vote wins, I promise not to mention security on this list again.

  Regards,

  Peter.

  Sent from my Samsung device.

 Include original message
   Original message 
  From: Patricia Shanahan <p...@acm.org>
  Sent: 08/04/2016 06:34:23 am
  To: dev@riverapacheorg
  Subject: [DISCUSS] [vote] should we fix security flaws?

  I am not prepared to vote on this.

  First of all, I would need, on a private list where we can go into
  details of security issues, to get a feeling for the seriousness of the
  flaws in question. A denial of service is, in many contexts, less
  serious than file corruption.

 

Re: [DISCUSS] [vote] should we fix security flaws?

2016-04-08 Thread Peter
you're right no need for this to happen now, consider it postponed.

Sent from my Samsung device.
 
  Include original message
 Original message 
From: Patricia Shanahan <p...@acm.org>
Sent: 08/04/2016 11:15:44 pm
To: dev@river.apache.org
Subject: Re: [DISCUSS] [vote] should we fix security flaws?

I am curious not so much about why a vote as about why a vote at this  
particular time 

I thought we had a consensus in favor of a future direction discussion  
after the River 3.0 release. I was thinking about how to facilitate  
constructive communication with a view to reaching a consensus wherever  
possible. That should include everyone listening to your security  
concerns, and considering them in the light of actual use-cases for River. 

Even though you have time available now that cannot be applied to River  
3.0, I am not at all sure that is true for everyone. I attributed the  
lack of release progress to people being too busy. 

Is there any way you could consider delaying this vote until the end of  
the post-release future direction discussion, and then only holding it  
if we fail to reach consensus? 

On 4/8/2016 12:29 AM, Peter wrote: 
> To provide some context on why I've put this to  a vote: 
> 
> Previous arguments against fixing security have suggested it's not relevant 
>to local networks where River is deployed. 
> 
> But I've received some mixed messages regarding security recently. 
> 
> Although we can never fully guarantee complete security, we can address known 
>issues if we choose to. 
> 
> Having this vote will help clarify whether security is important or not to 
>the community. 
> 
> Once that is determined it will be easier to guage whether the time and 
>effort in creating proofs for the existance of vulnerabilities is worthwhile. 
> 
> Regards, 
> 
> Peter. 
> 
> Sent from my Samsung device. 
> 
>Include original message 
>  Original message  
> From: Peter <j...@zeus.net.au> 
> Sent: 08/04/2016 11:38:40 am 
> To: dev@river.apache.org <d...@riverapache.org> 
> Subject: Re: [DISCUSS] [vote] should we fix security flaws? 
> 
> 
> I don't think we should delay the release to fix security. 
> 
> You have your reasons for not voting and I respect that. 
> 
>   Fixing security isn't technically difficult and I  have fixes available, 
>I'm hoping for collaborative development, so they receive peer review / 
>modification / alternate solutions / suggestions / feedback / rejection etc. 
> 
>   I haven't been successful communicating / discussing security and I think 
>that will take some time to sort out. 
> 
> The ability to take down servers using dos is annoying and easily 
>demonstrated (I've started writing some code to do so), however Gadget attacks 
>allow an attacker to take over systems, steal data etc, but are less easily 
>demonstrated.  While there are existing known gadget attacks, the ones I'm 
>aware of have fixes, so I'll be looking for a zero day to demonstrate.  While 
>whack a mole is one approach to fixes, it would be better to provide an api to 
>support input validation. 
> 
> http://frohoff.github.io/appseccali-marshalling-pickles/ 
> 
> Gadget attacks create object graphs using existing local classes to create 
>execution paths that perform malicious actions during deserialization, this is 
>a relatively recent development.  Security advisories recommend against 
>deserializing from untrusted sources. 
> 
> The intent of the vote request is to determine whether fixing security issues 
>is an option in future. 
> 
> If the result is no, it's my intention is to focus on getting River off svn 
>into git, so it's easier to maintain my own branch while sharing and 
>contributing to a common code base. 
> 
> If yes then I'll work on improving my communication skills for discussing  
>security related issue's. 
> 
> Discussing this won't hold up a release as the time windows available for me 
>to work on producing a release are weekends only.  I'm going to have to create 
>the release artifacts on MSWindows, so need to check the scripts work properly 
>and understand recent build changes. 
> 
> I also have other goals, I'll be ready to set up a public service registrar, 
>discoverable over ipv6 in the near future. 
> 
> If the no vote wins, I promise not to mention security on this list again. 
> 
> Regards, 
> 
> Peter. 
> 
> Sent from my Samsung device. 
> 
>Include original message 
>  Original message  
> From: Patricia Shanahan <p...@acm.org> 
> Sent: 08/04/2016 06:34:23 am 
> To: dev@riverapacheorg 
> Subject: [DISCUSS] [vote] should we fix security flaws? 
> 
> I am not prepared to vote on this. 
> 
> First of all, I would

Re: [DISCUSS] [vote] should we fix security flaws?

2016-04-07 Thread Peter
 
I don't think we should delay the release to fix security. 

You have your reasons for not voting and I respect that.

 Fixing security isn't technically difficult and I  have fixes available, I'm 
hoping for collaborative development, so they receive peer review / 
modification / alternate solutions / suggestions / feedback / rejection etc.

 I haven't been successful communicating / discussing security and I think that 
will take some time to sort out.

The ability to take down servers using dos is annoying and easily demonstrated 
(I've started writing some code to do so), however Gadget attacks allow an 
attacker to take over systems, steal data etc, but are less easily 
demonstrated.  While there are existing known gadget attacks, the ones I'm 
aware of have fixes, so I'll be looking for a zero day to demonstrate.  While 
whack a mole is one approach to fixes, it would be better to provide an api to 
support input validation.

http://frohoff.github.io/appseccali-marshalling-pickles/

Gadget attacks create object graphs using existing local classes to create 
execution paths that perform malicious actions during deserialization, this is 
a relatively recent development.  Security advisories recommend against 
deserializing from untrusted sources.

The intent of the vote request is to determine whether fixing security issues 
is an option in future.   

If the result is no, it's my intention is to focus on getting River off svn 
into git, so it's easier to maintain my own branch while sharing and 
contributing to a common code base.

If yes then I'll work on improving my communication skills for discussing  
security related issue's.

Discussing this won't hold up a release as the time windows available for me to 
work on producing a release are weekends only.  I'm going to have to create the 
release artifacts on MSWindows, so need to check the scripts work properly and 
understand recent build changes.

I also have other goals, I'll be ready to set up a public service registrar, 
discoverable over ipv6 in the near future. 

If the no vote wins, I promise not to mention security on this list again.

Regards,

Peter.

Sent from my Samsung device.
 
  Include original message
 Original message 
From: Patricia Shanahan <p...@acm.org>
Sent: 08/04/2016 06:34:23 am
To: dev@river.apache.org
Subject: [DISCUSS] [vote] should we fix security flaws?

I am not prepared to vote on this. 

First of all, I would need, on a private list where we can go into  
details of security issues, to get a feeling for the seriousness of the  
flaws in question. A denial of service is, in many contexts, less  
serious than file corruption. 

We may want to consider investigating the actual and proposed use-cases  
for River before deciding this. 

Do you feel any of the security flaws in question are release-blockers  
for River 3.0? How long would fixing them first delay the release? 

On 4/7/2016 12:36 PM, Peter wrote: 
> How do people on this project feel about security flaws? 
> 
> Should we be fixing them? 
> 
> I can provide evidence of vulnerabilities, I'm not proposing my fixes be 
>adopted. 
> 
> Vote: 
> 
>   +1 Yes we should aim to fix security flaws. 
> 0 don't care. 
> -1 No. 
> 
> Regards, 
> 
> Peter. 
> 
> 
> 
> Sent from my Samsung device. 
> 
> 





[DISCUSS] [vote] should we fix security flaws?

2016-04-07 Thread Patricia Shanahan

I am not prepared to vote on this.

First of all, I would need, on a private list where we can go into 
details of security issues, to get a feeling for the seriousness of the 
flaws in question. A denial of service is, in many contexts, less 
serious than file corruption.


We may want to consider investigating the actual and proposed use-cases 
for River before deciding this.


Do you feel any of the security flaws in question are release-blockers 
for River 3.0? How long would fixing them first delay the release?


On 4/7/2016 12:36 PM, Peter wrote:

How do people on this project feel about security flaws?

Should we be fixing them?

I can provide evidence of vulnerabilities, I'm not proposing my fixes be 
adopted.

Vote:

  +1 Yes we should aim to fix security flaws.
0 don't care.
-1 No.

Regards,

Peter.



Sent from my Samsung device.




[vote] should we fix security flaws?

2016-04-07 Thread Peter
How do people on this project feel about security flaws?

Should we be fixing them? 

I can provide evidence of vulnerabilities, I'm not proposing my fixes be 
adopted.

Vote:

 +1 Yes we should aim to fix security flaws.
0 don't care.
-1 No.

Regards,

Peter.



Sent from my Samsung device.
 


VOTE: Take Security seriously or my resignation.

2016-01-06 Thread Peter Firmstone
Option 1.  I propose that we take security seriously, no security patches are 
to be rejected prior to review, that we review and analyse them properly based 
on merit. That discussions about security issues be taken seriously.

Option 2.  Alternatively I resign my River committer status

Please cast your vote, the vote is open for 7 days.

Let the community decide.

Regards,

Peter

Sent from my Samsung device.
 


Re: VOTE: Take Security seriously or my resignation.

2016-01-06 Thread Patricia Shanahan

Please, please cancel this.

We do need to have a serious discussion of River future direction. I
expect that discussion to take a lot longer than a week, and hope it
will involve as many users and potential users of River as possible. For
example, we may need to canvas other project mailing lists to find out
whether a River with specific changes would be useful to them.

It will certainly take me more than a week to study the subject, and the
various opinions about it, sufficiently to be prepared to vote.

I feel, very strongly, that we need to get River 3.0 out the door ASAP.
Even with enough time for proper study, holding the River future
discussion first will inevitably distract from that objective and delay
the release. I thought that was also the PMC consensus.

My preferred plan is get existing changes out as River 3.0 first, then
discussion and study, then vote on future direction. I am sorely tempted
to resign if this premature vote goes ahead, regardless of the outcome,
but will not because I don't think such threats are an appropriate way
of influencing PMC votes.

Patricia

On 1/6/2016 4:21 AM, Peter Firmstone wrote:

Option 1.  I propose that we take security seriously, no security patches are 
to be rejected prior to review, that we review and analyse them properly based 
on merit. That discussions about security issues be taken seriously.

Option 2.  Alternatively I resign my River committer status

Please cast your vote, the vote is open for 7 days.

Let the community decide.

Regards,

Peter

Sent from my Samsung device.




Re: VOTE: Take Security seriously or my resignation.

2016-01-06 Thread James Hurley

+1

-Jim

On Jan 06, 2016, at 10:13 AM, Patricia Shanahan <p...@acm.org> wrote:
Please, please cancel this.

We do need to have a serious discussion of River future direction. I
expect that discussion to take a lot longer than a week, and hope it
will involve as many users and potential users of River as possible. For
example, we may need to canvas other project mailing lists to find out
whether a River with specific changes would be useful to them.

It will certainly take me more than a week to study the subject, and the
various opinions about it, sufficiently to be prepared to vote.

I feel, very strongly, that we need to get River 3.0 out the door ASAP.
Even with enough time for proper study, holding the River future
discussion first will inevitably distract from that objective and delay
the release. I thought that was also the PMC consensus.

My preferred plan is get existing changes out as River 3.0 first, then
discussion and study, then vote on future direction. I am sorely tempted
to resign if this premature vote goes ahead, regardless of the outcome,
but will not because I don't think such threats are an appropriate way
of influencing PMC votes.

Patricia

On 1/6/2016 4:21 AM, Peter Firmstone wrote:
Option 1. I propose that we take security seriously, no security patches are to 
be rejected prior to review, that we review and analyse them properly based on 
merit. That discussions about security issues be taken seriously.

Option 2. Alternatively I resign my River committer status

Please cast your vote, the vote is open for 7 days.

Let the community decide.

Regards,

Peter

Sent from my Samsung device.




Re: VOTE: Take Security seriously or my resignation.

2016-01-06 Thread Greg Trasuk
Hi Jim:

Good to see you back here!

Cheers,

Greg Trasuk
> On Jan 6, 2016, at 10:31 AM, James Hurley <jim.hur...@icloud.com> wrote:
> 
> +1
> 
> -Jim
> 
> On Jan 06, 2016, at 10:13 AM, Patricia Shanahan <p...@acm.org> wrote:
>> Please, please cancel this.
>> 
>> We do need to have a serious discussion of River future direction. I
>> expect that discussion to take a lot longer than a week, and hope it
>> will involve as many users and potential users of River as possible. For
>> example, we may need to canvas other project mailing lists to find out
>> whether a River with specific changes would be useful to them.
>> 
>> It will certainly take me more than a week to study the subject, and the
>> various opinions about it, sufficiently to be prepared to vote.
>> 
>> I feel, very strongly, that we need to get River 3.0 out the door ASAP.
>> Even with enough time for proper study, holding the River future
>> discussion first will inevitably distract from that objective and delay
>> the release. I thought that was also the PMC consensus.
>> 
>> My preferred plan is get existing changes out as River 3.0 first, then
>> discussion and study, then vote on future direction. I am sorely tempted
>> to resign if this premature vote goes ahead, regardless of the outcome,
>> but will not because I don't think such threats are an appropriate way
>> of influencing PMC votes.
>> 
>> Patricia
>> 
>> On 1/6/2016 4:21 AM, Peter Firmstone wrote:
>>> Option 1. I propose that we take security seriously, no security patches 
>>> are to be rejected prior to review, that we review and analyse them 
>>> properly based on merit. That discussions about security issues be taken 
>>> seriously.
>>> 
>>> Option 2. Alternatively I resign my River committer status
>>> 
>>> Please cast your vote, the vote is open for 7 days.
>>> 
>>> Let the community decide.
>>> 
>>> Regards,
>>> 
>>> Peter
>>> 
>>> Sent from my Samsung device.
>>> 
>>> 



Re: VOTE: Take Security seriously or my resignation.

2016-01-06 Thread Bryan Thompson
Peter,

I think that there might be a consensus for publishing 3.0 and then
considering security patches against it.

Bryan


Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@systap.com
http://blazegraph.com
http://blog.blazegraph.com

Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance
graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints
APIs.  Blazegraph is now available with GPU acceleration using our disruptive
technology to accelerate data-parallel graph analytics and graph query.

CONFIDENTIALITY NOTICE:  This email and its contents and attachments are
for the sole use of the intended recipient(s) and are confidential or
proprietary to SYSTAP. Any unauthorized review, use, disclosure,
dissemination or copying of this email or its contents or attachments is
prohibited. If you have received this communication in error, please notify
the sender by reply email and permanently delete all copies of the email
and its contents and attachments.

On Wed, Jan 6, 2016 at 10:31 AM, James Hurley <jim.hur...@icloud.com> wrote:

> +1
>
> -Jim
>
> On Jan 06, 2016, at 10:13 AM, Patricia Shanahan <p...@acm.org> wrote:
>
> Please, please cancel this.
>
> We do need to have a serious discussion of River future direction. I
> expect that discussion to take a lot longer than a week, and hope it
> will involve as many users and potential users of River as possible. For
> example, we may need to canvas other project mailing lists to find out
> whether a River with specific changes would be useful to them.
>
> It will certainly take me more than a week to study the subject, and the
> various opinions about it, sufficiently to be prepared to vote.
>
> I feel, very strongly, that we need to get River 3.0 out the door ASAP.
> Even with enough time for proper study, holding the River future
> discussion first will inevitably distract from that objective and delay
> the release. I thought that was also the PMC consensus.
>
> My preferred plan is get existing changes out as River 3.0 first, then
> discussion and study, then vote on future direction. I am sorely tempted
> to resign if this premature vote goes ahead, regardless of the outcome,
> but will not because I don't think such threats are an appropriate way
> of influencing PMC votes.
>
> Patricia
>
> On 1/6/2016 4:21 AM, Peter Firmstone wrote:
>
> Option 1. I propose that we take security seriously, no security patches
> are to be rejected prior to review, that we review and analyse them
> properly based on merit. That discussions about security issues be taken
> seriously.
>
>
> Option 2. Alternatively I resign my River committer status
>
>
> Please cast your vote, the vote is open for 7 days.
>
>
> Let the community decide.
>
>
> Regards,
>
>
> Peter
>
>
> Sent from my Samsung device.
>
>
>
>


Cancelled. Re: VOTE: Take Security seriously or my resignation.

2016-01-06 Thread Peter
Vote withdrawn.

Peter.

Sent from my Samsung device.
 
  Include original message
 Original message 
From: Patricia Shanahan <p...@acm.org>
Sent: 07/01/2016 01:13:23 am
To: dev@river.apache.org
Subject: Re: VOTE: Take Security seriously or my resignation.

Please, please cancel this. 

We do need to have a serious discussion of River future direction. I 
expect that discussion to take a lot longer than a week, and hope it 
will involve as many users and potential users of River as possible. For 
example, we may need to canvas other project mailing lists to find out 
whether a River with specific changes would be useful to them. 

It will certainly take me more than a week to study the subject, and the 
various opinions about it, sufficiently to be prepared to vote. 

I feel, very strongly, that we need to get River 3.0 out the door ASAP 
Even with enough time for proper study, holding the River future 
discussion first will inevitably distract from that objective and delay 
the release. I thought that was also the PMC consensus. 

My preferred plan is get existing changes out as River 3.0 first, then 
discussion and study, then vote on future direction. I am sorely tempted 
to resign if this premature vote goes ahead, regardless of the outcome, 
but will not because I don't think such threats are an appropriate way 
of influencing PMC votes. 

Patricia 

On 1/6/2016 4:21 AM, Peter Firmstone wrote: 
> Option 1.  I propose that we take security seriously, no security patches are 
>to be rejected prior to review, that we review and analyse them properly based 
>on merit. That discussions about security issues be taken seriously. 
> 
> Option 2.  Alternatively I resign my River committer status 
> 
> Please cast your vote, the vote is open for 7 days. 
> 
> Let the community decide. 
> 
> Regards, 
> 
> Peter 
> 
> Sent from my Samsung device. 
> 
> 



research article on Jini /River security

2015-12-21 Thread Peter Firmstone
http://www.hindawi.com/journals/ijdsn/2015/205793/

Regards,

Peter

Re: Security

2015-02-25 Thread Peter
HIP looks very promising.  Certainly solve a number of issues for River.

Jini outran the capabilities of the underlying java platform (class identity, 
resolution and isolation) and underlying network protocols.

Hopefully one day these issues will be resolved.  

Cheers,

Peter.


- Original message -
 I think the next big thing is going to be HIP networks where Jini could
 excel as a communications platform via service discovery and the other
 parts of the platform that make it fast and easy to put together remote
 communications.
 
 Gregg
 
  On Feb 21, 2015, at 9:22 PM, Peter j...@zeus.net.au wrote:
  
  - Original message -
   
   Yes, “accidental” DOS certainly could apply, which is why I say that
   simple measures (like limiting the number of bytes that
   PreferredClassLoader will download before giving up) are a good
   idea.   But I think that any radical re-imagining of object
   serialization is outside the scope of the River project.
  
  Ok, I'll bite, the work I've done doesn't fit into the radical
  re-imagining category by any stretch, it uses the existing
  ObjectInputStream public api and the public serial form of existing
  objects.   It does however allow people to implement an additional
  constructor by declaring an annotation, so they can check invariants. 
  These invariant checks won't be performed by the standard
  ObjectInputStream, but the classes are compatible with either.
  
  My implementation also significantly outperforms java's standard
  ObjectInputStream, reflectively calling one constructor is more
  performant than reflectively setting every field in each class of an
  Object's hierarchy.
  
  I've decided I'll work on this on github, where interested parties can
  participate if they want.
  
  
  
   
   Cheers,
   
   Greg Trasuk
   
   On Feb 19, 2015, at 11:39 AM, Patricia Shanahan p...@acm.org wrote:
   
I generally agree, but do have a question.

In other contexts, I've seen unintentional bugs, rather than
deliberate DOS, lead to behavior similar to DOS. A program goes
wrong, and tries to e.g. allocate far too much memory, or goes
into a loop. In contexts where that can happen, work to protect
against DOS also makes the software more robust.

In shared service situations, an apparently non-critical program
can cause a DOS that also affects more important programs. Either
all programs have to be designed, reviewed, and tested to the
reliability requirements of the most sensitive program with which
they share resources, or there has to be isolation between them.

Does this sort of consideration apply in reality to River?

On 2/19/2015 6:58 AM, Greg Trasuk wrote:
 
 The type of issues you’re talking about seem to be centred on
 putting Jini services on the open internet, and allowing
 untrusted, unknown clients to access those services safely.
 
 Personally, my interest is more along the lines of Jini’s
 original goal, which was LAN-scoped or datacenter-scoped SOA.   
 Further, I     use it on more controlled networks.     As far as I’m
 concerned, only code that I trust gets on the network.     In a
 larger corporate scenario, I might lock down access to Reggie,
 but beyond that, I don’t consider DOS a threat.     I think it
 would make sense to be able to put a byte limit on the stream
 used to load the class, and possibly a time limit, but beyond
 that, I think you’re adding complexity that isn’t needed.     If
 you want to put a service on the web, use RESTful services, not
 Jini.     I’m sure there’s a discoverability tool out there, if
 needed, but typically it isn’t.
 
 Also, since object serialization is not specific to River, I
 wonder if there’s a better forum for these kinds of deep
 discussions.     I think it makes River look far harder than it is.
 
 Cheers,
 
 Greg Trasuk.
 
 On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:
 
  What are your thoughts on security?
  
  Is it important to you?     Is it important for River?
  
  Regards,
  
  Peter.
 
   
  
 



Re: Security

2015-02-24 Thread Gregg Wonderly
I think the next big thing is going to be HIP networks where Jini could excel 
as a communications platform via service discovery and the other parts of the 
platform that make it fast and easy to put together remote communications.

Gregg

 On Feb 21, 2015, at 9:22 PM, Peter j...@zeus.net.au wrote:
 
 - Original message -
 
 Yes, “accidental” DOS certainly could apply, which is why I say that
 simple measures (like limiting the number of bytes that
 PreferredClassLoader will download before giving up) are a good idea. 
 But I think that any radical re-imagining of object serialization is
 outside the scope of the River project.
 
 Ok, I'll bite, the work I've done doesn't fit into the radical re-imagining 
 category by any stretch, it uses the existing ObjectInputStream public api 
 and the public serial form of existing objects.  It does however allow people 
 to implement an additional constructor by declaring an annotation, so they 
 can check invariants.  These invariant checks won't be performed by the 
 standard ObjectInputStream, but the classes are compatible with either.
 
 My implementation also significantly outperforms java's standard 
 ObjectInputStream, reflectively calling one constructor is more performant 
 than reflectively setting every field in each class of an Object's hierarchy.
 
 I've decided I'll work on this on github, where interested parties can 
 participate if they want.
 
 
 
 
 Cheers,
 
 Greg Trasuk
 
 On Feb 19, 2015, at 11:39 AM, Patricia Shanahan p...@acm.org wrote:
 
 I generally agree, but do have a question.
 
 In other contexts, I've seen unintentional bugs, rather than
 deliberate DOS, lead to behavior similar to DOS. A program goes wrong,
 and tries to e.g. allocate far too much memory, or goes into a loop.
 In contexts where that can happen, work to protect against DOS also
 makes the software more robust.
 
 In shared service situations, an apparently non-critical program can
 cause a DOS that also affects more important programs. Either all
 programs have to be designed, reviewed, and tested to the reliability
 requirements of the most sensitive program with which they share
 resources, or there has to be isolation between them.
 
 Does this sort of consideration apply in reality to River?
 
 On 2/19/2015 6:58 AM, Greg Trasuk wrote:
 
 The type of issues you’re talking about seem to be centred on putting
 Jini services on the open internet, and allowing untrusted, unknown
 clients to access those services safely.
 
 Personally, my interest is more along the lines of Jini’s original
 goal, which was LAN-scoped or datacenter-scoped SOA.   Further, I   use
 it on more controlled networks.   As far as I’m concerned, only code
 that I trust gets on the network.   In a larger corporate scenario, I
 might lock down access to Reggie, but beyond that, I don’t consider
 DOS a threat.   I think it would make sense to be able to put a byte
 limit on the stream used to load the class, and possibly a time
 limit, but beyond that, I think you’re adding complexity that isn’t
 needed.   If you want to put a service on the web, use RESTful
 services, not Jini.   I’m sure there’s a discoverability tool out
 there, if needed, but typically it isn’t.
 
 Also, since object serialization is not specific to River, I wonder
 if there’s a better forum for these kinds of deep discussions.   I
 think it makes River look far harder than it is.
 
 Cheers,
 
 Greg Trasuk.
 
 On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:
 
 What are your thoughts on security?
 
 Is it important to you?   Is it important for River?
 
 Regards,
 
 Peter.
 
 
 



Re: Security

2015-02-21 Thread Peter
- Original message -
 
 Yes, “accidental” DOS certainly could apply, which is why I say that
 simple measures (like limiting the number of bytes that
 PreferredClassLoader will download before giving up) are a good idea. 
 But I think that any radical re-imagining of object serialization is
 outside the scope of the River project.

Ok, I'll bite, the work I've done doesn't fit into the radical re-imagining 
category by any stretch, it uses the existing ObjectInputStream public api and 
the public serial form of existing objects.  It does however allow people to 
implement an additional constructor by declaring an annotation, so they can 
check invariants.  These invariant checks won't be performed by the standard 
ObjectInputStream, but the classes are compatible with either.

My implementation also significantly outperforms java's standard 
ObjectInputStream, reflectively calling one constructor is more performant than 
reflectively setting every field in each class of an Object's hierarchy.

I've decided I'll work on this on github, where interested parties can 
participate if they want.



 
 Cheers,
 
 Greg Trasuk
 
 On Feb 19, 2015, at 11:39 AM, Patricia Shanahan p...@acm.org wrote:
 
  I generally agree, but do have a question.
  
  In other contexts, I've seen unintentional bugs, rather than
  deliberate DOS, lead to behavior similar to DOS. A program goes wrong,
  and tries to e.g. allocate far too much memory, or goes into a loop.
  In contexts where that can happen, work to protect against DOS also
  makes the software more robust.
  
  In shared service situations, an apparently non-critical program can
  cause a DOS that also affects more important programs. Either all
  programs have to be designed, reviewed, and tested to the reliability
  requirements of the most sensitive program with which they share
  resources, or there has to be isolation between them.
  
  Does this sort of consideration apply in reality to River?
  
  On 2/19/2015 6:58 AM, Greg Trasuk wrote:
   
   The type of issues you’re talking about seem to be centred on putting
   Jini services on the open internet, and allowing untrusted, unknown
   clients to access those services safely.
   
   Personally, my interest is more along the lines of Jini’s original
   goal, which was LAN-scoped or datacenter-scoped SOA.   Further, I   use
   it on more controlled networks.   As far as I’m concerned, only code
   that I trust gets on the network.   In a larger corporate scenario, I
   might lock down access to Reggie, but beyond that, I don’t consider
   DOS a threat.   I think it would make sense to be able to put a byte
   limit on the stream used to load the class, and possibly a time
   limit, but beyond that, I think you’re adding complexity that isn’t
   needed.   If you want to put a service on the web, use RESTful
   services, not Jini.   I’m sure there’s a discoverability tool out
   there, if needed, but typically it isn’t.
   
   Also, since object serialization is not specific to River, I wonder
   if there’s a better forum for these kinds of deep discussions.   I
   think it makes River look far harder than it is.
   
   Cheers,
   
   Greg Trasuk.
   
   On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:
   
What are your thoughts on security?

Is it important to you?   Is it important for River?

Regards,

Peter.
   
 



Re: Security

2015-02-20 Thread Peter
Thanks for your interest.

It doesn't, but if there's no downloaded code, or there's no unprivileged code 
in the execution context, then the code is privileged, or is run with the 
privileges of the users Subject.

The return argument to a remote call doesn't run with a domain representing the 
subject of the remote caller.  Remote method invocation does so, only for 
parameter arguments.   In any case ObjectInputStream is an easy target for dos, 
with or without downloaded code.

I've decided not to investigate Serialization security any further, at least 
for the River project.

While it is possible to secure Serialization, the obstacles aren't technical; 
there is little interest and already some objections. A suitable solution 
wouldn't inconvenience users of untrusted networks in any way whatsoever, but 
that's a moot point

I have participated in some discussions on core-libs-dev, there are some 
changes in the pipeline to make minor improvements to Serialization's 
susceptibility to dos, but there's an enormous amount of legacy code the will 
continue to cause problems.

I think River could potentially simplify it's service model, if this project 
were to only target trusted networks with some limited capacity for untrusted 
networks.

Proxy trust adds significant complexity, a common complaint, it almost 
succeded, but it's not quite right. 

Secure Discovery V2 can be used for connections over untrusted networks, but 
only where the lookup service only allows trusted clients and services to 
connect and all parties in a djinn group trust each other.

This makes me wonder if proxy trust can be dropped altogether and if secure 
discovery V2 and secure authenticated connections can be relied upon completely 
for security. 

When the participating members agree on defining some goals, we can look at how 
we can better align our security api's with those goals.

My original interest in River was for untrusted networks. 

Regards,

Peter.

- Original message -
 BTW - I'm really interested in the reasoning why deserialization code
 does not call the non-serializable superclass constructor in the
 security context of the subclass(es) - so that it really mimics the
 normal constructor call chain.
 
 Michal
 
 Michał Kłeczek (XPro) wrote:
  Isn't the issue with non-serializable superclass constructor call this
  one? :
  http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-5353
  
  If so - I don't really see how it relates to River - to be able to
  expoit this kind of vulnerability an attacker must have already
  downloaded and run his code - otherwise the exploiting subclass could
  not have been loaded hence no construction would take place.
  I think if we can make sure only trusted (whatever that means :) ) code
  is ever run - we don't have to do anything more with (de)serialization
  - except making DoS more difficult by restricting size of the stream.
  
  Thanks,
  Michal
  
  Peter Firmstone wrote:
   Continuing on ...
   
   Lets say for example, we have a secure OS and we provide a service on
   a public port and we have a determined attacker attempting to use
   deserialization to take over our system or bring it to its knees with
   denial of service.
   
   We know this is relatively easy with standard ObjectInputStream.
   
   When we invoke a remote method, the return value is what you call a
   root object, that is it's the root of a serialized object tree,
   originating through the root object, via its fields.
   
   Most Serializable objects have invariants.
   
   We can consider the java serialization stream format as a string of
   commands that allows creation of any object, bypassing all levels of
   visibility.   That is an attacker can create package private classes
   or private internal classes, provided they implement Serializable. 
   So think of Serializable as a public constructor that lacks a context
   representing the attacker.   That's right, there is no context
   representing the remote endpoint in the thread call stack.
   
   Now we can place restrictions on the stream, such as a requirement
   that the stream is reset before a limit on the number of objects
   deserialized is reached.   We can also place a limit on the count of
   all array elements read in from the stream, until the stream is
   reset.   These measures ensure the stream cannot go on forever, they
   also prevent stack overflow and out of memory errors.   At some point
   the caller will regain control of the stream without running out of
   resources.
   
   But getting back to the previous problem, the attacker has command
   line access to create any Serializable object.
   
   Now most objects have invariants that should be satisfied, for
   correct functional state, but a major problem with Serialization is
   it creates an instance, using the first zero arg constructor of the
   first non serializable superclass.   Yep that's right, that could be
   ClassLoader, you clever

Re: Security

2015-02-19 Thread Peter
What are your thoughts on security?

Is it important to you?  Is it important for River?

Regards,

Peter.

Re: Security

2015-02-19 Thread Michał Kłeczek (XPro)
Isn't the issue with non-serializable superclass constructor call this
one? :
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-5353

If so - I don't really see how it relates to River - to be able to
expoit this kind of vulnerability an attacker must have already
downloaded and run his code - otherwise the exploiting subclass could
not have been loaded hence no construction would take place.
I think if we can make sure only trusted (whatever that means :) ) code
is ever run - we don't have to do anything more with (de)serialization -
except making DoS more difficult by restricting size of the stream.

Thanks,
Michal

Peter Firmstone wrote:
 Continuing on ...

 Lets say for example, we have a secure OS and we provide a service on
 a public port and we have a determined attacker attempting to use
 deserialization to take over our system or bring it to its knees with
 denial of service.

 We know this is relatively easy with standard ObjectInputStream.

 When we invoke a remote method, the return value is what you call a
 root object, that is it's the root of a serialized object tree,
 originating through the root object, via its fields.

 Most Serializable objects have invariants.

 We can consider the java serialization stream format as a string of
 commands that allows creation of any object, bypassing all levels of
 visibility.  That is an attacker can create package private classes or
 private internal classes, provided they implement Serializable.  So
 think of Serializable as a public constructor that lacks a context
 representing the attacker.  That's right, there is no context
 representing the remote endpoint in the thread call stack.

 Now we can place restrictions on the stream, such as a requirement
 that the stream is reset before a limit on the number of objects
 deserialized is reached.  We can also place a limit on the count of
 all array elements read in from the stream, until the stream is
 reset.  These measures ensure the stream cannot go on forever, they
 also prevent stack overflow and out of memory errors.  At some point
 the caller will regain control of the stream without running out of
 resources.

 But getting back to the previous problem, the attacker has command
 line access to create any Serializable object.

 Now most objects have invariants that should be satisfied, for correct
 functional state, but a major problem with Serialization is it creates
 an instance, using the first zero arg constructor of the first non
 serializable superclass.  Yep that's right, that could be ClassLoader,
 you clever attacker you!  The poor old object hasn't even had a chance
 to check it's invariants and it's game over, that's not fair!

 This will never do, I had to create a new contract for atomic object
 construction:

   1. The class must implement Serializable (can be inherited) AND one
  of the following conditions must be met:
   2. The class must be annotated with @AtomicSerial OR
   3. The class must be stateless and can be created by calling
  class.newInstance() OR
   4. The class must have DeSerializationPermission

 Now if the class is annotated with @AtomicSerial, it must also have a
 constructor signature that has public or default visibility:

 SomeClass(GetArg arg) throws IOException{
 // The first thing I must do is to call a static method that
 checks invariants, before calling a superclass constructor, the static
 method should return the argument for another constructor.
 }

 Some simple rules for our object input stream:

   1. Though shalt not publish a reference to a partially constructed
  object.
   2. Though shalt not publish a reference if an object fails
  construction, where readObject, readObjectNoData and readResolve
  methods are considered to be constructors for the purpose of
  deserialization of conventional Serializable objects.
   3. Though shalt not attempt to construct a Serializable object that
  doesn't have an @AtomicSerial annotation, or has serial fields
  (state) and doesn't have DeSerializationPermission.
   4. Only call a protected constructor if the class doesn't implement
  @AtomicSerial and has DeSerializationPermission.
   5. Do not honour the standard java Serialization construction
  contract (not all Serializable classes can be constructed even if
  they have DeSerializationPermission).
   6. If a standard Serializable class has DeSerializationPermission for
  it to be constructed it must have a zero arg constructor or a
  constructor that accepts null object arguments and default
  primitive values.
   7. If an any object in a serialized object graph fails it's invariant
  checks, deserialization of the object graph fails at that point
  and control is returned to the caller, by way of an
  InvalidObjectException (a child class of IOException).
   8. Honour the standard java serialization stream protocol.
   9. Count number of objects received and throw a
  

Re: Security

2015-02-19 Thread Michał Kłeczek (XPro)
BTW - I'm really interested in the reasoning why deserialization code
does not call the non-serializable superclass constructor in the
security context of the subclass(es) - so that it really mimics the
normal constructor call chain.

Michal

Michał Kłeczek (XPro) wrote:
 Isn't the issue with non-serializable superclass constructor call this
 one? :
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-5353

 If so - I don't really see how it relates to River - to be able to
 expoit this kind of vulnerability an attacker must have already
 downloaded and run his code - otherwise the exploiting subclass could
 not have been loaded hence no construction would take place.
 I think if we can make sure only trusted (whatever that means :) ) code
 is ever run - we don't have to do anything more with (de)serialization -
 except making DoS more difficult by restricting size of the stream.

 Thanks,
 Michal

 Peter Firmstone wrote:
 Continuing on ...

 Lets say for example, we have a secure OS and we provide a service on
 a public port and we have a determined attacker attempting to use
 deserialization to take over our system or bring it to its knees with
 denial of service.

 We know this is relatively easy with standard ObjectInputStream.

 When we invoke a remote method, the return value is what you call a
 root object, that is it's the root of a serialized object tree,
 originating through the root object, via its fields.

 Most Serializable objects have invariants.

 We can consider the java serialization stream format as a string of
 commands that allows creation of any object, bypassing all levels of
 visibility.  That is an attacker can create package private classes or
 private internal classes, provided they implement Serializable.  So
 think of Serializable as a public constructor that lacks a context
 representing the attacker.  That's right, there is no context
 representing the remote endpoint in the thread call stack.

 Now we can place restrictions on the stream, such as a requirement
 that the stream is reset before a limit on the number of objects
 deserialized is reached.  We can also place a limit on the count of
 all array elements read in from the stream, until the stream is
 reset.  These measures ensure the stream cannot go on forever, they
 also prevent stack overflow and out of memory errors.  At some point
 the caller will regain control of the stream without running out of
 resources.

 But getting back to the previous problem, the attacker has command
 line access to create any Serializable object.

 Now most objects have invariants that should be satisfied, for correct
 functional state, but a major problem with Serialization is it creates
 an instance, using the first zero arg constructor of the first non
 serializable superclass.  Yep that's right, that could be ClassLoader,
 you clever attacker you!  The poor old object hasn't even had a chance
 to check it's invariants and it's game over, that's not fair!

 This will never do, I had to create a new contract for atomic object
 construction:

   1. The class must implement Serializable (can be inherited) AND one
  of the following conditions must be met:
   2. The class must be annotated with @AtomicSerial OR
   3. The class must be stateless and can be created by calling
  class.newInstance() OR
   4. The class must have DeSerializationPermission

 Now if the class is annotated with @AtomicSerial, it must also have a
 constructor signature that has public or default visibility:

 SomeClass(GetArg arg) throws IOException{
 // The first thing I must do is to call a static method that
 checks invariants, before calling a superclass constructor, the static
 method should return the argument for another constructor.
 }

 Some simple rules for our object input stream:

   1. Though shalt not publish a reference to a partially constructed
  object.
   2. Though shalt not publish a reference if an object fails
  construction, where readObject, readObjectNoData and readResolve
  methods are considered to be constructors for the purpose of
  deserialization of conventional Serializable objects.
   3. Though shalt not attempt to construct a Serializable object that
  doesn't have an @AtomicSerial annotation, or has serial fields
  (state) and doesn't have DeSerializationPermission.
   4. Only call a protected constructor if the class doesn't implement
  @AtomicSerial and has DeSerializationPermission.
   5. Do not honour the standard java Serialization construction
  contract (not all Serializable classes can be constructed even if
  they have DeSerializationPermission).
   6. If a standard Serializable class has DeSerializationPermission for
  it to be constructed it must have a zero arg constructor or a
  constructor that accepts null object arguments and default
  primitive values.
   7. If an any object in a serialized object graph fails it's invariant
  checks

Re: Security

2015-02-19 Thread Patricia Shanahan

I generally agree, but do have a question.

In other contexts, I've seen unintentional bugs, rather than deliberate 
DOS, lead to behavior similar to DOS. A program goes wrong, and tries to 
e.g. allocate far too much memory, or goes into a loop. In contexts 
where that can happen, work to protect against DOS also makes the 
software more robust.


In shared service situations, an apparently non-critical program can 
cause a DOS that also affects more important programs. Either all 
programs have to be designed, reviewed, and tested to the reliability 
requirements of the most sensitive program with which they share 
resources, or there has to be isolation between them.


Does this sort of consideration apply in reality to River?

On 2/19/2015 6:58 AM, Greg Trasuk wrote:


The type of issues you’re talking about seem to be centred on putting
Jini services on the open internet, and allowing untrusted, unknown
clients to access those services safely.

Personally, my interest is more along the lines of Jini’s original
goal, which was LAN-scoped or datacenter-scoped SOA.  Further, I  use
it on more controlled networks.  As far as I’m concerned, only code
that I trust gets on the network.  In a larger corporate scenario, I
might lock down access to Reggie, but beyond that, I don’t consider
DOS a threat.  I think it would make sense to be able to put a byte
limit on the stream used to load the class, and possibly a time
limit, but beyond that, I think you’re adding complexity that isn’t
needed.  If you want to put a service on the web, use RESTful
services, not Jini.  I’m sure there’s a discoverability tool out
there, if needed, but typically it isn’t.

Also, since object serialization is not specific to River, I wonder
if there’s a better forum for these kinds of deep discussions.  I
think it makes River look far harder than it is.

Cheers,

Greg Trasuk.

On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:


What are your thoughts on security?

Is it important to you?  Is it important for River?

Regards,

Peter.




Re: Security

2015-02-19 Thread Greg Trasuk

Yes, “accidental” DOS certainly could apply, which is why I say that simple 
measures (like limiting the number of bytes that PreferredClassLoader will 
download before giving up) are a good idea.  But I think that any radical 
re-imagining of object serialization is outside the scope of the River project.

Cheers,

Greg Trasuk

On Feb 19, 2015, at 11:39 AM, Patricia Shanahan p...@acm.org wrote:

 I generally agree, but do have a question.
 
 In other contexts, I've seen unintentional bugs, rather than deliberate DOS, 
 lead to behavior similar to DOS. A program goes wrong, and tries to e.g. 
 allocate far too much memory, or goes into a loop. In contexts where that can 
 happen, work to protect against DOS also makes the software more robust.
 
 In shared service situations, an apparently non-critical program can cause a 
 DOS that also affects more important programs. Either all programs have to be 
 designed, reviewed, and tested to the reliability requirements of the most 
 sensitive program with which they share resources, or there has to be 
 isolation between them.
 
 Does this sort of consideration apply in reality to River?
 
 On 2/19/2015 6:58 AM, Greg Trasuk wrote:
 
 The type of issues you’re talking about seem to be centred on putting
 Jini services on the open internet, and allowing untrusted, unknown
 clients to access those services safely.
 
 Personally, my interest is more along the lines of Jini’s original
 goal, which was LAN-scoped or datacenter-scoped SOA.  Further, I  use
 it on more controlled networks.  As far as I’m concerned, only code
 that I trust gets on the network.  In a larger corporate scenario, I
 might lock down access to Reggie, but beyond that, I don’t consider
 DOS a threat.  I think it would make sense to be able to put a byte
 limit on the stream used to load the class, and possibly a time
 limit, but beyond that, I think you’re adding complexity that isn’t
 needed.  If you want to put a service on the web, use RESTful
 services, not Jini.  I’m sure there’s a discoverability tool out
 there, if needed, but typically it isn’t.
 
 Also, since object serialization is not specific to River, I wonder
 if there’s a better forum for these kinds of deep discussions.  I
 think it makes River look far harder than it is.
 
 Cheers,
 
 Greg Trasuk.
 
 On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:
 
 What are your thoughts on security?
 
 Is it important to you?  Is it important for River?
 
 Regards,
 
 Peter.
 



Re: Security

2015-02-19 Thread Greg Trasuk

The type of issues you’re talking about seem to be centred on putting Jini 
services on the open internet, and allowing untrusted, unknown clients to 
access those services safely.  

Personally, my interest is more along the lines of Jini’s original goal, which 
was LAN-scoped or datacenter-scoped SOA.  Further, I  use it on more controlled 
networks.  As far as I’m concerned, only code that I trust gets on the network. 
 In a larger corporate scenario, I might lock down access to Reggie, but beyond 
that, I don’t consider DOS a threat.  I think it would make sense to be able to 
put a byte limit on the stream used to load the class, and possibly a time 
limit, but beyond that, I think you’re adding complexity that isn’t needed.  If 
you want to put a service on the web, use RESTful services, not Jini.  I’m sure 
there’s a discoverability tool out there, if needed, but typically it isn’t.

Also, since object serialization is not specific to River, I wonder if there’s 
a better forum for these kinds of deep discussions.  I think it makes River 
look far harder than it is.

Cheers,

Greg Trasuk.

On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:

 What are your thoughts on security?
 
 Is it important to you?  Is it important for River?
 
 Regards,
 
 Peter.



Re: Security

2015-02-18 Thread Peter Firmstone

Continuing on ...

Lets say for example, we have a secure OS and we provide a service on a 
public port and we have a determined attacker attempting to use 
deserialization to take over our system or bring it to its knees with 
denial of service.


We know this is relatively easy with standard ObjectInputStream.

When we invoke a remote method, the return value is what you call a root 
object, that is it's the root of a serialized object tree, originating 
through the root object, via its fields.


Most Serializable objects have invariants.

We can consider the java serialization stream format as a string of 
commands that allows creation of any object, bypassing all levels of 
visibility.  That is an attacker can create package private classes or 
private internal classes, provided they implement Serializable.  So 
think of Serializable as a public constructor that lacks a context 
representing the attacker.  That's right, there is no context 
representing the remote endpoint in the thread call stack.


Now we can place restrictions on the stream, such as a requirement that 
the stream is reset before a limit on the number of objects deserialized 
is reached.  We can also place a limit on the count of all array 
elements read in from the stream, until the stream is reset.  These 
measures ensure the stream cannot go on forever, they also prevent stack 
overflow and out of memory errors.  At some point the caller will regain 
control of the stream without running out of resources.


But getting back to the previous problem, the attacker has command line 
access to create any Serializable object.


Now most objects have invariants that should be satisfied, for correct 
functional state, but a major problem with Serialization is it creates 
an instance, using the first zero arg constructor of the first non 
serializable superclass.  Yep that's right, that could be ClassLoader, 
you clever attacker you!  The poor old object hasn't even had a chance 
to check it's invariants and it's game over, that's not fair!


This will never do, I had to create a new contract for atomic object 
construction:


  1. The class must implement Serializable (can be inherited) AND one
 of the following conditions must be met:
  2. The class must be annotated with @AtomicSerial OR
  3. The class must be stateless and can be created by calling
 class.newInstance() OR
  4. The class must have DeSerializationPermission

Now if the class is annotated with @AtomicSerial, it must also have a 
constructor signature that has public or default visibility:


SomeClass(GetArg arg) throws IOException{
// The first thing I must do is to call a static method that checks 
invariants, before calling a superclass constructor, the static method 
should return the argument for another constructor.

}

Some simple rules for our object input stream:

  1. Though shalt not publish a reference to a partially constructed
 object.
  2. Though shalt not publish a reference if an object fails
 construction, where readObject, readObjectNoData and readResolve
 methods are considered to be constructors for the purpose of
 deserialization of conventional Serializable objects.
  3. Though shalt not attempt to construct a Serializable object that
 doesn't have an @AtomicSerial annotation, or has serial fields
 (state) and doesn't have DeSerializationPermission.
  4. Only call a protected constructor if the class doesn't implement
 @AtomicSerial and has DeSerializationPermission.
  5. Do not honour the standard java Serialization construction
 contract (not all Serializable classes can be constructed even if
 they have DeSerializationPermission).
  6. If a standard Serializable class has DeSerializationPermission for
 it to be constructed it must have a zero arg constructor or a
 constructor that accepts null object arguments and default
 primitive values.
  7. If an any object in a serialized object graph fails it's invariant
 checks, deserialization of the object graph fails at that point
 and control is returned to the caller, by way of an
 InvalidObjectException (a child class of IOException).
  8. Honour the standard java serialization stream protocol.
  9. Count number of objects received and throw a
 StreamCorruptedException if limit is exceeded.
 10. Count number of array elements received and throw
 StreamCorruptedException if limit is exceeded.
 11. readResolve() can be called on @AtomicSerial instances but,
 readObject() is never called and neither is readObjectNoData().

Obligations of our object output stream:

  1. Reset the stream before the limit is reached.
  2. Replace java collections and maps with safe @AtomicSerial
 implementations, it is the developers obligation to replace them
 with their preferred implementations during construction, these
 are functional but are only immutable containers for keys, values
 and comparators.
  3. Honour 

Re: Security

2015-02-09 Thread Peter Firmstone
Ok, so here's where I discuss the how I've addressed serialization's 
security issues:


From the source code:
 // These two settings are to prevent DOS attacks.
private static final int MAX_ARRAY_LEN = 32768;
private static final int MAX_OBJECT_CACHE = 65664;

The output stream implementation automatically resets the cach, when it 
exceeds a certain size, while the ObjectInputStream throws a 
StreamCorruptedException if OBJECT_CACHE is exceeded.


@Override
public void writeObjectOverride(Object obj) throws IOException {
d.writeObject(obj);
// This is where we check the number of Object's cached and
// reset if we're getting towards half our limit.
// One day this limit will become too small.
if (d.numObjectsCached  32768) reset();
}

A new annotation

@AtomicSerial is provided for classes that implement Serializable and 
want to validate their invariants atomically.  That is if invariants 
aren't satisfied, the object is not created and doesn't exist so cannot 
be used as an attack vector.


An annotation was chosen since it is not inherited by subclasses.

Classes that implement Serializable, still continue to do so as usual, 
the serial form doesn't change.


Use of the new streams is determined by MethodConstraints.

Child classes in the stream that don't implement @AtomicSerial (apart 
from instances of Throwable, immutable Object versions of primitives and 
MarshalledObject) require DeSerializationPermission.


Because circular links are prohibited, when a conventional Serializable 
object is constructed, it is not published until after it's readObject, 
readObjectNoData or readResolve method has been called, so an attacker 
is prevented from obtaining a reference to it through the stream.  These 
object's are still vulnerable to finalizer attacks, however that 
requires downloaded code and the intent is for this to be used to 
establish proxy trust prior to granting DownloadPermission.


@AtomicSerial objects are constructed using a constructor:

SomeObject(GetArg arg) throws IOException{
super(check(arg));
Object somefield = arg.get(somefield, null);
}

Where GetArg extends GetFields, but provides caller sensitive methods, 
to ensure different classes can't see each others field namespaces.  To 
construct a GetArg instance requires 
SerializablePermission(enableSubclassImplementation).


The lowest extension class in the inheritance heirarchy is called first, 
it checks its invariants using a static method, each class in the 
inheritance heirarchy checks it's invariants before Object's default 
constructor is called, if any invariant checks fail the object is not 
constructed.


This is also generic parameter and final field friendly.

@ReadInput (used to annotate a static method that returns ReadObject (an 
interface) are both provided to allow classes to gain direct access to 
the stream, as they would in readObject(ObjectInputStream in), but 
without requiring their own object instance.


Conventional classes are deserialized using a best effort approach, by 
trying each constructor on the lowest extension class, using default 
values.  Not all Serializable object's can be constructed.


The intent is for services, proxy's and discovery to use DOS safe code 
to serialize their state while establishing trust.


Collection, List, Set, SortedSet, Map and SortedMap are replaced in the 
ObjectOutputStream implementation with DOS attack safe immutable 
versions backed by arrays, these are functional package private 
implementations, intended as parameters, so implementations of 
@AtomicSerial are required to pass them as parameters to their preferred 
collection implementation.  @AtomicSerial implementations are encouraged 
to use checked collections, see java.util.Collections.


Conventional serialization can be used for trusted connections.

@AtomicSerial is very easy to implement, and much easier to evolve than 
default serialization, some users may wish to use it anyway (it's also 
final field friendly), but it is definitely intended to be optional.


Regards,

Peter.



On 8/02/2015 6:11 PM, Peter Firmstone wrote:

Thanks Dan, hopefully I don't dissapoint.

... So continuing on, another benefit of secure Serialization, if 
you're a lookup service, you don't need to authenticate your clients, 
you can deal with anyone and your not subject to DOS attacks, other 
than more conventional attacks unrelated to java.


I've been investigating Serialization security, identifying issues and 
considering how to deal with them. I think everyone is aware 
Serialization has security pitfalls, if I fail to mention an issue 
you're aware of, please let me know.


At first I thought it was just an issue of limiting the classes that 
could be deserialized and that's relatively easily done, for example, 
ArrayList reads an integer from the stream, then uses it to create an 
array, without sanity checking it first.  Well that's easy, just 
prevent ArrayList and a bunch

Re: Security

2015-02-08 Thread Peter Firmstone

Thanks Dan, hopefully I don't dissapoint.

... So continuing on, another benefit of secure Serialization, if you're 
a lookup service, you don't need to authenticate your clients, you can 
deal with anyone and your not subject to DOS attacks, other than more 
conventional attacks unrelated to java.


I've been investigating Serialization security, identifying issues and 
considering how to deal with them. I think everyone is aware 
Serialization has security pitfalls, if I fail to mention an issue 
you're aware of, please let me know.


At first I thought it was just an issue of limiting the classes that 
could be deserialized and that's relatively easily done, for example, 
ArrayList reads an integer from the stream, then uses it to create an 
array, without sanity checking it first.  Well that's easy, just prevent 
ArrayList and a bunch of others from deserializing...


Not so fast, ObjectInputStream also creates arrays, without sanity 
checking, blockdata also has similar issues during byte array creation.


So you only need send an Integer.MAX_VALUE to bring the jvm to it's 
knees, and if that doesn't do it, send a multi dimension array with a 
few more.   It requires very little data from the sender, just a few 
bytes and a couple of integers.


In addition ObjectInputStream caches objects, so they don't have to be 
re-serialized, but if ObjectOutputStream  doesn't perform a reset, well 
you can figure that out without my help.


But wait there's more...

During deserialization, Serializable objects are instantiated by calling 
a zero arg constructor of the first non Serializable super class, this 
partially constructed Object, an instance of a Serializable child class, 
without any invariant checks or validation, is then allocated an integer 
handle and published, an attacker is now free to obtain a reference to 
the unconstructed object simply by inserting a reference handle into the 
stream.


At this time the ProtectionDomain's of the classes in the object's 
heirarchy are not present in the AccessControlContext, which is why 
attackers in the past have been able to creat ClassLoader instances and 
download code, when someone has deserialized into privileged context 
(that renders DownloadPermission useless, because the attacker can work 
around it).


After unsafe publication, the fields are read in from the stream from 
super class to child class and set.


Now I don't blame the developers back in the day, they had deadlines and 
targets to achieve, but  the market has changed significantly since then 
and the issue needs addressing.


The good news is, the Serialization stream protocol has all the 
components necessary to create a secure ObjectInputStream.


For example, the stream cache can be reset periodically, this also means 
we can place a limit on the number of objects cached, and require 
ObjectOuputStream to call reset.  If it doesn't and the object cache 
exceeds our limit, StreamCorruptedException.


For array length, again we can impose limits, and throw 
StreamCorruptedException if the limit is exceeded.


The collections themselves aren't hard to solve, simply replace all 
collection types with a safe replacement in ObjectOutputStream and 
require DeSerializationPermission for known insecure objects.


Circular links, can't be supported using existing deserialization 
mechanisms.


Does this last point matter?

My implementation of ObjectInputStream doesn't support circular links 
and passes all lookup service tests.  In this case circular links are 
replaced with null.  To support circular links safely would require some 
cooperation from the classes participating in deserialization.


Construction during deserialization is the last challenge, many existing 
Serializable classes don't have public constructors or zero arg 
constructors, even though implementing Serializable is equivalent to a 
public constructor.


... to be continued, until next time.

Cheers,

Peter.


On 5/02/2015 2:38 AM, Dan Rollo wrote:

Very interesting. Looking forward to the next episode.


On Feb 4, 2015, at 9:11 AM, dev-digest-h...@river.apache.org wrote:

to be continued...






Re: Security

2015-02-08 Thread Peter Firmstone

Thanks Dan, hopefully I don't dissapoint.

... So continuing on, another benefit of secure Serialization, if you're 
a lookup service, you don't need to authenticate your clients, you can 
deal with anyone and your not subject to DOS attacks, other than more 
conventional attacks unrelated to java.


I've been investigating Serialization security, identifying issues and 
considering how to deal with them. I think everyone is aware 
Serialization has security pitfalls, if I fail to mention an issue 
you're aware of, please let me know.


At first I thought it was just an issue of limiting the classes that 
could be deserialized and that's relatively easily done, for example, 
ArrayList reads an integer from the stream, then uses it to create an 
array, without sanity checking it first.  Well that's easy, just prevent 
ArrayList and a bunch of others from deserializing...


Not so fast, ObjectInputStream also creates arrays, without sanity 
checking, blockdata also has similar issues during byte array creation.


So you only need send an Integer.MAX_VALUE to bring the jvm to it's 
knees, and if that doesn't do it, send a multi dimension array with a 
few more.   It requires very little data from the sender, just a few 
bytes and a couple of integers.


In addition ObjectInputStream caches objects, so they don't have to be 
re-serialized, but if ObjectOutputStream  doesn't perform a reset, well 
you can figure that out without my help.


But wait there's more...

During deserialization, Serializable objects are instantiated by calling 
a zero arg constructor of the first non Serializable super class, this 
partially constructed Object, an instance of a Serializable child class, 
without any invariant checks or validation, is then allocated an integer 
handle and published, an attacker is now free to obtain a reference to 
the unconstructed object simply by inserting a reference handle into the 
stream.


At this time the ProtectionDomain's of the classes in the object's 
heirarchy are not present in the AccessControlContext, which is why 
attackers in the past have been able to creat ClassLoader instances and 
download code, when someone has deserialized into privileged context 
(that renders DownloadPermission useless, because the attacker can work 
around it).


After unsafe publication, the fields are read in from the stream from 
super class to child class and set.


Now I don't blame the developers back in the day, they had deadlines and 
targets to achieve, but  the market has changed significantly since then 
and the issue needs addressing.


The good news is, the Serialization stream protocol has all the 
components necessary to create a secure ObjectInputStream.


For example, the stream cache can be reset periodically, this also means 
we can place a limit on the number of objects cached, and require  
ObjectOuputStream to call reset.  If it doesn't and the object cache 
exceeds our limit, StreamCorruptedException.


For array length, again we can impose limits, and throw 
StreamCorruptedException if the limit is exceeded.


The collections themselves aren't hard to solve, simply replace all 
collection types with a safe replacement in ObjectOutputStream and 
require DeSerializationPermission for known insecure objects.


Circular links, can't be supported using existing deserialization 
mechanisms.


Does this last point matter?

My implementation of ObjectInputStream doesn't support circular links 
and passes all lookup service tests.  In this case circular links are 
replaced with null.  To support circular links safely would require some 
cooperation from the classes participating in deserialization.


Construction during deserialization is the last challenge, many existing 
Serializable classes don't have public constructors or zero arg 
constructors, even though implementing Serializable is equivalent to a 
public constructor.


... to be continued, until next time.

Cheers,

Peter.


On 5/02/2015 2:38 AM, Dan Rollo wrote:

Very interesting. Looking forward to the next episode.


On Feb 4, 2015, at 9:11 AM, dev-digest-h...@river.apache.org wrote:

to be continued...






Re: Security

2015-02-04 Thread Dan Rollo
Very interesting. Looking forward to the next episode. 

 On Feb 4, 2015, at 9:11 AM, dev-digest-h...@river.apache.org wrote:
 
 to be continued...



Security

2015-02-04 Thread Peter Firmstone
There's a free certificate authority coming this year, I think privacy 
and security are hot topics these days: https://letsencrypt.org/


Just a quick note about something I'm currently exploring.

The good thing about River is it allows you to be mostly ignorant of 
security when developing services and clients and then later using 
configuration, secure services and clients.


River is secure for the following scenario:

   * One entity / company is reponsible for the lookup service,
 services and clients.
   * Secure Discovery v2 is used.
   * Codebase Integrity and TLS / SSL Endpoints.
   * Authentication of services and clients is required.

Where River is not secure:

   * More than two entites / companies interact using lookup services,
 services and clients.
   * Secure discovery v2 is used.
   * Codebase Integrity and TLS / SSL Endpoints.

Why isn't it secure, what's vulnerable?

Well we know the sandbox isn't secure against DOS, but what about 
Serialization ObjectInputStream and using only local code?


Well that's not secure either.

Lets for a moment pretend that it is, what are the benefits?

We could use simple proxy services from a trusted lookup service, for 
example, without code downloads as trust is easily established.


We could define an interface for obtaining smart proxy's from bootstrap 
proxy's, register the bootstrap proxy with entries on a lookup service.


We can prevent unauthorised code downloads with DownloadPermission using 
the right PreferredClassProvider.


This would allow clients to obtain the boostrap proxy first, 
authenticate it, grant DownloadPermission to it, then use the smart proxy.


Anyway out of time right now, to be continued...

I'm presently investigating deserialization security and trying to fix 
another annoying River concurrency bug, these always seem to pop up when 
you're in the middle of something, taking days off the actual project.


Regards,

Peter.



Re: Subtleties of JAAS in an internet djinn (was Distributed Network Security)

2012-07-22 Thread Peter Firmstone

Gregg Wonderly wrote:

On Jul 7, 2012, at 8:02 AM, Peter Firmstone wrote:

  

These doAs methods in this case cannot elevate Permission, they can reduce 
Permission to that which the Subject has in common with other code on the 
stack, but it cannot be used by code to gain privilege if it uses a custom 
DomainCombiner that extends SubjectDomainCombiner.

Would this be an acceptable compromise?

In future, we can look at other tools to assist with simplifying security, such 
as static bytecode analysis or FindBugs perhaps.



When I am using remote code, I either trust it by source, or don't, and can assert that trust by 
granting AllPermission for the URL, or not.  Adding local resource access to a proxies code base, 
is usually limited to network access, but sometimes file access for certain UIs which, for example, 
have images which my caching URLHandler will store on disk.  That access to local resources, 
whether through Subject grant, or blanket permission is what matters for using a services 
ServiceUI.  When I use a services proxy for interaction, I've always used JAAS login services to 
gain access, via PAM login services on Linux and other PAM supporting OSes using JNI mechanisms.  
Thus, there is a Subject created which has a set of Principals that my LoginModule creates in the 
form of a user and any groups that the user is a member of.

In the services, I always use Subject.doAs with those subjects, and could use 
user or group based permission grants in my policy.  But, in the end, I never 
found that to be needed, and I instead, grant permissions to the codebase at 
the appropriate granularity (never granting to or using a client side codebase 
jar).

The authorization framework that I've mention here, before, is then used to do 
role based control of how the API is used on the server, by the calls into that 
environment from the client.  I have a InvocationHandler that I insert to do 
the Subject.doAs() on the server side.  So, when the user authenticates with 
the server, I get a Subject.  I do Subject.doAs() to create an instance of the 
InvocationHandler (which holds a reference to the Subject), and the exported 
smart proxy instance, which is returned to the client.

When a Subject asserts some kind of client local controls, it could provide 
some new functionality as you illustrated. In the case of network access 
control, or other local resource access, it could allow you to limit access to 
those resources, or to extend access in particular ways with a dynamic policy 
grant, on the client.

I crafted some code in that direction at one point.  It allowed the jar to have 
a list of permissions in it, that it wished to have granted, and I was trying 
to decide what the right interface would be, to allowing the client software to 
talk to the user about these permissions.  In a sense, this was along the lines 
of Java WebStart kinds of thoughts.  My ServiceUI desktop has a code flow at 
the point that the UI is activated, that it could obviously do a resource query 
into the jar, find the requested accesses, and prompt the user to grant them.

Conversely, you want to limit permissions by asserting a domain controlled by 
the Subject, that would provide whatever access you granted to the Subject in 
that domain/policty.  So, at the time that a client UI is first activated, you 
could use information about it being an uncontrolled codebase to trigger the 
assertion of the Subject controlled domain.

What I think we need to focus on, are these two mechanisms (grant to client 
thread/Subject jar requested permissions and limit of client thread to already 
asserted Subject based domain) and the obvious fact that it's really about 
asserting control into the client execution environment.  We need to work on 
how we'd decide that needed to happen, and then think about whether it's just 
Subjects we want to assert, and whether we need to include the Privileged forms 
or not.

Gregg Wonderly
  
I've spent some time pondering the issue of running as a Subject in the 
presence of untrusted code.


I've realised that the Privileged forms are also required, to allow a 
context to be saved and run in an executor thread as an example.  But 
since Jini Security also allows preservation of the context ClassLoader 
or anything else for that matter, by utilising a Policy or 
SecurityManager that implements SecurityContextSource, I'd like to use 
SecurityContext in place of AccessControlContext.  
AggregatePolicyProvider implements it to preserve the context class 
loader.  Other than SecurityContext, the functionality would be similar 
to Subject.doAsPrivileged.


It is possible using the existing Subject.doAsPrivileged methods, to get 
the current context, then from within a privileged action call 
Subject.doAsPrivileged with the context previously retrieved.  In this 
case the Subjects Principals are only injected into the privileged 
domain of the calling code, excluding domains from

Re: Subtleties of JAAS in an internet djinn (was Distributed Network Security)

2012-07-22 Thread Gregg Wonderly

On Jul 22, 2012, at 5:04 AM, Peter Firmstone wrote:

 Since Gregg hasn't utilised traditional jvm style Permissions for Principals, 
 there is no possibility of elevating privileges when calling Subject.doAs, so 
 granting doAs to untrusted code doesn't present any security risk in 
 Gregg's use case.

Just to make it clear, I do this precisely because the JVM permissions and 
principals with SecurityContext and all the other details are horribly complex 
for a developer and deployer to evaluate.  It becomes very problematic over the 
lifetime of a deployment to know when someone maybe calling from a new context 
that requires a specific security configuration to confine a use case to some 
limited set of permissions.

Practically, I just don't see the value in that.  I'd rather write utility 
methods, and call them from validated contexts with only the single entry point 
to validate.  The Apple world drove security down the path it went, because 
there was no authentication possible, at first, and then when signed jars 
were added, the code source, not the user was responsible for declaring it's 
intent.

My PAM login module uses password access in my applications.  My customers use 
a product that provides PAM access to the Windows directory services for 
authentication.  So there is no local administration of that, on the servers.  

 Also it's worth noting the Policy implementation can provide support, so 
 changes to Subject Principals are effective immediately, leading to a much 
 more programmer friendly JAAS.

I think that this security stuff, is a fairly large reason why Jini becomes 
overwhelming so quickly.  The simple fact that XXX.doAS and the security 
manager are never part of general Java programming education makes their 
introduction into the thought process very problematic.

Gregg Wonderly

Re: Subtleties of JAAS in an internet djinn (was Distributed Network Security)

2012-07-08 Thread Peter Firmstone

Thanks Gregg,

The services you deploy are often unique, but relevant and it's apparent 
you've been able to explore and delve deeply into complex problems.  I'm 
grateful that both you and Dan are finding some time to discuss this 
issue, because to be quite honest, I'm not happy that I fully grasp the 
issues myself and discussion helps to improve understanding.


You've also hit the nail on the head, it's about limiting permission 
while executing as a Subject while further limiting permissions 
available to the proxy, but still allowing the Subject to have more 
permission when the proxy isn't on the call stack. 

It was also about separating Principal and CodeSource concerns, so you 
don't need to ensure that all your policy grant files include CodeSource 
and Principals.  - But it turns out that this isn't necessary there is 
an alternative option that follows:


Dynamic grants made to proxy's usually include the CodeSource and 
Principals, in the current case, the permission granted can be less than 
with the same Principals executed only in the presence of signed trusted 
code, the dynamic grant is very specific, to be granted permission the 
code must have the proxy ClassLoader and be executed by a Subject with 
the Principals specified in the grant.


If the proxy doesn't have AuthPermission(doAs), get the current 
context by calling AccessController.getContext() this will contain the 
proxy's ProtectionDomain if it has already been unmarshalled, then use 
AccessController.doPrivileged to execute Subject.doAsPrivileged(Subject, 
PrivilegedAction, AccessControlContext), which injects the Subject 
principals into the proxy codebase, without allowing the proxy to gain 
the privileges the Principals have in the presence of trusted signed code.


Rather than modify SubjectDomainCombiner, the alternative option 
described above works by signing all your local code (some additional 
work for the developer) and all Principal based grants must also include 
signed by, so those principal grant's can't be stolen by untrusted code, 
because untrusted code it isn't signed with the Certificates stated in 
the policy file.  No grant should be made to a Principal alone, it must 
include a CodeSource URL or a jar Signer.  BTW, ConcurrentFilePolicy 
uses CodeSource URI's not URL, a small change of semantics for a big 
performance gain.


Another benefit of signed code is it protects local package private 
security boundaries, of course this will create more errors if you 
haven't got your preferred lists right.


So we could provide two additional methods in 
net.jini.security.Security, but without modifying SubjectDomainCombiner, 
while also preserving current policy behaviour, to ensure that the 
AccessControlContext used in Subject.doAsPrivileged is not null.


I guess what I've also just shown is that granting to a combination of 
CodeSource / Signers and Principals is determinate as Dan mentioned in a 
previous post, so yes, Dan was right.


Do you think a practical way for an administrator to limit the 
Permissions a Subject can grant to a service proxy is by limiting them 
with GrantPermission?  It allows the Subject to have more permission (in 
the presence of trusted signed code) than it's capable of granting.


The proxy could make available a list of Permissions it needs within the 
jar file as you've suggested.


My gut feel is permissions should be granted by the Subject to the proxy 
(in the presence of trusted code), however since not all Subjects are 
people, this mechanism needs to be flexible.  Any ideas? Events?


We can also determine if a Subject can grant a subset of the Permissions 
requested by the proxy, by creating a ProtectionDomain that contains the 
Principals and required Signer Certificate (this won't work for 
CodeSource URL's only signed code), then ask the Policy for it's 
PermissionCollection, then check GrantPermission for each permission on 
the list.


This is where things get a little interesting, since 
Subject.doAsPrivileged or Subject.doAs can be used to perform the 
dynamic grants, there are traps for the uneducated.


Privileged context must not be used as the context to communicate with 
the proxy itself, so it's beneficial to developers if we provide the 
methods to perform high risk tasks without requiring the proxy to have 
AuthPermission(doAs).  Prior to receiving dynamic grants, the proxy, 
even if injected with Principals does not have Principal privileges, 
hence privilege escalation is impossible, provided all grants to 
principals also require a trusted jar signer.  We can even pass the 
proxy class to these methods, so they can ensure the proxy's 
ProtectionDomain is on the stack, prior to making any method calls to 
avoid deserialisation attacks.


Some supporting information:

  1. Deserialisation Attacks: During deserialisation, there is a small
 time period during class loading where privileges must be
 minimised. Deserialisation privilege

Re: Distributed network security

2012-07-01 Thread Peter Firmstone
Consider a Subject ProtectionDomain that is added to the stack, instead of 
Subject Principals being injected into every domain on the stack:

Subjects don't have code, they lack the ability to use one permission to gain 
another, so if a ProtectionDomain representing only a Subject and its 
Principals has AWTPermission(*) and AudioPermission(*), it means users can 
use a application locally as intuitively expected.  However when untrusted code 
appears on the stack, readDisplayPixels, accessClipboard, listenToAllAWTEvents 
and record are no longer allowed, preventing an attack that could steal the 
users private information.  Also because the user might have 
SocketPermission(*, connect, resolve), the untrusted code can't use it to 
transmit stolen data and the application has limited network and disk acces 
while the untrusted code remains on the call stack.

Here's an idea:

If we extended ProtectionDomain, say SubjectProtectionDomain, then Permissions 
typically meaningingful only to code can be included in domain static 
permissions, so they are automatically granted and don't have to be managed in 
policy files.  Application specific Permissions would still need to be granted 
in policy files, so this would need to be well documented.  A system property 
could be set to use the default SubjectDomainCombiner behaviour if required, 
allowing choice depending on deployment.

It must be remembered that developers writing existing software expect that 
Principals only add extra Permissions, so they probably have not written code 
with doPrivileged blocks in mind for Principals.

Assume - makes an Ass out of you and me.

Thoughts?

Cheers,

Peter.

Re: Distributed network security

2012-06-30 Thread Peter Firmstone

Thanks Dan  Gregg,

I've been working on the overhead problem, here's a summary (will commit 
the latest after tests finish):


  1. Removed calls to CodeSource.implies (this causes a DNS lookup)
  2. Replaced URL with URI in Policy providers (URL requires DNS lookup
 and File system access)
  3. Removed calls to SocketPermission.hashCode and equals (this causes
 a reverse DNS lookup)
  4. Replaced URL with URI in maps inside PreferredClassLoader and
 PreferredClassProvider
  5. Changed the behaviour of Reference Collections to avoid calling
 hashCode during initialisation of temporary and timed referrers.
 (to avoid calling SocketPermission.hashCode).
  6. Created a PermissionComparator to avoid broken equals and hashCode
 implementations in Permission classes.
  7. Optimise the order of Permission checks (further optimisation is
 possible).
  8. Added dnsjava nameservice provider - a local dns server written
 entirely in java that supports multi threading and reverse DNS
 lookup, so many SocketPermission.implies instances can execute in
 parallel and avoid long timeouts (dnsjava has had 13 years of
 development and is relatively stable).  This requires a system
 property to be set.
  9. CachingSecurityManager uses Cliff Click's high scaling hash map
 implementation, proven to scale linearly to over 500 threads and
 Doug Lee's ConcurrentSkipListSet which is also concurrent.  This
 caches permission checks for each AccessControlContext and uses
 the PermissionComparator to avoid calling equals or hashCode. 
 Timed references reap permissions from the cache after about 10

 minutes of non use using a background thread.  Soft Permissions
 were not suitable.
 10. Immutable PermissionGrant's; the Policy uses these to make policy
 decisions, these are shared among many threads without blocking. 
 PermissionCollection instances are created on demand then

 discarded, since PermissionCollection is typically single threaded
 and synchronized.  Permission objects are meant to be immutable so
 they should be non blocking, some like SocketPermission aren't
 (they wait for dns lookup results if other threads are executing
 the same lookup) but with dnsjava they are much better.

So I hope I'm fixing security enough to stop people switching it off 
altogether. River is more exposed to Java 2 security kinks than other 
software because it is so network orientated, but I haven't really gone 
outside the original Java 2 security model, I've created completely new 
file policy, dynamic policy and security manager implementations, java 2 
security was designed with this flexibility in mind.


I think we've got the only scalable java security infrastructure, I'm 
hoping future deployment will bring more patches and further improvements.


Code needs to access more resources than users like ProtectionDomains, 
ClassLoaders and system properties, however users are more concerned 
with access to files and network.  In Unix, everything is a file reduces 
access control complexity.  Plan9 was designed as a distributed network 
os also based on the everything is a file concept.  The protection 
domain concept that Java uses was inspired by Multix.


The intersection of permissions contained by protection domains on the 
stack at any point in time determine the available permissions.


I've found that each time we've changed our java platform dependencies 
and compiler options, I've had to modify the test policy files, simply 
because code follows different execution paths and different domains are 
on the stack at the time the permission check is called.


Now here is where it gets interesting, as Dan pointed out, most 
administrators are happy to just allow a user all network access and 
restrict it using a firewall.  But if it enables smart proxy's from 
behind the firewall to serve clients beyond the firewall (lets imagine 
for a moment that UDT Sockets have solved that problem), that could be 
seen as circumvention by an administrator.  However if user Principals 
are not injected into every domain on the stack, and there is less 
trusted code on the stack, the user will only have the permissions of 
the less trusted code while it remains on the stack.


So in the presence of trusted code, user polices are relatively simple, 
but in the presence of less trusted code, the code doesn't gain the full 
permissions of the user and user privileges can be much more relaxed and 
remain simple.  Principals and code become separate concerns.


This is why I think that the design of SubjectDomainCombiner is not 
suited to River, without some subtle modifications to add a Subject 
ProtectionDomain to the stack rather than injecting the Subject's 
principals into every ProtectionDomain on the stack.


Then as Dan has suggested, services can expose the permissions they 
require, this then drops the users privileges to that of the less

Distributed network security

2012-06-28 Thread Peter Firmstone
What follows is relatively hard core security, but it's relevant to anyone 
wishing to deploy on untrusted networks.  I know some people don't like 
discussing security issues for fear of turning off would be developers but 
security isn't mandatory, however considering that River/Jini is already making 
extremely difficult tasks possible, I think it appropriate to continue focusing 
on solving difficult issues, especially considering recent online coverage and 
increasing awareness of java security issues.  In the long run this will make 
life easier for developers.

Java 2 security infrastructure was originally designed around codebase trust, 
but it's also extensible and customisable, JAAS was designed around the premise 
of user security concerns, previously lacking from Java 2, code was assumed to 
be trusted in most cases, although it's also still possible to restrict 
permission to a combination of principals and codesources or certificates, but 
in practise, permission is generally granted to principals.  JAAS uses the 
SubjectDomainCombiner to inject principals into every protection domain on the 
call stack (well it replaces them actually, but that's an implementation 
detail).

To ensure that only trusted code is used by authorised principals, policy file 
permission grants must be made that includes both the signer of the code and 
the principal authorised.  Failure to include the code signer in the permission 
grant may allow untrusted code to gain elevated permissions on the call stack 
by being executed by an authorised principal.  

In a distributed environment it is difficult if not impossible to know in 
advance which code signers and principal grant combinations are required, these 
interactions and introductions may occur dynamically at runtime.   For this 
reason, too much permission is often granted, in order to allow the software to 
function intuitively or as anticipated by users, leaving security holes for 
opportunistic attackers.

A user is bound to a single thread and all child threads inherit its context.  
Executors and Thread pools may execute tasks on behalf of users however user 
AccessControlContext must be copied from submitting threads.  

SubjectDomainCombiner is extendable, it isn't final, JAAS behaviour can be 
subtly modified without having to reinvent the wheel.

So what changes need to be made to SubjectDomainCombiner functionality to allow 
the mixing of Principal grants with codebase or codesigner grants in the 
presence of untrusted code without compromising security?

Instead of injecting Subject principals into every ProtectionDomain on the 
stack, add a new ProtectionDomain, containing the Subject's Principals to the 
existing stack.  In the presence of untrusted code, priviledge is reduced to 
the intersection of commonly held Permissions, instead of being elevated to 
those of the principal.  ProtectionDomain would also need to be subclassed, eg 
SubjectProtectionDomain, so the SubjectDomainCombiner could identify and 
replace the SubjectProtectionDomain when running as another more privileged 
Subject in a privileged context (equivalent to that effected by Subject.doAs).  
This also requires that other ProtectionDomain's without Principals will 
require all necessary permissions granted in policy statements.

Unfortunately it is much simpler to make all the necessary grants only to 
Principals and associate them with Subjects upon login.  It takes considerably 
more effort to determine all the code sources that require these permissions, a 
deployment simulation tool could greatly simplify determining permission 
requirements during deployment, so it doesn't become an impediment to adoption, 
but has rather the opposite affect.

According to current policy behaviour, a ProtectionDomain with a null 
CodeSource would not be granted any Permission, ever!  A ProtectionDomain 
containing a CodeSource with null URL can however be granted Permission, 
provided no codesource is specified in relevant policy file grant statements.   
A Principal only ProtectionDomain would need to have a CodeSource with null 
URL, to avoid any unexpected behaviour when combined in the stack.

JAAS was designed so permissions are not granted until after a user has 
authenticated so trusted code cannot perform privileged tasks such as 
connecting to a network until after a user is logged in, however a system where 
trusted code is granted permission it already has sufficient permission.  
Authentication in this case would inject a ProtectionDomain on the stack that 
reduces privileges.  Log-in represents a lessening of permission.  This will 
work if the only way for the user to access the functionality the code provides 
is by logging in.

The question is, which is easier to control, untrusted code that may be able to 
gain privilege by running on a user thread (a trojan) or trusted code logging 
in users?   The former is indeterminate in an environment where untrusted or 
semi

Re: Simple security change - DownloadPermission

2012-01-29 Thread Peter Firmstone
 you build lots of small or large services 
and get all the dynamic, built-in, life-cycle management features 
that a large enterprise environment needs.  The Harvester and other 
such systems, provide ways to use Jini features inside of a JavaEE 
environment to take advantage of both tool sets together.  The 
dedicated solutions world, which has plagued the Jini platform with 
no demonstrable users, is what we've always held up and waved 
around, saying, people feel it's so valuable to them, that they 
don't want their competition to see or know how they are using it.


So, there is the whole other side of the internet, on untrusted 
networks, where people are constantly using the Web for their 
transactional, data transport systems/models.   I'm not sure where 
Jini fits in that world, without some very specific, dedicated 
systems that do stuff that the web can't do.   Looking for some of 
that lone fruit to pick, is what I'm not sure about.


What kind of transactional, leased or other data services could you 
imagine Jini being a key part of on the Internet?


Gregg Wonderly

On 1/27/2012 7:04 PM, Peter Firmstone wrote:
I've been thinking about the practicalities of a djinn running in 
untrusted networks (internet), the first thing that springs to mind 
is, security is much simpler if people can get away with only dumb 
or reflective proxies.


I'd like to the see the default security setup requiring 
DownloadPermission.


I we sign our download jars (a number of developers could do this, 
requiring at least this group of signers), a standard policy file 
template could include a certificate grant for DownloadPermission, 
allowing anyone to load classes from a standard River download proxy.


This gets our smart proxy's out of the way.

Then all developers need to worry about are Principals and 
MethodConstraints, allowing people to get started using River with 
reflective proxy's over the internet.


Later if people want to get into smart proxy's that power's still 
there, this change prevents unauthorised class loading.


Cheers,

Peter.











Re: Simple security change - DownloadPermission

2012-01-29 Thread Gregg Wonderly
 that I found important to understand about Jini.  It 
 works well as a layer of communications and unification of interface, that 
 enables features that are not what the service is about, but rather what 
 distributed systems are about.  So, as a tool set, it works well for that 
 specific task.
 
 Things like Rio, JavaEE integration, Realtime systems monitoring etc, are 
 the domain targeting mechanisms that enable specific types of system 
 construction which in turn can enable specific kinds of features.  Rio 
 lets you build lots of small or large services and get all the dynamic, 
 built-in, life-cycle management features that a large enterprise 
 environment needs.  The Harvester and other such systems, provide ways to 
 use Jini features inside of a JavaEE environment to take advantage of 
 both tool sets together.  The dedicated solutions world, which has 
 plagued the Jini platform with no demonstrable users, is what we've 
 always held up and waved around, saying, people feel it's so valuable to 
 them, that they don't want their competition to see or know how they 
 are using it.
 
 So, there is the whole other side of the internet, on untrusted networks, 
 where people are constantly using the Web for their transactional, data 
 transport systems/models.   I'm not sure where Jini fits in that world, 
 without some very specific, dedicated systems that do stuff that the web 
 can't do.   Looking for some of that lone fruit to pick, is what I'm not 
 sure about.
 
 What kind of transactional, leased or other data services could you imagine 
 Jini being a key part of on the Internet?
 
 Gregg Wonderly
 
 On 1/27/2012 7:04 PM, Peter Firmstone wrote:
 I've been thinking about the practicalities of a djinn running in 
 untrusted networks (internet), the first thing that springs to mind is, 
 security is much simpler if people can get away with only dumb or 
 reflective proxies.
 
 I'd like to the see the default security setup requiring 
 DownloadPermission.
 
 I we sign our download jars (a number of developers could do this, 
 requiring at least this group of signers), a standard policy file template 
 could include a certificate grant for DownloadPermission, allowing anyone 
 to load classes from a standard River download proxy.
 
 This gets our smart proxy's out of the way.
 
 Then all developers need to worry about are Principals and 
 MethodConstraints, allowing people to get started using River with 
 reflective proxy's over the internet.
 
 Later if people want to get into smart proxy's that power's still there, 
 this change prevents unauthorised class loading.
 
 Cheers,
 
 Peter.
 
 
 
 
 
 


Re: Simple security change - DownloadPermission

2012-01-28 Thread Gregg Wonderly
This is one of those places where Jini's power, using mobile code, creates more 
necessary overhead, than people familiar with other forms of marshalling 
start to wonder why would you do that then?  I think it's important to look at 
what external mechanisms Jini is using now, and start looking at providing 
other forms of marshalling at the InvocationLayerFactory level.


Simple, document transfer, is what it seems many people feel is tenable for 
them in enterprise level systems.  I've long argued, that the Jeri ILF is 
actually just like a document transfer, in that the method arguments are sent, 
in a package, to the server, whose invoke action is passed this document.  
The remote server then processes the document and returns it, potentially with a 
hyper link, in the form of a remote reference or just the resultant value.  
The type information available from the result, is the complete, self 
documenting description.  It tells you what you have and what you can do with it.


It's this simple view of the Jini transport, that would enable a lot of 
different possible mechanisms to be used at the ILF layer.  Because I've never 
really had the need for anything else, I don't have anything in production 
different than the standard Jeri ILF.  But, I did, at one point, create an ILF 
that did do MODBUS-over-TCP as an exploration of what it would mean to move 
something, which I could do at the service layer via a delegation model, 
into a lower level interface.


What I found, was that there wasn't a distinct advantage, so I threw that 
stuff away and just kept it at the service implementation level.   This is one 
of the things that I found important to understand about Jini.  It works well as 
a layer of communications and unification of interface, that enables features 
that are not what the service is about, but rather what distributed systems 
are about.  So, as a tool set, it works well for that specific task.


Things like Rio, JavaEE integration, Realtime systems monitoring etc, are the 
domain targeting mechanisms that enable specific types of system 
construction which in turn can enable specific kinds of features.  Rio lets 
you build lots of small or large services and get all the dynamic, built-in, 
life-cycle management features that a large enterprise environment needs.  The 
Harvester and other such systems, provide ways to use Jini features inside of a 
JavaEE environment to take advantage of both tool sets together.  The 
dedicated solutions world, which has plagued the Jini platform with no 
demonstrable users, is what we've always held up and waved around, saying, 
people feel it's so valuable to them, that they don't want their competition to 
see or know how they are using it.


So, there is the whole other side of the internet, on untrusted networks, where 
people are constantly using the Web for their transactional, data transport 
systems/models.   I'm not sure where Jini fits in that world, without some very 
specific, dedicated systems that do stuff that the web can't do.   Looking for 
some of that lone fruit to pick, is what I'm not sure about.


What kind of transactional, leased or other data services could you imagine Jini 
being a key part of on the Internet?


Gregg Wonderly

On 1/27/2012 7:04 PM, Peter Firmstone wrote:
I've been thinking about the practicalities of a djinn running in untrusted 
networks (internet), the first thing that springs to mind is, security is much 
simpler if people can get away with only dumb or reflective proxies.


I'd like to the see the default security setup requiring DownloadPermission.

I we sign our download jars (a number of developers could do this, requiring 
at least this group of signers), a standard policy file template could include 
a certificate grant for DownloadPermission, allowing anyone to load classes 
from a standard River download proxy.


This gets our smart proxy's out of the way.

Then all developers need to worry about are Principals and MethodConstraints, 
allowing people to get started using River with reflective proxy's over the 
internet.


Later if people want to get into smart proxy's that power's still there, this 
change prevents unauthorised class loading.


Cheers,

Peter.





Re: Simple security change - DownloadPermission

2012-01-28 Thread Peter Firmstone
 departments fighting for you, 
not with you. 

IT is the support department that supports all other departments and can 
be a big impediment to progress or an enabler.


To answer the question about the internet, everything is connected, as a 
business grows it branches out and opens up in new locations, the 
internet is a communication channel.


If River can't communicate / traverse the internet, it becomes a legacy 
data silo, it can't expand with the business, it can't fully service the 
businesses needs.


River could be the glue that works with everything else, or it can 
remain in a niche and be consigned to history.


River still has some Java platform warts, but they're not as bad as the 
html, sql, javascript worlds warts.  Systems need to be efficient and 
clean, even the simplest tasks can become daunting when dealing with 
some of today's web pages.  Computers can automate and simplify complex 
tasks, or they can make the simplest task a nightmare, it depends on the 
implementer.


That's why I contribute development time, to use River in my business, 
it needs to do some basic things it can't do presently.  To do things no 
one else can; adapt quickly for a competitive edge.


Cheers,

Peter.

Gregg Wonderly wrote:
This is one of those places where Jini's power, using mobile code, 
creates more necessary overhead, than people familiar with other 
forms of marshalling start to wonder why would you do that then?  
I think it's important to look at what external mechanisms Jini is 
using now, and start looking at providing other forms of marshalling 
at the InvocationLayerFactory level.


Simple, document transfer, is what it seems many people feel is 
tenable for them in enterprise level systems.  I've long argued, 
that the Jeri ILF is actually just like a document transfer, in that 
the method arguments are sent, in a package, to the server, whose 
invoke action is passed this document.  The remote server then 
processes the document and returns it, potentially with a hyper 
link, in the form of a remote reference or just the resultant 
value.  The type information available from the result, is the 
complete, self documenting description.  It tells you what you have 
and what you can do with it.


It's this simple view of the Jini transport, that would enable a lot 
of different possible mechanisms to be used at the ILF layer.  Because 
I've never really had the need for anything else, I don't have 
anything in production different than the standard Jeri ILF.  But, I 
did, at one point, create an ILF that did do MODBUS-over-TCP as an 
exploration of what it would mean to move something, which I could do 
at the service layer via a delegation model, into a lower level 
interface.


What I found, was that there wasn't a distinct advantage, so I threw 
that stuff away and just kept it at the service implementation 
level.   This is one of the things that I found important to 
understand about Jini.  It works well as a layer of communications and 
unification of interface, that enables features that are not what the 
service is about, but rather what distributed systems are about.  
So, as a tool set, it works well for that specific task.


Things like Rio, JavaEE integration, Realtime systems monitoring etc, 
are the domain targeting mechanisms that enable specific types of 
system construction which in turn can enable specific kinds of 
features.  Rio lets you build lots of small or large services 
and get all the dynamic, built-in, life-cycle management features that 
a large enterprise environment needs.  The Harvester and other such 
systems, provide ways to use Jini features inside of a JavaEE 
environment to take advantage of both tool sets together.  The 
dedicated solutions world, which has plagued the Jini platform with no 
demonstrable users, is what we've always held up and waved around, 
saying, people feel it's so valuable to them, that they don't want 
their competition to see or know how they are using it.


So, there is the whole other side of the internet, on untrusted 
networks, where people are constantly using the Web for their 
transactional, data transport systems/models.   I'm not sure where 
Jini fits in that world, without some very specific, dedicated systems 
that do stuff that the web can't do.   Looking for some of that lone 
fruit to pick, is what I'm not sure about.


What kind of transactional, leased or other data services could you 
imagine Jini being a key part of on the Internet?


Gregg Wonderly

On 1/27/2012 7:04 PM, Peter Firmstone wrote:
I've been thinking about the practicalities of a djinn running in 
untrusted networks (internet), the first thing that springs to mind 
is, security is much simpler if people can get away with only dumb 
or reflective proxies.


I'd like to the see the default security setup requiring 
DownloadPermission.


I we sign our download jars (a number of developers could do this, 
requiring at least this group of signers), a standard

Re: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-25 Thread Peter

- Original message -
 On 12/24/2011 2:14 AM, Dan Creswell wrote:
  So...
 
  On 23 December 2011 11:32, Peter Firmstonej...@zeus.net.au  wrote:
  One question I've asked myself when creating my own policy implementation
  was CodeSource.implies(CodeSource cs), the implementation seemed like a bad
  idea, it uses DNS, an attacker could use DNS cache poisoning to gain
  elevated permission using an untrusted CodeSource URL, simply because the
  policy thinks the CodeSource is implied.  I changed PolicyFile to
  specifically not use CodeSource.implies().  In reality a signer Certificate
  is required to identify the CodeSource with any level of trust.
 
  Well, I think a more general point here would be that JDK's default
  set of behaviours are designed to protect against DNS based attacks
  (i.e. a successful lookup result is cached forever and so changes
  can't leak in). This is bogus, because if the first lookup is
  compromised you're dead and buried.
 I think it's fundamental to understand that a lot of the DNS caching behavior
 was born in the Applet world.  When Java first hit the scenes, we had the
 problem that people could demonstrate that they could know the address of 
 the
 socket on the remote end, and thus could use that (this was before NAT was in
 use, or at least wide spread), and poison the DNS so that subsequent lookups
 returned addresses on the local network, instead of the correct address of the
 original server.

 That's one path of exploitation, but as Dan says, there are others in the Jini
 world where the first lookup, being poisoned, can cause exploitative code to 
 be
 downloaded.

 I think that it's vital to understand, that whether you cache the first, 
 second
 or fifth lookup, each situation presents a different set  of challenges in
 providing security.  Ultimately, Jini needs, in my opinion, to focus
 authentication above the network layer, and use signed jars, encrypted paths,
 and cert based auth, so that the network path, can not be a part of the
 exploitation, and instead, each end of a communication, is responsible for
 trusting the other, through negotiations carried through the network, instead 
 of
 using information about the network to guarantee trust.

 Gregg Wonderly

+1 Well said, my thoughts exactly, grant Permission to Certificate  Principal 
combinations.

We might need to work towards a PGP trust model.

Cheers  Merry Christmas,

Pete.


Re: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-23 Thread Peter Firmstone
 policy provider is completely
initiated, after reading all policy files.  So it's likely that the standard
java policy is read last by our policy provider implementation.

In summary a list of SocketPermissions need to be sorted beginning from
those that cause long dns delays, to wildcard based permissions, so the
wildcard perms are added last and hence checked first by any implies calls.

I've got two options on how to solve this:

1.  Get rid of PermissionCollection based caches altogether and generate
PermissionCollection's on demand.

2  Replace the PermissionCollection cache with a ListPermission based
cache, generate Permissioncollection's on demand.  Sort the List after
creation, before publishing, replace the list on write.

Option 2 could be implemented in ConcurrentPermissions, a replacement for
java.security.Permissions.

Option 1 would be implemented by the policy.

In addition, to allow the security manager to cache the results of
permission checks for SocketPermission, I can create a wrapper class, where
equals and hashcode are based purely on the string representation.  This
allows very rapid repeated permission checks.

Looks like I can get around the SocketPermission, CodeSource and URL
headaches, relatively unscathed.
N.B. Anyone care to try out, or seriously performance test the new
PreferredClassProvider?

Cheers,

Peter.

- Original message -

  

Actually, more significantly for me is that the default localhost
SocketPermission is checked before a more lenient SocketPermission. In
theory,
one should be able to introspect SocketPermission instances and determine
that
one may be automatically implied by the other so can be skipped, possibly
saving
a lookup. Chris

Peter Firmstone wrote:



A big problem with the current implementation is SocketPermission blocks
other permission checks from proceeding.

  



  



  




Re: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Peter Firmstone
 an unfortunate fact that not all permission checks are performed in 
the policy, replacing SocketPermission also requires the cooperation of 
the SecurityManager.  To make matters worse, static ProtectionDomains 
created prior to my policy implementation being constructed will never 
consult my policy implementation as such they will still contain 
SocketPermission.   So the SecurityManager would need to check each 
ProtectionDomain for both implementations, so reimplementing 
SocketPermission doesn't eliminate its use entirely.


It's worth noting that SocketPermission is implemented rather poorly and 
the same functionality can be provided with far fewer DNS lookups being 
performed, since the majority are performed completely unnecessarily.  
Perhaps it's worth me donating some time to OpenJDK to fix it, I'd have 
to check with Apache legal first I suppose.


The problems with DNS lookup also affects CodeSource and URL equals and 
hashcode methods, so these classes shouldn't be used in collections.


Cheers,

Peter.

Christopher Dolan wrote:

To simulate the problem, go to InetAddress.getHostFromNameService() in your IDE, set a 
breakpoint on the nameService.getHostByAddr line with a condition of 
something like this:

 new java.util.concurrent.CountDownLatch(1).await(15, 
java.util.concurrent.TimeUnit.SECONDS)

then launch your River application from within the IDE. This will cause all 
reverse DNS lookups to stall for 15 seconds before succeeding. This will affect 
Reggie the worst because it has to verify so many hostnames. In a large group 
(a few thousand services) this will drive Reggie's thread count skyward, 
perhaps triggering OutOfMemory errors if it's in a 32-bit JVM.

This problem happens in the real world in facilities that allow client connections to the 
production LAN, but do not allow the production LAN to resolve hosts in the client LAN. 
This may occur due to separate IT teams or strict security rules or simple configuration 
errors. Because most client-server systems, like web servers, do not require the server 
to contact the client this problem does not become immediately visible to IT. Instead, 
the question is inevitably Why is Jini/River so sensitive to reverse DNS? All of my 
other services work fine.

Chris

-Original Message-
From: Tom Hobbs [mailto:tvho...@googlemail.com] 
Sent: Monday, December 12, 2011 1:43 PM

To: dev@river.apache.org
Subject: Re: RE: Implications for Security Checks - SocketPermission, URL and 
DNS lookups

My biggest concern with such fundamental changes is controlling the impact
it will have.  I'm a pretty good example of this, I haven't experienced the
troubles these changes are intended to overcome.  I also don't havent made
any attempt to dive into these areas of the code, for any reason.

Is it possible to put together a test case which exposes these problems and
also proves the solution?

Obviously, a test case involving misconfigured networks is daft, in that
instance a handy if your network misconfigured diagnostic tool or
documentation would be a good idea.

Please don't interpret this concern as a criticism of your work, Peter.
Far from it.  It's just a comment born out of not really having any contact
with the area your working in!


Grammar and spelling have been sacrificed on the altar of messaging via
mobile device.

On 12 Dec 2011 18:01, Christopher Dolan christopher.do...@avid.com
wrote:

  

Specifically for SocketPermission, I experienced severe timeout problems
with reverse DNS misconfigurations. For some LAN-based deployments, I
relaxed this criterion via 'new SocketPermission(*,
accept,listen,connect,resolve)'. This was difficult to apply to a general
Sun/Oracle JVM, however, because the default security policy *prepends* a
(localhost:1024-,listen) permission that triggers the reverse DNS
lookup. To avoid this inconvenient setting, I install a new
java.security.Policy subclass that delegates to the default Policy except
when the incoming permission is a SocketPermission. That way I don't need
to modify the policy file in the JVM. The Policy.implies() override method
is trivial because it just needs to do  if (permission instanceof
SocketPermission) { ... }. The PermissionCollection methods were trickier
to override (skip over any SocketPermission elements in the default
Policy's PermissionCollection), but still only about 50 LOC.

Chris

-Original Message-
From: Peter Firmstone [mailto:j...@zeus.net.au]
Sent: Friday, December 09, 2011 9:28 PM
To: dev@river.apache.org
Subject: Implications for Security Checks - SocketPermission, URL and DNS
lookups

DNS lookups and reverse lookups caused by URL and SocketPermission,
equals, hashCode and implies methods create some serious performance
problems for distributed programs.

The concurrent policy implementation I've been working on reduces lock
contention between threads performing security checks.

When the SecurityManager is used to check a guard, it calls

RE: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Christopher Dolan
Quite true Gregg, but that doesn't help when Reggie boots and hundreds of hosts 
contact it in a short time span against a cold DNS cache. Prior to resolution 
of RIVER-396 (PreferredClassProvider classloader cache concurrency 
improvement) these timeout failures were effectively serial and caused long 
stalls. The resulting OOMEs and failed thread creation events in some isolated 
scenarios were unrecoverable. For me, this was mitigated by the triple solution 
of 1) turning off the SocketPermission check, 2) the RIVER-396 patch and 3) 
switching JERI to NIO to save some threads.

Chris

-Original Message-
From: Gregg Wonderly [mailto:gr...@wonderly.org] 
Sent: Tuesday, December 13, 2011 8:19 AM
To: dev@river.apache.org
Cc: Peter Firmstone
Subject: Re: Implications for Security Checks - SocketPermission, URL and DNS 
lookups

Remember to, from a general workaround perspective, that you can use command 
line options to lengthen the time that DNS failure information is retained, 
to 
keep things moving when no reverse DNS information is available.  The default, 
is like 10 seconds, and that is considerably shorter than what you will 
generally experience in a failed lookup.  The end result, is that the failure 
cache doesn't serve much purpose without it having a very extended time, as a 
workaround.   In some cases, I've set it to an hour or more, and some initial 
startup is then slow, and initial client connection can be a little slow, 
but then things move along quite well.

Gregg Wonderly

On 12/13/2011 2:56 AM, Peter Firmstone wrote:
 In addition CodeSource.implies() also causes DNS checks, I'm not 100% sure 
 about the jvm code, but Harmony code uses SocketPermission.implies() to check 
 if one CodeSource implies another, I believe the jvm policy implementation 
 also utilises it, because harmony's implementation is built from Sun's java 
 spec.

 So in the existing policy implementations, when parsing the policy files, 
 additional start up delays may be caused by the CodeSource.implies() method 
 making network DNS calls.

 In my ConcurrentPolicyFile implementation (to replace the standard java 
 PolicyFile implementation), I've created a URIGrant, I've taken code from 
 Harmony to implement implies(ProtectionDomain pd), that performs wildcard 
 matching compliant with CodeSource.implies, the only difference being, that 
 no 
 attempt to resolve URI's is made.

 Typically most policy files specify file based URL's for CodeSource, however 
 in a network application where many CodeSources may be network URL's, DNS 
 lookup causes added delays.

 I've also created a CodeSourceGrant which uses CodeSource.implies() for 
 backward compatibility with existing java policy files, however I'm sure that 
 most will simply want to revise their policy files.

 The standard interface PermissionGrant, is implemented by the following 
 inheritance hierarchy of immutable classes:

  PrincipalGrant
  __|___
 
 | 
   
 |
 ProtectionDomainGrant 
 CertificateGrant
 | 
  |
 ClassLoaderGrant  
 |  |
   
 URIGrant  CodeSourceGrant


 Only PrincipalGrant is publicly visible, a builder returns the correct 
 implementation.

 ProtectionDomainGrant and ClassLoaderGrant are dynamically granted, by the 
 completely new DynamicPolicyProvider (which has long since passed all tests).

 CertificateGrant, URIGrant and CodeSourceGrant are used by the File based 
 policy's and RemotePolicy, which is intended to be a service that nodes in a 
 djinn can use to allow an administrator to update the policy (eg to include 
 new certificates or principals), with all the protection of subject 
 authentication and secure connections.  RemotePolicy is idempotent, the 
 policy 
 is updated in one operation, so the current policy state is always known to 
 the administrator (who is a client).

 Since a File based policy is mostly read and only written when refreshed, 
 PermissionGrant's are held in a volatile array reference, copied (only the 
 reference) by any code that reads the array.  The array reference is updated 
 when the policy is updated, the array is never mutated after publishing.

 A ConcurrentMapProtectionDomain, PermissionCollection (with weak keys) acts 
 as a cache, I've got ConcurrentPermissions, an implementation that replaces 
 the hetergenous java.security.Permissions class, this also resolves any 
 unresolved permissions.

 However I'm starting to wonder

RE: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Christopher Dolan
I think you're referring to this: http://support.microsoft.com/kb/314882 
(Inbound connections limit in Windows XP). If so, that applies only to WinXP. 
I understood that Microsoft relaxed that restriction for Vista and later. As 
you say it did not apply to the server OS, specifically Win 2003.

So, I wouldn't bother with a specific Reggie patch for this issue, as it will 
be less and less important as time progresses.

Chris

-Original Message-
From: Gregg Wonderly [mailto:gr...@wonderly.org] 
Sent: Tuesday, December 13, 2011 8:56 AM
To: dev@river.apache.org
Subject: Re: Implications for Security Checks - SocketPermission, URL and DNS 
lookups

Also, one simple reminder about Windows.  The folks at Microsoft want to be 
able to make you buy server class OSes, so the user OSes limit the number of 
simultaneous socket connections as well as other things, so that you can't buy 
a 
cheap user seat and make a server of any substance out of it.   But, when 
you put a Jini LUS instance, such as Jini on a user seat machine, these 
limitations can help control overload.  What happens, is that Windows will 
throw out RST packets when too many connections occur, and cause the 
connecting machines to back off.

I don't have specific numbers to show, but practically, it will cause a few 
machines at a time to register, and others to retry later when the next 
multicast announcement goes out.

 From some perspectives, we might want to look at providing a setting for 
reggie which would cause it to limit the total number of inbound registrations 
and lookups in a way which would provide for some good old fashioned resource 
management that worked well to keep what Chris mentions here from happening.

Gregg Wonderly

On 12/13/2011 8:31 AM, Christopher Dolan wrote:
 Quite true Gregg, but that doesn't help when Reggie boots and hundreds of 
 hosts contact it in a short time span against a cold DNS cache. Prior to 
 resolution of RIVER-396 (PreferredClassProvider classloader cache 
 concurrency improvement) these timeout failures were effectively serial and 
 caused long stalls. The resulting OOMEs and failed thread creation events in 
 some isolated scenarios were unrecoverable. For me, this was mitigated by the 
 triple solution of 1) turning off the SocketPermission check, 2) the 
 RIVER-396 patch and 3) switching JERI to NIO to save some threads.

 Chris

 -Original Message-
 From: Gregg Wonderly [mailto:gr...@wonderly.org]
 Sent: Tuesday, December 13, 2011 8:19 AM
 To: dev@river.apache.org
 Cc: Peter Firmstone
 Subject: Re: Implications for Security Checks - SocketPermission, URL and DNS 
 lookups

 Remember to, from a general workaround perspective, that you can use command
 line options to lengthen the time that DNS failure information is retained, 
 to
 keep things moving when no reverse DNS information is available.  The default,
 is like 10 seconds, and that is considerably shorter than what you will
 generally experience in a failed lookup.  The end result, is that the failure
 cache doesn't serve much purpose without it having a very extended time, as a
 workaround.   In some cases, I've set it to an hour or more, and some initial
 startup is then slow, and initial client connection can be a little slow,
 but then things move along quite well.

 Gregg Wonderly

 On 12/13/2011 2:56 AM, Peter Firmstone wrote:
 In addition CodeSource.implies() also causes DNS checks, I'm not 100% sure
 about the jvm code, but Harmony code uses SocketPermission.implies() to check
 if one CodeSource implies another, I believe the jvm policy implementation
 also utilises it, because harmony's implementation is built from Sun's java 
 spec.

 So in the existing policy implementations, when parsing the policy files,
 additional start up delays may be caused by the CodeSource.implies() method
 making network DNS calls.

 In my ConcurrentPolicyFile implementation (to replace the standard java
 PolicyFile implementation), I've created a URIGrant, I've taken code from
 Harmony to implement implies(ProtectionDomain pd), that performs wildcard
 matching compliant with CodeSource.implies, the only difference being, that 
 no
 attempt to resolve URI's is made.

 Typically most policy files specify file based URL's for CodeSource, however
 in a network application where many CodeSources may be network URL's, DNS
 lookup causes added delays.

 I've also created a CodeSourceGrant which uses CodeSource.implies() for
 backward compatibility with existing java policy files, however I'm sure that
 most will simply want to revise their policy files.

 The standard interface PermissionGrant, is implemented by the following
 inheritance hierarchy of immutable classes:

   PrincipalGrant
   __|___

 |
 |
 ProtectionDomainGrant
 CertificateGrant
  |
  |
 ClassLoaderGrant

Re: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Peter Firmstone

Thinking aloud for a moment:

Chris uses a policy to avoid the localhost lookup.

I think if I build the Permissions collection on demand the 
SocketPermission's can be ordered by sorting them prior to being added 
to SocketPermissionCollection using a ComparatorSocketPermission, 
based on the SocketPermission being checked.


The comparator can behave differently than equals, using the string 
representations of the host and actions to order such that:


Wild card's are added first and SocketPermissions are ordered in their 
most likely order of matching.


This could be a standard feature of the policy, allowing developers to 
provide a custom comparator to order Permission's.


The trick is to avoid the unnecessary DNS lookups, since many are 
performed simply because the order each SocketPermission is checked, eg 
localhost being checked first!


If we can reduce the DNS Lookups, to only those SocketPermission checks 
that would likely fail if reverse DNS is unavailable, without causing 
blocking that delays other permission checks, then we should be able to 
make Reggie much more scalable under these conditions.


All permission checks that can succeed, will, even if only partially for 
an AccessControlContext.   The SocketPermission checks that rely on DNS 
will be the last to complete, but since all other permissions can 
complete (even if belonging to the same thread context), the backlog 
will be much smaller.


A big problem with the current implementation is SocketPermission blocks 
other permission checks from proceeding.


Cheers,

Peter.


Christopher Dolan wrote:

Quite true Gregg, but that doesn't help when Reggie boots and hundreds of hosts contact 
it in a short time span against a cold DNS cache. Prior to resolution of RIVER-396 
(PreferredClassProvider classloader cache concurrency improvement) these 
timeout failures were effectively serial and caused long stalls. The resulting OOMEs and 
failed thread creation events in some isolated scenarios were unrecoverable. For me, this 
was mitigated by the triple solution of 1) turning off the SocketPermission check, 2) the 
RIVER-396 patch and 3) switching JERI to NIO to save some threads.

Chris

-Original Message-
From: Gregg Wonderly [mailto:gr...@wonderly.org] 
Sent: Tuesday, December 13, 2011 8:19 AM

To: dev@river.apache.org
Cc: Peter Firmstone
Subject: Re: Implications for Security Checks - SocketPermission, URL and DNS 
lookups

Remember to, from a general workaround perspective, that you can use command 
line options to lengthen the time that DNS failure information is retained, to 
keep things moving when no reverse DNS information is available.  The default, 
is like 10 seconds, and that is considerably shorter than what you will 
generally experience in a failed lookup.  The end result, is that the failure 
cache doesn't serve much purpose without it having a very extended time, as a 
workaround.   In some cases, I've set it to an hour or more, and some initial 
startup is then slow, and initial client connection can be a little slow, 
but then things move along quite well.


Gregg Wonderly

On 12/13/2011 2:56 AM, Peter Firmstone wrote:
  
In addition CodeSource.implies() also causes DNS checks, I'm not 100% sure 
about the jvm code, but Harmony code uses SocketPermission.implies() to check 
if one CodeSource implies another, I believe the jvm policy implementation 
also utilises it, because harmony's implementation is built from Sun's java spec.


So in the existing policy implementations, when parsing the policy files, 
additional start up delays may be caused by the CodeSource.implies() method 
making network DNS calls.


In my ConcurrentPolicyFile implementation (to replace the standard java 
PolicyFile implementation), I've created a URIGrant, I've taken code from 
Harmony to implement implies(ProtectionDomain pd), that performs wildcard 
matching compliant with CodeSource.implies, the only difference being, that no 
attempt to resolve URI's is made.


Typically most policy files specify file based URL's for CodeSource, however 
in a network application where many CodeSources may be network URL's, DNS 
lookup causes added delays.


I've also created a CodeSourceGrant which uses CodeSource.implies() for 
backward compatibility with existing java policy files, however I'm sure that 
most will simply want to revise their policy files.


The standard interface PermissionGrant, is implemented by the following 
inheritance hierarchy of immutable classes:


 PrincipalGrant
 __|___

|   
|
ProtectionDomainGrant 
CertificateGrant
| 
 |
ClassLoaderGrant

RE: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Christopher Dolan
Actually, more significantly for me is that the default localhost 
SocketPermission is checked before a more lenient SocketPermission. In theory, 
one should be able to introspect SocketPermission instances and determine that 
one may be automatically implied by the other so can be skipped, possibly 
saving a lookup.
Chris

Peter Firmstone wrote:
 A big problem with the current implementation is SocketPermission blocks 
 other permission checks from proceeding.


Re: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-13 Thread Peter
That's exactly what I'm thinking, order SocketPermissions first, implemented 
using a comparator, add to a new SocketPermissionCollection in order, then 
perform the security check.

The comparator can perform the introspection to customise the order for every 
securiity check, eg so that wild cards are checked first, avoiding the dns 
lookup in most cases.

That way comparators encapsulate the introspection and we can keep the policy 
implementation simpler.

In my concurrent policy, while localhost is being resolved for a 
ProtectionDomain, other threads are blocked from performing any 
SocketPermission checks on that ProtectionDomain, if that PD represents library 
code shared throughout your app, that too can bring it to a standstill.

Cheers,

Peter.

- Original message -
 Actually, more significantly for me is that the default localhost
 SocketPermission is checked before a more lenient SocketPermission. In theory,
 one should be able to introspect SocketPermission instances and determine that
 one may be automatically implied by the other so can be skipped, possibly 
 saving
 a lookup. Chris

 Peter Firmstone wrote:
  A big problem with the current implementation is SocketPermission blocks
  other permission checks from proceeding.



RE: Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-12 Thread Christopher Dolan
Specifically for SocketPermission, I experienced severe timeout problems with 
reverse DNS misconfigurations. For some LAN-based deployments, I relaxed this 
criterion via 'new SocketPermission(*, accept,listen,connect,resolve)'. 
This was difficult to apply to a general Sun/Oracle JVM, however, because the 
default security policy *prepends* a (localhost:1024-,listen) permission 
that triggers the reverse DNS lookup. To avoid this inconvenient setting, I 
install a new java.security.Policy subclass that delegates to the default 
Policy except when the incoming permission is a SocketPermission. That way I 
don't need to modify the policy file in the JVM. The Policy.implies() override 
method is trivial because it just needs to do  if (permission instanceof 
SocketPermission) { ... }. The PermissionCollection methods were trickier to 
override (skip over any SocketPermission elements in the default Policy's 
PermissionCollection), but still only about 50 LOC.

Chris

-Original Message-
From: Peter Firmstone [mailto:j...@zeus.net.au] 
Sent: Friday, December 09, 2011 9:28 PM
To: dev@river.apache.org
Subject: Implications for Security Checks - SocketPermission, URL and DNS 
lookups

DNS lookups and reverse lookups caused by URL and SocketPermission, 
equals, hashCode and implies methods create some serious performance 
problems for distributed programs.

The concurrent policy implementation I've been working on reduces lock 
contention between threads performing security checks.

When the SecurityManager is used to check a guard, it calls the 
AccessController, which retrieves the AccessControlContext from the call 
stack, this contains all the ProtectionDomain's on the call stack (I 
won't go into privileged calls here), if a ProtectionDomain is dynamic 
it will consult the Policy, prior to checking the static permissions it 
contains.

The problem with the old policy implementation is lock contention caused 
by multiple threads all using multiple ProtectionDomains, when the time 
taken to perform a check is considerable, especially where identical 
security checks might be performed by multiple threads executing the 
same code.

Although concurrent policy reduces contention between ProtectionDomain's 
calls to Policy.implies, there remain some fundamental problems with the 
implementations of SocketPermission and URL, that cause unnecessary DNS 
lookups during equals(), hashCode() and implies() methods.

The following bugs concern SocketPermission (please read before 
continuing) :

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6592285
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4975882 - contains a 
lot of valuable comments.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4671007 - fixed, 
perhaps incorrectly.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6501746

Anyway to cut a long story short, DNS lookups and DNS reverse lookups 
are performed for the equals and hashCode implementations in 
SocketPermission and URL, with disastrous performance implications for 
policy implementations using collections and caching security permission 
check results. 

For example, once a SocketPermission guard has been checked for a 
specific AccessContolContext the result is cached by my SecurityManager, 
avoiding repeat security checks, however if that cache contains 
SocketPermission, DNS lookups will be required, the cache will perform 
slower than some other directly performed security checks!  The cache is 
intended to return quickly to avoid reconsulting every ProtectionDomain 
on the stack.

To make matters worse, when checking a SocketPermission guard, the DNS 
may be consulted for every non wild card SocketPermission contained 
within a SocketPermissionCollection, up until it is implied.  DNS checks 
are being made unnecessarily, since the wild card that matches may not 
require a DNS lookup at all, but because the non matching 
SocketPermission's are being checked first, the DNS lookups and reverse 
lookups are still performed.  This could be fixed completely, by moving 
the responsibility of DNS lookups from SocketPermission to 
SocketPermissionCollection.

The identity of two SocketPermission's are equal if they resolve to the 
same IP address, but their hashCode's are different! See bug 6592623.

The identity of a SocketPermission with an IP address and a DNS name, 
resolving to identical IP address should not (in my opinion) be equal, 
but is!  One SocketPermission should only imply the other while DNS 
resolves to the same IP address, otherwise the equality of the two 
SocketPermission's will change if the IP address is assigned to a 
different domain!  Object equality / identity shouldn't depend on the 
result of a possibly unreliable network source.

SocketPermission and SocketPermissionCollection are broken, the only 
solution I can think of is to re-implement these classes (from Harmony) 
in the policy and SecurityManager, substituting the existing jvm 
classes

Implications for Security Checks - SocketPermission, URL and DNS lookups

2011-12-09 Thread Peter Firmstone
DNS lookups and reverse lookups caused by URL and SocketPermission, 
equals, hashCode and implies methods create some serious performance 
problems for distributed programs.


The concurrent policy implementation I've been working on reduces lock 
contention between threads performing security checks.


When the SecurityManager is used to check a guard, it calls the 
AccessController, which retrieves the AccessControlContext from the call 
stack, this contains all the ProtectionDomain's on the call stack (I 
won't go into privileged calls here), if a ProtectionDomain is dynamic 
it will consult the Policy, prior to checking the static permissions it 
contains.


The problem with the old policy implementation is lock contention caused 
by multiple threads all using multiple ProtectionDomains, when the time 
taken to perform a check is considerable, especially where identical 
security checks might be performed by multiple threads executing the 
same code.


Although concurrent policy reduces contention between ProtectionDomain's 
calls to Policy.implies, there remain some fundamental problems with the 
implementations of SocketPermission and URL, that cause unnecessary DNS 
lookups during equals(), hashCode() and implies() methods.


The following bugs concern SocketPermission (please read before 
continuing) :


http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6592285
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4975882 - contains a 
lot of valuable comments.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4671007 - fixed, 
perhaps incorrectly.

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6501746

Anyway to cut a long story short, DNS lookups and DNS reverse lookups 
are performed for the equals and hashCode implementations in 
SocketPermission and URL, with disastrous performance implications for 
policy implementations using collections and caching security permission 
check results. 

For example, once a SocketPermission guard has been checked for a 
specific AccessContolContext the result is cached by my SecurityManager, 
avoiding repeat security checks, however if that cache contains 
SocketPermission, DNS lookups will be required, the cache will perform 
slower than some other directly performed security checks!  The cache is 
intended to return quickly to avoid reconsulting every ProtectionDomain 
on the stack.


To make matters worse, when checking a SocketPermission guard, the DNS 
may be consulted for every non wild card SocketPermission contained 
within a SocketPermissionCollection, up until it is implied.  DNS checks 
are being made unnecessarily, since the wild card that matches may not 
require a DNS lookup at all, but because the non matching 
SocketPermission's are being checked first, the DNS lookups and reverse 
lookups are still performed.  This could be fixed completely, by moving 
the responsibility of DNS lookups from SocketPermission to 
SocketPermissionCollection.


The identity of two SocketPermission's are equal if they resolve to the 
same IP address, but their hashCode's are different! See bug 6592623.


The identity of a SocketPermission with an IP address and a DNS name, 
resolving to identical IP address should not (in my opinion) be equal, 
but is!  One SocketPermission should only imply the other while DNS 
resolves to the same IP address, otherwise the equality of the two 
SocketPermission's will change if the IP address is assigned to a 
different domain!  Object equality / identity shouldn't depend on the 
result of a possibly unreliable network source.


SocketPermission and SocketPermissionCollection are broken, the only 
solution I can think of is to re-implement these classes (from Harmony) 
in the policy and SecurityManager, substituting the existing jvm 
classes.  This would not be visible to client developers.


SocketPermission's may also exist in a ProtectionDomain's static 
Permissions, these would have to be converted by the policy when merging 
the permissions from the ProtectionDomain with those from the policy.  
Since ProtectionDomain, attempts to check it's own internal permissions, 
after the policy permission check fails, DNS checks are currently 
performed by duplicate SocketPermission's residing in the 
ProectionDomain, this will no longer occur, since the permission being 
checked will be converted to say for argument sake 
org.apache.river.security.SocketPermission.  However because some 
ProtectionDomains are static, they never consult the policy, so the 
Permission's contained in each ProtectionDomain will require conversion 
also, to do so will require extending and implementing a 
ProtectionDomain that encapsulates existing ProtectionDomain's in the 
AccessControlContext, by utilising a DomainCombiner.


For CodeSource grant's, the policy file based grant's are defined by 
URL's, however URL's identity depend upon DNS record results, similar to 
SocketPermission equals and hashCode implementations which we have no 
control

Re: Java security Policy - concurrency issues

2011-10-27 Thread Gregg Wonderly
What about a volatile as the visibility control?  Write after update, read 
before access?  It would at least expose the changes to other threads, not be a 
lock, and represent a fairly limited overhead on most hardware.


Gregg

On 10/27/2011 8:55 AM, Peter wrote:

The problem:

Stale references allowed and noted in comments:

java.security.Permissions
java.security.BasicPermissions.BasicPermissionCollection

The stale reference in Permissions is an AllPermission object - an 
optimisation.  If a thread doesn't see the current value, it just checks the 
internal Map, which is synchronised, no biggy.

Problem is, Permissions is a heterogenous PermissionCollection, it contains a 
Map, synchronzed thread access, which prevents a similar optimisation in the 
homogenous BasicPermissionCollection from being seen in the stale state.

Every ProtectionDomain has its own Permissions and each Permission class type 
has it's own unique PermissionCollection shared with all others with the same 
type for a ProtectionDomain.

I replaced Permissions with a class called ConcurrentPermissions that uses a 
ConcurrentMap

Trouble is BasicPermissionCollection is no longer protected by synchronization 
in Permissions.  BasicPermissionCollection now exposed to multiple threads has 
a stale reference optimisation for wildcard * permissions.

What happens in my concurrent policy implementation is the Permission isn't 
necessarily found in the BasicPermissionCollection by a second thread, so it 
checks the PermissionGrants (immutable objects that contain data from policy 
files or dynamic grants) again and adds all the permissions to 
BasicPermissionCollection again.   So it doesn't fail, but it doesn't scale 
well with contention, because you've still got the synchronisation bottleneck, 
can't see the Permission and have to process again, wasting resources, on the 
second occassion.

Problem is, BasicPermissionCollection is the bread and butter 
PermissionCollection implementation many Permission classes use.

Now you have to remember, these classes were designed well before concurrency 
was ever a consideration.  Nowadays these classes would be immutable, since 
policy's don't change much, they're mostly read access.

But I can't change it because many are part of the decision process.

Now I could put a synchronized wrapper PermissionCollection class around these 
things, which fixes the bug, creating long lived objects that live on the heap 
and will likely cause L2 cache misses or contended locks.

How about something different?

Create the PermissionCollection's on demand, then discard immediately after 
use.  The Permission objects themselves are long lived immutable objects.

Why?

It'll be used only by one thread, so the jvm will optimise out the synchronised 
locks.

The object will be created on the threads local memory stack, instead of the 
heap and die in the young generation, so it doesn't incur gc heap generation 
movements or memory heap copy to cpu cache stalls.

But what about single thread applications or those with few threads and little 
contention?  They would run slower, although object allocation costs aren't as 
bad as people think, say 10 to 20 cpu cycles compared to 200 for a cache miss, 
or worse for a contended lock.

Pattern matching of strings is the most expensive computation of most 
permission decisions and has to be repeated for every ProtectionDomain on the 
call stack for each thread, the impact on single core machines won't be much.  
I can test for that, but not the high end stuff.

Arrghh decisions!  Not enough test hardware.

Cheers,

Peter.





Re: Java security Policy - concurrency issues

2011-10-27 Thread Peter
That's exactly what the original implementers needed to do, make those fields 
volatile.

They're private implementation fields though.

Trouble is, none of these old jvm homogenous PermissionCollection's have been 
exposed to any more than single threads before and the last thing I want to do 
is reimplement them.  They're supposed to be thread safe but many have 
visibility issues.

Considering java security policy is a occassional write, multi read, it should 
be simple to make it scale very well, using immutability and concurrency utils. 
 There's just some legacy cruft that spoils it a little.

I guess I could make a wrapper class that uses volatile and write replace,  but 
then if it changes you still have to replace the underlying 
PermissionCollection, and still wear the synchronisation cost.

Cheers,

Peter.

Cheers,

Peter.

- Original message -
 What about a volatile as the visibility control?  Write after update, read
 before access?  It would at least expose the changes to other threads, not be 
 a
 lock, and represent a fairly limited overhead on most hardware.

 Gregg

 On 10/27/2011 8:55 AM, Peter wrote:
  The problem:
 
  Stale references allowed and noted in comments:
 
  java.security.Permissions
  java.security.BasicPermissions.BasicPermissionCollection
 
  The stale reference in Permissions is an AllPermission object - an
  optimisation.  If a thread doesn't see the current value, it just checks the
  internal Map, which is synchronised, no biggy.
 
  Problem is, Permissions is a heterogenous PermissionCollection, it contains 
  a
  Map, synchronzed thread access, which prevents a similar optimisation in the
  homogenous BasicPermissionCollection from being seen in the stale state.
 
  Every ProtectionDomain has its own Permissions and each Permission class 
  type
  has it's own unique PermissionCollection shared with all others with the 
  same
  type for a ProtectionDomain.
 
  I replaced Permissions with a class called ConcurrentPermissions that uses a
  ConcurrentMap
 
  Trouble is BasicPermissionCollection is no longer protected by 
  synchronization
  in Permissions.  BasicPermissionCollection now exposed to multiple threads 
  has
  a stale reference optimisation for wildcard * permissions.
 
  What happens in my concurrent policy implementation is the Permission isn't
  necessarily found in the BasicPermissionCollection by a second thread, so it
  checks the PermissionGrants (immutable objects that contain data from policy
  files or dynamic grants) again and adds all the permissions to
  BasicPermissionCollection again.    So it doesn't fail, but it doesn't scale
  well with contention, because you've still got the synchronisation 
  bottleneck,
  can't see the Permission and have to process again, wasting resources, on 
  the
  second occassion.
 
  Problem is, BasicPermissionCollection is the bread and butter
  PermissionCollection implementation many Permission classes use.
 
  Now you have to remember, these classes were designed well before 
  concurrency
  was ever a consideration.  Nowadays these classes would be immutable, since
  policy's don't change much, they're mostly read access.
 
  But I can't change it because many are part of the decision process.
 
  Now I could put a synchronized wrapper PermissionCollection class around 
  these
  things, which fixes the bug, creating long lived objects that live on the 
  heap
  and will likely cause L2 cache misses or contended locks.
 
  How about something different?
 
  Create the PermissionCollection's on demand, then discard immediately after
  use.  The Permission objects themselves are long lived immutable objects.
 
  Why?
 
  It'll be used only by one thread, so the jvm will optimise out the
  synchronised locks.
 
  The object will be created on the threads local memory stack, instead of the
  heap and die in the young generation, so it doesn't incur gc heap generation
  movements or memory heap copy to cpu cache stalls.
 
  But what about single thread applications or those with few threads and 
  little
  contention?  They would run slower, although object allocation costs aren't 
  as
  bad as people think, say 10 to 20 cpu cycles compared to 200 for a cache 
  miss,
  or worse for a contended lock.
 
  Pattern matching of strings is the most expensive computation of most
  permission decisions and has to be repeated for every ProtectionDomain on 
  the
  call stack for each thread, the impact on single core machines won't be 
  much.
  I can test for that, but not the high end stuff.
 
  Arrghh decisions!  Not enough test hardware.
 
  Cheers,
 
  Peter.
 




Re: Java security Policy - concurrency issues

2011-10-27 Thread Peter

- Original message -
 On 10/27/2011 1:17 PM, Peter wrote:
  That's exactly what the original implementers needed to do, make those 
  fields
  volatile.
 
  They're private implementation fields though.

 Okay, but are there any usage patterns where we could add the use of a 
 volatile?
    The JMM says that multiple non-volatile fields can be made visible by a 
single
 volatile write and then another thread making a volatile read.  So if we are
 adding properties, we should do a volatile write after that, and then if there
 is a place before the use of permissioncollection, by another thread, that we
 can force a volatile read of the same volatile field, then that should fix the
 visibility.  It's not pretty...

Interesting, read it, mutate it, write it back, didn't think of that.

You're right it's not pretty. 


 Should we create an issue in the JDK bugzilla?

Yes.  Actually, I don't think they should use any synchronisation, this can be 
provided by a wrapper class similar to collections.

Cheers,

Peter.


 Gregg

 
  Trouble is, none of these old jvm homogenous PermissionCollection's have 
  been
  exposed to any more than single threads before and the last thing I want to 
  do
  is reimplement them.  They're supposed to be thread safe but many have
  visibility issues.
 
  Considering java security policy is a occassional write, multi read, it 
  should
  be simple to make it scale very well, using immutability and concurrency
  utils.  There's just some legacy cruft that spoils it a little.
 
  I guess I could make a wrapper class that uses volatile and write replace,
  but then if it changes you still have to replace the underlying
  PermissionCollection, and still wear the synchronisation cost.
 
  Cheers,
 
  Peter.
 
  Cheers,
 
  Peter.
 
  - Original message -
   What about a volatile as the visibility control?  Write after update, read
   before access?  It would at least expose the changes to other threads, not
   be a lock, and represent a fairly limited overhead on most hardware.
  
   Gregg
  
   On 10/27/2011 8:55 AM, Peter wrote:
The problem:
   
Stale references allowed and noted in comments:
   
java.security.Permissions
java.security.BasicPermissions.BasicPermissionCollection
   
The stale reference in Permissions is an AllPermission object - an
optimisation.  If a thread doesn't see the current value, it just checks
the internal Map, which is synchronised, no biggy.
   
Problem is, Permissions is a heterogenous PermissionCollection, it
contains a Map, synchronzed thread access, which prevents a similar
optimisation in the homogenous BasicPermissionCollection from being seen
in the stale state.
   
Every ProtectionDomain has its own Permissions and each Permission class
type has it's own unique PermissionCollection shared with all others 
with
the same type for a ProtectionDomain.
   
I replaced Permissions with a class called ConcurrentPermissions that 
uses
a ConcurrentMap
   
Trouble is BasicPermissionCollection is no longer protected by
synchronization in Permissions.  BasicPermissionCollection now exposed 
to
multiple threads has a stale reference optimisation for wildcard *
permissions.
   
What happens in my concurrent policy implementation is the Permission 
isn't
necessarily found in the BasicPermissionCollection by a second thread, 
so
it checks the PermissionGrants (immutable objects that contain data from
policy files or dynamic grants) again and adds all the permissions to
BasicPermissionCollection again.      So it doesn't fail, but it doesn't
scale well with contention, because you've still got the synchronisation
bottleneck, can't see the Permission and have to process again, wasting
resources, on the second occassion.
   
Problem is, BasicPermissionCollection is the bread and butter
PermissionCollection implementation many Permission classes use.
   
Now you have to remember, these classes were designed well before
concurrency was ever a consideration.  Nowadays these classes would be
immutable, since policy's don't change much, they're mostly read access.
   
But I can't change it because many are part of the decision process.
   
Now I could put a synchronized wrapper PermissionCollection class around
these things, which fixes the bug, creating long lived objects that live
on the heap and will likely cause L2 cache misses or contended locks.
   
How about something different?
   
Create the PermissionCollection's on demand, then discard immediately 
after
use.  The Permission objects themselves are long lived immutable 
objects.
   
Why?
   
It'll be used only by one thread, so the jvm will optimise out the
synchronised locks.
   
The object will be created on the threads local memory stack, instead of
the heap and die in the young

Re: Distributed Garbage Collection Security - InvocationConstraints

2011-07-31 Thread Peter Firmstone

Peter Jones wrote:

On Jun 19, 2011, at 5:37 AM, Peter Firmstone wrote:
  

The easiest way to set DGC constraints would be via configuration.

Perhaps the reason this hasn't been implemented previously is, the constraints 
would apply to all services that use DGC, so if you've set Authentication and 
Integrity as minimal constraints, then this would apply to all services.



As mentioned offline, I'm not sure how much of this I can page into my 
consciousness at the moment.

Configuring the server-side constraints for DGC could probably be supported with additional 
parameters to the export process (and thus set via configuration).  The bigger issue, I think, is 
what client constraints to apply, and what client subject to use, when JERI's client-side DGC 
system (defined within BasicObjectEndpoint) makes dirty and clean calls on 
behalf of the application.  In the traditional RMI DGC model, those calls happen implicitly as 
remote references appear and disappear from a JVM.  But in the JERI security model, the client 
application controls the security behavior of remote calls by explicitly (with respect to the 
standard JERI layers) specifying constraints and controlling the current subject.

So when the system wants to make a dirty or clean call for a given remote reference 
(forget batching for the moment), what constraints to apply, or what subject to use?  There didn't seem to be 
an answer, without requiring the client application to interact with the DGC system more explicitly, which 
would be a significant change from the RMI DGC model-- and, I think, not something that seemed worth 
investing effort on at the time, especially given that Jini services didn't seem to make use of RMI's DGC 
functionality in practice anyway (instead they used higher-level leasing mechanisms to detect client 
failure, and most interest was around just being disable DGC for Jini services).
  


Having thought about it, something with similar behaviour to a local 
JVM's gc would be appropriate, for example, if code with higher 
privilege allows a reference to escape into less privileged code, the 
garbage collector won't collect it until it becomes unreferenced by all 
code.


So it would seem appropriate to cache the first Subject (or Subjects) 
who've called methods on the remote object and use it (them) for the DGC 
client Subject.  Different Target's might have different Subject's, so 
we could perform Subject orientated batching.


The unprivileged code, wouldn't be able to call methods that had 
constraints, similar to how an object with method guards works, however 
any code that holds a strong reference would prevent that object from 
being garbage collected.


I think this would be useful in an internet environment, where a 
developer wants to export an object and hand it as a parameter to 
another service, and have it unexported automatically when no longer 
required.


Cheers,

Peter.


  

Exporter's javadoc has the following statement regarding the force parameter in 
unexport:

QUOTE

The |force| parameter serves to indicate whether or not the caller desires the 
unexport to occur even if there are known remote calls pending or in progress 
to the remote object that were made possible by this |Exporter|:

  * If |force| is |true|, then the remote object will be forcibly
unexported even if there are remote calls pending or in progress,
and this method will return |true|.
  * If |force| is |false|, then this acts as a hint to the
implementation that the remote object should not be unexported if
there are known remote calls pending or in progress, and this
method will either unexport the remote object and return |true| or
not unexport the remote object and return |false|. If the
implementation detects that there are indeed remote calls pending
or in progress, then it should return |false|; otherwise, it must
return |true|. If the implementation does not support being able
to unexport conditionally based on knowledge of remote calls
pending or in progress, then it must implement this method as if
|force| were always |true|.

If the remote object is unexported as a result of this method, then the 
implementation may (and should, if possible) prevent remote calls in progress 
from being able to communicate their results successfully.

/QUOTE

I've updated the class Target that implements this functionality for 
BasicJeriExporter, Target's unexport method now uses thread interruption to 
attempt to interrupt and abort in process calls if force is specified.  
Interruption has been successful with the current jeri qa tests and can be seen 
in the exception output for some tests.



FWIW, aborting the execution of in-progress calls, such as via thread 
interruption, wasn't really the intent of that last sentence-- it was more that 
an implementation should feel free (or encouraged) to prevent communicating the 
eventual result of such a call, when control

Re: Distributed Garbage Collection Security - InvocationConstraints

2011-07-31 Thread Gregg Wonderly

On 7/31/2011 5:35 AM, Peter Firmstone wrote:

I think this would be useful in an internet environment, where a developer wants
to export an object and hand it as a parameter to another service, and have it
unexported automatically when no longer required.


This is why I make my smart proxy classes be InvocationHandlers and then embed 
the real remote object and a lease into the marshalled object.  With deferred 
unmarshalling, then others who unmarshall it for use, apply a 
LeaseRenewalManager on the lease as well so that everyone using it is making 
sure the server is not releasing it.


Endpoints as they exist today, are not mobile because of code contamination and 
lost-code-base issues that occur when a remote object is remarshalled to send 
across the wire.


Multiple unmarshalling activities in the same JVM might not be safe if there is 
static class initialization that should of been per-instance.


I think this is an area that needs some investigation for sure.

Gregg


Re: Security Policy Service

2011-06-26 Thread Peter Firmstone

An implementation would work something like this:

  1. When a djinn node starts, it discovers a Registrar (lets say it
 uses secure discovery to eliminate registrar proxy unmarshalling
 attacks.).  The registrar would have method constraints enabled,
 limiting who can submit services.
  2. The node then looks up a Security Policy Advisory service, it
 chooses one that can authenticate as an Administrator.
  3. The client node then hands back a proxy (RemotePolicy service)
 that the SecurityPolicyAdvisory service (using MethodConstraints
 again to restrict access) uses to update the local security
 policy, running as the administrator subject.  The RemotePolicy
 service implements Unreferenced, when the SecurityPolicyAdvisory
 service is down, DGC will notify the RemotePolicy that it has
 become unreferenced, so it can lookup another Security Policy
 Advisory service and continue keeping it's security policy up to date.

(One reason explaining why I'd like to secure DGC).

What use is a Security Policy Service?

An example:

  1. Lets for a moment imagine a hypothetical secure djinn environment,
 this may span a number of physical locations where those locations
 are connected via the internet, Registrars are used to unite this
 environment.  This secure djinn environment may also include a
 number of different administrators, who represent cooperating
 entities.  Each entity controls his/her own Registrar's and
 restricts the services registered to those he/she owns.  Any other
 known entity may lookup any of the services on any registrar.
  2. Each entity signs their proxy codebases.
  3. Additional entity's join the group over time.
  4. Each entity has an Identity, represented by a Subject containing
 authentication Certificates and Principals.
  5. A security policy service would allow you to grant
 DownloadPermission, to all and any CodeSources signed with
 approved Certificates.  The policy can be updated to include any
 newly approved Certificates.  DownloadPermission is a misnomer, it
 prevents a class being defined, or loaded, it doesn't prevent it
 being downloaded, the jar file actually gets downloaded prior to
 the DownloadPermission check.  This is useful for preventing
 unmarshalling attacks from unknown attackers.
  6. The policy also allows the administrator to update Permission's
 granted to Principals.
  7. A number of Permission's can be granted based on Principals and
 CodeSource Certificate combinations.  Note this doesn't eliminate
 the need for a Service to verify it's proxy object or grant
 Permissions to proxy's dynamically, but it can be used to
 determine which GrantPermissions are allowed for such combinations.

To make it easier for clients to determine the Permission's a CodeSource 
might require we could add something similar to the following to our jar 
files:


OSGI-INF/permissions.perm

(org.osgi.framework.PackagePermission javax.microedition.io import)
(org.osgi.framework.PackagePermission org.osgi.service.io import)
(org.osgi.framework.PackagePermission org.osgi.service.log import)
(org.osgi.framework.PackagePermission org.osgi.util.tracker import)
(org.osgi.framework.PackagePermission org.osgi.framework import)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.util.hash 
import)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.util.pool 
import)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.util.ref 
import)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.io 
export)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.io.impl 
export)
(org.osgi.framework.PackagePermission org.eclipse.equinox.internal.io.util 
export)
(org.osgi.framework.PackagePermission javax.microedition.io export)
(org.osgi.framework.BundlePermission * provide)
(org.osgi.framework.BundlePermission * host)
(org.osgi.framework.ServicePermission org.osgi.service.io.ConnectionFactory 
get)
(org.osgi.framework.ServicePermission org.osgi.service.io.ConnectorService 
register)
(java.util.PropertyPermission * read)
(java.net.SocketPermission * listen)


The above is an example of permissions.perm in an OSGi bundle jar file.  
OSGi calls these Local Permissions.  The semantics of OSGi are slightly 
different with permissions.perm defining the maximum allowable 
Permission's for a bundle, which when empty is only limited by 
AllPermission, which is not really what we want.


But we could have something similar eg:

META-INF/permissions.perm

A client could parse this information to determine the Permissions a 
proxy requires, the administrator can restrict by way of 
GrantPermissions, the Permission's Principal's or clients are allowed to 
grant to proxy's.


Some food for thought.

Cheers,

Peter.



Peter Firmstone wrote:
When deploying nodes in a Djinn, how do you currently manage your

Re: Distributed Garbage Collection Security - InvocationConstraints

2011-06-20 Thread Peter Jones
On Jun 19, 2011, at 5:37 AM, Peter Firmstone wrote:
 The easiest way to set DGC constraints would be via configuration.
 
 Perhaps the reason this hasn't been implemented previously is, the 
 constraints would apply to all services that use DGC, so if you've set 
 Authentication and Integrity as minimal constraints, then this would apply to 
 all services.

As mentioned offline, I'm not sure how much of this I can page into my 
consciousness at the moment.

Configuring the server-side constraints for DGC could probably be supported 
with additional parameters to the export process (and thus set via 
configuration).  The bigger issue, I think, is what client constraints to 
apply, and what client subject to use, when JERI's client-side DGC system 
(defined within BasicObjectEndpoint) makes dirty and clean calls on behalf 
of the application.  In the traditional RMI DGC model, those calls happen 
implicitly as remote references appear and disappear from a JVM.  But in the 
JERI security model, the client application controls the security behavior of 
remote calls by explicitly (with respect to the standard JERI layers) 
specifying constraints and controlling the current subject.

So when the system wants to make a dirty or clean call for a given remote 
reference (forget batching for the moment), what constraints to apply, or what 
subject to use?  There didn't seem to be an answer, without requiring the 
client application to interact with the DGC system more explicitly, which would 
be a significant change from the RMI DGC model-- and, I think, not something 
that seemed worth investing effort on at the time, especially given that Jini 
services didn't seem to make use of RMI's DGC functionality in practice anyway 
(instead they used higher-level leasing mechanisms to detect client failure, 
and most interest was around just being disable DGC for Jini services).

 Exporter's javadoc has the following statement regarding the force parameter 
 in unexport:
 
 QUOTE
 
 The |force| parameter serves to indicate whether or not the caller desires 
 the unexport to occur even if there are known remote calls pending or in 
 progress to the remote object that were made possible by this |Exporter|:
 
   * If |force| is |true|, then the remote object will be forcibly
 unexported even if there are remote calls pending or in progress,
 and this method will return |true|.
   * If |force| is |false|, then this acts as a hint to the
 implementation that the remote object should not be unexported if
 there are known remote calls pending or in progress, and this
 method will either unexport the remote object and return |true| or
 not unexport the remote object and return |false|. If the
 implementation detects that there are indeed remote calls pending
 or in progress, then it should return |false|; otherwise, it must
 return |true|. If the implementation does not support being able
 to unexport conditionally based on knowledge of remote calls
 pending or in progress, then it must implement this method as if
 |force| were always |true|.
 
 If the remote object is unexported as a result of this method, then the 
 implementation may (and should, if possible) prevent remote calls in progress 
 from being able to communicate their results successfully.
 
 /QUOTE
 
 I've updated the class Target that implements this functionality for 
 BasicJeriExporter, Target's unexport method now uses thread interruption to 
 attempt to interrupt and abort in process calls if force is specified.  
 Interruption has been successful with the current jeri qa tests and can be 
 seen in the exception output for some tests.

FWIW, aborting the execution of in-progress calls, such as via thread 
interruption, wasn't really the intent of that last sentence-- it was more that 
an implementation should feel free (or encouraged) to prevent communicating the 
eventual result of such a call, when control of the dispatching thread is 
returned to this layer of the system.

Cheers,

-- Peter



Re: Distributed Garbage Collection Security - InvocationConstraints

2011-06-19 Thread Peter Firmstone

The easiest way to set DGC constraints would be via configuration.

Perhaps the reason this hasn't been implemented previously is, the 
constraints would apply to all services that use DGC, so if you've set 
Authentication and Integrity as minimal constraints, then this would 
apply to all services.


If constraints cannot be satisfied for a particular service, then DGC 
would be disabled for that service?  DGC would still be enabled for 
other services that satisfied the constraints.


I'd like to see a secure by default release if possible.

Fixing River-142 made me realise River's DGC design is quite good, I've 
utilised some of the concurrent Java 5 features to fix the delayed 
garbage collection Lease processing issue, I haven't tested single 
thread performance, however it should scale better due to improved 
concurrency.


DGC could be quite useful for secure internet based services.

Exporter's javadoc has the following statement regarding the force 
parameter in unexport:


QUOTE

The |force| parameter serves to indicate whether or not the caller 
desires the unexport to occur even if there are known remote calls 
pending or in progress to the remote object that were made possible by 
this |Exporter|:


   * If |force| is |true|, then the remote object will be forcibly
 unexported even if there are remote calls pending or in progress,
 and this method will return |true|.
   * If |force| is |false|, then this acts as a hint to the
 implementation that the remote object should not be unexported if
 there are known remote calls pending or in progress, and this
 method will either unexport the remote object and return |true| or
 not unexport the remote object and return |false|. If the
 implementation detects that there are indeed remote calls pending
 or in progress, then it should return |false|; otherwise, it must
 return |true|. If the implementation does not support being able
 to unexport conditionally based on knowledge of remote calls
 pending or in progress, then it must implement this method as if
 |force| were always |true|.

If the remote object is unexported as a result of this method, then the 
implementation may (and should, if possible) prevent remote calls in 
progress from being able to communicate their results successfully.


/QUOTE

I've updated the class Target that implements this functionality for 
BasicJeriExporter, Target's unexport method now uses thread interruption 
to attempt to interrupt and abort in process calls if force is 
specified.  Interruption has been successful with the current jeri qa 
tests and can be seen in the exception output for some tests.


Cheers,

Peter.

Peter Firmstone wrote:
River doesn't currently offer constraints for DGC, it's currently 
vulnerable to attacks where the attacker knows the clientID, an 
attacker makes clean calls, clients are removed and the service is 
garbage collected, a simple DOS attack.


Should DGC be disabled in environments where security is a concern?

Wouldn't it be better to use constraints?

In reality DGC is just a service, used to preserve liveliness of 
remote objects by holding a strong reference (to another arbitrary 
service associated by using the same server jvm), while clients hold a 
lease or explicitly clean it.


DGC already uses the same endpoint as the associated service it 
provides the DGC service for.


Can anyone see a technical reason why constraints and Subject 
authentication cannot or should not be utilised?


Cheers,

Peter.















Distributed Garbage Collection Security

2011-06-18 Thread Peter Firmstone
River doesn't currently offer constraints for DGC, it's currently 
vulnerable to attacks where the attacker knows the clientID, an attacker 
makes clean calls, clients are removed and the service is garbage 
collected, a simple DOS attack.


Should DGC be disabled in environments where security is a concern?

Wouldn't it be better to use constraints?

In reality DGC is just a service, used to preserve liveliness of remote 
objects by holding a strong reference (to another arbitrary service 
associated by using the same server jvm), while clients hold a lease or 
explicitly clean it.


DGC already uses the same endpoint as the associated service it provides 
the DGC service for.


Can anyone see a technical reason why constraints and Subject 
authentication cannot or should not be utilised?


Cheers,

Peter.












RE: Security Re: Discovery V2 migration

2011-06-13 Thread Christopher Dolan
Dan,

I have no idea what would be a pragmatic migration, and that's the problem. I 
spent a couple of days last Decemeber trying to migrate from V1 to V2 (see 
http://www.mail-archive.com/river-user@incubator.apache.org/msg00197.html and 
followup messages). The blocker for me, ultimately, was that I had V1 clients 
in the field that lacked a 
META-INF/services/com.sun.jini.discovery.DiscoveryFormatProvider file. Those 
clients will never (NEVER!) be able to speak to a V2 registrar because those 
clients will not accept any DiscoveryFormatProvider that the server offers. 
Currently, there's no way for Reggie to simultaneously support V1 and V2 
clients (you probably wouldn't want that anyway because it would allow a 
downgrade attack).

My own sad migration strategy has been to roll out V1 clients with a very basic 
META-INF/services/com.sun.jini.discovery.DiscoveryFormatProvider file and hope 
that within a couple of years all of my old clients will be retired and I can 
push out an incompatible change to V2. But that seems unlikely because I still 
have clients dating back to 2006 released code (increasingly rare, thankfully).

Chris

-Original Message-
From: Dan Creswell [mailto:dan.cresw...@gmail.com] 
Sent: Friday, June 10, 2011 1:01 PM
To: dev@river.apache.org
Subject: Re: Security  Re: Discovery V2 migration

Okay, so what is a pragmatic migration then?

That tells us something about potential solutions. Although there
probably isn't one if we're saying we can't ever disrupt clients

On 10 June 2011 18:28, Christopher Dolan christopher.do...@avid.com wrote:
 But v2 isn't backward compatible with anything, even itself if you're missing 
 the META-INF/services/com.sun.jini.discovery.DiscoveryFormatProvider file. 
 I've tried to upgrade my djinn to v2, but it requires a flag day because any 
 existing v1 clients will never be able to speak to v2 servers if the clients 
 lack the provider file.

 I think a prerequisite for deprecation of v1 would be a pragmatic migration 
 path to v2. With the current v2 implementation, I don't think such a 
 migration is possible.

 Chris

 -Original Message-
 From: Dan Creswell [mailto:dan.cresw...@gmail.com]
 Sent: Friday, June 10, 2011 10:47 AM
 To: dev@river.apache.org
 Subject: Re: Security  Re: Discovery V2 migration

 Controversial position: Why don't we just deprecate the entirety of V1?

 That means less work to do, no nasty dark corner workarounds as we try
 and retain compatibility, a clear policy on what will work with what
 etc. Fact is V2 has been around so long that most people are surely
 using it by now?

 I just am not a fan of backward compatibility without some very good
 reasons, history shows this sort of holding onto the past to be a
 nightmare for all concerned. One needs to look no further than the JDK
 itself which is full of cruft and cut corners for the sake of
 compatibility.

 On 10 June 2011 08:51, Peter Firmstone j...@zeus.net.au wrote:
 _Unicast Discovery v2 - Unmarshalling Attack with Registrar proxy._

 During unicast discovery, we have the option of using SSL, Kerberos or x500
 discovery implementations, unfortunately, if the unicast discovery
 implementation being used doesn't comply with constraints for Authentication
 and Confidentiality, the constraints are re tried against the unmarshalled
 registrar proxy, bypassing the security benefits these implementations
 provide.

 I believe this was an oversight to allow codebase integrity constraints to
 bubble up as Unfulfilled Constraints to the upper layer, where they're
 checked against the unmarshalled proxy, unfortunately it appears to be a
 mistake to allow Authentication and Confidentiality constraints to bubble up
 during discovery.

 I think we should change this specifically for Authentication and
 Confidentiality constraints, if these are requested but not satisfied during
 discovery we should throw an UnsupportedConstraintException.

 Doing so avoids the DOS unmarshalling attack which is possible during
 unmarshalling of an unauthenticated registrar proxy.

 _Unicast Discovery v1_

 In light of Unicast Discovery v1's total lack of support for security, I
 believe we should deprecate it, for this to happen we also need to provide a
 way for existing deployments to migrate.

 LookupLocator performs unicast discovery v1.   ConstrainableLookupLocator
 extends LookupLocator and is used for v2 unicast discovery constraints.


 _But what about Multicast Discovery?

 _Multicast discovery produces a LookupLocator which is used by Unicast
 Discovery to retrieve a registrar proxy.

 _Multicast Request Protocol v1

 _Please add any security concerns here.

 No integrity support - how much of a problem is this?

 _Multicast Request Protocol v2

 _x500 integrity supported_

 __Multicast Announcement Protocol v1

 _Please add any security concerns here.

 No integrity support, packets can be modified in transit, but this doesn't
 seem to be much of a concern

Re: [Fwd: Re: Anonymity, Security - ProcessBuilder and Process]

2011-06-12 Thread Tom Hobbs
Always communicating in a separate JVM is going to have obvious performance
costs.  Do we know what they are and are they acceptable?  Is going to be
easy to turn off for.people who trust what they're downloading an don't want
to pay the perf costs etc?

On 11 Jun 2011 20:49, Peter Firmstone j...@zeus.net.au wrote:
 Dan Creswell wrote:
 On 8 June 2011 05:31, Peter Firmstone j...@zeus.net.au wrote:

 Phoenix wakes (Activates) up a Service when it's required on the server
 side. I haven't thought of a good name for it, but unlike Phoenix, the
 concept is to perform discovery, lookup and execute smart proxy's on
behalf
 of the client jvm at the client node, although I concede you could run a
 service from it also. Reflective proxy's would be used to make smart
 proxy's appear to the client as thought they're running in the same jvm.

 Process has some peculiarities when it comes to input and output
streams,
 they cannot block and thus require threads and buffers to ensure io
streams
 are always drained. Process uses streams over operating system pipes to
 communicate between the primary jvm and subprocess jvm.

 I've been toying around with some Jeri endpoints, specifically for
Process
 streams and pipes, still I'm not sure if I should consider it a secure
 method of communication just because it's local. Do you think I should
 encrypt the streams?



 So you want to use pipes?

 The answer to whether you want to encrypt the streams or not is down
 to what kind of threat you're trying to mitigate. And the threats
 possible are determined by what solution you adopt. Pipes are
 basically shared memory, what kind of attacks are you worrying about
 in that scenario?


 I guess the attacker would be someone who already has user access to a
 system, if that's the case, the game's probably lost for most systems.

 I'm trying to consider the semantics of such a connection with regard to
 InvocationConstratints.

 Integrity,
 Confidentiality,
 ServerAuthentication
 ClientAuthentication.

 It really doesn't support any of the above constraints, but we're not
 going to use it for discovery etc..

 The intended purpose is to isolate downloaded code in a separate jvm and
 communicate with it using a reflective proxy.



 Cheers,

 Peter.







Re: Security Re: Discovery V2 migration

2011-06-12 Thread Peter Firmstone

Peter Firmstone wrote:

_Unicast Discovery v2 - Unmarshalling Attack with Registrar proxy._

During unicast discovery, we have the option of using SSL, Kerberos or 
x500 discovery implementations, unfortunately, if the unicast 
discovery implementation being used doesn't comply with constraints 
for Authentication and Confidentiality, the constraints are re tried 
against the unmarshalled registrar proxy, bypassing the security 
benefits these implementations provide.





It appears that the above statement is incorrect.

The following information comes from existing documentation, code 
inspection and some additional unit tests I've constructed.


Security can be maintained, provided discovery constraints are set 
correctly and can prevent the unmarshalling attacks by an unknown 
registrar proxy.


Constraints set during discovery are just as powerful and compatible 
with those used by jeri.


MethodConstraints are set against various methods, containing 
InvocationConstraints, that are required or preferred.


Constraints are tested in 3 layers

The class DiscoveryConstraints is used to test for and satisfy the 
following constraints, which are tested against the lower TCP / IP 
Multicast layer:


   * UnicastSocketTimeout
   * ConnectionAbsoluteTime
   * MulticastMaxPacketSize
   * DiscoveryProtocolVersion
   * MulticastTimeToLive

There are two x500 multicast implementations that support the additional 
constraints for:


The multicast request protocol:

   * Integrity
   * ClientAuthentication
   * ClientMaxPrincipal
   * ClientMaxPrincipalType
   * ServerMinPrincipal - trivial support (ServerAuthentication and
 Delegation not supported)
   * DelegationAbsoluteTime - trivial support
   * DelegationRelativeTime - trivial support

The multicast announcement protocol:

   * Integrity
   * ServerAuthentication
   * ServerMinPrincipal
   * DelegationAbsoluteTime - trivial support (ClientAuthentication and
 Delegation not supported
   * DelegationRelativeTime - trivial support
   * ClientMaxPrincipal - trivial support
   * ClientMaxPrincipalType -trivial support
   * ClientMinPrincipal -trivial support
   * ClientMinPrincipalType -trivial support

DiscoveryConstraints.getUnfulfilledConstraints() filters out any 
constraints that can be satisfied by the lower layer and don't need to 
be satisfied by upper layers, however it also returns constraints that 
must be satisfied by the lower layer as well as upper layers.
The next layer consists of various Client and Server implementations, 
plaintext, kerberos, ssl, https that implement UnicastDiscoveryClient or 
UnicastDiscoveryServer.


Various constraints are supported based on the provider in use, note 
that multiple providers can be selected from, based on their 
compatibility with the set of constraints provided.  
ConstraintAlternatives also provides a list of constraints from which 
one must be satisfied (like an OR separated list).


For example ssl unicast discovery supports the same constraints as the 
jeri tls/ ssl endpoint:


   * ClientAuthentication
   * ClientMaxPrincipal
   * ClientMaxPrincipalType
   * ClientMinPrincipal
   * ClientMinPrincipalType
   * Confidentiality
   * ConfidentialityStrength
   * ConnectionAbsoluteTime
   * ConnectionRelativeTime
   * ConstraintAlternatives
   * Delegation
   * DelegationAbsoluteTime
   * DelegationRelativeTime
   * Integrity
   * InvocationConstraints
   * ServerAuthentication
   * ServerMinPrincipal;


The top layer is the Marshalling layer, only the Integrity constraint is 
passed up by EndpointInternals.getUnfulfilledConstraints and must be 
satisfied by all layers, when set on relevant MethodConstraints.


LookupDiscovery performs Multicast discovery and Unicast retrieval of 
registrar Proxy's using any provider or protocol.
LookupLocatorDiscovery performs Unicast retrieval of registrar proxy's 
using any provider or protocol.
ConstrainableLookupLocator performs Unicast retrieval of registrar 
proxy's using any provider or protocol.
LookupLocator performs Unicast retrieval of registrar proxy's using 
Discovery V1 only.


Kerberos can be subjected to timing attacks, the most secure providers are:

x500 sha-1 with dsa or rsa for multicast discovery (although sha-1 is 
getting weaker now and probably should be supplemented with a stronger 
hash), still this should be quite reasonable for a private network.


tls/ssl Provider for Unicast Discovery

tls/ssl Provider for Jeri

https Provider for Unicast Discovery

https Profider for Jeri.


With the right constraints, it is possible to prevent an unmarshalling 
attack by an unknown registrar proxy, authentication, authorisation, 
confidentiality and integrity is checked / performed prior to 
unmarshalling the proxy.


If the registrar (lookup service) also requires client authentication 
and authorisation, then services must be authorised prior to submitting 
their proxy's.


If we can place a limit on codebase downloads, or provide secure

Re: [Fwd: Re: Anonymity, Security - ProcessBuilder and Process]

2011-06-12 Thread Peter Firmstone

Tom Hobbs wrote:

Always communicating in a separate JVM is going to have obvious performance
costs.  Do we know what they are and are they acceptable? 


It's hard to say at this stage, without an implementation, but it will 
consume more resources.


I figure a good compromise would be that each registrar proxy be 
responsible for it's own jvm and any services it provides.  The client 
would be run from it's own jvm.


A separate JVM for remote code reduces the amount of client and platform 
code visible to proxy's.  Shared class (static) variables are not 
possible between client and downloaded code.  This would also allow 
different conflicting libraries to be kept separate.


The Isolates API would be more desirable, but not available.

Just an experiment at this stage, time will tell...  anyone wanting to 
help, sing out.



 Is going to be
easy to turn off for.people who trust what they're downloading an don't want
to pay the perf costs etc?
  

I hope so, haven't considered configuration at this stage.

Cheers,

Peter.


On 11 Jun 2011 20:49, Peter Firmstone j...@zeus.net.au wrote:
  

Dan Creswell wrote:


On 8 June 2011 05:31, Peter Firmstone j...@zeus.net.au wrote:

  

Phoenix wakes (Activates) up a Service when it's required on the server
side. I haven't thought of a good name for it, but unlike Phoenix, the
concept is to perform discovery, lookup and execute smart proxy's on


behalf
  

of the client jvm at the client node, although I concede you could run a
service from it also. Reflective proxy's would be used to make smart
proxy's appear to the client as thought they're running in the same jvm.

Process has some peculiarities when it comes to input and output


streams,
  

they cannot block and thus require threads and buffers to ensure io


streams
  

are always drained. Process uses streams over operating system pipes to
communicate between the primary jvm and subprocess jvm.

I've been toying around with some Jeri endpoints, specifically for


Process
  

streams and pipes, still I'm not sure if I should consider it a secure
method of communication just because it's local. Do you think I should
encrypt the streams?




So you want to use pipes?

The answer to whether you want to encrypt the streams or not is down
to what kind of threat you're trying to mitigate. And the threats
possible are determined by what solution you adopt. Pipes are
basically shared memory, what kind of attacks are you worrying about
in that scenario?

  

I guess the attacker would be someone who already has user access to a
system, if that's the case, the game's probably lost for most systems.

I'm trying to consider the semantics of such a connection with regard to
InvocationConstratints.

Integrity,
Confidentiality,
ServerAuthentication
ClientAuthentication.

It really doesn't support any of the above constraints, but we're not
going to use it for discovery etc..

The intended purpose is to isolate downloaded code in a separate jvm and
communicate with it using a reflective proxy.




Cheers,

Peter.



  


  




Re: [Fwd: Re: Anonymity, Security - ProcessBuilder and Process]

2011-06-12 Thread Gregg Wonderly
I just wonder if most of phoenix would not already be used.  Basically, we'd 
provide a service definition that would perform a lookup with a designated 
serviceID and then we'd lookup that service and use it locally.  That, for 
example would allow service endpoint actions for recovery from comms problems 
etc to just happen, as well as allow a crashed endpoint JVM to recover.

Gregg

Sent from my iPhone

On Jun 12, 2011, at 2:54 AM, Peter Firmstone j...@zeus.net.au wrote:

 Tom Hobbs wrote:
 Always communicating in a separate JVM is going to have obvious performance
 costs.  Do we know what they are and are they acceptable? 
 
 It's hard to say at this stage, without an implementation, but it will 
 consume more resources.
 
 I figure a good compromise would be that each registrar proxy be responsible 
 for it's own jvm and any services it provides.  The client would be run from 
 it's own jvm.
 
 A separate JVM for remote code reduces the amount of client and platform code 
 visible to proxy's.  Shared class (static) variables are not possible between 
 client and downloaded code.  This would also allow different conflicting 
 libraries to be kept separate.
 
 The Isolates API would be more desirable, but not available.
 
 Just an experiment at this stage, time will tell...  anyone wanting to help, 
 sing out.
 
 Is going to be
 easy to turn off for.people who trust what they're downloading an don't want
 to pay the perf costs etc?
  
 I hope so, haven't considered configuration at this stage.
 
 Cheers,
 
 Peter.
 
 On 11 Jun 2011 20:49, Peter Firmstone j...@zeus.net.au wrote:
  
 Dan Creswell wrote:

 On 8 June 2011 05:31, Peter Firmstone j...@zeus.net.au wrote:
 
  
 Phoenix wakes (Activates) up a Service when it's required on the server
 side. I haven't thought of a good name for it, but unlike Phoenix, the
 concept is to perform discovery, lookup and execute smart proxy's on

 behalf
  
 of the client jvm at the client node, although I concede you could run a
 service from it also. Reflective proxy's would be used to make smart
 proxy's appear to the client as thought they're running in the same jvm.
 
 Process has some peculiarities when it comes to input and output

 streams,
  
 they cannot block and thus require threads and buffers to ensure io

 streams
  
 are always drained. Process uses streams over operating system pipes to
 communicate between the primary jvm and subprocess jvm.
 
 I've been toying around with some Jeri endpoints, specifically for

 Process
  
 streams and pipes, still I'm not sure if I should consider it a secure
 method of communication just because it's local. Do you think I should
 encrypt the streams?
 
 

 So you want to use pipes?
 
 The answer to whether you want to encrypt the streams or not is down
 to what kind of threat you're trying to mitigate. And the threats
 possible are determined by what solution you adopt. Pipes are
 basically shared memory, what kind of attacks are you worrying about
 in that scenario?
 
  
 I guess the attacker would be someone who already has user access to a
 system, if that's the case, the game's probably lost for most systems.
 
 I'm trying to consider the semantics of such a connection with regard to
 InvocationConstratints.
 
 Integrity,
 Confidentiality,
 ServerAuthentication
 ClientAuthentication.
 
 It really doesn't support any of the above constraints, but we're not
 going to use it for discovery etc..
 
 The intended purpose is to isolate downloaded code in a separate jvm and
 communicate with it using a reflective proxy.
 
 

 Cheers,
 
 Peter.
 
 

  
 
  
 


Re: [Fwd: Re: Anonymity, Security - ProcessBuilder and Process]

2011-06-11 Thread Peter Firmstone

Dan Creswell wrote:

On 8 June 2011 05:31, Peter Firmstone j...@zeus.net.au wrote:
  

Phoenix wakes (Activates) up a Service when it's required on the server
side.  I haven't thought of a good name for it, but unlike Phoenix, the
concept is to perform discovery, lookup and execute smart proxy's on behalf
of the client jvm at the client node, although I concede you could run a
service from it also.  Reflective proxy's would be used to make smart
proxy's appear to the client as thought they're running in the same jvm.

Process has some peculiarities when it comes to input and output streams,
they cannot block and thus require threads and buffers to ensure io streams
are always drained.  Process uses streams over operating system pipes to
communicate between the primary jvm and subprocess jvm.

I've been toying around with some Jeri endpoints, specifically for Process
streams and pipes, still I'm not sure if I should consider it a secure
method of communication just because it's local.  Do you think I should
encrypt the streams?




So you want to use pipes?

The answer to whether you want to encrypt the streams or not is down
to what kind of threat you're trying to mitigate. And the threats
possible are determined by what solution you adopt. Pipes are
basically shared memory, what kind of attacks are you worrying about
in that scenario?
  


I guess the attacker would be someone who already has user access to a 
system, if that's the case, the game's probably lost for most systems.


I'm trying to consider the semantics of such a connection with regard to 
InvocationConstratints.


Integrity,
Confidentiality,
ServerAuthentication
ClientAuthentication.

It really doesn't support any of the above constraints, but we're not 
going to use it for discovery etc..


The intended purpose is to isolate downloaded code in a separate jvm and 
communicate with it using a reflective proxy.



  

Cheers,

Peter.




  




[Fwd: Re: Anonymity, Security - ProcessBuilder and Process]

2011-06-07 Thread Peter Firmstone


---BeginMessage---
Phoenix wakes (Activates) up a Service when it's required on the server 
side.  I haven't thought of a good name for it, but unlike Phoenix, the 
concept is to perform discovery, lookup and execute smart proxy's on 
behalf of the client jvm at the client node, although I concede you 
could run a service from it also.  Reflective proxy's would be used to 
make smart proxy's appear to the client as thought they're running in 
the same jvm.


Process has some peculiarities when it comes to input and output 
streams, they cannot block and thus require threads and buffers to 
ensure io streams are always drained.  Process uses streams over 
operating system pipes to communicate between the primary jvm and 
subprocess jvm.


I've been toying around with some Jeri endpoints, specifically for 
Process streams and pipes, still I'm not sure if I should consider it a 
secure method of communication just because it's local.  Do you think I 
should encrypt the streams?


Cheers,

Peter.



Gregg Wonderly wrote:
How do you see this as different or the same as the facilities that 
the Phoenix service, in River, already provides?


Gregg

On 6/6/2011 6:39 PM, Peter wrote:
There are trade off's using an additional jvm for isolation: the 
additonal overhead and memory footprint.


One option might be; use the subprocess jvm for unicast discovery, 
one jvm for each registrar, the subprocess jvm will also be used for 
running all service proxy's from that lookup service.  If there's an 
issue, the subprocess jvm can be destroyed.


This makes the registrar partly responsible for the subprocess jvm, 
if the lookup service contains a rouge service, then that lookup 
service suffers.  A method of communicating errors back to the 
registrar prior to exit might also be useful, perhaps output from 
stderr.


The client would communicate with the subprocess jvm using reflective 
proxy's and object streams over stdin stdout and stderr


The subprocess jvm would have very restricted permissions initially, 
a limited classpath using classloaders for isolation between proxys, 
dynamic permissions and possibly also revocable permissions, if 
required.


In effect downloaded code runs in separate jvm's, providing very good 
isolation.


This might form the basis of secure internet services.

Note, the idea of using a subprocess jvm was first suggested to me by 
Tim Blackmann to prevent unmarshalling attacks, this was a solution 
Sun's dev team were considering.


Cheers,

Peter.

- Original message -

Most on the list would be aware of the stillborn Isolates API, which
showed much promise fixing many of the issues the Java platform has in
supporting secure distributed code. EG:

   1. Class Visibility
   2. Subprocess Isolation.
   3. Unmarshalling attacks.

Released with Java 5 (which is now our minimum supported platform),
ProcessBuilder and Process, can be used to create a separate process 
for

downloaded code.

We can modify MarhalledInstance to include an authentication reflective
proxy from the originating server which created it (eg: the
MarshalledInstance contains a smart proxy), but this limits interaction
to services and clients that know each other.  Not much good if you've
got a mobile device and the lookup service you've just discovered is
unknown.People might want to remain anonymous for services where no
private information or currency changes hands, similarly to how the
internet currently operates.  You execute downloaded code, when running
web scripts for example.

Java Thread isolation isn't safe enough, you might remember I
experimented with it recently, we could prevent creation of new 
threads,

catch a thread stack overflow, but we can't kill the misbehaving thread
and client code on the classpath is still visible to untrusted code,
making it possible for references to security sensitive objects from
client and platform code to escape into insecure code.

I've also worked on Security Delegates, which prevent references to
security sensitive objects from escaping allowing revocation of
permissions.  I've created a custom security manager for this which 
also

caches the results of security checks for much faster repeated checks.

Using these features, we could completely isolate a smart proxy and
control the permissions and visibility of classes it has.

The isolation environment would contain a minimal subset of River,
Service API and any downloaded codebases the smart proxy needs.

A reflective proxy and invocation handler would be used to interact 
with

the isolated smart proxy.  No code would be downloaded by the client
jvm, this would also make it possible for the smart proxy jvm to 
utilise

a different version of River, that is serialization compatible, via a
codebase download.

I suppose we could even make it possible to isolate the ServiceUI too.

One other problem is the migration from Unicast Discovery V1 to V2, the
serialized form of Discovery V1

Anonymity, Security - ProcessBuilder and Process

2011-06-05 Thread Peter Firmstone
Most on the list would be aware of the stillborn Isolates API, which 
showed much promise fixing many of the issues the Java platform has in 
supporting secure distributed code. EG:


  1. Class Visibility
  2. Subprocess Isolation.
  3. Unmarshalling attacks.

Released with Java 5 (which is now our minimum supported platform),  
ProcessBuilder and Process, can be used to create a separate process for 
downloaded code.
 
We can modify MarhalledInstance to include an authentication reflective 
proxy from the originating server which created it (eg: the 
MarshalledInstance contains a smart proxy), but this limits interaction 
to services and clients that know each other.  Not much good if you've 
got a mobile device and the lookup service you've just discovered is 
unknown.   People might want to remain anonymous for services where no 
private information or currency changes hands, similarly to how the 
internet currently operates.  You execute downloaded code, when running 
web scripts for example.


Java Thread isolation isn't safe enough, you might remember I 
experimented with it recently, we could prevent creation of new threads, 
catch a thread stack overflow, but we can't kill the misbehaving thread 
and client code on the classpath is still visible to untrusted code, 
making it possible for references to security sensitive objects from 
client and platform code to escape into insecure code.


I've also worked on Security Delegates, which prevent references to 
security sensitive objects from escaping allowing revocation of 
permissions.  I've created a custom security manager for this which also 
caches the results of security checks for much faster repeated checks.


Using these features, we could completely isolate a smart proxy and 
control the permissions and visibility of classes it has.


The isolation environment would contain a minimal subset of River, 
Service API and any downloaded codebases the smart proxy needs.


A reflective proxy and invocation handler would be used to interact with 
the isolated smart proxy.  No code would be downloaded by the client 
jvm, this would also make it possible for the smart proxy jvm to utilise 
a different version of River, that is serialization compatible, via a 
codebase download.


I suppose we could even make it possible to isolate the ServiceUI too.

One other problem is the migration from Unicast Discovery V1 to V2, the 
serialized form of Discovery V1 is MarshalledObject, V2 uses 
MarshalledInstance.  Unicast Discovery V1 has no security considerations 
whatsoever, this exposes the client to a number of different attacks.  
We could use isolation to make Discovery V1 safer for existing code that 
uses Discovery V1 only.  To take advantage of Discovery V2 requires a 
recompile I believe.


Thoughts?

Peter.