I have started a new mail thread around "Usecases around Binary handling in
Oak" so as to first collect the kind of use cases we need to support. Once
we decide that, we can discuss the possible solutions.
So let's continue the discussion on that thread
Chetan Mehrotra
On Tue, May 17, 2016 at 12:31
Hi Oak-Devs
Just for the record: this topic has been discussed in an Adobe-internal
Oak coordination call last Wednesday.
Michael Marth first provided some background information and
we discussed the various concerns mentioned in this thread
and tried to identify the core issue(s).
Marcel,
Hi,
On 11 May 2016 at 14:21, Marius Petria wrote:
> Hi,
>
> I would add another use case in the same area, even if it is more
> problematic from the point of view of security. To better support load
> spikes an application could return 302 redirects to (signed) S3 urls such
Hi,
I would add another use case in the same area, even if it is more problematic
from the point of view of security. To better support load spikes an
application could return 302 redirects to (signed) S3 urls such that binaries
are fetched directly from S3.
(if this can already be done or
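The 302-redirect idea could be sketched as below. This is only an illustration of an HMAC-signed, expiring URL; real S3 pre-signed URLs use AWS Signature V4, and the bucket, key and secret here are made-up names:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of a signed, expiring URL for a blob, in the spirit
// of S3 pre-signed URLs. Not the real AWS SigV4 algorithm.
public class SignedUrlSketch {
    static String presign(String bucket, String key, String secret,
                          long expiryEpochSeconds) throws Exception {
        // Sign the method, object path and expiry so the URL cannot be
        // tampered with and stops working after the deadline.
        String toSign = "GET\n" + bucket + "/" + key + "\n" + expiryEpochSeconds;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String sig = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(toSign.getBytes(StandardCharsets.UTF_8)));
        return "https://" + bucket + ".s3.amazonaws.com/" + key
                + "?Expires=" + expiryEpochSeconds + "&Signature=" + sig;
    }

    public static void main(String[] args) throws Exception {
        // The application would put this URL in the Location header of a 302,
        // so the client fetches the binary directly from S3.
        System.out.println(presign("my-bucket", "datastore/abc123", "secret", 1893456000L));
    }
}
```

The server never streams the binary itself; it only hands out a short-lived capability to fetch it.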
Hi Chetan
IMHO your original mail didn't write down the fundamental analysis
but instead presented the solution. For both of the 2 cases I was
lacking the information _why_ this is needed.
Both have been answered in private conversations only (1 today in
the Oak call and 2 in a private discussion with
Hi Angela,
On Tue, May 10, 2016 at 9:49 PM, Angela Schreiber wrote:
> Quite frankly I would very much appreciate if you took the time to collect
> and write down the required (i.e. currently known and expected)
> functionality.
>
> Then look at the requirements and look what is
> what guarantees do/can we give re. this file handle within this context.
Can it suddenly go away (e.g. because of GC or internal re-organisation)?
How do we establish, test and maintain (e.g. protect from regressions) such
guarantees?
Logically it should not go away suddenly. So GC logic should be
Such an approach makes the API contract more explicit to the consumer by
providing a context outside of which there will be no guarantees for the
passed "file handle". However, there are still the issues of
- what guarantees do/can we give re. this file handle within this
context. Can it
Hi Angela,
On 10 May 2016 at 17:19, Angela Schreiber wrote:
> Hi Ian
>
> >Fair enough, provided there is a solution that addresses the issue Chetan
> >is trying to address.
>
> That's what we are all looking for :)
>
> >The alternative, for some applications, seems to store
Hi Ian
>Fair enough, provided there is a solution that addresses the issue Chetan
>is trying to address.
That's what we are all looking for :)
>The alternative, for some applications, seems to store the binary data
>outside Oak, which defeats the purpose completely.
You mean with the current
On 10.5.16 5:39 , Ian Boston wrote:
I don't have a perfect handle on the issue he is trying to address or what
would be an acceptable solution, but I suspect the only solution that is
not vulnerable by design will be a solution that abstracts all the required
functionality behind an Oak API (ie
On 10 May 2016 at 15:02, Angela Schreiber wrote:
> Hi Ian
>
> On 04/05/16 18:37, "Ian Boston" wrote:
> >[...] The locations will quite probably leak
> >outside the context of an Oak session so the API contract should make it
> >clear that the code using a
Hi
Same here... Francesco already summarised my concerns very nicely.
The links Michael provided below resonate what came to my mind
regarding past discussions around binary handling in the JCR/Jackrabbit
API and in Oak.
I also distinctly remember that one key argument for the current
design
Hi Ian
On 04/05/16 18:37, "Ian Boston" wrote:
>[...] The locations will quite probably leak
>outside the context of an Oak session so the API contract should make it
>clear that the code using a direct location needs to behave responsibly.
See my reply to Chetan, who was
Hi,
By processing independently I meant async, outside the callback, eg inside
a Mesos+Fenzo cluster [1], processors not running Oak.
Best Regards
Ian
[1] http://techblog.netflix.com/2015/08/fenzo-oss-scheduler-for-apache-mesos.html
On 10 May 2016 at 06:02, Chetan Mehrotra
Some more points around the proposed callback based approach
1. Enforcing read-only access to the exposed file -
The file provided within the BlobProcessor callback can be a symlink
created with an OS user account which only has read-only access. The symlink
can be removed
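The symlink idea might be sketched as follows. BlobProcessor and BlobStoreAccess are hypothetical names; the OS-level read-only enforcement via a separate user account is elided, and the sketch only shows the scoped exposure and revocation of the path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical callback interface: the raw file is only reachable inside
// process(), never as a long-lived reference.
interface BlobProcessor {
    void process(Path blobFile) throws IOException;
}

class BlobStoreAccess {
    // Expose the blob through a symlink for the duration of the callback,
    // then revoke access by removing the link.
    void withBlobFile(Path actualBlob, BlobProcessor processor) throws IOException {
        Path link = Files.createTempDirectory("blob-access")
                .resolve(actualBlob.getFileName());
        Files.createSymbolicLink(link, actualBlob);
        try {
            processor.process(link);
        } finally {
            Files.deleteIfExists(link);
        }
    }
}
```

Because the exposed path is a separate link, the store can invalidate it without touching the underlying blob, which gives Oak a well-defined window in which the location is valid.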
Hi,
Can the use cases presented by Chetan be solved the other way around? Instead
of exposing implementation details via the JCR/Oak API, maybe it is possible to
include the blob id in the S3 id/filename (a prefix?), such that external
applications can identify external resources based on their oak
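The naming convention suggested here could be as simple as the sketch below; the "oak-blob/" prefix is a hypothetical convention, not anything Oak defines:

```java
// Sketch: embed the Oak blob id in the S3 object key under a fixed prefix,
// so external applications can map S3 objects back to Oak binaries without
// any new Oak API. The prefix is a made-up convention.
public class BlobKeyConvention {
    static final String PREFIX = "oak-blob/";

    static String s3KeyFor(String blobId) {
        return PREFIX + blobId;
    }

    // Returns the blob id for a key that follows the convention, else null.
    static String blobIdFor(String s3Key) {
        return s3Key.startsWith(PREFIX) ? s3Key.substring(PREFIX.length()) : null;
    }
}
```

The mapping is purely lexical, so an external processor listing the bucket can recognise Oak-owned objects without ever opening a JCR session.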
Hi,
Thinking about the validity of the File and S3 Objects
I thought the consumers of this API want things like the absolute path of
the File in the BlobStore, or the bucket and key of the S3 Object, so that
they could transmit it and use it for processing independently of Oak
outside the
Had an offline discussion with Michael on this and explained the use case
requirement in more detail. One concern that has been raised is that such
a generic adaptTo API is too inviting for improper use, and Oak does not
have any context around when this URL is exposed or for how long it is used.
So
On 9.5.16 11:43 , Chetan Mehrotra wrote:
To highlight - as mentioned earlier, the user of the proposed API is tying
itself to implementation details of Oak, and if this changes later then that
code would also need to be changed. Or as Ian summed it up
if the API is introduced it should create an
To highlight - as mentioned earlier, the user of the proposed API is tying
itself to implementation details of Oak, and if this changes later then that
code would also need to be changed. Or as Ian summed it up
> if the API is introduced it should create an out of band agreement with
the consumers of
Hi,
I very much share Francesco's concerns here. Unconditionally exposing
access to operating system resources underlying Oak's inner workings is
troublesome for various reasons:
- who owns the resource? Who coordinates (concurrent) access to it and
how? What are the correctness and
On Thu, May 5, 2016 at 5:07 PM, Francesco Mari
wrote:
>
> This is a totally different thing. The change to the node will be committed
> with the privileges of the session that retrieved the node. If the session
> doesn't have enough privileges to delete that node, the
2016-05-05 13:22 GMT+02:00 Chetan Mehrotra :
> On Thu, May 5, 2016 at 4:38 PM, Francesco Mari
> wrote:
>
> > The security concern is quite easy to explain: it's a bypass of our
> > security model. Imagine that, using a session with the
On Thu, May 5, 2016 at 4:38 PM, Francesco Mari
wrote:
> The security concern is quite easy to explain: it's a bypass of our
> security model. Imagine that, using a session with the appropriate
> privileges, a user accesses a Blob and adapts it to a file handle, an S3
>
The security concern is quite easy to explain: it's a bypass of our
security model. Imagine that, using a session with the appropriate
privileges, a user accesses a Blob and adapts it to a file handle, an S3
bucket or a URL. This code passes this reference to another piece of code
that modifies
> This proposal introduces a huge leak of abstractions and has deep security
implications.
I understand the leak of abstractions concern. However I would like to
understand the security concern a bit more.
One way I can think of that it could cause a security concern is if you have
some malicious code
+1 to Francesco's concerns, exposing the location of a binary at the
application level doesn't sound good from a security perspective.
To me it sounds like breaching the JCR and NodeState layers to directly
manipulate NodeStore binaries (from the DataStore), e.g. to perform smart
replication across
On Wed, May 4, 2016 at 10:07 PM, Ian Boston wrote:
> If the File or URL is writable, will writing to the location cause issues
> for Oak ?
>
Yes, that would cause problems. The expectation here is that code using a direct
location needs to behave responsibly.
Chetan Mehrotra
This proposal introduces a huge leak of abstractions and has deep security
implications.
I guess that the reason for this proposal is that some users of Oak would
like to perform some operations on binaries in a more performant way by
leveraging the way those binaries are stored. If this is the
On 04/05/2016 17:37, Ian Boston wrote:
> Hi,
> If the File or URL is writable, will writing to the location cause issues
> for Oak ?
> IIRC some Oak DS implementations use a digest of the content to determine
> the location in the DS, so changing the content via Oak will change the
> location, but
On 03/05/2016 15:36, Chetan Mehrotra wrote:
> ...
> //Check if Binary is of type AdaptableBinary
> if (binProp instanceof AdaptableBinary){
Would it be possible to avoid the `instanceof`? Which means, in my
opinion, all our binaries should be Adaptable. In case the
implementation is not, it can
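The suggestion to drop the `instanceof` check could look like the sketch below: every Binary exposes adaptTo() directly and returns null when the backing store has no such representation. All names here are hypothetical, not the actual Oak API:

```java
import java.io.File;

// Hypothetical adaptable Binary: callers ask for a representation and
// handle null instead of checking concrete types with instanceof.
interface Binary {
    <T> T adaptTo(Class<T> type);
}

// A binary held purely in memory has no File to hand out.
class InMemoryBinary implements Binary {
    public <T> T adaptTo(Class<T> type) {
        return null;
    }
}

// A file-backed binary can expose its File when asked for one.
class FileBackedBinary implements Binary {
    private final File file;
    FileBackedBinary(File file) { this.file = file; }

    public <T> T adaptTo(Class<T> type) {
        return type == File.class ? type.cast(file) : null;
    }
}
```

Callers then write `File f = binary.adaptTo(File.class);` and fall back to the regular stream API when the result is null, so no implementation type ever leaks into application code.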
Hi,
If the File or URL is writable, will writing to the location cause issues
for Oak ?
IIRC some Oak DS implementations use a digest of the content to determine
the location in the DS, so changing the content via Oak will change the
location, but changing the content via the File or URL won't. If
Hi Team,
For OAK-1963 we need to allow access to the actual Blob location, say in the
form of a File instance or an S3 object id etc. This access is needed to perform
optimized IO operations around the binary object e.g.
1. The File object can be used to spool the file content with zero copy
using NIO by accessing the
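The zero-copy spooling mentioned here can be sketched with FileChannel.transferTo, which lets the OS move the bytes to the target channel (e.g. a socket) without copying them through the JVM heap; ZeroCopySpool is a made-up name for illustration:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: given direct access to the blob's file, hand its bytes to the
// output channel via transferTo, avoiding user-space buffer copies.
class ZeroCopySpool {
    static long spool(Path blobFile, WritableByteChannel out) throws IOException {
        try (FileChannel in = FileChannel.open(blobFile, StandardOpenOption.READ)) {
            long size = in.size();
            long sent = 0;
            // transferTo may transfer fewer bytes than requested, so loop.
            while (sent < size) {
                sent += in.transferTo(sent, size - sent, out);
            }
            return sent;
        }
    }
}
```

This is exactly the kind of optimisation that requires a real File, which is why the thread keeps coming back to how (and whether) Oak should expose one.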