Build Update for apache/jackrabbit-oak
-
Build: #3399
Status: Passed
Duration: 2702 seconds
Commit: 5400e18e5feefef3d74beb11f80ff77bc9faa644 (trunk)
Author: Tobias Bocanegra
Message: @trivial remove unused export annotations
git-svn-id: https://svn.apache.org/
Build Update for apache/jackrabbit-oak
-
Build: #3398
Status: Passed
Duration: 2541 seconds
Commit: dcde7bfa6491e2141d4f323e2dba5c77d67b7c5f (trunk)
Author: Tobias Bocanegra
Message: @trivial fix license checks
git-svn-id: https://svn.apache.org/repos/asf/jack
Hi,
On Tue, Feb 18, 2014 at 1:25 PM, Michael Dürig wrote:
> The check fails for me due to a missing license header:
>
> Unapproved licenses:
>
> /Users/mduerig/Checkouts/apache/jackrabbit/jackrabbit-dev/target/jackrabbit-oak-0.17/zip/jackrabbit-oak-0.17/oak-auth-ldap/src/test/resources/org/apache
The check fails for me due to a missing license header:
Unapproved licenses:
/Users/mduerig/Checkouts/apache/jackrabbit/jackrabbit-dev/target/jackrabbit-oak-0.17/zip/jackrabbit-oak-0.17/oak-auth-ldap/src/test/resources/org/apache/jackrabbit/oak/security/authentication/ldap/apache-ds-tutorial.ldif
Hi,
On Tue, Feb 18, 2014 at 6:16 AM, Chetan Mehrotra wrote:
> That would also work, with the caveat that intermediate layers do not
> decorate the InputStream in any form.
Right, good point. At the moment we don't decorate streams within Oak,
and I don't see any big reasons why we'd want to start do
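The caveat can be illustrated with a short sketch: an `instanceof` check on the raw stream type stops working as soon as any layer wraps the stream. This is illustrative only; `canUseFileChannel` is a hypothetical helper, not Oak API.

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class DecorationCaveat {
    // The optimization only applies when the stream is directly a
    // FileInputStream; any decorator in between hides the concrete type.
    static boolean canUseFileChannel(InputStream in) {
        return in instanceof FileInputStream;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("oak", ".bin");
        try (FileInputStream fin = new FileInputStream(tmp.toFile())) {
            System.out.println(canUseFileChannel(fin));                          // true
            System.out.println(canUseFileChannel(new BufferedInputStream(fin))); // false
        } finally {
            Files.delete(tmp);
        }
    }
}
```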
A candidate for the Jackrabbit Oak 0.17 release is available at:
https://dist.apache.org/repos/dist/dev/jackrabbit/oak/0.17/
The release candidate is a zip archive of the sources in:
https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-0.17/
The SHA1 checksum of the archive
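The checksum can be verified locally; below is a sketch in plain Java (in practice `sha1sum` or a similar tool run against the downloaded archive is the usual route, and the `Sha1Check` class is only illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha1Check {
    // Hex-encodes the SHA1 digest of the given bytes, in the same format
    // as the .sha1 file published alongside the release archive.
    static String sha1Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    public static void main(String[] args) {
        // Standard test vector: SHA-1("abc")
        System.out.println(sha1Hex("abc".getBytes(StandardCharsets.UTF_8)));
        // prints a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```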
The Buildbot has detected a restored build on builder oak-trunk while building
ASF Buildbot.
Full details are available at:
http://ci.apache.org/builders/oak-trunk/builds/4413
Buildbot URL: http://ci.apache.org/
Buildslave for this Build: osiris_ubuntu
Build Reason: scheduler
Build Source Stamp:
The Buildbot has detected a new failure on builder oak-trunk while building ASF
Buildbot.
Full details are available at:
http://ci.apache.org/builders/oak-trunk/builds/4411
Buildbot URL: http://ci.apache.org/
Buildslave for this Build: osiris_ubuntu
Build Reason: scheduler
Build Source Stamp:
hi,
In CRX we solved this problem by having CRX provide the "blob factory",
i.e. CRX already creates the appropriate structures for exactly the
upload case, and the multipart handler then just uses the blob for writing
the output, too. Unfortunately we never got this into the JCR spec
(IIRC).
Also, movi
https://issues.apache.org/jira/browse/OAK-1434
On 18/02/14 17:00, "Angela Schreiber" wrote:
>hi jukka
>
>that would be fine with me.
>so, i would suggest we adjust the various test setups before starting
>with the refactoring.
>
>i will create a jira issue that allows us to track the progress of
hi jukka
that would be fine with me.
so, i would suggest we adjust the various test setups before starting
with the refactoring.
i will create a jira issue that allows us to track the progress of this
dependency cleanup.
kind regards
angela
On 18/02/14 16:55, "Jukka Zitting" wrote:
>Hi,
>
>On
Hi,
On Tue, Feb 18, 2014 at 9:49 AM, Angela Schreiber wrote:
> in the regular code i got rid of the dependencies but there are
> still quite some tests left that currently have a dependency on
> oak-mk, namely the MicroKernelImpl. it would feel wrong to change that
> just for the sake of getting r
hi michael
>Additionally we should go through the TODO/FIXME annotations and
>
>- remove the ones that are obsolete,
>- update/clarify the ones that are outdated/unclear,
>- create issues for the ones we deem necessary and add the issue
>reference to the TODO/FIXME.
right now we have 473 TODO/FIXME
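A rough way to arrive at such a count is a plain marker scan over the sources. A minimal sketch, applied per file; the `AnnotationCount` class is illustrative, and the 473 figure comes from the mail, not from this code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AnnotationCount {
    private static final Pattern MARKER = Pattern.compile("\\b(TODO|FIXME)\\b");

    // Counts TODO/FIXME markers in a single source text; summed over
    // every file in the tree this yields the project-wide total.
    static int count(String source) {
        Matcher m = MARKER.matcher(source);
        int n = 0;
        while (m.find()) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String sample = "// TODO add issue reference\n// FIXME obsolete?\n";
        System.out.println(count(sample)); // 2
    }
}
```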
hi jukka
>On Tue, Feb 18, 2014 at 7:14 AM, Angela Schreiber wrote:
>> variant b:
>> only move the json and utility related code to oak-commons but create
>> a new dedicated module for the blob store related code. while this would
>> look more natural to me, i know that adding a new module has been
Hi,
On Tue, Feb 18, 2014 at 7:14 AM, Angela Schreiber wrote:
> variant b:
> only move the json and utility related code to oak-commons but create
> a new dedicated module for the blob store related code. while this would
> look more natural to me, i know that adding a new module has been quite
> co
hi michael
sounds good to me.
regards
angela
On 18/02/14 14:17, "Michael Dürig" wrote:
>
>
>On 18.2.14 12:03, Angela Schreiber wrote:
>> so, unless there is strong opposition to doing a bit of cleanup (some
>> may think it's still too early), i would like us to come up with some
>> improvemen
On 18.2.14 12:03, Angela Schreiber wrote:
so, unless there is strong opposition to doing a bit of cleanup (some
may think it's still too early), i would like us to come up with some
improvements and ideas in dedicated issues or discussions on the list.
Another point that came up over lunch i
On 18.2.14 12:03, Angela Schreiber wrote:
so, unless there is strong opposition to doing a bit of cleanup (some
may think it's still too early), i would like us to come up with some
improvements and ideas in dedicated issues or discussions on the list.
+1
Additionally we should go through t
hi
i looked at the dependencies listed with the oak-core module
and was wondering why it depends on a specific mk implementation,
while i was assuming that oak-core should be independent of the
mk/node store implementation being used.
looking at it in more detail revealed that the dependency is
On 18.2.14 12:03, Angela Schreiber wrote:
so, unless there is strong opposition to doing a bit of cleanup (some
may think it's still too early), i would like us to come up with some
improvements and ideas in dedicated issues or discussions on the list.
+1
Additionally we should go through
On Tue, Feb 18, 2014 at 3:48 PM, Jukka Zitting wrote:
> Something like S3InputStream.getURL() should work just fine for that use case:
That would also work, with the caveat that intermediate layers do not
decorate the InputStream in any form.
Chetan Mehrotra
hi
since we will sooner or later approach an official 1.0 release, i
would like us to invest a couple of hours thinking about maturing our
code base.
apart from providing missing functionality and working on scalability and
performance (which is mostly covered by JIRA issues), we may also w
Hi,
On Tue, Feb 18, 2014 at 4:41 AM, Chetan Mehrotra wrote:
> That mode is fine for cases like FileInputStream but not for cases like
> S3, where the underlying data is just a URL and the DataStore needs to
> make use of that.
Something like S3InputStream.getURL() should work just fine for that use case:
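`S3InputStream.getURL()` is not existing Oak API at this point; a minimal sketch of what such a reference-carrying stream could look like, with all class and method names hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical sketch: a stream subclass that also exposes the remote
// location of its data, so a DataStore can keep "pass by reference"
// semantics instead of copying the bytes.
class S3InputStream extends ByteArrayInputStream {
    private final String url;

    S3InputStream(byte[] data, String url) {
        super(data);
        this.url = url;
    }

    // A DataStore that knows this type can store the URL directly.
    String getURL() {
        return url;
    }
}

public class ReferenceDemo {
    static String storeLocation(InputStream in) {
        // Consumers that understand the subclass use the reference;
        // everyone else just reads the stream as usual.
        if (in instanceof S3InputStream) {
            return ((S3InputStream) in).getURL();
        }
        return "inline";
    }

    public static void main(String[] args) {
        InputStream s3 = new S3InputStream(new byte[0], "s3://bucket/key");
        System.out.println(storeLocation(s3)); // s3://bucket/key
    }
}
```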
On Tue, Feb 18, 2014 at 2:36 PM, Ian Boston wrote:
> Upload the binary via some other mechanism, then give Oak a
> pointer to the location, rather than giving Oak the binary.
Yup. As explained in the mail before, I am looking for "pass by reference"
semantics (somewhat related to JCR-3534).
Looki
On Tue, Feb 18, 2014 at 2:32 PM, Jukka Zitting wrote:
> Good point. That use case would probably be best handled with a
> specific InputStream subclass like suggested by Felix for files.
That mode is fine for case like FileInputStream but not for case like
S3 where underlying data is just a url a
Hi,
On 18 February 2014 08:50, Felix Meschberger wrote:
> Hi
>
> That was my first thought, too: Nothing prevents the Binary implementation
> from checking whether the InputStream is a FileInputStream and then accessing
> the FileChannel from it.
>
> In the concrete case of Sling, the Sling Reques
Hi,
On Tue, Feb 18, 2014 at 3:50 AM, Felix Meschberger wrote:
> That was my first thought, too: Nothing prevents the Binary implementation
> from checking whether the InputStream is a FileInputStream and then accessing
> the FileChannel from it.
Right, but then there's no easy way for the imple
Hi,
On Tue, Feb 18, 2014 at 2:25 AM, Chetan Mehrotra wrote:
> If we can have a way to create JCR Binary implementations which
> enable the DataStore/BlobStore to efficiently transfer content, then
> that would help.
ValueFactory.createBinary(InputStream stream)
The problem here, as far as I can see
Hi
That was my first thought, too: Nothing prevents the Binary implementation from
checking whether the InputStream is a FileInputStream and then accessing the
FileChannel from it.
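The pattern being discussed can be sketched as follows; this is illustrative only, a real Binary implementation inside Oak would have more to consider, and `copyTo` is a hypothetical helper:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class ChannelCopy {
    // If the stream is a plain FileInputStream, copy via its FileChannel
    // (which the OS can often turn into an efficient transfer); otherwise
    // fall back to ordinary stream copying.
    static long copyTo(InputStream in, Path target) throws IOException {
        if (in instanceof FileInputStream) {
            FileChannel src = ((FileInputStream) in).getChannel();
            try (FileChannel dst = FileChannel.open(target,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                return src.transferTo(0, src.size(), dst);
            }
        }
        return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Files.write(src, "hello".getBytes());
        Path dst = Files.createTempFile("dst", ".bin");
        try (FileInputStream fin = new FileInputStream(src.toFile())) {
            System.out.println(copyTo(fin, dst)); // 5
        }
        Files.delete(src);
        Files.delete(dst);
    }
}
```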
In the concrete case of Sling, the Sling RequestParameter.getInputStream()
happens to call the Commons Upload FileIt