Re: VOTE: Migrate from Subversion to Git

2023-10-20 Thread Julian Sedding
[X] +1, migrate Jackrabbit to Git

Regards
Julian

On Thu, Oct 12, 2023 at 6:57 AM Julian Reschke  wrote:
>
> On 12.10.2023 06:55, Julian Reschke wrote:
> > On 11.10.2023 19:58, Konrad Windszus wrote:
> >> ...
> >
> > [X] +1, migrate Jackrabbit to Git
> >
> > Best regards, Julian
>
> ...and when we get to it, please be consistent with Oak (in naming the
> dev branch "trunk").
>
> Best regards, Julian


Please review fix for GC on SegmentStore setup with SplitPersistence

2022-10-21 Thread Julian Sedding
Hello

I would appreciate a review of PR #665 [0], which fixes OAK-9897 [1].

When running GC on a SegmentStore setup with SplitPersistence, it
happens regularly that the tar archives of the "read-only" part of the
persistence are identified for removal during the "cleanup" phase.
However, these can never be deleted (they are read-only), which leads
the FileReaper thread to retry deleting them over and over again. I
noticed the issue while writing tests, but I am confident this also
happens in production systems. The impact AFAICS is limited to some
warning logs and excess resource usage.

To address the issue I introduced an API change: I added the method
"SegmentArchiveManager#isReadOnly(String archiveName)" with a default
implementation returning "false". This allows read-only archives to be
excluded from the cleanup process (both the "mark" and the "sweep"
phases).

Thank you for your comments.

Regards
Julian

[0] https://github.com/apache/jackrabbit-oak/pull/665
[1] https://issues.apache.org/jira/browse/OAK-9897


Intent to backport OAK-9785

2022-10-05 Thread Julian Sedding
Hello

I intend to backport "OAK-9785 - Tar SegmentStore can be corrupted
during compaction" to the 1.22 branch. The fix hardens TAR compaction
by aborting it cleanly not only when an IOException is caught, but
also when any other Throwable is caught.
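
In essence the error handling changes roughly as follows (simplified
sketch with made-up method names, not the literal code of the fix):

    try {
        compact();                      // stand-in for the TAR compaction step
    } catch (IOException e) {
        abortCompactionCleanly(e);      // previously the only case handled
    } catch (Throwable t) {
        abortCompactionCleanly(t);      // OAK-9785: also abort cleanly on any other Throwable
    }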

Let me know if you have any concerns.

Regards
Julian

[0] https://issues.apache.org/jira/browse/OAK-9785


Re: [VOTE] Release Apache Jackrabbit 2.16.10

2022-09-07 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit 2.16.10

...where...

[INFO] Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
[INFO] OS name: "mac os x", version: "12.4", arch: "x86_64", family: "mac"
[INFO] Java version: 11.0.14, vendor: Oracle Corporation, runtime:
/Library/Java/JavaVirtualMachines/jdk-11.0.14.jdk/Contents/Home
[INFO] MAVEN_OPTS:

Regards
Julian

On Wed, Sep 7, 2022 at 7:41 AM Julian Reschke  wrote:
>
> Am 07.09.2022 um 07:38 schrieb Julian Reschke:
> > ...
>
> [X] +1 Release this package as Apache Jackrabbit 2.16.10
>
> ...where...
>
> > [INFO] Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
> > [INFO] OS name: "windows 10", version: "10.0", arch: "amd64", family: 
> > "windows"
> > [INFO] Java version: 11.0.15, vendor: Oracle Corporation, runtime: 
> > C:\usr\local\jdk-11.0.15
> > [INFO] MAVEN_OPTS: -Xmx2g
>
> Best regards, Julian


OAK-9896 - Running unit-tests in IntelliJ does not work

2022-08-22 Thread Julian Sedding
Hi

I am having issues when running unit-tests within IntelliJ IDEA. I can
work around the issue, but it's a bit cumbersome. Therefore I would
like to apply the change proposed in PR #664 [0], which addresses
OAK-9896 [1].

Do others experience the same problem running unit-tests in IntelliJ?
Does anyone object to the proposed change?

Regards
Julian

[0] https://github.com/apache/jackrabbit-oak/pull/664
[1] https://issues.apache.org/jira/browse/OAK-9896


Plan to merge changes for OAK-9888 - Support more flexible SplitPersistence setups via OSGi

2022-08-22 Thread Julian Sedding
Hi

I am planning to merge PR #663 [0] on Wednesday. The changes address
OAK-9888 [1] and are intended to allow more flexibility when creating
a SplitPersistence. They affect the segment-tar and the segment-azure
modules.

Please let me know if you see any issues with this change.

Regards
Julian

[0] https://github.com/apache/jackrabbit-oak/pull/663
[1] https://issues.apache.org/jira/browse/OAK-9888


Re: Outdated 1.22 branch

2021-07-14 Thread Julian Sedding
Hi Andrei

I assume you would need to adjust the SCM Info in the pom of the
"1.22" branch in Git (compare with the pom in "trunk" branch). The SVN
branches are dead after migration to Git AFAIK.

I don't know if other adjustments are required. Konrad would likely know.

Regards
Julian

On Wed, Jul 14, 2021 at 2:44 PM Andrei Dulceanu  wrote:
>
> Hi all,
>
> I have a quick question for you: are all Oak svn branches still updated
> after migration to Git?
>
> I’m trying to cut 1.22.8 release and obviously it fails if I try it under
> git checkout, so I moved in the svn checkout, but then this seems outdated:
>
> jackrabbit-oak-1-22 dulceanu$ svn log -l 1
> 
> r1888717 | mreutegg | 2021-04-13 12:58:52 +0300 (Tue, 13 Apr 2021) | 3 lines
> OAK-9393: Release Oak 1.22.7
> Fix version of oak-doc and oak-doc-railroad-macro after release
>
>
> I quickly checked what’s the status for trunk and this one is up-to-date:
>
> jackrabbit-oak dulceanu$ svn log -l 1
> 
> r1890974 | miroslav | 2021-06-22 18:33:17 +0300 (Tue, 22 Jun 2021) | 1 line
> OAK-9469 in case of the timeout in AzureRepositoryLock, retry renewing the lease
> 
>
> How can I progress with cutting 1.22.8 release? Any ideas?
>
> Regards,
> Andrei


Re: Adjustments of check-release.sh for multi module releases

2020-08-03 Thread Julian Sedding
Hi

Combining the empty local repo with a fallback to the "full" local
repo can speed things up a lot. I have used this strategy in the past
to "fill" minimal customer-specific local repos to enable offline
work. That requires a profile and thus a settings.xml, but that should
all be scriptable I suppose. The only difficulty could be if someone
doesn't have their local repo in the default location.

An alternative to providing a settings.xml could be to set a profile
name for the "check"-build and provide documentation on how to
configure a profile that uses the local repository as a remote
repository for the build. That would allow everyone to opt in to having
a fast build while keeping local repositories clean (the temporary
local repo would obviously need to be removed after the build).
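
To illustrate, such a profile in settings.xml could look roughly like
this (untested sketch; the profile and repository ids are made up, and
the path assumes the default local repository location):

    <profile>
      <id>local-repo-as-remote</id>
      <repositories>
        <repository>
          <id>local-as-remote</id>
          <url>file://${user.home}/.m2/repository</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>local-as-remote-plugins</id>
          <url>file://${user.home}/.m2/repository</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>

The check-build would then run with something like
"mvn -Plocal-repo-as-remote -Dmaven.repo.local=/tmp/empty-check-repo ...".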

WDYT?

Regards
Julian

On Mon, Aug 3, 2020 at 12:19 PM Robert Munteanu  wrote:
>
> On Mon, 2020-08-03 at 12:15 +0200, Konrad Windszus wrote:
> > We can even leverage a custom (empty) local repo via "-
> > Dmaven.repo.local" which can be thrown away after release
> > verification.
> > That way one would notice references which are no longer available
> > publically (for whatever reason).
> > That would delay the release check though, as you would need to
> > redownload all necessary plugins/dependencies
>
> Hm, that sounds interesting. The question is what is the relative
> increaase of the check time. I run the validation in a console and then
> switch away since it takes too long to wait for it.
>
> If it's 10-20% longer I think that's fine. If it's 5x longer it's
> probably a no-go.
>
> Thanks,
> Robert
>


Re: [Proposal] Feature toggles

2020-07-09 Thread Julian Sedding
Hi Marcel

On Tue, Jul 7, 2020 at 12:02 PM Marcel Reutegger
 wrote:
>
> Hi,
>
> Thanks for the feedback Julian.
>
> On 07.07.20, 10:45, "Julian Sedding"  wrote:
> > I'm not sure about the aspect of the implementation, that FeatureToggle
> > is Closeable and probably often short-lived. Given that the
> > FeatureToggleAdapter is registered with the whiteboard, and thus likely
> > with the OSGi service registry, this _may_ put unnecessary load on the
> > service registry.
>
> If used as a short-lived object, that is indeed a problem. My intention
> with the FeatureToggle is actually that it is long-lived, though it can
> obviously also be used differently. The try-with-resource block in the
> tests is just convenient.

It seems I misinterpreted the use of try-with-resource to indicate
short-lived toggles. I don't think it's possible to enforce long-lived
toggles, but it can certainly be encouraged in documentation. If it
turns out that we get problems with short-lived toggles, they can
still be solved later. I think your API would allow such changes in
the future.

>
> > And lastly, even if a FeatureToggleAdapter is already registered for a
> > feature, a new service would be registered if the same code was run in a
> > second thread.
>
> This is by design. It is valid to have multiple feature toggles registered
> with the same name. It's not the primary use case, but they can be used
> that way.

Ack. I assume they would get the same enabled/disabled state.

>
> > From an OSGi perspective, I would lean towards a long-lived singleton
> > service that can be toggled. The FeatureToggle could then be adjusted to
> > retrieve the matching service if available, or otherwise register its
> > own.
>
> I'm not sure I understand. Can you elaborate what you have in mind?

I meant that the implementation of Feature.newFeatureToggle() (maybe
rename to newFeature after the class name changes?) could be adjusted
from "always registering a FeatureToggle" to "returning an existing
FeatureToggle service with the same name and registering a new one only
if none is available". I'm not sure this would work, given that you
stated above "it is valid to have multiple feature toggles registered
with the same name", even though I don't understand the benefit of
registering multiple toggles.
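
In pseudo-code, what I had in mind is roughly this (hypothetical
sketch; the helper names are made up and the lookup would need to be
worked out against the Whiteboard API):

    FeatureToggle newFeatureToggle(String name, Whiteboard whiteboard) {
        // made-up helper: find toggles already registered via the whiteboard
        for (FeatureToggle existing : lookupRegisteredToggles(whiteboard)) {
            if (name.equals(existing.getName())) {
                return existing;        // reuse the toggle registered earlier
            }
        }
        // made-up helper: register a new toggle and return it
        return registerNewFeatureToggle(name, whiteboard);
    }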

>
> > Regarding the API, I would probably rename FeatureToggle to Feature and
> > FeatureToggleAdapter to FeatureToggle. But that's of course a matter of
> > taste.
>
> Thanks for the suggestion. I like it.

:)

>
> > Also, I would add an "isEnabled" method to FeatureToggleAdapter, in
> > order to allow the code setting the toggle to introspect the current
> > state.
>
> I considered this as well, but did not see a use case for it. What would
> you do with this method?

I don't have a use case, but could imagine that introspection of the
state could be useful for reporting (e.g. a web-console report of all
active toggles and their state). I understand the desire to keep an
API minimal, but on the other hand I find it frustrating when an API
doesn't offer seemingly obvious features (obvious in my mind anyways).

>
> Regards
>  Marcel
>

Regards
Julian


Re: [Proposal] Feature toggles

2020-07-07 Thread Julian Sedding
Hi Marcel,

I think the API is elegant. Short of running "feature" code in a
closure, a "try with resource" block encourages developers to clearly
delimit the block of code that is subject to the feature toggle,
hopefully resulting in readable code.
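
For example, usage as I understand the proposal would read roughly
like this (sketch only; class and method names follow the proposal in
OAK-9132 and may not match the final API, the toggle name is made up):

    try (FeatureToggle toggle = Feature.newFeatureToggle("FT_NEW_CODE_PATH", whiteboard)) {
        if (toggle.isEnabled()) {
            // new behaviour guarded by the toggle
        } else {
            // existing behaviour
        }
    }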

I'm not sure about the aspect of the implementation, that
FeatureToggle is Closeable and probably often short-lived. Given that
the FeatureToggleAdapter is registered with the whiteboard, and thus
likely with the OSGi service registry, this _may_ put unnecessary load
on the service registry. Furthermore, enabling/disabling the toggle
would need to be done in a way that respects this dynamism. And
lastly, even if a FeatureToggleAdapter is already registered for a
feature, a new service would be registered if the same code was run in
a second thread.

From an OSGi perspective, I would lean towards a long-lived singleton
service that can be toggled. The FeatureToggle could then be adjusted
to retrieve the matching service if available, or otherwise register
its own.

Regarding the API, I would probably rename FeatureToggle to Feature
and FeatureToggleAdapter to FeatureToggle. But that's of course a
matter of taste. Also, I would add an "isEnabled" method to
FeatureToggleAdapter, in order to allow the code setting the toggle to
introspect the current state.

Regards
Julian


On Mon, Jul 6, 2020 at 7:10 PM Marcel Reutegger
 wrote:
>
> Hi,
>
> There is a proposal ready in OAK-9132 [0] that introduces the concept of
> feature toggles [1]. A FeatureToggle is basically a boolean value that
> controls whether some new feature is available. The implementation uses
> the Oak Whiteboard to register a feature toggle. It is then up to
> another bundle to control the state of the feature toggles at
> initialization and/or runtime.
>
> A very simple implementation that wires feature toggles to system
> properties is presented in OAK-9132. More sophisticated implementations
> that talk to a central feature toggle service are also easy to implement
> with an OSGi component that keeps track of registered feature toggles.
>
> Feedback welcome.
>
> Regards
>  Marcel
>
> [0] https://issues.apache.org/jira/browse/OAK-9132
> [1] https://martinfowler.com/articles/feature-toggles.html
>


Re: Query ordered by node name

2020-05-19 Thread Julian Sedding
Or alternatively try [function = "fn:name()"], i.e. with the brackets "()".
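
For reference, the relevant part of the index rule would then read
(untested sketch, adapted from the definition quoted below):

 + name
  - function = "fn:name()"
  - ordered = true
  - type = "String"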

Regards
Julian

On Tue, May 19, 2020 at 10:57 AM Julian Sedding  wrote:
>
> Hi Jorge
>
> You could try the Oak Index Definition Generator.
>
> http://oakutils.appspot.com/generate/index
>
> FWIW, in the "name" property node it sets [name = ":name"] instead of
> [function = "fn:name"]. I don't know if that makes a difference and
> which is better, if any.
>
> Regards
> Julian
>
> On Mon, May 18, 2020 at 11:55 PM jorgeeflorez .
>  wrote:
> >
> > Hello,
> > with the following query  I am able to get file nodes ordered by name:
> >
> > SELECT * FROM [nt:file] AS s WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER
> > BY NAME([s]) DESC
> >
> > unfortunately, because I do not have an index, on a big repository I have
> > warnings like:
> >
> > WARN org.apache.jackrabbit.oak.plugins.index.Cursors$TraversingCursor  -
> > Traversed 81000 nodes with filter Filter(query=SELECT * FROM [nt:file] AS s
> > WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER BY NAME([s]) DESC,
> > path=/repo1/pruebaJF1/*); consider creating an index or changing the query
> >
> > and the query takes a lot of time.
> >
> > I do not know how to define a proper index for name(). if I use the
> > following:
> >   - compatVersion = 2
> >   - async = "async"
> >   - jcr:primaryType = oak:QueryIndexDefinition
> >   - evaluatePathRestrictions = true
> >   - type = "lucene"
> >   + indexRules
> >+ nt:file
> > + properties
> >  + primaryType
> >   - name = "jcr:primaryType"
> >   - propertyIndex = true
> >  + name
> >   - function = "fn:name"
> >   - ordered = true
> >   - type = "String"
> >
> > the index is used (index cost is 501 compared to 80946 for traverse), but
> > it takes more time than traversing with warnings like:
> >
> > WARN
> > org.apache.jackrabbit.oak.plugins.index.search.spi.query.FulltextIndex$FulltextPathCursor
> >  - Index-Traversed 8 nodes with filter Filter(query=SELECT * FROM
> > [nt:file] AS s WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER BY NAME([s])
> > DESC, path=/repo1/pruebaJF1/*)
> >
> > Thanks in advance.
> >
> > Regards.
> >
> > Jorge


Re: Query ordered by node name

2020-05-19 Thread Julian Sedding
Hi Jorge

You could try the Oak Index Definition Generator.

http://oakutils.appspot.com/generate/index

FWIW, in the "name" property node it sets [name = ":name"] instead of
[function = "fn:name"]. I don't know if that makes a difference and
which is better, if any.

Regards
Julian

On Mon, May 18, 2020 at 11:55 PM jorgeeflorez .
 wrote:
>
> Hello,
> with the following query  I am able to get file nodes ordered by name:
>
> SELECT * FROM [nt:file] AS s WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER
> BY NAME([s]) DESC
>
> unfortunately, because I do not have an index, on a big repository I have
> warnings like:
>
> WARN org.apache.jackrabbit.oak.plugins.index.Cursors$TraversingCursor  -
> Traversed 81000 nodes with filter Filter(query=SELECT * FROM [nt:file] AS s
> WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER BY NAME([s]) DESC,
> path=/repo1/pruebaJF1/*); consider creating an index or changing the query
>
> and the query takes a lot of time.
>
> I do not know how to define a proper index for name(). if I use the
> following:
>   - compatVersion = 2
>   - async = "async"
>   - jcr:primaryType = oak:QueryIndexDefinition
>   - evaluatePathRestrictions = true
>   - type = "lucene"
>   + indexRules
>+ nt:file
> + properties
>  + primaryType
>   - name = "jcr:primaryType"
>   - propertyIndex = true
>  + name
>   - function = "fn:name"
>   - ordered = true
>   - type = "String"
>
> the index is used (index cost is 501 compared to 80946 for traverse), but
> it takes more time than traversing with warnings like:
>
> WARN
> org.apache.jackrabbit.oak.plugins.index.search.spi.query.FulltextIndex$FulltextPathCursor
>  - Index-Traversed 8 nodes with filter Filter(query=SELECT * FROM
> [nt:file] AS s WHERE ISCHILDNODE(s, [/repo1/pruebaJF1]) ORDER BY NAME([s])
> DESC, path=/repo1/pruebaJF1/*)
>
> Thanks in advance.
>
> Regards.
>
> Jorge


Re: Versionable node deletion

2020-02-25 Thread Julian Sedding
Hi Jorge

If you're looking at reclaiming disk space from "orphaned" binaries,
you likely need Blob Garbage Collection:
https://jackrabbit.apache.org/oak/docs/plugins/blobstore.html#Blob_Garbage_Collection

Regards
Julian

On Mon, Feb 24, 2020 at 3:58 PM jorgeeflorez .
 wrote:
>
> Hi Marco,
> I agree, it is related to OAK-8048.
>
> > But since it
> > isn't, there is still one node that references the binary, so (the binary)
> > is not removed when running the garbage collector.
> >
> > I am not sure about this. I just printed the rootVersion node and it has
> nothing related to the node that was deleted, this is an example:
>
> "node": "jcr:rootVersion",
> "path":
> "/jcr:system/jcr:versionStorage/03/06/92/03069247-5a8e-4957-89d6-3ccaf32edad3/jcr:rootVersion",
> "mixins": [],
> "children": [{
>  "node": "jcr:frozenNode",
>  "path":
> "/jcr:system/jcr:versionStorage/03/06/92/03069247-5a8e-4957-89d6-3ccaf32edad3/jcr:rootVersion/jcr:frozenNode",
>  "mixins": [],
>  "children": [],
>  "properties": [
>  "jcr:frozenPrimaryType = nt:file",
>  "jcr:frozenUuid = 03069247-5a8e-4957-89d6-3ccaf32edad3",
>  "jcr:primaryType = nt:frozenNode",
>  "jcr:uuid = 3a63f325-2e8b-415e-8aa1-6112d4a9049a",
>  "jcr:frozenMixinTypes =
> mix:lastModified,mix:referenceable,rep:AccessControllable,mix:versionable"
> ]
> }],
> "properties": [
>  "jcr:predecessors = ",
>  "jcr:created = 2020-02-21T17:42:44.771-05:00",
>  "jcr:primaryType = nt:version",
>  "jcr:uuid = a3eae304-16f2-438d-a482-e6dbf5b3d198",
>  "jcr:successors = "
> ]
>
> Thinking about what I want, maybe it is not that easy to mark a binary as
> "orphan" (i.e. no node is referencing it) in runtime. But it would be great
> of some method could be called that gets all orphan binaries and deletes
> them. To save space. I do not if something like that exists.
>
> Jorge
>
>
> El lun., 24 feb. 2020 a las 9:17, Marco Piovesana ()
> escribió:
>
> > Hi Jorge,
> > I'm not an expert, but I think it might be related to OAK-804
> > . The root version should
> > be automatically removed when removing the last version. But since it
> > isn't, there is still one node that references the binary, so (the binary)
> > is not removed when running the garbage collector.
> >
> > Marco.
> >
> > On Mon, Feb 24, 2020 at 9:42 PM jorgeeflorez . <
> > jorgeeduardoflo...@gmail.com>
> > wrote:
> >
> > > Hi,
> > > I managed to delete all versions for nodes that no longer exist (except
> > the
> > > jcr:rootVersion nodes, they are "protected"). I was expecting that the
> > > total size of my binary storage would decrease (I am using
> > > OakFileDataStore), since some files are no longer referenced in any
> > nodes.
> > > But that did not happen...
> > >
> > > Any help is appreciated.
> > >
> > > Jorge
> > >
> > > El vie., 21 feb. 2020 a las 15:12, jorgeeflorez . (<
> > > jorgeeduardoflo...@gmail.com>) escribió:
> > >
> > > > Hi,
> > > > when I delete a node that has version history, using node.remove() and
> > > > then session.save(), should all version info related to that node be
> > > > deleted automatically? what about the files in that version history?
> > > >
> > > > After deleting, I print all nodes of the repository and I keep seeing
> > > > those version nodes. Actually, I was working with a repository uses a
> > > > DataStoreBlobStore and after deleting some file nodes I was expecting
> > > that
> > > > the total size of the folder that contains the files would decrease and
> > > it
> > > > did not happen, which led me to make this question :)
> > > >
> > > > Thanks.
> > > >
> > > > Jorge
> > > >
> > >
> >


Re: New Jackrabbit Committer: Mohit Kataria

2019-08-15 Thread Julian Sedding
Welcome Mohit!

Regards
Julian

On Wed, Aug 14, 2019 at 3:25 PM Woonsan Ko  wrote:
>
> Welcome, Mohit!
>
> Cheers,
>
> Woonsan
>
> On Wed, Aug 14, 2019 at 2:31 AM Tommaso Teofili
>  wrote:
> >
> > Welcome to the team Mohit!
> >
> > Regards,
> > Tommaso
> >
> > On Thu, 8 Aug 2019 at 08:33, Marcel Reutegger  wrote:
> >>
> >> Hi,
> >>
> >> Please welcome Mohit Kataria as a new committer and PMC member of
> >> the Apache Jackrabbit project. The Jackrabbit PMC recently decided to
> >> offer Mohit committership based on his contributions. I'm happy to
> >> announce that he accepted the offer and that all the related
> >> administrative work has now been taken care of.
> >>
> >> Welcome to the team, Mohit!
> >>
> >> Regards
> >>  Marcel
> >>


Re: New Jackrabbit Committer: Nitin Gupta

2019-08-15 Thread Julian Sedding
Welcome Nitin!

Regards
Julian

On Wed, Aug 14, 2019 at 3:24 PM Woonsan Ko  wrote:
>
> Welcome, Nitin!
>
> Cheers,
>
> Woonsan
>
> On Wed, Aug 14, 2019 at 2:30 AM Tommaso Teofili
>  wrote:
> >
> > Welcome to the team Nitin!
> >
> > Regards,
> > Tommaso
> >
> > On Thu, 8 Aug 2019 at 08:31, Marcel Reutegger  wrote:
> >>
> >> Hi,
> >>
> >> Please welcome Nitin Gupta as a new committer and PMC member of
> >> the Apache Jackrabbit project. The Jackrabbit PMC recently decided to
> >> offer Nitin committership based on his contributions. I'm happy to
> >> announce that he accepted the offer and that all the related
> >> administrative work has now been taken care of.
> >>
> >> Welcome to the team, Nitin!
> >>
> >> Regards
> >>  Marcel
> >>


Re: New Jackrabbit Committer: Dominik Süß

2019-07-26 Thread Julian Sedding
Welcome, Dominik!

Regards
Julian

On Thu, Jul 25, 2019 at 5:17 PM Dominik Süß  wrote:
>
> Hello everyone,
>
> Like Konrad I wanted to thank a lot for the invitation.
>
>
> Here a short version about my own background. I started working as Integrator 
> for AEM/CQ and that way getting in touch with Jackrabbit in 2007 and became 
> an active member mostly of the Sling Community soon after.   In 2015 I joined 
> AEM engineering and by that rather worked more on the details of the stack 
> and began to contribute once in a while.
>
> Since a few years my focus is mostly around deployment aspects as content 
> that links directly to application that may change over time or the 
> installation and necessary transformation of content over time without having 
> negative impact on the availability  of a system.
>
>
>  I share Konrads interest in filevault but also correlated topics such as 
> composite node-store, oak-upgrade and any other mechanism that link to 
> automation of changes in Jackrabbit.
>
>
> Cheers
>
> Dominik
>
>
> On Thu, Jul 25, 2019 at 4:02 PM Woonsan Ko  wrote:
>>
>> Welcome, Dominik!
>>
>> Cheers,
>>
>> Woonsan
>>
>> On Thu, Jul 25, 2019 at 9:54 AM Marcel Reutegger  wrote:
>> >
>> > Hi,
>> >
>> > Please welcome Dominik Süß as a new committer and PMC member of
>> > the Apache Jackrabbit project. The Jackrabbit PMC recently decided to
>> > offer Dominik committership based on his contributions. I'm happy to
>> > announce that he accepted the offer and that all the related
>> > administrative work has now been taken care of.
>> >
>> > Welcome to the team, Dominik!
>> >
>> > Regards
>> > Marcel


Re: New Jackrabbit Committer: Konrad Windszus

2019-07-26 Thread Julian Sedding
Welcome Konrad!

Regards
Julian

On Thu, Jul 25, 2019 at 4:02 PM Woonsan Ko  wrote:
>
> Welcome, Konrad!
>
> Cheers,
>
> Woonsan
>
> On Wed, Jul 24, 2019 at 10:11 AM Konrad Windszus  wrote:
> >
> > Hi everyone,
> > thanks a lot for having invited me.
> > Some words about myself: I have experience with AEM/CQ since 2005. I am now 
> > working for Netcentric. I joined the Apache family in 2014 by becoming an 
> > Apache Sling committer. Meanwhile I am part of the Apache Sling PMC.
> >
> > I am looking forward to contribute even more in the future to 
> > Jackrabbit/Oak.
> > Particularly I am interested in improving Filevault and the related Maven 
> > Plugin.
> >
> > Konrad
> >
> >
> > > On 24. Jul 2019, at 15:37, Marcel Reutegger  wrote:
> > >
> > > Hi,
> > >
> > > Please welcome Konrad Windszus as a new committer and PMC member of
> > > the Apache Jackrabbit project. The Jackrabbit PMC recently decided to
> > > offer Konrad committership based on his contributions. I'm happy to
> > > announce that he accepted the offer and that all the related
> > > administrative work has now been taken care of.
> > >
> > > Welcome to the team, Konrad!
> > >
> > > Regards
> > > Marcel
> > >
> >


Re: Setting existing property from single value to multi-value

2019-07-24 Thread Julian Sedding
Thanks Julian for looking into it!

Deleting the property first is indeed my workaround for the issue, and
it works.

It's not a big deal (clearly, as it didn't pop up for 6 years or so),
but the behaviour was unexpected and seems unnecessarily restrictive
to me. It caused a minor production issue for a client in a very
generic code-path that hits Oak via Sling's ModifiableValueMap. Given
all layers involved I was surprised that I ended up in Oak ;)

If my question helps make Oak a little bit better, that's great. If we
can clarify the question and document it in the list's archive that's
also great.

Regards
Julian

On Wed, Jul 24, 2019 at 6:52 AM Julian Reschke  wrote:
>
> On 24.07.2019 05:55, Julian Reschke wrote:
> > On 23.07.2019 23:57, Julian Sedding wrote:
> >> Hi all
> >>
> >> Let's assume we have a Node N of primary type "nt:unstructured" with
> >> property P that has a String value "foo".
> >>
> >> Now when we try to change the value of P to a String[] value of
> >> ["foo", "bar"] a ValueException is thrown.
> >>
> >> This behaviour was introduced in OAK-273. Unfortunately the ticket
> >> does not give any explanation why this behaviour should be desired.
> >> ...
> >
> > I was curious and looked, and, surprise, I raised this issue back then.
> >
> > I would assume that this came up while running the TCK. That is, if we
> > undo this change, we are likely to see TCK tests failing.
> >
> > (not sure, but worth trying)
> >
> > Now that doesn't necessarily mean that the TCK is correct - I'll need
> > more time to re-read things.
> >
> > Best regards, Julian
>
> FWIW, did you try to delete the property first?
>
> Best regards, Julian


Setting existing property from single value to multi-value

2019-07-23 Thread Julian Sedding
Hi all

Let's assume we have a Node N of primary type "nt:unstructured" with
property P that has a String value "foo".

Now when we try to change the value of P to a String[] value of
["foo", "bar"], a ValueFormatException is thrown.

This behaviour was introduced in OAK-273. Unfortunately the ticket
does not give any explanation why this behaviour should be desired.

Reading the java-docs for Node#setProperty(String name, Value value)
and Node#setProperty(String name, Value[] values) I got the impression
that no ValueFormatException should be thrown in that case.

The following are paragraphs 4-6 from the java-doc, the ones I
consider relevant to this issue:

(4) "The property type of the property will be that specified by the
node type of this node. If the property type of one or more of the
supplied Value objects is different from that required, then a
best-effort conversion is attempted, according to an
implemention-dependent definition of "best effort". If the conversion
fails, a ValueFormatException is thrown."

(5) "If the property is not multi-valued then a ValueFormatException
is also thrown. If another error occurs, a RepositoryException is
thrown."

(6) "If the node type of this node does not indicate a specific
property type, then the property type of the supplied Value objects is
used and if the property already exists it assumes both the new values
and the new property type."

The way I read this, paragraph (5) applies to properties where the
property type is specified by the node type. The reason is that (a) it
follows directly after paragraph (4), which is about node type defined
properties and (b) the word "also" in the phrase "... a
ValueFormatException is _also_ thrown" seems to refer back to (4).

Therefore paragraph (6) would be the one relevant to properties that
have no node type defined property type. And that is very clear that
the property should be changed to the new values and property type.

Does anyone have a good explanation why my reading is incorrect? Or
should I create a JIRA ticket to fix this?

Regards
Julian


Re: svn commit: r1834852 - /jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/

2018-07-02 Thread Julian Sedding
Hi Francesco

Have you considered using the MetaDataReader from Felix' MetaType
implementation[0]? I've used it before and found it easy enough to use.

It's only a test dependency, and you don't need to worry about your
implementation being in sync with the spec/Felix' implementation.
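
For illustration, a test could parse the generated metatype XML
roughly like this (unverified sketch; I'm assuming
org.apache.felix.metatype.MetaDataReader offers a parse method for an
InputStream, and the resource path is just an example):

    try (InputStream in = clazz.getResourceAsStream("/OSGI-INF/metatype/metatype.xml")) {
        MetaDataReader reader = new MetaDataReader();
        MetaData metaData = reader.parse(in);   // assumed signature: parse(InputStream)
        // assert on the parsed object class and attribute definitions
    }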

Regards
Julian

[0] 
https://github.com/apache/felix/blob/trunk/metatype/src/main/java/org/apache/felix/metatype/MetaDataReader.java

On Mon, Jul 2, 2018 at 4:57 PM,   wrote:
> Author: frm
> Date: Mon Jul  2 14:57:25 2018
> New Revision: 1834852
>
> URL: http://svn.apache.org/viewvc?rev=1834852&view=rev
> Log:
> OAK-6770 - Test the metatype information descriptors
>
> Added:
> 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/MetatypeInformation.java
>(with props)
> Modified:
> 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/SegmentNodeStoreFactoryTest.java
> 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/SegmentNodeStoreMonitorServiceTest.java
> 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/SegmentNodeStoreServiceTest.java
> 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/StandbyStoreServiceTest.java
>
> Added: 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/MetatypeInformation.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/MetatypeInformation.java?rev=1834852&view=auto
> ==
> --- 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/MetatypeInformation.java
>  (added)
> +++ 
> jackrabbit/oak/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/osgi/MetatypeInformation.java
>  Mon Jul  2 14:57:25 2018
> @@ -0,0 +1,267 @@
> +/*
> + * Licensed to the Apache Software Foundation (ASF) under one
> + * or more contributor license agreements.  See the NOTICE file
> + * distributed with this work for additional information
> + * regarding copyright ownership.  The ASF licenses this file
> + * to you under the Apache License, Version 2.0 (the
> + * "License"); you may not use this file except in compliance
> + * with the License.  You may obtain a copy of the License at
> + *
> + *   http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing,
> + * software distributed under the License is distributed on an
> + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
> + * KIND, either express or implied.  See the License for the
> + * specific language governing permissions and limitations
> + * under the License.
> + */
> +
> +package org.apache.jackrabbit.oak.segment.osgi;
> +
> +import java.io.InputStream;
> +import java.util.HashSet;
> +import java.util.Set;
> +
> +import javax.xml.parsers.DocumentBuilder;
> +import javax.xml.parsers.DocumentBuilderFactory;
> +
> +import org.w3c.dom.Document;
> +import org.w3c.dom.Element;
> +import org.w3c.dom.NodeList;
> +
> +class MetatypeInformation {
> +
> +static MetatypeInformation open(InputStream stream) throws Exception {
> +DocumentBuilderFactory factory = 
> DocumentBuilderFactory.newInstance();
> +DocumentBuilder builder = factory.newDocumentBuilder();
> +Document document = builder.parse(stream);
> +return new MetatypeInformation(document.getDocumentElement());
> +}
> +
> +private static boolean hasAttribute(Element element, String name, String 
> value) {
> +return element.hasAttribute(name) && 
> element.getAttribute(name).equals(value);
> +}
> +
> +private final Element root;
> +
> +private MetatypeInformation(Element root) {
> +this.root = root;
> +}
> +
> +ObjectClassDefinition getObjectClassDefinition(String id) {
> +return new ObjectClassDefinition(id);
> +}
> +
> +class ObjectClassDefinition {
> +
> +private final String id;
> +
> +private ObjectClassDefinition(String id) {
> +this.id = id;
> +}
> +
> +HasAttributeDefinition hasAttributeDefinition(String id) {
> +return new HasAttributeDefinition(this.id, id);
> +}
> +
> +}
> +
> +class HasAttributeDefinition {
> +
> +private final String ocd;
> +
> +private final String id;
> +
> +private String type;
> +
> +private String defaultValue;
> +
> +private String cardinality;
> +
> +private String[] options;
> +
> +private HasAttributeDefinition(String ocd, String id) {
> +this.ocd = ocd;
> +this.id = id;
> +}
> +
> +HasAttributeDefinition withStringType() {
> +this.type = "String";
> +return 

Re: Looking for small task starting in OAK .. DS conversion?

2017-10-31 Thread Julian Sedding
Hi Christian

It's up to you. I have finished the implementation of the tool now. If
you like, you can build it and see if it helps.

Regards
Julian


On Tue, Oct 31, 2017 at 9:56 AM, Christian Schneider
<ch...@die-schneider.net> wrote:
> Hi Julian,
>
> I finished the conversion for the oak-auth-external module and created a
> PR. The tests all run fine.
> I will look into the comparison tool but I am not sure if it is needed. Of
> course it is possible that I introduce a bug with
> my PR but the comparison tool will also not guarantee that the conversion
> is bug free.
>
> Christian
>
> 2017-10-30 13:40 GMT+01:00 Julian Sedding <jsedd...@gmail.com>:
>
>> Hi Christian
>>
>> I have worked on OAK-6741 before and there were some concerns
>> regarding my changes.
>>
>> To address these concerns, I started work on a tool that allows
>> diffing the OSGi DS and MetaType metadata of two bundles. It uses
>> Felix' SCR and MetaType implementations to parse the metadata and
>> should thus be able to compare on a semantic level rather than on a
>> purely syntactic level (i.e. diff all XML files, which comes with its
>> own challenges)[0].
>>
>> Note, that the tool is yet unfinished, as I don't currently have time
>> to complete it. Basically, what's left to do is implementing some
>> comparisons and possibly more rendering (see TODOs in
>> MetaDataDiff[1]). Fell free to fork, or I'm also happy grant you write
>> access on my repository.
>>
>> I hope you find this helpful!
>>
>> Regards
>> Julian
>>
>> [0] https://github.com/jsedding/osgi-ds-metatype-diff
>> [1] https://github.com/jsedding/osgi-ds-metatype-diff/blob/
>> master/src/main/java/net/distilledcode/tools/osgi/MetadataDiff.java
>>
>>
>> On Mon, Oct 30, 2017 at 10:28 AM, Alex Deparvu <stilla...@apache.org>
>> wrote:
>> > Hi Christian,
>> >
>> > Thanks for your interest in helping out in this area!
>> > You can look at OAK-6741 [0] to see what the status of this effort is,
>> > there's a few tasks created already waiting for some attention :)
>> >
>> > best,
>> > alex
>> >
>> > [0] https://issues.apache.org/jira/browse/OAK-6741
>> >
>> >
>> >
>> > On Mon, Oct 30, 2017 at 9:57 AM, Christian Schneider <
>> > ch...@die-schneider.net> wrote:
>> >
>> >> Hi all,
>> >>
>> >> as I am just starting to work on OAK I am looking for a small task.
>> >> I found that there are still some components that use the old felix scr
>> >> annotations.
>> >> Does it make sense that I look into converting these to the DS ones so
>> we
>> >> can remove support for felix scr in the build?
>> >>
>> >> I have listed the classes below.
>> >> The main issue I see with the migration is that OAK uses the meta type
>> >> support of felix scr which is quite different to what DS 1.3 provides.
>> So I
>> >> would need to migrate from the property based meta type descriptions to
>> the
>> >> type safe ones of the DS 1.3 metatype support.
>> >>
>> >> Anyway I would provide one module per PR so the reviewer does not have
>> to
>> >> review one big commit at once.
>> >>
>> >> Best
>> >> Christian
>> >>
>> >> --
>> >> --
>> >> Christian Schneider
>> >> http://www.liquid-reality.de
>> >> <https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e
>> >> 46=http%3a%2f%2fwww.liquid-reality.de>
>> >>
>> >> Computer Scientist
>> >> http://www.adobe.com
>> >>
>> >>
>> >> ---
>> >>
>> >> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> >> authentication/external/impl/DefaultSyncConfigImpl.java:import
>> >> org.apache.felix.scr.annotations.Component;
>> >>
>> >> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> >> authentication/external/impl/DefaultSyncHandler.java:import
>> >> org.apache.felix.scr.annotations.Component;
>> >>
>> >> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> >> authentication/external/impl/ExternalIDPManagerImpl.java:import
>> >> org.apache.felix.scr.annotations.Component;
>> >>
>> >> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/se

Re: Looking for small task starting in OAK .. DS conversion?

2017-10-30 Thread Julian Sedding
Hi Christian

I have worked on OAK-6741 before and there were some concerns
regarding my changes.

To address these concerns, I started work on a tool that allows
diffing the OSGi DS and MetaType metadata of two bundles. It uses
Felix' SCR and MetaType implementations to parse the metadata and
should thus be able to compare on a semantic level rather than on a
purely syntactic level (i.e. diff all XML files, which comes with its
own challenges)[0].

Note that the tool is as yet unfinished, as I don't currently have time
to complete it. Basically, what's left to do is implementing some
comparisons and possibly more rendering (see TODOs in
MetaDataDiff[1]). Feel free to fork, or I'm also happy to grant you
write access on my repository.

I hope you find this helpful!

Regards
Julian

[0] https://github.com/jsedding/osgi-ds-metatype-diff
[1] 
https://github.com/jsedding/osgi-ds-metatype-diff/blob/master/src/main/java/net/distilledcode/tools/osgi/MetadataDiff.java


On Mon, Oct 30, 2017 at 10:28 AM, Alex Deparvu  wrote:
> Hi Christian,
>
> Thanks for your interest in helping out in this area!
> You can look at OAK-6741 [0] to see what the status of this effort is,
> there's a few tasks created already waiting for some attention :)
>
> best,
> alex
>
> [0] https://issues.apache.org/jira/browse/OAK-6741
>
>
>
> On Mon, Oct 30, 2017 at 9:57 AM, Christian Schneider <
> ch...@die-schneider.net> wrote:
>
>> Hi all,
>>
>> as I am just starting to work on OAK I am looking for a small task.
>> I found that there are still some components that use the old felix scr
>> annotations.
>> Does it make sense that I look into converting these to the DS ones so we
>> can remove support for felix scr in the build?
>>
>> I have listed the classes below.
>> The main issue I see with the migration is that OAK uses the meta type
>> support of felix scr which is quite different to what DS 1.3 provides. So I
>> would need to migrate from the property based meta type descriptions to the
>> type safe ones of the DS 1.3 metatype support.
>>
>> Anyway I would provide one module per PR so the reviewer does not have to
>> review one big commit at once.
>>
>> Best
>> Christian
>>
>> --
>> --
>> Christian Schneider
>> http://www.liquid-reality.de
>> > 46=http%3a%2f%2fwww.liquid-reality.de>
>>
>> Computer Scientist
>> http://www.adobe.com
>>
>>
>> ---
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/DefaultSyncConfigImpl.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/DefaultSyncHandler.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/ExternalIDPManagerImpl.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/ExternalLoginModuleFactory.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/principal/ExternalPrincipalConfiguration
>> .java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/
>> authentication/external/impl/SyncManagerImpl.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-ldap/src/main/java/org/apache/jackrabbit/oak/
>> security/authentication/ldap/impl/LdapIdentityProvider.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-auth-ldap/src/main/java/org/apache/jackrabbit/oak/
>> security/authentication/ldap/impl/LdapProviderConfig.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-authorization-cug/src/main/java/org/apache/
>> jackrabbit/oak/spi/security/authorization/cug/impl/
>> CugConfiguration.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-authorization-cug/src/main/java/org/apache/
>> jackrabbit/oak/spi/security/authorization/cug/impl/
>> CugExcludeImpl.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/osgi/
>> FileBlobStoreService.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-blob/src/main/java/org/apache/jackrabbit/oak/spi/blob/osgi/
>> SplitBlobStoreService.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-blob-cloud/src/main/java/org/apache/jackrabbit/oak/blob/cloud/s3/
>> AbstractS3DataStoreService.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> oak-blob-cloud/src/main/java/org/apache/jackrabbit/oak/blob/cloud/s3/
>> S3DataStoreService.java:import
>> org.apache.felix.scr.annotations.Component;
>>
>> 

Re: Naming convention for unstable releases

2017-10-17 Thread Julian Sedding
+1 for any qualifier indicating "unstable" releases.

Regards
Julian

On Tue, Oct 17, 2017 at 7:28 AM, Julian Reschke  wrote:
> Hi there,
>
> everybody over here knows that odd-numbered releases are unstable, taken
> from trunk.
>
> However, apparently not all of our users know that.
>
> Should we consider to label them accordingly in the future? Such as
>
>   1.9.0-EXPERIMENTAL
>
> instead of
>
>   1.9.0
>
> ?
>
> Best regards, Julian
>
> (cc'ing jackrabbit-dev because we'd want to be consistent)


Re: clustering and cold standby

2017-10-16 Thread Julian Sedding
Hi Marco

Cold Standby is a TarMK feature that allows for a quick failover. You
may think of it as a near real-time backup. It is _not_ a cluster. As
you noted, the sync is one-way only.

Therefore, I don't think it is possible to direct reads or writes to
the cold standby instance. AFAIK this should be prevented by the
implementation. But others are more knowledgable about these details.

Regards
Julian


On Mon, Oct 16, 2017 at 11:28 AM, Marco Piovesana  wrote:
> Hi all,
> I'm trying to set-up a cluster environment with Oak, so that anytime I can
> take down for maintenance one machine without stopping the service. One
> option is of course to use Mongo or RDBM storages. Talking with the guys at
> adaptTo() this year I've been told that maybe there's another option:
> replicate the repository in each instance of the cluster and use the "*cold
> standby*" for the synchronization.
> The sync process, however, is one-way only. My question is: do you guys
> think is possible to use it in a cluster where read and write requests are
> coming from any of the instances of the cluster?
>
> Marco.


Dependency to DropWizard Metrics Library (was: Percentile implementation)

2017-07-17 Thread Julian Sedding
Hi all

OAK-6430[0] introduced a mandatory dependency on
io.dropwizard.metrics:metrics-core in oak-segment-tar.

Before this change, the runtime dependency on this metrics library was
optional (and it still is in oak-core).

Originally, the dependency was introduced in OAK-3654[1] and a facade
was implemented with the following justification: "To avoid having
dependency on Metrics API all over in Oak we can come up with minimal
interfaces which can be used in Oak and then provide an implementation
backed by Metric."

There was no discussion on the Oak list at the time. However, a
similar discussion happened on the Sling list[2]. Basically, bad past
experiences with breaking changes in the dropwizard metrics API led to
the implementation of a facade in order to limit the potentila impact
of future breaking changes. Of course a facade decouples the code from
the dependency and thus allows plugging in a different implementation
should the need arise.

Therefore, I ask the dev team:
(1) Do we want a mandatory runtime dependency on
io.dropwizard.metrics:metrics-core?
(2) Should we revisit OAK-6430 and implement the mechanism via the
facade? Probably extending the HistogramStats interface with a method
"#getPercentile(double)".

IMHO we should avoid the mandatory dependency.
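
To illustrate option (2), the facade extension could look roughly like
this (sketch only; the method is not part of the current Oak API, and
the return type and percentile convention are my assumptions):

    public interface HistogramStats {

        // ... existing facade methods ...

        /**
         * @param percentile requested percentile, e.g. 0.5 for the median
         * @return the value at the given percentile of the recorded samples
         */
        long getPercentile(double percentile);
    }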

Regards
Julian

[0] https://issues.apache.org/jira/browse/OAK-6430
[1] https://issues.apache.org/jira/browse/OAK-3654
[2] http://markmail.org/thread/47fd5psel2wv2y42



On Thu, Jul 6, 2017 at 2:54 PM, Andrei Dulceanu
<andrei.dulce...@gmail.com> wrote:
>> The only problem that I see is the fact that it doesn't provide a way to
>> easily access a desired percentile (only mean and 75th, 95th, 98th, 99th
>> and 999th). Currently we are using 50th percentile, i.e. mean, but in the
>> future that might change.
>>
>
> Please read median instead of mean above. Implementing the change, I
> discovered Histogram#getSnapshot().getValue(double quantile) which is
> exactly what I was looking for.
>
>
>> I will try to make the adjustments and will revisit the percentile
>> implementation once we'll change our use pattern there.
>>
>
> This change is tracked in OAK-6430 [0] and fixed at r1801043.
>
> [0] https://issues.apache.org/jira/browse/OAK-6430
>
> 2017-07-06 14:55 GMT+03:00 Andrei Dulceanu <andrei.dulce...@gmail.com>:
>
>> Hi Chetan,
>>
>>
>>> Instead of commons-math can we use Metric Histogram  (which I also
>>> suggested earlier in the thread).
>>
>>
>> I took another look at the Metric Histogram and I think at the moment it
>> can be used instead of SynchronizedDescriptiveStatistics from
>> commons-math3. The only problem that I see is the fact that it doesn't
>> provide a way to easily access a desired percentile (only mean and 75th,
>> 95th, 98th, 99th and 999th). Currently we are using 50th percentile, i.e.
>> mean, but in the future that might change.
>>
>>
>>> This would avoid downstream Oak
>>> users to include another dependency as Oak is already using Metrics in
>>> other places.
>>>
>>
>> I will try to make the adjustments and will revisit the percentile
>> implementation once we'll change our use pattern there.
>>
>> Regards,
>> Andrei
>>
>> 2017-07-06 14:38 GMT+03:00 Chetan Mehrotra <chetan.mehro...@gmail.com>:
>>
>>> Instead of commons-math can we use Metric Histogram  (which I also
>>> suggested earlier in the thread). This would avoid downstream Oak
>>> users to include another dependency as Oak is already using Metrics in
>>> other places.
>>>
>>> Can we reconsider this decision?
>>> Chetan Mehrotra
>>>
>>>
>>> On Tue, Jul 4, 2017 at 4:45 PM, Julian Sedding <jsedd...@gmail.com>
>>> wrote:
>>> > Maybe it is not necessary to embed *all* of commons-math3. The bnd
>>> > tool (used by maven-bundle-plugin) can intelligently embed classes
>>> > from specified java packages, but only if they are referenced.
>>> > Depending on how well commons-math3 is modularized, that could allow
>>> > for much less embedded classes. Neil Bartlett wrote a good blog post
>>> > about this feature[0].
>>> >
>>> > Regards
>>> > Julian
>>> >
>>> > [0] http://njbartlett.name/2014/05/26/static-linking.html
>>> >
>>> >
>>> > On Tue, Jul 4, 2017 at 12:20 PM, Andrei Dulceanu
>>> > <andrei.dulce...@gmail.com> wrote:
>>> >> I'll add the dependency.
>>> >>
>>> >> Thanks,
>>> >> Andrei
>>> >>
>>&g

Re: [DiSCUSS] - highly vs rarely used data

2017-07-04 Thread Julian Sedding
From my experience working with customers, I can pretty much guarantee
that sooner or later:

(a) the implementation of an automatism is not *quite* what they need/want
(b) they want to be able to manually select (or more likely override)
whether a file can be archived

Thus I suggest to come up with a pluggable "strategy" interface and
provide a sensible default implementation. The default will be fine
for most customers/users, but advanced use-cases can be implemented by
substituting the implementation. Implementations could then also
respect manually set flags (=properties) if desired.
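
As a sketch of what I mean (all names are made up, this is not an
existing Oak interface):

    public interface ArchiveCandidateStrategy {

        /**
         * Decide whether the content at the given path may be moved to
         * rarely-used (archive/cold) storage. The default implementation
         * would encode the automatic rules; custom implementations could
         * honour manually set flags.
         */
        boolean canArchive(String path);
    }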

A much more important and difficult question to answer IMHO is how to
deal with the slow retrieval of archived content. And if needed, how
to expose the slow availability (i.e. unavailable now but available
later) to the end user (or application layer). To me this sounds
tricky if we want to stick to the JCR API.

Regards
Julian



On Mon, Jul 3, 2017 at 4:33 PM, Tommaso Teofili
 wrote:
> I am sure there are both use cases for automatic vs manual/controlled
> collection of unused data, however if I were a user I would personally not
> want to care about this. While I'd be happy to know that my repo is faster
> / smaller / cleaner / whatever it'd sound overly complex to deal with JCR
> and Oak constraints and behaviours from the application layer.
> IMHO if we want to have such a feature in Oak to save resources, it should
> be the persistence responsibility to say "hey, this content is not being
> accessed for ages, let's try to claim some resources from it" (which could
> mean moving to cold storage, compress it or anything else).
>
> My 2 cents,
> Tommaso
>
>
>
> Il giorno lun 3 lug 2017 alle ore 15:46 Thomas Mueller
>  ha scritto:
>
>> Hi,
>>
>> > a property on the node, e.g. "archiveState=toArchive"
>>
>> I wonder if we _can_ easily write to the version store? Also, some
>> nodetypes don't allow such properties? It might need to be a hidden
>> property, but then you can't use the JCR API. Or maintain this data in a
>> "shadow" structure (not with the nodes), which would complicate move
>> operations.
>>
>> If I was a customer, I wouldn't wan't to *manually* mark / unmark binaries
>> to be moved to / from long time storage. I would probably just want to rely
>> on automatic management. But I'm not a customer, so my opinion is not that
>> relevant (
>>
>> > Using a property directly specified for this purpose gives us more
>> direct control over how it is being used I think.
>>
>> Sure, but it also comes with some complexities.
>>
>> Regards,
>> Thomas
>>
>>
>>
>>


Re: copy on write node store

2017-05-30 Thread Julian Sedding
Slightly off topic: the thought that the copy on read/write indexing
features may need to be explicitly managed in such a setup just
occurred to me.

I.e. when an instance is switched to the copy on write node store, the
local index directory will deviate from the "mainline" node store.
Upon switching the instance back to the "mainline" (i.e. disabling
copy on write node store), the local index directory may need to be
deleted? Or maybe it is already resilient enough to automatically
recover.

Regards
Julian


On Tue, May 30, 2017 at 10:05 AM, Michael Dürig  wrote:
>
>
> On 30.05.17 09:34, Tomek Rekawek wrote:
>>
>> Hello Michael,
>>
>> thanks for the reply!
>>
>>> On 30 May 2017, at 09:18, Michael Dürig  wrote:
>>> AFAIU from your mail and from looking at the patch this is about a node
>>> store implementation that can be rolled back to a previous state.
>>>
>>> If this is the case, a simpler way to achieve this might be to use the
>>> TarMK and and add functionality for rolling it back.
>>
>>
>> Indeed, it would be much simpler. However, the main purpose of the new
>> feature is testing the blue-green Sling deployments. That’s why we need the
>> DocumentMK to support it as well.
>
>
> Ok I see. I think the fact that these classes are not for production use
> should be stated in the Javadoc along with what clarifications of what can
> be expected from the store wrt. interleaving of calls to various mutators
> (e.g. enableCopyOnWrite() / disableCopyOnWrite() / merge(), etc.). I foresee
> a couple of very sneaky race conditions here.
>
> Michael


Re: New Jackrabbit Committer: Robert Munteanu

2017-05-24 Thread Julian Sedding
Welcome Robert! Keep up the good work!

Regards
Julian

On Tue, May 23, 2017 at 1:08 PM, Robert Munteanu  wrote:
> Hi,
>
> On Mon, 2017-05-22 at 13:53 +0200, Michael Dürig wrote:
>> Hi,
>>
>> Please welcome Robert as a new committer and PMC member of the
>> Apache
>> Jackrabbit project. The Jackrabbit PMC recently decided to offer
>> Robert
>> committership based on his contributions. I'm happy to announce that
>> he
>> accepted the offer and that all the related administrative work has
>> now
>> been taken care of.
>>
>> Welcome to the team, Robert!
>
> Thank you for the invitation and for the welcome.
>
> I'm looking forward to continuing my contributions with a new hat on :-
> )
>
> Robert


Re: upgrade repository structure with backward-incompatible changes

2017-05-19 Thread Julian Sedding
Hi Marco

In this case I think you should use the JCR API to implement your
content changes.

I am not aware of a pure JCR toolkit that helps with this, so you may
just need to write something yourself.
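
A minimal shape for such a one-off migration could be the following
(sketch; credentials, paths and the actual transformation are
placeholders):

    Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
    try {
        NodeIterator nodes = session.getRootNode().getNode("content").getNodes();
        while (nodes.hasNext()) {
            Node node = nodes.nextNode();
            // apply the backward-incompatible change here,
            // e.g. rename or convert properties
        }
        session.save();
    } finally {
        session.logout();
    }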

Regards
Julian



On Fri, May 19, 2017 at 5:00 PM, Marco Piovesana <pioves...@esteco.com> wrote:
> Hi Julian,
> I meant I'm using Oak not Sing. Yes I'm using JCR API.
>
> Marco.
>
> On Fri, May 19, 2017 at 2:22 PM, Julian Sedding <jsedd...@gmail.com> wrote:
>
>> Hi Marco
>>
>> On Fri, May 19, 2017 at 2:10 PM, Marco Piovesana <pioves...@esteco.com>
>> wrote:
>> > Hi Julian, Michael and Robert
>> > first of all thanks for the suggestions.
>> > I'm using Oak directly inside my application,
>>
>> Do you mean you are not using the JCR API?
>>
>> > so I guess the Sling Pipes
>> > are not something I can use, or not? Is the concept of Pipe already
>> defined
>> > in some way inside oak?
>>
>> No Oak has no such concept. Sling Pipes is an OSGi bundle that is
>> unrelated to Oak but uses the JCR and Jackrabbit APIs (both are
>> implemented by Oak).
>>
>> Regards
>> Julian
>>
>> >
>> > Marco.
>> >
>> > On Fri, May 19, 2017 at 10:39 AM, Julian Sedding <jsedd...@gmail.com>
>> wrote:
>> >
>> >> Hi Marco
>> >>
>> >> It sounds like you are dealing with a JCR-based application and thus
>> >> you should be using the JCR API (directly or indirectly, e.g. via
>> >> Sling) to change your content.
>> >>
>> >> CommitHook is an Oak internal API that does not enforce any JCR
>> >> semantics. So if you were to go down that route, you would need to be
>> >> very careful not to change the content structure in a way  that
>> >> essentially corrupts JCR semantics.
>> >>
>> >> Regards
>> >> Julian
>> >>
>> >>
>> >> On Tue, May 16, 2017 at 6:33 PM, Marco Piovesana <pioves...@esteco.com>
>> >> wrote:
>> >> > Hi Tomek,
>> >> > yes I'm trying to upgrade within the same repository type but I can
>> >> decide
>> >> > weather to migrate the repository or not based on what makes the
>> upgrade
>> >> > easier.
>> >> > The CommitHooks can only be used inside an upgrade to a new
>> repository?
>> >> > What is the suggested way to apply backward-incompatible changes if i
>> >> don't
>> >> > want to migrate the data from one repository to another but I want to
>> >> apply
>> >> > the modifications to the original one?
>> >> >
>> >> > Marco.
>> >> >
>> >> > On Tue, May 16, 2017 at 4:04 PM, Tomek Rekawek
>> <reka...@adobe.com.invalid
>> >> >
>> >> > wrote:
>> >> >
>> >> >> Hi Marco,
>> >> >>
>> >> >> the main purpose of the oak-upgrade is to migrate a Jackrabbit 2 /
>> CRX2
>> >> >> repository into Oak or to migrate one Oak node store (eg. segment) to
>> >> >> another (like Mongo). On the other hand, it’s not a good choice to
>> use
>> >> it
>> >> >> for the application upgrades within the same repository type. You
>> didn’t
>> >> >> mention if your upgrade involves the repository migration (in this
>> case
>> >> >> choosing oak-upgrade would be justified) or not.
>> >> >>
>> >> >> If you still want to use oak-upgrade, it allows to use custom
>> >> CommitHooks
>> >> >> [1] during the migration. They should be included in the class path
>> with
>> >> >> the ServiceLoader mechanism [2].
>> >> >>
>> >> >> Regards,
>> >> >> Tomek
>> >> >>
>> >> >> [1] http://jackrabbit.apache.org/oak/docs/architecture/
>> >> >> nodestate.html#The_commit_hook_mechanism
>> >> >> [2] https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html
>> >> >>
>> >> >> --
>> >> >> Tomek Rękawek | Adobe Research | www.adobe.com
>> >> >> reka...@adobe.com
>> >> >>
>> >> >> > On 14 May 2017, at 12:20, Marco Piovesana <pioves...@esteco.com>
>> >> wrote:
>> >> >> >
>> >> >> > Hi all,
>> >>

Re: upgrade repository structure with backward-incompatible changes

2017-05-19 Thread Julian Sedding
Hi Marco

On Fri, May 19, 2017 at 2:10 PM, Marco Piovesana <pioves...@esteco.com> wrote:
> Hi Julian, Michael and Robert
> first of all thanks for the suggestions.
> I'm using Oak directly inside my application,

Do you mean you are not using the JCR API?

> so I guess the Sling Pipes
> are not something I can use, or not? Is the concept of Pipe already defined
> in some way inside oak?

No Oak has no such concept. Sling Pipes is an OSGi bundle that is
unrelated to Oak but uses the JCR and Jackrabbit APIs (both are
implemented by Oak).

Regards
Julian

>
> Marco.
>
> On Fri, May 19, 2017 at 10:39 AM, Julian Sedding <jsedd...@gmail.com> wrote:
>
>> Hi Marco
>>
>> It sounds like you are dealing with a JCR-based application and thus
>> you should be using the JCR API (directly or indirectly, e.g. via
>> Sling) to change your content.
>>
>> CommitHook is an Oak internal API that does not enforce any JCR
>> semantics. So if you were to go down that route, you would need to be
>> very careful not to change the content structure in a way  that
>> essentially corrupts JCR semantics.
>>
>> Regards
>> Julian
>>
>>
>> On Tue, May 16, 2017 at 6:33 PM, Marco Piovesana <pioves...@esteco.com>
>> wrote:
>> > Hi Tomek,
>> > yes I'm trying to upgrade within the same repository type but I can
>> decide
>> > whether to migrate the repository or not based on what makes the upgrade
>> > easier.
>> > The CommitHooks can only be used inside an upgrade to a new repository?
>> > What is the suggested way to apply backward-incompatible changes if i
>> don't
>> > want to migrate the data from one repository to another but I want to
>> apply
>> > the modifications to the original one?
>> >
>> > Marco.
>> >
>> > On Tue, May 16, 2017 at 4:04 PM, Tomek Rekawek <reka...@adobe.com.invalid
>> >
>> > wrote:
>> >
>> >> Hi Marco,
>> >>
>> >> the main purpose of the oak-upgrade is to migrate a Jackrabbit 2 / CRX2
>> >> repository into Oak or to migrate one Oak node store (eg. segment) to
>> >> another (like Mongo). On the other hand, it’s not a good choice to use
>> it
>> >> for the application upgrades within the same repository type. You didn’t
>> >> mention if your upgrade involves the repository migration (in this case
>> >> choosing oak-upgrade would be justified) or not.
>> >>
>> >> If you still want to use oak-upgrade, it allows to use custom
>> CommitHooks
>> >> [1] during the migration. They should be included in the class path with
>> >> the ServiceLoader mechanism [2].
>> >>
>> >> Regards,
>> >> Tomek
>> >>
>> >> [1] http://jackrabbit.apache.org/oak/docs/architecture/
>> >> nodestate.html#The_commit_hook_mechanism
>> >> [2] https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html
>> >>
>> >> --
>> >> Tomek Rękawek | Adobe Research | www.adobe.com
>> >> reka...@adobe.com
>> >>
>> >> > On 14 May 2017, at 12:20, Marco Piovesana <pioves...@esteco.com>
>> wrote:
>> >> >
>> >> > Hi all,
>> >> > I'm trying to deal with backward-incompatible changes on my repository
>> >> > structure. I was looking at the oak-upgrade module but, as far as I
>> could
>> >> > understand, I can't really make modifications that require some logic
>> >> (e.g.
>> >> > remove a property and add a new mandatory property with a value based
>> on
>> >> > the removed one).
>> >> > I saw that one of the options might be the "namespace migration":
>> >> > - remap the current namespace to a different prefix;
>> >> > - create a new namespace with original prefix;
>> >> > - port all nodes from old namespace to new namespace applying the
>> >> required
>> >> > modifications.
>> >> >
>> >> > I couldn't find much documentation on the topic, so my question is: is
>> >> this
>> >> > the right way to do it? There are other suggested approaches to the
>> >> > problem? There's already a tool that can be used to define how to map
>> a
>> >> > source CND definition into a destination CND definition and then apply
>> >> the
>> >> > modifications to a repository?
>> >> >
>> >> > Marco.
>> >>
>>


Re: upgrade repository structure with backward-incompatible changes

2017-05-19 Thread Julian Sedding
Hi Marco

It sounds like you are dealing with a JCR-based application and thus
you should be using the JCR API (directly or indirectly, e.g. via
Sling) to change your content.

CommitHook is an Oak internal API that does not enforce any JCR
semantics. So if you were to go down that route, you would need to be
very careful not to change the content structure in a way  that
essentially corrupts JCR semantics.

Regards
Julian


On Tue, May 16, 2017 at 6:33 PM, Marco Piovesana  wrote:
> Hi Tomek,
> yes I'm trying to upgrade within the same repository type but I can decide
> whether to migrate the repository or not based on what makes the upgrade
> easier.
> The CommitHooks can only be used inside an upgrade to a new repository?
> What is the suggested way to apply backward-incompatible changes if i don't
> want to migrate the data from one repository to another but I want to apply
> the modifications to the original one?
>
> Marco.
>
> On Tue, May 16, 2017 at 4:04 PM, Tomek Rekawek 
> wrote:
>
>> Hi Marco,
>>
>> the main purpose of the oak-upgrade is to migrate a Jackrabbit 2 / CRX2
>> repository into Oak or to migrate one Oak node store (eg. segment) to
>> another (like Mongo). On the other hand, it’s not a good choice to use it
>> for the application upgrades within the same repository type. You didn’t
>> mention if your upgrade involves the repository migration (in this case
>> choosing oak-upgrade would be justified) or not.
>>
>> If you still want to use oak-upgrade, it allows to use custom CommitHooks
>> [1] during the migration. They should be included in the class path with
>> the ServiceLoader mechanism [2].
>>
>> Regards,
>> Tomek
>>
>> [1] http://jackrabbit.apache.org/oak/docs/architecture/
>> nodestate.html#The_commit_hook_mechanism
>> [2] https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html
>>
>> --
>> Tomek Rękawek | Adobe Research | www.adobe.com
>> reka...@adobe.com
>>
>> > On 14 May 2017, at 12:20, Marco Piovesana  wrote:
>> >
>> > Hi all,
>> > I'm trying to deal with backward-incompatible changes on my repository
>> > structure. I was looking at the oak-upgrade module but, as far as I could
>> > understand, I can't really make modifications that require some logic
>> (e.g.
>> > remove a property and add a new mandatory property with a value based on
>> > the removed one).
>> > I saw that one of the options might be the "namespace migration":
>> > - remap the current namespace to a different prefix;
>> > - create a new namespace with original prefix;
>> > - port all nodes from old namespace to new namespace applying the
>> required
>> > modifications.
>> >
>> > I couldn't find much documentation on the topic, so my question is: is
>> this
>> > the right way to do it? There are other suggested approaches to the
>> > problem? There's already a tool that can be used to define how to map a
>> > source CND definition into a destination CND definition and then apply
>> the
>> > modifications to a repository?
>> >
>> > Marco.
>>


Re: new name for the multiplexing node store

2017-05-11 Thread Julian Sedding
+1 to CompositeNodeStore

Regards
Julian

On Thu, May 11, 2017 at 10:36 AM, Bertrand Delacretaz
 wrote:
> On Thu, May 11, 2017 at 9:33 AM, Robert Munteanu  wrote:
>> ...MultiplexingNodeStore is a pretty standard implementation
>> of the Composite design pattern...
>
> So CompositeNodeStore maybe? I like it.
>
> -Bertrand


Re: new name for the multiplexing node store

2017-05-05 Thread Julian Sedding
Hi Tomek

In all related discussions the term "mount" appears a lot. So why not
Mounting NodeStore? The module could be "oak-store-mount".

Regards
Julian


On Fri, May 5, 2017 at 1:39 PM, Tomek Rekawek  wrote:
> Hello oak-dev,
>
> the multiplexing node store has been recently extracted from the oak-core 
> into a separate module and I’ve used it as an opportunity to rename the 
> thing. The name I suggested is Federated Node Store. Robert doesn’t agree 
> it’s the right name, mostly because the “partial” node stores, creating the 
> combined (multiplexing / federated) one, are not usable on their own and 
> stores only a part of the overall repository content.
>
> Our arguments in their full lengths can be found in the OAK-6136 (last 3-4 
> comments), so there’s no need to repeat them here. We wanted to ask you for 
> opinion about the name. We kind of agree that the “multiplexing” is not the 
> best choice - can you suggest something else or maybe you think that 
> “federated” is good enough?
>
> Thanks for the feedback.
>
> Regards,
> Tomek
>
> --
> Tomek Rękawek | Adobe Research | www.adobe.com
> reka...@adobe.com
>


Re: oak-run: Enforcing size

2017-04-28 Thread Julian Sedding
I also think that the build should not produce different artifacts
depending on a profile.

If the jar file gets too big when embedding the JDBC driver, we may
want to consider producing two build artifacts: the jar file without
RDB support and another one (e.g. with classifier "rdb") that embeds
the drivers.

Regards
Julian



On Fri, Apr 28, 2017 at 1:15 PM, Davide Giannella  wrote:
> On 26/04/2017 09:32, Julian Reschke wrote:
>> On 2017-04-26 10:28, Davide Giannella wrote:
>>
>>> a release we're not triggering any specific profile.
>>
>> Well, in that case we're not triggering the profile, right?
>
> Exactly. Therefore the released oak-run never embedded any jdbc so far.
> Anyone correct me if I'm wrong.
>
>>
>>> Regardless, the fastest solution is to increase the size according to
>>> what you see. However is this a new dependency you're adding as of new
>>> features?
>>
>> No, it always has been the case.
>>
>> However, if you select all RDB profiles you'll include essentially all
>> JDBC drivers, in which case maintaining the limit becomes pretty
>> pointless...
>
> I'd say you could change the size for the RDB profiles only (adding the
> enforcer size under the profiles) or simply increase the general size.
>
> It seems strange to me that we're not embedding the jdbc dependencies
> for the released jar. Maybe we want to change that and simplify.
>
> How big is the generated jar for the RDB profiles?
>
> D.


Re: [ops] Unify NodeStore/DataStore configurations by using nstab

2017-04-28 Thread Julian Sedding
Hi Arek

I agree that we could benefit from a way to bootstrap a repository
from a single configuration file.

Regarding the format you suggest, I am sceptical that it is suitable
to cover all (required) complexities of setting up a repository.
Consider that besides the persistence, there are various security
components, initial content providers etc that (may) need to be
considered.

I suggest you create a POC in a separate Maven module. That's probably
the best way to find out whether your suggested configuration language
suits the requirements of setting up an Oak repository.

Regarding the implementation, I assume you should be able to get quite
far with just using the classes Oak and Jcr. They should also give an
impression of the configuration options you may want to cover.
Furthermore, you would need a way to map some class names to
short-hand names (e.g. Segment, File etc from your examples). I'd
start with a hard-coded Map or a Properties file. Once the POC is done
and we want to integrate it, we can consider replacing this registry
mechanism.
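For the POC, something along the following lines might already be enough. It
is only a sketch: the nstab line format and the short-hand registry are
assumptions taken from the examples in this thread, not an existing Oak API
(and closing the FileStore on shutdown is omitted for brevity):

    import java.io.File;
    import javax.jcr.Repository;
    import org.apache.jackrabbit.oak.Oak;
    import org.apache.jackrabbit.oak.jcr.Jcr;
    import org.apache.jackrabbit.oak.plugins.memory.MemoryNodeStore;
    import org.apache.jackrabbit.oak.segment.SegmentNodeStoreBuilders;
    import org.apache.jackrabbit.oak.segment.file.FileStoreBuilder;
    import org.apache.jackrabbit.oak.spi.state.NodeStore;

    public class NsTabBootstrap {

        public static Repository bootstrap(String nstabLine) throws Exception {
            // e.g. "Segment /path/to/segmentstore" or "Memory -"
            String[] parts = nstabLine.trim().split("\\s+");
            NodeStore store = createNodeStore(parts[0], parts[1]);
            return new Jcr(new Oak(store)).createRepository();
        }

        // hard-coded registry mapping short-hand names to implementations
        private static NodeStore createNodeStore(String type, String path) throws Exception {
            switch (type) {
                case "Memory":
                    return new MemoryNodeStore();
                case "Segment":
                    return SegmentNodeStoreBuilders.builder(
                            FileStoreBuilder.fileStoreBuilder(new File(path)).build()).build();
                default:
                    throw new IllegalArgumentException("Unknown node store type: " + type);
            }
        }
    }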

Regards
Julian


On Fri, Apr 28, 2017 at 12:56 PM, Arek Kita  wrote:
> Hi,
>
> I've noticed recently that with many different NodeStore
> implementations (Segment, Document, Multiplexing) but also DataStore
> implementations (File, S3, Azure) and some composite ones like
> (Hierarchical, Federated - that was already mentioned in [0]) it
> becomes more and more difficult to set up everything correctly and be
> able to know the current persistence state of repository (especially
> with pretty aged repos).
>
> Moreover, the configuration pattern that is based on individual PID of
> one service becomes problematic (i.e. recent change for
> SegmentNodeStoreService).
>
> From the operations and user perspective everything should be treated
> as a whole IMHO no matter which service handles which fragment of
> persistence layout. Oak should know itself how to "autowire" different
> parts, obviously with some hints and pointers from users as they want
> to run Oak in their own preferred layout.
>
> My proposal would be to integrate everything together to a pretty old
> concept called "fstab". For our purposes I would call it "nstab".
>
> This could look like [1] for the most simple case (with internal
> blobs), [2] for typical SegmentMK + FDS, [3] for SegmentMK + S3DS, [4]
> for MultiplexingNodeStore with some areas of repo set as read only. I
> think we could also model Hierarchical and Federated DataStores as
> well in the future.
>
> Examples are for illustration purposes but I guess such setup will
> help changing layout without a need to inspect many OSGi
> configurations in a current setup and making sure some conflicting
> ones aren't active.
>
> The schema is also similar to an UNIX-way of configuring filesystem so
> it will help Oak users to understand the layout (at least better than
> it is now). I see also advantage for automated tooling like
> oak-upgrade for complex cases in the future - user just provides
> source nstab and target nstab in order to migrate repository.
>
> The config should be also simpler avoiding things like customBlobStore
> (it will be inferred from context).
>
> WDYT? I have some thoughts how could this be implemented but first I
> would like to know your opinions on that.
>
> Thanks in advance for feedback!
> Arek
>
>
> [0] http://oak.markmail.org/thread/22dvuo6b7ab5ib7m
> [1] 
> https://gist.githubusercontent.com/kitarek/f755dab6e889d1dfc5a1c595727f0171/raw/53d41ac7f935886783afd6c85d60e38e565a9259/nstab.1
> [2] 
> https://gist.githubusercontent.com/kitarek/f755dab6e889d1dfc5a1c595727f0171/raw/53d41ac7f935886783afd6c85d60e38e565a9259/nstab.2
> [3] 
> https://gist.githubusercontent.com/kitarek/f755dab6e889d1dfc5a1c595727f0171/raw/53d41ac7f935886783afd6c85d60e38e565a9259/nstab.3
> [4] 
> https://gist.githubusercontent.com/kitarek/f755dab6e889d1dfc5a1c595727f0171/raw/53d41ac7f935886783afd6c85d60e38e565a9259/nstab.4


Re: [m12n] Location of InitialContent

2017-04-20 Thread Julian Sedding
Hi Angela

From the features you describe it sounds like it should go into
org.apache.jackrabbit.oak.jcr (or at least most of it). It looks like
it is being used in lots of tests in oak-core, however, so this may
just be wishful thinking...

Regards
Julian


On Thu, Apr 20, 2017 at 10:07 AM, Angela Schreiber  wrote:
> hi
>
> the original intention of the 'InitialContent' was just to register
> built-in JCR node types, which explains its location in
> org.apache.jackrabbit.oak.plugins.nodetype.write
>
> in the meantime it has evolved into a container for all kinds of initial
> content required for a JCR repository: mandatory structure, version
> storage, uuid-index and most recently document ns specific configuration
> (see also OAK-5656 ).
>
> to me the location in the org.apache.jackrabbit.oak.plugins.nodetype.write
> package no longer makes sense and i would suggest to move it to the
> org.apache.jackrabbit.oak package along with the Oak, OakInitializer,
> OakVersion and other utilities used to create an JCR/Oak repository.
>
> wdyt?
>
> kind regards
> angela
>
>


Re: [DISCUSS] Which I/O statistics should the FileStore expose?

2017-02-13 Thread Julian Sedding
Hi Francesco

I believe you should implement an IOMonitor using the metrics in the
org.apache.jackrabbit.oak.stats package. These can be backed by
swappable StatisticsProvider implementations. I believe by default
it's a NOOP implementation. However, I believe that if the
MetricStatisticsProvider implementation is used, it automatically
exposes the metrics via JMX. So all you need to do is feed the correct
data into a suitable metric. I believe Chetan contributed these, so he
will know more about the details.
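To make the idea more concrete, a rough sketch of how such an implementation
could feed the metrics. The callback signature and the metric names are
simplified assumptions; the point is that the callbacks only update
MeterStats/TimerStats instances obtained from a StatisticsProvider, which
(when backed by MetricStatisticsProvider) are exposed via JMX automatically:

    import java.util.concurrent.TimeUnit;
    import org.apache.jackrabbit.oak.stats.MeterStats;
    import org.apache.jackrabbit.oak.stats.StatisticsProvider;
    import org.apache.jackrabbit.oak.stats.StatsOptions;
    import org.apache.jackrabbit.oak.stats.TimerStats;

    public class MetricsIOMonitor {

        private final MeterStats segmentReadBytes;
        private final TimerStats segmentReadTime;

        public MetricsIOMonitor(StatisticsProvider statisticsProvider) {
            this.segmentReadBytes = statisticsProvider.getMeter(
                    "oak.segment.segment-read-bytes", StatsOptions.METRICS_ONLY);
            this.segmentReadTime = statisticsProvider.getTimer(
                    "oak.segment.segment-read-time", StatsOptions.METRICS_ONLY);
        }

        // simplified callback: invoked after a segment has been read
        public void afterSegmentRead(int length, long elapsedNanos) {
            segmentReadBytes.mark(length);
            segmentReadTime.update(elapsedNanos, TimeUnit.NANOSECONDS);
        }
    }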

Regards
Julian


On Mon, Feb 13, 2017 at 6:21 PM, Francesco Mari
 wrote:
> Hi all,
>
> The recently introduced IOMonitor allows the FileStore to trigger I/O
> events. Callback methods from IOMonitor can be implemented to receive
> information about segment reads and writes.
>
> A trivial implementation of IOMonitor is able to track the following raw data.
>
> - The number of segments read and write operations.
> - The duration in nanoseconds of every read and write.
> - The number of bytes read or written by each operation.
>
> We are about to expose this kind of information from an MBean - for
> the sake of discussion, let's call it IOMonitorMBean. I'm currently in
> favour of starting small and exposing the following statistics:
>
> - The duration of the latest write (long).
> - The duration of the latest read (long).
> - The number of write operations (long).
> - The number of read operations (long).
>
> I would like your opinion about what's the most useful way to present
> this data through an MBean. Should just raw data be exposed? Is it
> appropriate for IOMonitorMBean to perform some kind of aggregation,
> like sum and average? Should richer data be returned from the MBean,
> like tabular data?
>
> Please keep in mind that this data is supposed to be consumed by a
> monitoring solution, and not a by human reader.


Re: Slowness while multiple uploads

2017-01-10 Thread Julian Sedding
If I read the code correctly, you only logout the session in a catch
block. Assuming the code is otherwise sane, that would indicate a
session leak, because most sessions are not logged out. That could
also explain the gradual slowdown over two days.

You have to always logout JCR sessions that you create via
repository.login(...). As Clay mentioned, you should use a
try..finally construct with logout in the finally block.
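For reference, a minimal sketch of that pattern (class, method and credentials
are placeholders, not taken from your code):

    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    public class SessionHandling {

        static void storeContent(Repository repository, String workspace) throws Exception {
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()), workspace);
            try {
                // ... create nodes, upload the binary, session.save() ...
            } finally {
                session.logout(); // always runs, even if an exception is thrown
            }
        }
    }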

Regards
Julian

On Tue, Jan 10, 2017 at 6:18 AM, Clay Ferguson  wrote:
> * First determine if the time is being spent in the streaming, or the JCR
> saving, so you know what problem to solve
>
> * Use System.currentMillis() or whatever to capture the begin+end times of
> different sections, and subtract them to get the time. Then keep
> a running total of those times. Then at end log out the total times, to see
> what part is slow (low tech profiler!)
>
> * Close your stream in a finally block.
>
> * User Buffered stream (input stream)
>
> * Don't save 1000s of times into the same session if you can avoid it.
> Create new sessions every 100th time or so and close out the previous
> session to be sure
> it isn't draining resources.
>
> * Print amount of free memory after calling GC(), like every 100th file, to
> see if it's leaking, running low. (again low-tech profiler!)
>
>
> Best regards,
> Clay Ferguson
> wcl...@gmail.com
>
>
> On Mon, Jan 9, 2017 at 12:03 PM, ravindar.singh
>  wrote:
>>
>> Please check the below code. correct me if any thing wrong.
>>
>> After restarting the server it is quite normal, then 1 day later it is
>> taking
>> 2 min to upload the file.
>>
>> private void contentStroe(UploadParameters repContent, List
>> configParams,
>> String workspace, String table) throws Exception{
>> Session repSession = null;
>> Repository repository = null;
>> try{
>> String path = repContent.getModule();
>> Map nodeProps = repContent.getParams();
>> for(ProjectProp prop: configParams){
>> if("1".equals(prop.getMppFolderYn())){
>> Object value =
>> nodeProps.get(prop.getMppParameterName());
>> if(value!=null)
>> path += "/" + value.toString();
>> }
>> }
>> logger.info("Path => "+path);
>> Calendar cal = Calendar.getInstance();
>> DateFormat dateFormat = new SimpleDateFormat("/MM/dd
>> HH:mm:ss");
>> repository = getRepository();
>> repSession = repository.login(new SimpleCredentials("admin",
>> "admin".toCharArray()), workspace);
>> Node folderNode = repSession.getRootNode();
>> String[] docPath = path.split("/");
>> long docSize = 0;
>> String docExtn = "", docVersion = "";
>> for(String nodes : docPath){
>> if (folderNode.hasNode(nodes)) {
>> folderNode = folderNode.getNode(nodes);
>> } else {
>> boolean versioned = isVersioned(folderNode);
>> if(versioned)
>> folderNode.checkout();
>> Node subFolderNode = folderNode.addNode(nodes);
>> subFolderNode.addMixin("mix:referenceable");
>> subFolderNode.addMixin("mix:versionable");
>> subFolderNode.setProperty("Created",
>> dateFormat.format(cal.getTime()));
>> subFolderNode.setProperty("CreatedBy",
>> repContent.getUpdUser());
>> repSession.save();
>> if(versioned)
>> folderNode.checkin();
>> subFolderNode.checkin();
>> folderNode = folderNode.getNode(nodes);
>> }
>> }
>>
>> if(repContent.getUpdFile()!=null){
>> String name = repContent.getUpdFileName();
>> docExtn = name.substring(name.lastIndexOf(".")+1);
>> }
>> repContent.setDocName(repContent.getDocName()+"."+docExtn);
>> logger.info("File Store
>> Path=>"+path+"/"+repContent.getDocName());
>> if (folderNode.hasNode(repContent.getDocName())) {
>> boolean versioned = isVersioned(folderNode);
>> if(versioned)
>> folderNode.checkout();
>> Node fileNode =
>> folderNode.getNode(repContent.getDocName());
>> boolean fileversioned = isVersioned(fileNode);
>> if(fileversioned)
>> fileNode.checkout();
>> fileNode.setProperty("lastModified",
>> dateFormat.format(cal.getTime()));
>> fileNode.setProperty("UpdateBy", repContent.getUpdUser());
>> docSize = addRepoContents(repSession, fileNode,
>> repContent,
>> configParams);
>>   

Re: New Jackrabbit Committer: Andrei Dulceanu

2016-12-19 Thread Julian Sedding
Congratulations Andrei & welcome to the team!

Regards
Julian

On Mon, Dec 19, 2016 at 9:35 AM, Michael Dürig  wrote:
> Hi,
>
> Please welcome Andrei as a new committer and PMC member of the Apache
> Jackrabbit project. The Jackrabbit PMC recently decided to offer Andrei
> committership based on his contributions. I'm happy to announce that he
> accepted the offer and that all the related administrative work has now been
> taken care of.
>
> Welcome to the team, Andrei!
>
> Michael


Clarifiing Blob#getReference and BlobStore#getReference

2016-12-09 Thread Julian Sedding
Hi all

I was wondering if Blob#getReference could be used in
AbstractBlob#equal to optimize blob comparison (OAK-5253).
Specifically whether blobA.getReference() != blobB.getReference()
(pseudocode) allows us to determine that the blobs are not equal.

However, the API docs[0,1] only state that they return a "secure
reference" to the Blob. They do not explain what "safe" is supposed to
mean in this context.
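In code, the optimization I have in mind would look roughly like the sketch
below. Whether the shortcut is actually valid is exactly the open question of
this mail, so this is illustrative only (names are made up):

    import org.apache.jackrabbit.oak.api.Blob;

    final class BlobEquality {

        static boolean maybeEqual(Blob a, Blob b) {
            String refA = a.getReference();
            String refB = b.getReference();
            if (refA != null && refB != null && !refA.equals(refB)) {
                return false; // distinct references => (presumably) distinct content
            }
            // otherwise fall back to the existing full comparison (length + stream compare)
            return true;
        }
    }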

Thanks for your insights!

Regards
Julian

[0] 
http://static.javadoc.io/org.apache.jackrabbit/oak-core/1.5.14/org/apache/jackrabbit/oak/api/Blob.html#getReference()
[1] 
http://static.javadoc.io/org.apache.jackrabbit/oak-blob/1.5.14/org/apache/jackrabbit/oak/spi/blob/BlobStore.html#getReference(java.lang.String)


Re: Is Lucene CopyOnRead/CopyOnWrite beneficial with SegmentNodeStore?

2016-11-21 Thread Julian Sedding
Thanks Chetan, that's very helpful.

Regards
Julian

On Mon, Nov 21, 2016 at 3:47 PM, Chetan Mehrotra
<chetan.mehro...@gmail.com> wrote:
> In general its better that you use a BlobStore even with
> SegmenNodeStore. In that case CoR and CoW allows using Lucene's memory
> mapped FSDirectory support providing better performance.
>
> Some old numbers can be seen at [1]
>
> Chetan Mehrotra
> [1] 
> https://issues.apache.org/jira/browse/OAK-1702?focusedCommentId=13965551=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13965551
>
>
> On Mon, Nov 21, 2016 at 8:03 PM, Julian Sedding <jsedd...@apache.org> wrote:
>> Hi all
>>
>> Do we have experience or measurements to suggest that using Lucene's
>> CopyOnRead and CopyOnWrite features is beneficial when the
>> SegmentNodeStore is used?
>>
>> The documentation indirectly suggests that CopyOnRead is only
>> beneficial with remote NodeStores (i.e. DocumentNodeStore). So the
>> question is whether the SegmentNodeStore implementation allows
>> sufficiently fast access to Lucene's index files, or whether the
>> associated overhead still makes CopyOnRead beneficial.
>>
>> Thanks for any insights!
>>
>> Regards
>> Julian


Is Lucene CopyOnRead/CopyOnWrite beneficial with SegmentNodeStore?

2016-11-21 Thread Julian Sedding
Hi all

Do we have experience or measurements to suggest that using Lucene's
CopyOnRead and CopyOnWrite features is beneficial when the
SegmentNodeStore is used?

The documentation indirectly suggests that CopyOnRead is only
beneficial with remote NodeStores (i.e. DocumentNodeStore). So the
question is whether the SegmentNodeStore implementation allows
sufficiently fast access to Lucene's index files, or whether the
associated overhead still makes CopyOnRead beneficial.

Thanks for any insights!

Regards
Julian


Re: [VOTE] Release Apache Jackrabbit Oak 1.4.10

2016-11-11 Thread Julian Sedding
+1 Release this package as Apache Jackrabbit Oak 1.4.10

Regards
Julian

On Fri, Nov 11, 2016 at 6:38 AM, Amit Jain  wrote:
> On Thu, Nov 10, 2016 at 7:23 PM, Davide Giannella  wrote:
>
>> Please vote on releasing this package as Apache Jackrabbit Oak 1.4.10.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Jackrabbit PMC votes are cast.
>>
>
> +1 Release this package as Apache Jackrabbit Oak 1.4.10
>
> Thanks
> Amit


Re: [VOTE] Release Apache Jackrabbit Oak 1.5.13

2016-11-11 Thread Julian Sedding
+1 Release this package as Apache Jackrabbit Oak 1.5.13

Regards
Julian

On Wed, Nov 9, 2016 at 10:29 AM, Alex Parvulescu
 wrote:
> [X] +1 Release this package as Apache Jackrabbit Oak 1.5.13
>
> On Tue, Nov 8, 2016 at 4:33 PM, Davide Giannella  wrote:
>
>>
>> A candidate for the Jackrabbit Oak 1.5.13 release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.5.13/
>>
>> The release candidate is a zip archive of the sources in:
>>
>>
>> https://svn.apache.org/repos/asf/jackrabbit/oak/tags/
>> jackrabbit-oak-1.5.13/
>>
>> The SHA1 checksum of the archive is
>> c023a1924941e1609abf82b4e63a8617276e6091.
>>
>> A staged Maven repository is available for review at:
>>
>> https://repository.apache.org/
>>
>> The command for running automated checks against this release candidate is:
>>
>> $ sh check-release.sh oak 1.5.13
>> c023a1924941e1609abf82b4e63a8617276e6091
>>
>> Please vote on releasing this package as Apache Jackrabbit Oak 1.5.13.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Jackrabbit PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Jackrabbit Oak 1.5.13
>> [ ] -1 Do not release this package because...
>>
>> Davide
>>
>>
>>


Re: svn commit: r1767830 - in /jackrabbit/oak/trunk/oak-upgrade/src: main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java test/java/org/apache/jackrabbit/oak/upgrade/Auth

2016-11-07 Thread Julian Sedding
Sorry, my bad. Thanks Chetan!

On Thu, Nov 3, 2016 at 8:58 AM,   wrote:
> Author: chetanm
> Date: Thu Nov  3 07:58:35 2016
> New Revision: 1767830
>
> URL: http://svn.apache.org/viewvc?rev=1767830=rev
> Log:
> OAK-5043: Very old JR2 repositories may have invalid nodetypes for groupsPath 
> and usersPath
>
> Add missing license header
>
> Modified:
> 
> jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java
> 
> jackrabbit/oak/trunk/oak-upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/AuthorizableFolderEditorTest.java
>
> Modified: 
> jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java?rev=1767830=1767829=1767830=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java
>  (original)
> +++ 
> jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/security/AuthorizableFolderEditor.java
>  Thu Nov  3 07:58:35 2016
> @@ -1,3 +1,19 @@
> +/*
> + * Licensed to the Apache Software Foundation (ASF) under one or more
> + * contributor license agreements.  See the NOTICE file distributed with
> + * this work for additional information regarding copyright ownership.
> + * The ASF licenses this file to You under the Apache License, Version 2.0
> + * (the "License"); you may not use this file except in compliance with
> + * the License.  You may obtain a copy of the License at
> + *
> + *  http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
>  package org.apache.jackrabbit.oak.upgrade.security;
>
>  import org.apache.jackrabbit.oak.api.CommitFailedException;
>
> Modified: 
> jackrabbit/oak/trunk/oak-upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/AuthorizableFolderEditorTest.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/AuthorizableFolderEditorTest.java?rev=1767830=1767829=1767830=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/AuthorizableFolderEditorTest.java
>  (original)
> +++ 
> jackrabbit/oak/trunk/oak-upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/AuthorizableFolderEditorTest.java
>  Thu Nov  3 07:58:35 2016
> @@ -1,3 +1,19 @@
> +/*
> + * Licensed to the Apache Software Foundation (ASF) under one or more
> + * contributor license agreements.  See the NOTICE file distributed with
> + * this work for additional information regarding copyright ownership.
> + * The ASF licenses this file to You under the Apache License, Version 2.0
> + * (the "License"); you may not use this file except in compliance with
> + * the License.  You may obtain a copy of the License at
> + *
> + *  http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
>  package org.apache.jackrabbit.oak.upgrade;
>
>  import org.apache.jackrabbit.JcrConstants;
>
>


[jira] [Commented] (JCRVLT-138) Unzip test-packages for easier maintenance

2016-10-31 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCRVLT-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15621925#comment-15621925
 ] 

Julian Sedding commented on JCRVLT-138:
---

Thanks for the quick fix! Looks good to me and should make tests much more 
accessible.

> Unzip test-packages for easier maintenance
> --
>
> Key: JCRVLT-138
> URL: https://issues.apache.org/jira/browse/JCRVLT-138
> Project: Jackrabbit FileVault
>  Issue Type: Improvement
>  Components: vlt
>Affects Versions: 3.1.30
>    Reporter: Julian Sedding
>Priority: Minor
> Fix For: 3.1.32
>
>
> As discussed in JCRVLT-111 it would be easier for maintenance of tests, and 
> more accessible, if the content-packages used in tests were exploded.
> This can be done relatively easily, as shown in an [example 
> project|https://github.com/code-distillery/filevault-oak-reindex-hook/blob/master/src/test/java/net/distilledcode/tools/InstallHookTestUtils.java#L39].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: svn commit: r1767213 [1/4] - in /jackrabbit/commons/filevault/trunk: parent/ vault-core/src/main/java/org/apache/jackrabbit/vault/packaging/impl/ vault-core/src/test/java/org/apache/jackrabbit/vau

2016-10-31 Thread Julian Sedding
Hi Tobi

Thanks for the change.

Note: some of the files from the extracted packages have Adobe headers
and others are missing the Apache license headers.

Regards
Julian


On Mon, Oct 31, 2016 at 2:55 AM,   wrote:
> Author: tripod
> Date: Mon Oct 31 01:55:37 2016
> New Revision: 1767213
>
> URL: http://svn.apache.org/viewvc?rev=1767213=rev
> Log:
> JCRVLT-138 Unzip test-packages for easier maintenance
>
> Added:
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/config.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/definition/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/definition/.content.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/filter.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/nodetypes.cnd
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/META-INF/vault/properties.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/.content.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/node_a/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/node_a/.content.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/secured/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/secured/.content.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_a.zip/jcr_root/testroot/secured/_rep_policy.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/config.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/definition/
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/definition/.content.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/filter.xml
> 
> jackrabbit/commons/filevault/trunk/vault-core/src/test/resources/org/apache/jackrabbit/vault/packaging/integration/testpackages/mode_ac_test_b.zip/META-INF/vault/nodetypes.cnd
> 
> 

Re: segment-tar depending on oak-core

2016-10-31 Thread Julian Sedding
Hi all

My preference is also with a higher degree of modularity. Compared to
a monolithic application it is a trade-off that leads to both, higher
complexity and higher flexibility. Provided we are willing to change
and learn, I am sure we can easily manage the complexity. Numerous
benefits of the extra flexibility have been mentioned in this thread
before, so I won't repeat them.

As I understand it the Oak package structure was designed to
facilitate modularity very early on. As Jukka wrote back in 2012:

"[...] Ultimately such extra plugin components may well end up as
separate Maven components, but until the related service interfaces
and plugin boundaries are well defined it's better to keep all such
code together and simply use Java package boundaries to separate them.
That's the rationale behind the .oak.plugins package [...]"[0].

IMHO, now that the API boundaries are well defined (I hope), it would
be great to finally move the structure of the code-base and release
artifacts towards a more modular approach.

Regards
Julian

[0] http://markmail.org/thread/cs34a637dr26xscj


On Fri, Oct 28, 2016 at 8:29 AM, Francesco Mari
 wrote:
> Hi
>
> 2016-10-27 19:08 GMT+02:00 Alexander Klimetschek :
>> Maybe looking at this step by step would help.
>
> The oak-segment-tar bundle was supposed to be the first step.
>
>>For example, start with the nodestore implementations and extract everything 
>>into separate modules that is necessary for this - i.e. an oak-store-api 
>>along with the impls. But keep other apis in oak-core in that first step, to 
>>limit the effort. (And try not renaming the API packages, as well as keeping 
>>them backwards compatible, i.e. no major version bump, if possible).
>
> This didn't happen because of lack of consensus. See my previous
> answer to Michael Marth.
>
>>See how that works out and if positive, continue with more.
>
> The reaction to the modularization effort was not positive, so
> oak-segment-tar backed up.
>
>>
>> Cheers,
>> Alex
>>
>> On 27 Oct 2016, 03:48 -0700, Francesco Mari wrote:
>> Something did happen: the first NodeStore implementation living in its
>> own module was oak-segment-tar. We just decided to go back to the old
>> model exactly because we didn't reach consensus about modularizing its
>> upstream and downstream dependencies.
>>
>> 2016-10-27 12:22 GMT+02:00 Michael Marth :
>> fwiw: last year a concrete proposal was made that seemed to have consensus
>>
>> “Move NodeStore implementations into their own modules"
>> http://markmail.org/message/6ylxk4twdi2lzfdz
>>
>> Agree that nothing happened - but I believe that this move might again find 
>> consenus
>>
>>
>>
>> On 27/10/16 10:49, "Francesco Mari"  wrote:
>>
>> We keep having this conversation regularly but nothing ever changes.
>> As much as I would like to push the modularization effort forward, I
>> recognize that the majority of the team is either not in favour or
>> openly against it. I don't want to disrupt the way most of us are used
>> to work. Michael Dürig already provided an extensive list of what we
>> will be missing if we keep writing software the way we do, so I'm not
>> going to repeat it. The most sensible thing to do is, in my humble
>> opinion, accept the decision of the majority.
>>
>> 2016-10-27 11:05 GMT+02:00 Davide Giannella :
>> On 27/10/2016 08:53, Michael Dürig wrote:
>>
>> +1.
>>
>> It would also help re. backporting, continuous integration, releasing,
>> testing, longevity, code reuse, maintainability, reducing technical
>> debt, deploying, stability, etc, etc...
>>
>> While I can agree on the above, and the fact that now we have
>> https://issues.apache.org/jira/browse/OAK-5007 in place, just for the
>> sake of argument I would say that if we want to have any part of Oak
>> with an independent release cycle we need to
>>
>> Have proper API packages that abstract things. Specially from oak-core
>>
>> As soon as we introduce a separate release cycle for a single module we
>> have to look at a wider picture. What other modules are affected?
>>
>> Taking the example of segment-tar we saw that we need
>>
>> - oak-core-api (name can be changed)
>> - independent releases of the oak tools: oak-run, oak-upgrade, ...
>> - independent release cycle for parent/pom.xml
>> - anything I'm missing?
>>
>> So if we want to go down that route than we have to do it properly and
>> for good. Not half-way.
>>
>> Davide
>>
>>


[jira] [Comment Edited] (JCRVLT-111) Add support for o.a.j.api.security.authorization.PrincipalSetPolicy

2016-10-29 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCRVLT-111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617748#comment-15617748
 ] 

Julian Sedding edited comment on JCRVLT-111 at 10/29/16 8:56 AM:
-

[~tripod] if you look at the implementation of TestVaultPackage it's 
exceedingly simple: it exposes a protected constructor accepting an {{Archive}} 
that is already present in ZipVaultPackage. Zipping the package on the fly 
probably requires more code and makes the tests slower. We could even consider 
providing a public constructor or utility to make testing in downstream 
projects easier.

I created JCRVLT-138 to track this improvement. Let's discuss over there in 
order not to further side-track the discussion in this ticket.


was (Author: jsedding):
[~tripod] if you look at the implementation of TestVaultPackage it's 
exceedingly simple: it exposes a protected constructor accepting an {{Archive}} 
that is already present in ZipVaultPackage. Zipping the package on the fly 
probably requires more code and makes the tests slower. We could even consider 
providing a public constructor or utility to make testing in downstream 
projects easier.

> Add support for o.a.j.api.security.authorization.PrincipalSetPolicy
> ---
>
> Key: JCRVLT-111
> URL: https://issues.apache.org/jira/browse/JCRVLT-111
> Project: Jackrabbit FileVault
>  Issue Type: New Feature
>Reporter: angela
>Assignee: Tobias Bocanegra
> Fix For: 3.1.30
>
> Attachments: JCRVLT-111.patch
>
>
> jackrabbit API has been extended by an additional type of access control 
> policy, which isn't an ACL. fvault should be adjusted to be able to properly 
> import that type of access control policy.
> as discussed: ac-handling {{MERGE}} and {{MERGE_PRESERVE}} should be 
> implemented the same way and just add extra principal names that are not yet 
> present in the set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (JCRVLT-138) Unzip test-packages for easier maintenance

2016-10-29 Thread Julian Sedding (JIRA)
Julian Sedding created JCRVLT-138:
-

 Summary: Unzip test-packages for easier maintenance
 Key: JCRVLT-138
 URL: https://issues.apache.org/jira/browse/JCRVLT-138
 Project: Jackrabbit FileVault
  Issue Type: Improvement
  Components: vlt
Affects Versions: 3.1.30
Reporter: Julian Sedding
Priority: Minor


As discussed in JCRVLT-111 it would be easier for maintenance of tests, and 
more accessible, if the content-packages used in tests were exploded.

This can be done relatively easily, as shown in an [example 
project|https://github.com/code-distillery/filevault-oak-reindex-hook/blob/master/src/test/java/net/distilledcode/tools/InstallHookTestUtils.java#L39].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (JCRVLT-111) Add support for o.a.j.api.security.authorization.PrincipalSetPolicy

2016-10-29 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCRVLT-111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617748#comment-15617748
 ] 

Julian Sedding commented on JCRVLT-111:
---

[~tripod] if you look at the implementation of TestVaultPackage it's 
exceedingly simple: it exposes a protected constructor accepting an {{Archive}} 
that is already present in ZipVaultPackage. Zipping the package on the fly 
probably requires more code and makes the tests slower. We could even consider 
providing a public constructor or utility to make testing in downstream 
projects easier.

> Add support for o.a.j.api.security.authorization.PrincipalSetPolicy
> ---
>
> Key: JCRVLT-111
> URL: https://issues.apache.org/jira/browse/JCRVLT-111
> Project: Jackrabbit FileVault
>  Issue Type: New Feature
>Reporter: angela
>Assignee: Tobias Bocanegra
> Fix For: 3.1.30
>
> Attachments: JCRVLT-111.patch
>
>
> jackrabbit API has been extended by an additional type of access control 
> policy, which isn't an ACL. fvault should be adjusted to be able to properly 
> import that type of access control policy.
> as discussed: ac-handling {{MERGE}} and {{MERGE_PRESERVE}} should be 
> implemented the same way and just add extra principal names that are not yet 
> present in the set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: svn commit: r1765583 - in /jackrabbit/oak/trunk: oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/ oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ oak-cor

2016-10-21 Thread Julian Sedding
Thanks Chetan!

On Fri, Oct 21, 2016 at 2:59 PM, Chetan Mehrotra
<chetan.mehro...@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 6:08 PM, Julian Sedding <jsedd...@gmail.com> wrote:
>> I think we could get away with increasing this to 4.1.0 if we can
>> annotate QueryEngineSettingsMBean with @ProviderType.
>
> Makes sense. Opened OAK-4977 for that
>
> Chetan Mehrotra


Re: segment-tar depending on oak-core

2016-10-21 Thread Julian Sedding
> All of this is my understanding and I may be wrong, so please correct me
> if I'm wrong. If I'm right, could adding an oak-core-api with independent
> lifecycle solve the situation?

While this may be possible, an arguably simpler solution would be to
give oak-run and oak-upgrade a separate lifecycle. They are consumers
of both segment-tar and oak-core (+ other bundles with same release
cycle). Hence they require interoperable releases of both *before*
they themselves can be released.

The other alternative, as Thomas mentioned, is to release everything
at once, including segment-tar.

Regards
Julian


On Fri, Oct 21, 2016 at 12:46 PM, Davide Giannella  wrote:
> Hello team,
>
> while integrating Oak with segment-tar in other products, I'm facing
> quite a struggle with a sort-of circular dependencies. We have
> segment-tar that depends on oak-core and then we have tools like oak-run
> or oak-upgrade which depends on both oak-core and segment-tar.
>
> this may not be an issue but in case of changes in the API, like for
> 1.5.12 we have the following situation. 1.5.12 has been released with
> segment-tar 0.0.14 but this mix doesn't actually work on OSGi
> environment as of API changes. On the other hand, in order to release
> 0.0.16 we need oak-core 1.5.12 with the changes.
>
> Now oak-run and other tools may fail, or at least be in an unknown
> situation.
>
> All of this is my understanding and I may be wrong, so please correct me
> if I'm wrong. If I'm right, could adding an oak-core-api with independent
> lifecycle solve the situation?
>
> Davide
>
>


Re: [REVIEW] Configuration required for node bundling config for DocumentNodeStore - OAK-1312

2016-10-21 Thread Julian Sedding
+1 for initializing the default config unconditionally

Regards
Julian

On Fri, Oct 21, 2016 at 12:14 PM, Chetan Mehrotra
 wrote:
> Opened OAK-4975 for query around default config handling.
> Chetan Mehrotra
>
>
> On Fri, Oct 21, 2016 at 2:14 PM, Davide Giannella  wrote:
>> On 21/10/2016 08:23, Michael Marth wrote:
>>> Hi Chetan,
>>>
>>> Re “Should we ship with a default config”:
>>>
>>> I vote for a small default config:
>>> - default because: if the feature is always-on in trunk we will get better 
>>> insights in day-to-day work (as opposed to switching it on only 
>>> occasionally)
>>> - small because: the optimal bundling is probably very specific to the 
>>> application and its read-write patterns. Your suggestion to include nt:file 
>>> (and maybe rep:AccessControllable) looks reasonable to me, though.
>>>
>> +1 but I would not do it that DocumentNS has to actively register it. I
>> would have a plain RepositoryInitialiser always on beside the
>> InitialContent. So that it's clear it's somewhat different. In the end
>> as far as I understood it doesn't matter if we're running segment, tar
>> or Document. The config will affect only Document.
>>
>> Davide
>>
>>


Re: svn commit: r1765583 - in /jackrabbit/oak/trunk: oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/ oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ oak-cor

2016-10-20 Thread Julian Sedding
> -@Version("4.0.0")
> +@Version("5.0.0")
>  @Export(optional = "provide:=true")
>  package org.apache.jackrabbit.oak.api.jmx;

I think we could get away with increasing this to 4.1.0 if we can
annotate QueryEngineSettingsMBean with @ProviderType. I.e. we don't
expect API consumers to  implement QueryEngineSettingsMBean and
therefore the API change is irrelevant for them.
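A sketch of what the suggested change could look like (whether the project
uses the bnd or the OSGi variant of the annotation is an implementation
detail, and only the newly added methods from this commit are shown):

    package org.apache.jackrabbit.oak.api.jmx;

    import org.osgi.annotation.versioning.ProviderType;

    @ProviderType
    public interface QueryEngineSettingsMBean {
        // ... existing methods omitted ...

        boolean getFailTraversal();

        void setFailTraversal(boolean failTraversal);
    }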

WDYT?

Regards
Julian



On Wed, Oct 19, 2016 at 2:20 PM,   wrote:
> Author: thomasm
> Date: Wed Oct 19 12:20:56 2016
> New Revision: 1765583
>
> URL: http://svn.apache.org/viewvc?rev=1765583=rev
> Log:
> OAK-4888 Warn or fail queries above a configurable cost value
>
> Added:
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/QueryOptions.java
> Modified:
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/QueryEngineSettingsMBean.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/package-info.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ContentMirrorStoreStrategy.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/Query.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/QueryEngineSettings.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/QueryEngineSettingsMBeanImpl.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/QueryImpl.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/SQL2Parser.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/UnionQueryImpl.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/xpath/Statement.java
> 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/query/xpath/XPathToSQL2Converter.java
> 
> jackrabbit/oak/trunk/oak-core/src/test/java/org/apache/jackrabbit/oak/query/SQL2ParserTest.java
> 
> jackrabbit/oak/trunk/oak-core/src/test/java/org/apache/jackrabbit/oak/query/XPathTest.java
> 
> jackrabbit/oak/trunk/oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/query/QueryTest.java
>
> Modified: 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/QueryEngineSettingsMBean.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/QueryEngineSettingsMBean.java?rev=1765583=1765582=1765583=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/QueryEngineSettingsMBean.java
>  (original)
> +++ 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/QueryEngineSettingsMBean.java
>  Wed Oct 19 12:20:56 2016
> @@ -51,4 +51,19 @@ public interface QueryEngineSettingsMBea
>   */
>  void setLimitReads(long limitReads);
>
> +/**
> + * Whether queries that don't use an index will fail (throw an 
> exception).
> + * The default is false.
> + *
> + * @return true if they fail
> + */
> +boolean getFailTraversal();
> +
> +/**
> + * Set whether queries that don't use an index will fail (throw an 
> exception).
> + *
> + * @param failTraversal the new value for this setting
> + */
> +void setFailTraversal(boolean failTraversal);
> +
>  }
>
> Modified: 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/package-info.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/package-info.java?rev=1765583=1765582=1765583=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/package-info.java
>  (original)
> +++ 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/api/jmx/package-info.java
>  Wed Oct 19 12:20:56 2016
> @@ -15,7 +15,7 @@
>   * limitations under the License.
>   */
>
> -@Version("4.0.0")
> +@Version("5.0.0")
>  @Export(optional = "provide:=true")
>  package org.apache.jackrabbit.oak.api.jmx;
>
>
> Modified: 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ContentMirrorStoreStrategy.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ContentMirrorStoreStrategy.java?rev=1765583=1765582=1765583=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/property/strategy/ContentMirrorStoreStrategy.java
>  (original)
> +++ 
> 

Re: Possibility of making nt:resource unreferenceable

2016-10-12 Thread Julian Sedding
On Wed, Oct 12, 2016 at 11:24 AM, Bertrand Delacretaz
<bdelacre...@apache.org> wrote:
> On Wed, Oct 12, 2016 at 11:18 AM, Julian Sedding <jsedd...@gmail.com> wrote:
>> ...As a remedy for implementations that rely on the current referencable
>> nature, we could provide tooling that automatically adds the
>> "mix:referencable" mixin to existing nt:resource nodes...
>
> Good idea, I suppose this can be done with a commit hook in a non-intrusive 
> way?

For JR2 content being upgraded to Oak (or during an Oak to Oak
"sidegrade"), i.e. in the oak-upgrade module, it would be easy to add
this functionality via a commit hook.

For an existing Oak repository the same functionality could be
implemented on the JCR API and a full repo traversal, I suppose. If we
can get past the node-type validation. Alternatively we could come up
with an extension SPI/API that allows plugging in an implementation
for specific non-trivial node-type updates. This would even allow for
two alternative implementations: one that adds mix:referenceable and
another that removes the jcr:uuid property - so JCR users could choose
which strategy they prefer.
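For the "existing repository" variant, a rough sketch of the full traversal
using only the JCR API could look like the following (batching of saves and
error handling omitted for brevity; this assumes the node-type change has
already been applied so nt:resource nodes are no longer referenceable by
default):

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.Session;

    public class AddReferenceableMixin {

        public static void run(Session session) throws Exception {
            traverse(session.getRootNode());
            session.save();
        }

        private static void traverse(Node node) throws Exception {
            if (node.isNodeType("nt:resource") && !node.isNodeType("mix:referenceable")) {
                node.addMixin("mix:referenceable");
            }
            NodeIterator children = node.getNodes();
            while (children.hasNext()) {
                traverse(children.nextNode());
            }
        }
    }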

Regards
Julian


>
> -Bertrand


Re: Possibility of making nt:resource unreferenceable

2016-10-12 Thread Julian Sedding
I'm with Julian R. on this (as I understand him). We should change the
node-type nt:resource to match the JCR 2.0 spec and deal with the
consequences.

Currently I am under the impression that we have no knowledge of what
*might* break, with varying opinions on the matter. Maybe we should
find out what *does* break.

As a remedy for implementations that rely on the current referenceable
nature, we could provide tooling that automatically adds the
"mix:referenceable" mixin to existing nt:resource nodes and recommend
adapting the code to add the mixin as well.

Regards
Julian


On Wed, Oct 12, 2016 at 11:04 AM, Carsten Ziegeler  wrote:
> The latest proposal was not about making nt:resource unreferenceable,
> but silently changing the resource type for a nt:resource child node of
> a nt:file node to Oak:Resource.
>
> I just found three other places in Sling where nt:file nodes are created
> by hand. So with any other mechanism we have to change a lot of places
> in Sling alone. Not to mention all downstream users.
>
> Carsten
>
> Thomas Mueller wrote
>> Hi,
>>
>> I agree with Julian, I think making nt:resource unreferenceable would
>> (hardcoding some "magic" in Oak) would lead to hard-to-find bugs and
>> problems.
>>
>>> So whatever solution we pick, there is a risk that existing code fails.
>>
>> Yes. But I think if we create a new nodetype, at least it would be easier
>> for users to understand the problem.
>>
>> Also, the "upgrade path" with a new nodetype is smoother. This can be done
>> incrementally, even thought it might mean more total work. But making
>> nt:resource unreferenceable would be a hard break, and I think risk of
>> bigger problems is higher.
>>
>> Regards,
>> Thomas
>>
>>
>>
>> On 07/10/16 12:05, "Julian Reschke"  wrote:
>>
>>> On 2016-10-07 10:56, Carsten Ziegeler wrote:
 Julian Reschke wrote
> On 2016-10-07 08:04, Carsten Ziegeler wrote:
>> ...
>> The easiest solution that comes to my mind is:
>>
>> Whenever a nt:resource child node of a nt:file node is created, it is
>> silently changed to oak:resource.
>>
>> Carsten
>> ...
>
> Observation: that might break code that actually wants a referenceable
> node: it would create the node, check for the presence of
> mix:referenceable, and then decide not to add it because it's already
> there.
>

 Well, there might be code that assumes that a file uploaded through
 webdav is using a resource child node that is referenceable.
 Or a file posted through the Sling POST servlet has this. Now, you could
 argue if that code did not create the file, it should check node types,
 but how likely is that if the code has history?

 So whatever solution we pick, there is a risk that existing code fails.
 ...
>>>
>>> That is true..
>>>
>>> However, my preference would be to only break code which is
>>> non-conforming right now. Code should not rely on nt:resource being
>>> referenceable (see
>>> >> ml#3.7.11.5%20nt:resource>).
>>>
>>> So my preference would be to make that change and see what breaks (and
>>> get that fixed).
>>>
 ...
>>>
>>>
>>> Best regards, Julian
>>
>>
>
>
>
>
> --
> Carsten Ziegeler
> Adobe Research Switzerland
> cziege...@apache.org
>


Re: Datastore GC only possible after Tar Compaction

2016-10-05 Thread Julian Sedding
Thanks Amit for your insights.

Is it documented that DS GC is ineffective if no prior tar compaction
is performed? IMHO we should make this as clear as possible, because
the behaviour deviates from JR2 and thus has the potential to throw
lots of users. Possibly even mention it as a possible reason in the
log message if DS GC was ineffective.

Would it be possible to improve the heuristic without traversing the
node tree? I.e. do the segment tar files contain sufficient
information in their indexes to safely determine that some binary
references are dead? I'm looking for no false positives but possibly
many false negatives.

Regards
Julian


On Mon, Oct 3, 2016 at 10:37 AM, Amit Jain <am...@ieee.org> wrote:
> Hi,
>
> On Mon, Oct 3, 2016 at 1:29 PM, Julian Sedding <jsedd...@apache.org> wrote:
>
>> I just became aware that on a system configured with SegmentNodeStore
>> and FileDatastore a Datastore garbage collection can only free up
>> space *after* a Tar Compaction was run.
>>
>>
> Yes that is a pre-requisite.
>
>
>> I would like to discuss whether it is desirable to require a Tar
>> Compaction prior to a DS GC. If someone knows about the rationale
>> behind this behaviour, I would also appreciate these insights!
>>
>> The alternative behaviour, which I would have expected, is to collect
>> only binaries that are referenced from the root NodeState or any of
>> the checkpoint's root NodeStates (i.e. "live" NodeStates).
>>
>> From an implementation perspective, I assume that the current
>> behaviour can be implemented with better performance than a solution
>> that checks only "live" NodeStates. However, IMHO that should not be
>> the only relevant factor in the discussion.
>>
>
> I believe the performance impact of loading all nodes to check whether the
> node has a binary property
> is quite high. What you are referring to was how it is implemented in
> Jackrabbit and
> the reference collection phase took days on larger repositories. But with
> the NodeStore specific implementation for
> blob reference collection this phase takes only a few hours. For example
> there is also an enhancement already implemented in oak-segment-tar
> to have an index of binary references (OAK-4201).
>
> Thanks
> Amit


Datastore GC only possible after Tar Compaction

2016-10-03 Thread Julian Sedding
Hi all

I just became aware that on a system configured with SegmentNodeStore
and FileDatastore a Datastore garbage collection can only free up
space *after* a Tar Compaction was run.

This behaviour is not immediately intuitive to me.

I would like to discuss whether it is desirable to require a Tar
Compaction prior to a DS GC. If someone knows about the rationale
behind this behaviour, I would also appreciate these insights!

The alternative behaviour, which I would have expected, is to collect
only binaries that are referenced from the root NodeState or any of
the checkpoint's root NodeStates (i.e. "live" NodeStates).
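
For illustration only, a naive sketch of that "live NodeStates" collection
using Oak's NodeState SPI (the class around it is made up; a full traversal
like this is of course exactly the part that can get expensive on large
repositories):

```
import java.util.HashSet;
import java.util.Set;

import org.apache.jackrabbit.oak.api.Blob;
import org.apache.jackrabbit.oak.api.PropertyState;
import org.apache.jackrabbit.oak.api.Type;
import org.apache.jackrabbit.oak.spi.state.ChildNodeEntry;
import org.apache.jackrabbit.oak.spi.state.NodeState;

// Hypothetical "mark" phase: collect blob references reachable from the
// head root state and all checkpoint root states.
public class LiveBlobReferenceCollector {

    public Set<String> collect(Iterable<NodeState> liveRoots) {
        Set<String> references = new HashSet<String>();
        for (NodeState root : liveRoots) {
            visit(root, references);
        }
        return references;
    }

    private void visit(NodeState node, Set<String> references) {
        for (PropertyState property : node.getProperties()) {
            if (property.getType() == Type.BINARY) {
                addReference(property.getValue(Type.BINARY), references);
            } else if (property.getType() == Type.BINARIES) {
                for (Blob blob : property.getValue(Type.BINARIES)) {
                    addReference(blob, references);
                }
            }
        }
        for (ChildNodeEntry child : node.getChildNodeEntries()) {
            visit(child.getNodeState(), references);
        }
    }

    private void addReference(Blob blob, Set<String> references) {
        String reference = blob.getReference(); // null for inlined binaries
        if (reference != null) {
            references.add(reference);
        }
    }
}
```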

From an implementation perspective, I assume that the current
behaviour can be implemented with better performance than a solution
that checks only "live" NodeStates. However, IMHO that should not be
the only relevant factor in the discussion.

I'm looking forward to your feedback!

Regards
Julian


Re: [VOTE] Require JDK7 for Oak 1.4

2016-09-19 Thread Julian Sedding
+1

Regards
Julian

On Mon, Sep 19, 2016 at 11:22 AM, Michael Dürig  wrote:
>
>
> On 16.9.16 5:16 , Julian Reschke wrote:
>>
>> [X] +1 Yes, require JDK7 for Oak 1.4
>
>
> Michael


Re: Requirement to support multiple NodeStore instance in same setup (OAK-4490)

2016-06-21 Thread Julian Sedding
Hi Chetan

I agree that we should not rely on the service.ranking for this. A
type property makes sense IMO.

On the other hand, do we really need to expose both NodeStores in the
service registry? The secondary (cache) NodeStore could also be
treated as an implementation detail of the DocumentNodeStore and
switched on/off via configuration. Of course the devil is in the
detail then - how to configure different BlobStores, cache sizes etc
of the secondary NodeStore?

Not exposing the secondary NodeStore in the service registry would be
backwards compatible. Introducing the "type" property potentially
breaks existing consumers, i.e. is not backwards compatible.
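
To make the "type" property idea concrete, a hedged sketch (property name,
values and component names are assumptions from this thread, not a settled
API):

```
import java.util.Dictionary;
import java.util.Hashtable;

import org.apache.jackrabbit.oak.spi.state.NodeStore;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Provider side: register the NodeStore with a marker property.
public class NodeStoreRegistrationSketch {

    ServiceRegistration<NodeStore> register(BundleContext context, NodeStore store, String type) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("type", type); // "primary" or "secondary"
        return context.registerService(NodeStore.class, store, props);
    }
}

// Consumer side: bind only to the primary instance via a reference target filter.
@Component
class PrimaryNodeStoreConsumer {

    @Reference(target = "(type=primary)")
    private NodeStore nodeStore;
}
```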

Regards
Julian


On Tue, Jun 21, 2016 at 9:03 AM, Chetan Mehrotra
 wrote:
> Hi Team,
>
> As part of OAK-4180 feature around using another NodeStore as a local
> cache for a remote Document store I would need to register another
> NodeStore instance (for now a SegmentNodeStore - OAK-4490) with the
> OSGi service registry.
>
> This instance would then be used by SecondaryStoreCacheService to save
> NodeState under certain paths locally and use it later for reads.
>
> With this change we would have a situation where there would be
> multiple NodeStore instance in same service registry. This can confuse
> some component which have a dependency on NodeStore as a reference and
> we need to ensure they bind to correct NodeStore instance.
>
> Proposal A - Use a 'type' service property to distinguish
> ==
>
> Register the NodeStore with a 'type' property. For now the value can
> be 'primary' or 'secondary'. When any component registers the
> NodeStore it also provides the type property.
>
> On user side the reference needs to provide which type of NodeStore it
> needs to bind to
>
> This would ensure that user of NodeStore get bound to correct type.
>
> if we use service.ranking then it can cause a race condition where the
> secondary instance may get bound until primary comes up
>
> Looking for feedback on what approach to take
>
> Chetan Mehrotra


Re: [VOTE] Release Apache Jackrabbit 2.10.3

2016-05-09 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit 2.10.3

Regards
Julian

On Mon, May 9, 2016 at 10:06 AM, Amit Jain  wrote:
> A candidate for the Jackrabbit 2.10.3 release is available at:
>
> https://dist.apache.org/repos/dist/dev/jackrabbit/2.10.3/
>
> The release candidate is a zip archive of the sources in:
>
> https://svn.apache.org/repos/asf/jackrabbit/tags/2.10.3/
>
> The SHA1 checksum of the archive is
> c253cd03e2e39010f28d7d66eaeabc5ffe2c1975.
>
> A staged Maven repository is available for review at:
>
> https://repository.apache.org/
>
> The command for running automated checks against this release candidate is:
>
> $ sh check-release.sh 2.10.3 c253cd03e2e39010f28d7d66eaeabc5ffe2c1975
>
> Please vote on releasing this package as Apache Jackrabbit 2.10.3.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Jackrabbit PMC votes are cast.
>
> [ ] +1 Release this package as Apache Jackrabbit 2.10.3
> [ ] -1 Do not release this package because...
>
> Thanks
> Amit


Re: Duplicate logic in oak-run commands

2016-05-05 Thread Julian Sedding
Hi Francesco

+1 for centralizing logic for creating a NodeStore instance.

I like the idea of encoding the description of a NodeStore instance in
a URI. This is both concise and extensible. We need to also consider
how to express the use of different DataStores as well. I.e. the URI
should ideally describe a complete setup.
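
To illustrate the direction (the scheme and the query parameter below are
invented for this example, not the syntax proposed in OAK-4349), a complete
setup could be captured in a single URI and taken apart with plain
java.net.URI:

```
import java.net.URI;

public class StoreUriExample {

    public static void main(String[] args) {
        // hypothetical syntax: node store type + path, blob store as a query parameter
        URI uri = URI.create("segment:/path/to/segmentstore?blobstore=file:///path/to/datastore");
        System.out.println("node store type: " + uri.getScheme()); // segment
        System.out.println("node store path: " + uri.getPath());   // /path/to/segmentstore
        System.out.println("blob store:      " + uri.getQuery());  // blobstore=file:///path/to/datastore
    }
}
```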

Regards
Julian





On Thu, May 5, 2016 at 10:42 AM, Francesco Mari
 wrote:
> Hi all,
>
> While looking into OAK-4246 I figured out that many commands in oak-run
> implement the same logic over and over to create instance of NodeStore from
> command line arguments and options.
>
> Thus, I created OAK-4349 to propose another approach to the problem. A
> connection to a specific NodeStore could be specified by using an URI and
> the logic to create NodeStore instances could be implemented in a single
> place and reused from every command.
>
> I proposed some examples in OAK-4349. I'm looking forward to hearing what
> you think about this suggestion.


[jira] [Commented] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-28 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262151#comment-15262151
 ] 

Julian Sedding commented on JCR-3971:
-

The following system property is supported:
{noformat}
org.apache.jackrabbit.core.security.authorization.acl.CompiledPermissionsImpl.cacheSize
 (default: 5000)
{noformat}

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-28 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15262148#comment-15262148
 ] 

Julian Sedding commented on JCR-3972:
-

The following two system properties are supported:
{noformat}
org.apache.jackrabbit.core.CachingHierarchyManager.cacheSize (default: 1)
org.apache.jackrabbit.core.CachingHierarchyManager.logInterval (default: 6)
{noformat}

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Please vote for the final name of oak-segment-next

2016-04-26 Thread Julian Sedding
Hi

+1 for oak-segment-file or oak-segment-tar.

+0 for oak-segment-store. We *may* implement another segment-based
persistence later, in which case having the persistence strategy in
the name sounds like a good idea to me.

Similarly, a later refactoring of the document store could lead to
oak-document-mongo and oak-document-rdb (plus possibly
oak-document-spi for shared stuff).

Regards
Julian


On Tue, Apr 26, 2016 at 2:00 PM, Thomas Mueller  wrote:
> Hi,
>
> I would keep the "oak-segment-*" name, so that it's clear what it is based
> on. So:
>
> -1 oak-local-store
> -1 oak-embedded-store
>
> +1 oak-segment-*
>
> Within the oak-segment-* options, I don't have a preference.
>
> Regards,
> Thomas
>
>
> On 25/04/16 16:46, "Michael Dürig"  wrote:
>
>>
>>Hi,
>>
>>There is a couple of names that came up in the discussion [1]:
>>
>>oak-local-store
>>oak-segment-file
>>oak-embedded-store
>>oak-segment-store
>>oak-segment-tar
>>oak-segment-next
>>
>>Please vote which of the above six options you would like to see as the
>>final name for oak-segment-next [2]:
>>
>>Put +1 next to those names that you favour, put -1 to veto names and
>>remove the remaining names. Please justify any veto as otherwise it is
>>non binding.
>>
>>The name with the most +1 votes and without any -1 vote will be chosen.
>>
>>The vote is open for the next 72 hours.
>>
>>Michael
>>
>>
>>[1] http://markmail.org/thread/ktk7szjxtucpqd2o
>>[2] https://issues.apache.org/jira/browse/OAK-4245
>


[jira] [Updated] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3971:

Fix Version/s: 2.10.3
   2.8.2

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved JCR-3971.
-
Resolution: Fixed

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding resolved JCR-3972.
-
Resolution: Fixed

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256235#comment-15256235
 ] 

Julian Sedding edited comment on JCR-3971 at 4/25/16 12:23 PM:
---

Fixed in trunk [r1740814|https://svn.apache.org/r1740814]. Original patch from 
[~baedke].

Merged to branches/2.10 in [r1740826|https://svn.apache.org/r1740826].
Merged to branches/2.8 in [r1740829|https://svn.apache.org/r1740829].


was (Author: jsedding):
Fixed in trunk [r1740814|https://svn.apache.org/r1740814]. Original patch from 
[~baedke].

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3972:

Fix Version/s: 2.10.3
   2.8.2

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.8.2, 2.10.3, 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256236#comment-15256236
 ] 

Julian Sedding edited comment on JCR-3972 at 4/25/16 12:23 PM:
---

Fixed in trunk [r1740815|https://svn.apache.org/r1740815]. Original patch from 
[~baedke].

Merged to branches/2.10 in [r1740828|https://svn.apache.org/r1740828].
Merged to branches/2.8 in [r1740831|https://svn.apache.org/r1740831].


was (Author: jsedding):
Fixed in trunk [r1740815|https://svn.apache.org/r1740815]. Original patch from 
[~baedke].

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3971:

Fix Version/s: 2.12.2

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3971:

Affects Version/s: 2.8.1
   2.10.2

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3972:

Fix Version/s: 2.12.2

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated JCR-3972:

Affects Version/s: 2.8.1
   2.10.2

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.8.1, 2.10.2, 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
> Fix For: 2.12.2
>
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256235#comment-15256235
 ] 

Julian Sedding commented on JCR-3971:
-

Fixed in trunk [r1740814|https://svn.apache.org/r1740814]. Original patch from 
[~baedke].

> Make read-permission cache-size in CompiledPermissionsImpl configurable
> ---
>
> Key: JCR-3971
> URL: https://issues.apache.org/jira/browse/JCR-3971
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
>
> Some use-cases require a larger read-permission cache size than the 
> hard-coded 5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256236#comment-15256236
 ] 

Julian Sedding commented on JCR-3972:
-

Fixed in trunk [r1740815|https://svn.apache.org/r1740815]. Original patch from 
[~baedke].

> Make size of ID-cache in CachingHierarchyManager configurable
> -
>
> Key: JCR-3972
> URL: https://issues.apache.org/jira/browse/JCR-3972
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.12.1
>    Reporter: Julian Sedding
>    Assignee: Julian Sedding
>Priority: Minor
>
> Some use-cases require a larger ID cache to perform well than the hard-coded 
> 1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (JCR-3972) Make size of ID-cache in CachingHierarchyManager configurable

2016-04-25 Thread Julian Sedding (JIRA)
Julian Sedding created JCR-3972:
---

 Summary: Make size of ID-cache in CachingHierarchyManager 
configurable
 Key: JCR-3972
 URL: https://issues.apache.org/jira/browse/JCR-3972
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: core
Affects Versions: 2.12.1
Reporter: Julian Sedding
Assignee: Julian Sedding
Priority: Minor


Some use-cases require a larger ID cache to perform well than the hard-coded 
1. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (JCR-3971) Make read-permission cache-size in CompiledPermissionsImpl configurable

2016-04-25 Thread Julian Sedding (JIRA)
Julian Sedding created JCR-3971:
---

 Summary: Make read-permission cache-size in 
CompiledPermissionsImpl configurable
 Key: JCR-3971
 URL: https://issues.apache.org/jira/browse/JCR-3971
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: core
Affects Versions: 2.12.1
Reporter: Julian Sedding
Assignee: Julian Sedding
Priority: Minor


Some use-cases require a larger read-permission cache size than the hard-coded 
5000. This should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Cannot create JIRA issue for JCR project

2016-04-23 Thread Julian Sedding
Thanks a lot Jukka, that did the trick!

Regards
Julian

On Fri, Apr 22, 2016 at 9:33 PM, Jukka Zitting <jukka.zitt...@gmail.com> wrote:
> I added Julian's account to the PMC role of the JCR project in Jira.
>
> Best,
>
> Jukka
>
> On Fri, Apr 22, 2016 at 2:24 PM Robert Munteanu <romb...@apache.org> wrote:
>>
>> On Fri, 2016-04-22 at 15:27 +0200, Julian Sedding wrote:
>> > Hi all
>> >
>> > I can currently not log a JIRA issue for the JCR project. It is not
>> > listed in the Create dialog's project dropdown. Other projects, e.g.
>> > Jackrabbit Oak, Sling etc. are listed.
>> >
>> > Any ideas? Could it be related to the recent infra changes on JIRA?
>> > Should I contact infra?
>> >
>> > Thanks
>> > Julian
>>
>> The timing is certainly suspicious. An admin of the JCR Jira project
>> should check that you have the right role. Any of 'Administrator, PMC,
>> Committer, Contributor and Developer' should work.
>>
>> Robert


Cannot create JIRA issue for JCR project

2016-04-22 Thread Julian Sedding
Hi all

I can currently not log a JIRA issue for the JCR project. It is not
listed in the Create dialog's project dropdown. Other projects, e.g.
Jackrabbit Oak, Sling etc. are listed.

Any ideas? Could it be related to the recent infra changes on JIRA?
Should I contact infra?

Thanks
Julian


Re: Deprecation of 2.2.x plan

2016-04-20 Thread Julian Sedding
+1

Regards
Julian

On Wed, Apr 20, 2016 at 7:01 AM, KÖLL Claus  wrote:
> +1
>
> greets
> claus
>
> -Ursprüngliche Nachricht-
> Von: Davide Giannella [mailto:dav...@apache.org]
> Gesendet: Dienstag, 19. April 2016 15:57
> An: dev
> Betreff: Deprecation of 2.2.x plan
>
> Good afternoon team,
>
> we've not been touching our 2.2.x branch of Jackrabbit since 2012 and I
> feel it's now safe to drop the support.
>
> What it means in actual actions:
>
> - link will be removed from the download page
> - news will be posted on the homepage
> - [announce] will be sent to jr-user, jr-dev, oak-dev
> - branch and tags WILL stay there
>
> I will act on this somewhere next week.
>
> Any concern speak out.
>
> Regards
> Davide
>
>


Re: Increase language level to Java 7

2016-04-07 Thread Julian Sedding
We could enforce java6 signatures for the branches using the
animal-sniffer-maven-plugin. This should help detect bogus backports
quickly.

Regards
Julian

On Thu, Apr 7, 2016 at 10:57 AM, Francesco Mari
 wrote:
> Language features would be available for new, backport-free developments.
> Existing code doesn't have to use those features if they would be an issue
> during backports.
>
> 2016-04-07 10:25 GMT+02:00 Davide Giannella :
>
>> On 06/04/2016 15:25, Francesco Mari wrote:
>> > I was talking about trunk, of course. Developers working in areas where
>> > backports are the norm have to carefully consider if and when using Java
>> 7
>> > language features would be appropriate. New portions of the codebase
>> could
>> > use of the new features freely.
>> >
>>
>> We were discussing this on chat. Generally I'd say +1 for trunk but we
>> risk to introduce problems for backports.
>>
>> Davide
>>
>>
>>


Re: [VOTE] Release Apache Jackrabbit Oak 1.5.0

2016-03-30 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit Oak 1.5.0

Regards
Julian

On Tue, Mar 29, 2016 at 3:07 PM, Alex Parvulescu
 wrote:
> [X] +1 Release this package as Apache Jackrabbit Oak 1.5.0
>
> best,
> alex
>
> On Tue, Mar 29, 2016 at 10:57 AM, Amit Jain  wrote:
>
>> A candidate for the Jackrabbit Oak 1.5.0 release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.5.0/
>>
>> The release candidate is a zip archive of the sources in:
>>
>>
>> https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.5.0/
>>
>> The SHA1 checksum of the archive is
>> 1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08.
>>
>> A staged Maven repository is available for review at:
>>
>> https://repository.apache.org/
>>
>> The command for running automated checks against this release candidate is:
>>
>> $ sh check-release.sh oak 1.5.0
>> 1c4b3a95c8788a80129c1b7efb7dc38f4d19bd08
>>
>> Please vote on releasing this package as Apache Jackrabbit Oak 1.5.0.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Jackrabbit PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Jackrabbit Oak 1.5.0
>> [ ] -1 Do not release this package because...
>>
>> My vote is +1.
>>
>> Thanks
>> Amit
>>


Re: [VOTE] Release Apache Jackrabbit Oak 1.4.1

2016-03-30 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit Oak 1.4.1

Regards
Julian

On Wed, Mar 30, 2016 at 1:52 PM, Julian Reschke  wrote:
> On 2016-03-24 15:32, Davide Giannella wrote:
>>
>> ...
>
>
> [X] +1 Release this package as Apache Jackrabbit Oak 1.4.1
>
> Best regards, Julian
>


Re: New Jackrabbit committer: Tomek Rękawek

2016-03-23 Thread Julian Sedding
Congratulations Tomek!

Regards
Julian

On Tue, Mar 22, 2016 at 12:05 PM, Manfred Baedke
 wrote:
> Welcome, Tomek!
>
> Manfred
>
>
> On 3/21/2016 6:21 PM, Michael Dürig wrote:
>>
>> Hi,
>>
>> Please welcome Tomek as a new committer and PMC member of the Apache
>> Jackrabbit project. The Jackrabbit PMC recently decided to offer Tomek
>> committership based on his contributions. I'm happy to announce that he
>> accepted the offer and that all the related administrative work has now been
>> taken care of.
>>
>> Welcome to the team, Tomek!
>>
>> Michael
>
>


Re: [VOTE] Release Apache Jackrabbit Oak 1.4.0 (take 3)

2016-03-07 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit Oak 1.4.0

Regards
Julian

On Mon, Mar 7, 2016 at 11:51 AM, Davide Giannella  wrote:
> A candidate for the Jackrabbit Oak 1.4.0 release is available at:
>
> https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.4.0/
>
> The release candidate is a zip archive of the sources in:
>
>
> https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.4.0/
>
> The SHA1 checksum of the archive is
> 483493eacea4c64a6a568982058996d745ad4e18.
>
> A staged Maven repository is available for review at:
>
> https://repository.apache.org/
>
> The command for running automated checks against this release candidate is:
>
> $ sh check-release.sh oak 1.4.0 483493eacea4c64a6a568982058996d745ad4e18
>
> Please vote on releasing this package as Apache Jackrabbit Oak 1.4.0.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Jackrabbit PMC votes are cast.
>
> [ ] +1 Release this package as Apache Jackrabbit Oak 1.4.0
> [ ] -1 Do not release this package because...
>
> Davide
>


Re: svn commit: r1733315 - /jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt

2016-03-03 Thread Julian Sedding
Nitpick:

> +Changes in Oak 1.2.0
IMHO that should be "Changes in Oak 1.4.0"

This probably doesn't warrant a re-release, but if there *is* a
re-release due to OAK-4085[0] it would be nice to correct it.

Regards
Julian

[0] https://issues.apache.org/jira/browse/OAK-4085

On Wed, Mar 2, 2016 at 4:45 PM,   wrote:
> Author: davide
> Date: Wed Mar  2 15:45:23 2016
> New Revision: 1733315
>
> URL: http://svn.apache.org/viewvc?rev=1733315=rev
> Log:
> OAK-4073 - Release Oak 1.4.0
>
> release notes
>
>
> Modified:
> jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt
>
> Modified: jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt?rev=1733315=1733314=1733315=diff
> ==
> --- jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt (original)
> +++ jackrabbit/oak/branches/1.4/RELEASE-NOTES.txt Wed Mar  2 15:45:23 2016
> @@ -1,4 +1,4 @@
> -Release Notes -- Apache Jackrabbit Oak -- Version 1.3.16
> +Release Notes -- Apache Jackrabbit Oak -- Version 1.4.0
>
>  Introduction
>  
> @@ -7,32 +7,809 @@ Jackrabbit Oak is a scalable, high-perfo
>  repository designed for use as the foundation of modern world-class
>  web sites and other demanding content applications.
>
> -Apache Jackrabbit Oak 1.3.16 is an unstable release cut directly from
> -Jackrabbit Oak trunk, with a focus on new features and other
> -improvements. For production use we recommend the latest stable 1.2.x
> -release.
> +Jackrabbit Oak 1.4 is an incremental feature release based on and
> +compatible with earlier stable Jackrabbit Oak 1.x releases. Jackrabbit
> +Oak 1.4.x releases are considered stable and targeted for production
> +use.
>
>  The Oak effort is a part of the Apache Jackrabbit project.
>  Apache Jackrabbit is a project of the Apache Software Foundation.
>
> -Changes in Oak 1.3.16
> +Changes in Oak 1.2.0
>  -
>
>  Sub-task
>
> +[OAK-318] - Excerpt support
> +[OAK-1708] - extend DocumentNodeStoreService to support
> +RDBPersistence
> +[OAK-1828] - Improved SegmentWriter
> +[OAK-1860] - unit tests for concurrent DocumentStore access
> +[OAK-1940] - memory cache for RDB persistence
> +[OAK-2008] - authorization setup for closed user groups
> +[OAK-2171] - oak-run should support repository upgrades with all
> +available options
> +[OAK-2410] - [sonar]Some statements not being closed in
> +RDBDocumentStore
> +[OAK-2502] - Provide initial implementation of the Remote
> +Operations specification
> +[OAK-2509] - Support for faceted search in query engine
> +[OAK-2510] - Support for faceted search in Solr index
> +[OAK-2511] - Support for faceted search in Lucene index
> +[OAK-2512] - ACL filtering for faceted search
> +[OAK-2630] - Cleanup Oak jobs on buildbot
> +[OAK-2634] - QueryEngine should expose name query as property
> +restriction
> +[OAK-2700] - Cleanup usages of mk-api
> +[OAK-2701] - Move oak-mk-api to attic
> +[OAK-2702] - Move oak-mk to attic
> +[OAK-2747] - Admin cannot create versions on a locked page by
> +itself
> +[OAK-2756] - Move mk-package of oak-commons to attic
> +[OAK-2760] - HttpServer in Oak creates multiple instance of
> +ContentRepository
> +[OAK-2770] - Configurable mode for backgroundOperationLock
> +[OAK-2781] - log node type changes and the time needed to traverse
> +the repository
> +[OAK-2813] - Create a benchmark for measuring the lag of async
> +index
> +[OAK-2826] - Refactor ListeneableFutureTask to commons
> +[OAK-2828] - Jcr builder class does not allow overriding most of
> +its dependencies
> +[OAK-2850] - Flag states from revision of an external change
> +[OAK-2856] - improve RDB diagnostics
> +[OAK-2901] - RDBBlobStoreTest should be able to run against
> +multiple DB types
> +[OAK-2915] - add (experimental) support for Apache Derby
> +[OAK-2916] - RDBDocumentStore: use of "GREATEST" in SQL apparently
> +doesn't have test coverage in unit tests
> +[OAK-2918] - RDBConnectionHandler: handle failure on setReadOnly()
> +gracefully
> +[OAK-2923] - RDB/DB2: change minimal supported version from 10.5
> +to 10.1, also log decimal version numbers as well
> +[OAK-2930] - RDBBlob/DocumentStore throws NPE when used after
> +being closed
> +[OAK-2931] - RDBDocumentStore: mitigate effects of large query
> +result sets
> +[OAK-2940] - RDBDocumentStore: "set" operation on _modified
> +appears to be implemented as "max"
> +[OAK-2943] - Support measure for union queries
> +[OAK-2944] - Support merge iterator for union order by queries
> +[OAK-2949] - RDBDocumentStore: no custom SQL needed for GREATEST
> +[OAK-2950] - RDBDocumentStore: conditional fetch logic is reversed
> +[OAK-2952] - RDBConnectionHandler: 

Re: testing blob equality

2016-02-29 Thread Julian Sedding
Yes, the LengthCachingDataStore is exactly the way to go. You need to
wrap the original datastore in the length caching datastore (using the
repository.xml). The LengthCachingDataStore not only caches the
length, but (for the FileDataStore at least) it also prevents a call
to File.exists(). These add up on the FS and I expect even more so on
S3.
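
Not the actual LengthCachingDataStore code, just a sketch of the delegation
idea behind it: answer getLength() from a pre-computed mapping so the wrapped
store (file system or S3) is not hit for every length lookup.

```
import java.io.InputStream;
import java.util.Map;

import org.apache.jackrabbit.core.data.DataIdentifier;
import org.apache.jackrabbit.core.data.DataRecord;
import org.apache.jackrabbit.core.data.DataStoreException;

// Hypothetical delegating record: lengths come from an in-memory map that
// was populated up front (e.g. from a mapping file), falling back to the
// delegate only on a cache miss.
class LengthCachingRecord implements DataRecord {

    private final DataRecord delegate;
    private final Map<DataIdentifier, Long> lengths;

    LengthCachingRecord(DataRecord delegate, Map<DataIdentifier, Long> lengths) {
        this.delegate = delegate;
        this.lengths = lengths;
    }

    public long getLength() throws DataStoreException {
        Long cached = lengths.get(delegate.getIdentifier());
        if (cached == null) {
            cached = delegate.getLength(); // hits the real store only once
            lengths.put(delegate.getIdentifier(), cached);
        }
        return cached;
    }

    public DataIdentifier getIdentifier() {
        return delegate.getIdentifier();
    }

    public String getReference() {
        return delegate.getReference();
    }

    public InputStream getStream() throws DataStoreException {
        return delegate.getStream();
    }

    public long getLastModified() {
        return delegate.getLastModified();
    }
}
```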

Should we automatically wrap the DS in the LengthCachingDatastore in
oak-upgrade? Or provide an option for the cache-file path, which turns
it on if set?

Regards
Julian


On Mon, Feb 29, 2016 at 3:17 PM, Tomek Rekawek  wrote:
> Thanks Chetan, I haven’t noticed the length() invocation in the createBlob(). 
> It seems that the LengthCachingDataStore is something I was looking for.
>
> Best regards,
> Tomek
>
> --
> Tomek Rękawek | Adobe Research | www.adobe.com
> reka...@adobe.com
>
>> On 29 Feb 2016, at 14:35, Chetan Mehrotra  wrote:
>>
>> On Mon, Feb 29, 2016 at 6:42 PM, Tomek Rekawek  wrote:
>>> I wonder if we can switch the order of length and identity comparison in 
>>> AbstractBlob#equal() method. Is there any case in which the 
>>> getContentIdentity() method will be slower than length()?
>>
>> That can be switched but I am afraid that it would not work as
>> expected. In JackrabbitNodeState#createBlob determining the
>> contentIdentity involves determining the length. You can give
>> org.apache.jackrabbit.oak.upgrade.blob.LengthCachingDataStore a try
>> (See OAK-2882 for details)
>>
>> Chetan Mehrotra
>


Re: oak-upgrade test failures (was Re: Oak 1.3.16 release plan)

2016-02-15 Thread Julian Sedding
The test failures in the issue seem to suggest that this may be related to
simple versionables. IIRC we recently added support for some broken JR2
constructs. Could they have been fixed in the last JR release? If that's
the case it may no longer be possible to populate the source repository for
the tests.

Just pure guesses, but I thought it might help.

Regards
Julian


On Monday, February 15, 2016, Davide Giannella  wrote:

> On 12/02/2016 18:36, Manfred Baedke wrote:
> > Hi,
> >
> > This is due to change 1721196 (associated with JCR-2633), which
> > changes the persistent data model. Probably the test has just to be
> > tweaked accordingly, I'll look into it during WE.
> Thank you very much Manfred.
>
> I've filed https://issues.apache.org/jira/browse/OAK-4018 to keep track
> and block 1.3.16.
>
> From here, once it's fixed in JR we have potentially 2 options:
>
> 1) unlock 1.3.16 by downgrading to JR 2.11.3
> 2) release JR 2.12.1, upgrade to Oak, release 1.3.16. Which will bring
> the oak relase around 4-5 days late.
>
> I'm for two as it will give us more coverage around the inclusion of the
> new stable JR release.
>
> Thoughts?
>
> Davide
>
>
>


Re: Anchor tags on doc pages get positioned wrongly under top menu

2016-02-14 Thread Julian Sedding
Hi Vikas

I agree that having the anchor text hidden is a usability hazard. I
tried your suggested approach in Firefox (via FireBug) and didn't have
any success. However, a slight variation of the scheme, still relying
on the ":target" pseudo selector, did the trick for me.

h2 > a:target {
position: relative;
top: -40px;
}

I scoped the rule to the "h2" element, which is defined to have a
height of 40px. I think it's then ok to repeat this value.

Regards
Julian


On Fri, Feb 12, 2016 at 6:24 PM, Vikas Saurabh  wrote:
> Hi,
>
> I'm sure we all have noticed that our anchor tags scroll the page a
> little too much such that the actual position gets hidden under the
> same menu.
>
> With google and this link [0], it seems, we can just plug-in
>
> ```
> :target:before {
> content:"";
> display:block;
> height:40px; /* fixed header height*/
> margin:-40px 0 0; /* negative fixed header height */
> }
> ```
> in oak-doc/src/site/resources/site.css to fix the issue.
>
> But, since I suck at html/css, I wasn't sure if this is fine. '40px'
> is manual hit-and-trial. Is there something better?
>
> Thanks,
> Vikas
>
> [0]: 
> https://www.itsupportguides.com/tech-tips-tricks/how-to-offset-anchor-tag-link-using-css/


Re: [VOTE] Release Apache Jackrabbit Oak 1.3.15

2016-02-04 Thread Julian Sedding
[X] +1 Release this package as Apache Jackrabbit Oak 1.3.15

Regards
Julian

On Wed, Feb 3, 2016 at 10:44 PM, Alex Parvulescu
 wrote:
> [X] +1 Release this package as Apache Jackrabbit Oak 1.3.15
>
> On Wed, Feb 3, 2016 at 5:00 PM, Davide Giannella  wrote:
>
>> A candidate for the Jackrabbit Oak 1.3.15 release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.3.15/
>>
>> The release candidate is a zip archive of the sources in:
>>
>>
>> https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.3.15/
>>
>> The SHA1 checksum of the archive is
>> aba9e0aea9400edb47eb498f63bede1969a31132.
>>
>> A staged Maven repository is available for review at:
>>
>> https://repository.apache.org/
>>
>> The command for running automated checks against this release candidate is:
>>
>> $ sh check-release.sh oak 1.3.15
>> aba9e0aea9400edb47eb498f63bede1969a31132
>>
>> Please vote on releasing this package as Apache Jackrabbit Oak 1.3.15.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Jackrabbit PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Jackrabbit Oak 1.3.15
>> [ ] -1 Do not release this package because...
>>
>> Davide
>>


Re: svn commit: r1727297 - /jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java

2016-01-30 Thread Julian Sedding
Right yes. All good then!

On Sat, Jan 30, 2016 at 2:18 PM, Alex Parvulescu
<alex.parvule...@gmail.com> wrote:
> you missed a couple of lines up:
>
>> -System.out.println("Debug " + args[0]);
>> +   System.out.println("Debug " + file);
>
>
> On Fri, Jan 29, 2016 at 5:41 PM, Julian Sedding <jsedd...@gmail.com> wrote:
>
>> > + System.out.println("Debug " + file);
>>
>> Is this on purpose or an oversight?
>>
>> Regards
>> Julian
>>
>> On Thu, Jan 28, 2016 at 10:56 AM,  <alexparvule...@apache.org> wrote:
>> > Author: alexparvulescu
>> > Date: Thu Jan 28 09:56:31 2016
>> > New Revision: 1727297
>> >
>> > URL: http://svn.apache.org/viewvc?rev=1727297=rev
>> > Log:
>> > OAK-3928 oak-run debug should use a read-only store
>> >
>> > Modified:
>> >
>>  
>> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>> >
>> > Modified:
>> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>> > URL:
>> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java?rev=1727297=1727296=1727297=diff
>> >
>> ==
>> > ---
>> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>> (original)
>> > +++
>> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>> Thu Jan 28 09:56:31 2016
>> > @@ -839,12 +839,9 @@ public final class Main {
>> >  System.exit(1);
>> >  } else {
>> >  // TODO: enable debug information for other node store
>> implementations
>> > -System.out.println("Debug " + args[0]);
>> >  File file = new File(args[0]);
>> > -FileStore store = newFileStore(file)
>> > -.withMaxFileSize(256)
>> > -.withMemoryMapping(false)
>> > -.create();
>> > +System.out.println("Debug " + file);
>> > +ReadOnlyStore store = new ReadOnlyStore(file);
>> >  try {
>> >  if (args.length == 1) {
>> >  debugFileStore(store);
>> >
>> >
>>


Re: svn commit: r1727297 - /jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java

2016-01-29 Thread Julian Sedding
> + System.out.println("Debug " + file);

Is this on purpose or an oversight?

Regards
Julian

On Thu, Jan 28, 2016 at 10:56 AM,   wrote:
> Author: alexparvulescu
> Date: Thu Jan 28 09:56:31 2016
> New Revision: 1727297
>
> URL: http://svn.apache.org/viewvc?rev=1727297=rev
> Log:
> OAK-3928 oak-run debug should use a read-only store
>
> Modified:
> 
> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>
> Modified: 
> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
> URL: 
> http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java?rev=1727297=1727296=1727297=diff
> ==
> --- 
> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>  (original)
> +++ 
> jackrabbit/oak/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java
>  Thu Jan 28 09:56:31 2016
> @@ -839,12 +839,9 @@ public final class Main {
>  System.exit(1);
>  } else {
>  // TODO: enable debug information for other node store 
> implementations
> -System.out.println("Debug " + args[0]);
>  File file = new File(args[0]);
> -FileStore store = newFileStore(file)
> -.withMaxFileSize(256)
> -.withMemoryMapping(false)
> -.create();
> +System.out.println("Debug " + file);
> +ReadOnlyStore store = new ReadOnlyStore(file);
>  try {
>  if (args.length == 1) {
>  debugFileStore(store);
>
>


[jira] [Commented] (JCR-3937) jackrabbit-jcr-commons bundle incorrectly has google dependency in Export-Package uses clause

2015-12-14 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15055990#comment-15055990
 ] 

Julian Sedding commented on JCR-3937:
-

I suggest updating to maven-bundle-plugin version 3.0.1 due to FELIX-5062. See 
also OAK-3706.

> jackrabbit-jcr-commons bundle incorrectly has google dependency in 
> Export-Package uses clause
> -
>
> Key: JCR-3937
> URL: https://issues.apache.org/jira/browse/JCR-3937
> Project: Jackrabbit Content Repository
>  Issue Type: Bug
>  Components: jackrabbit-jcr-commons
>Affects Versions: 2.11.3
>Reporter: David Bosschaert
> Attachments: JCR-3937-2.patch, JCR-3937.patch
>
>
> jackrabbit-jcr-commons 2.11.3 has the following Export-Package line:
> {code}org.apache.jackrabbit.value;uses:="javax.jcr,org.apache.jackrabbit.util,com.google.common.collect";version="2.2.1"{code}
> The google uses is actually unnecessary and generated by a bug in the 
> maven-bundle-plugin. Using the latest version of that makes the google 
> transitive dependency go away.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: fixVersions in jira

2015-12-08 Thread Julian Sedding
+1 - Setting the next release as fixVersion for blockers makes sense.
For all other issues setting the fixVersion once it is fixed seems
more sensible.

Regards
Julian

On Tue, Dec 8, 2015 at 8:27 AM, Marcel Reutegger  wrote:
> On 07/12/15 14:55, "Davide Giannella" wrote:
>>The process I'm proposing is:
>>
>>- fixVersion = 1.4
>>- fix it
>>- fixVersion = 1.3.x
>
> +1
>
> Regards
>  Marcel
>


Re: Segment Store modularization

2015-12-08 Thread Julian Sedding
I agree with Francesco. SNFE should be an implementation detail of the
Segment bundle. If any code outside of this module depends on SNFE in
order to handle it differently, I would consider that a leaked
abstraction. The special handling should instead be moved into the
Segment bundle (which may not be trivial and could require API
changes/additions).

IMHO, that's how modularization can help drive good APIs. Together
with baselining + import/export packages, violations of module
boundaries become visible.

Regards
Julian

On Tue, Dec 8, 2015 at 10:32 AM, Michael Dürig  wrote:
>>> IMO SNFE should be exported so upstream projects can depend on it.
>>> Otherwise there is no value in throwing a specific exception in the first
>>> place.
>>>
>>>
>> My goal is to move the Segment Store into its own bundle without having
>> circular dependencies between this new bundle and oak-core. I could have
>> tried to create two bundles - one with the exported API of the Segment
>> Store and one with its implementation - but I prefer not to go this way at
>> the moment. Defining a proper Segment Store API seems to require a
>> refactoring way deeper than the one I'm doing, and I'm not sure if we want
>> to go head first into this task, given the current changes currently in
>> progress on the Segment Store.
>
>
> Right, makes sense. Can we come up with a different way of (somewhat)
> reliable conveying a SNFE up the stack so interested parties could hook into
> it?
>
> Michael


Re: [VOTE] Shut down oakcomm...@jackrabbit.apache.org mailing list

2015-12-08 Thread Julian Sedding
[X] +1, shut down oakcomm...@jackrabbit.apache.org

Regards
Julian



On Tuesday, December 8, 2015, Michael Dürig  wrote:

>
> Hi,
>
> NOTE: this vote is about oakcomm...@jackrabbit.apache.org. NOT about
> oak-comm...@jackrabbit.apache.org. Mind the dash!
>
> It is unknown how oakcomm...@jackrabbit.apache.org came into existence
> and it was most likely by human error. The list archives are empty [1] and
> I guess most if not all of you didn't even know of its existence. I only
> learned of it recently through the reporter tool [2].
>
> I'm thus proposing to shut that list down but we need to agree consensus
> through this list [3]. Therefore, please vote:
>
> [ ] +1, shut down oakcomm...@jackrabbit.apache.org
> [ ] -1, do not shut down oakcomm...@jackrabbit.apache.org
>
> The vote is open for 72h.
>
> Michael
>
> [1]
> http://mail-archives.apache.org/mod_mbox/jackrabbit-oakcommits/index.html_
> [2] https://reporter.apache.org/#mailinglists_jackrabbit
> [3]
> https://issues.apache.org/jira/browse/INFRA-10916?focusedCommentId=15047031=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15047031
>


Re: New Jackrabbit committer: Vikas Saurabh

2015-11-09 Thread Julian Sedding
Congratulations Vikas!

Regards
Julian

On Mon, Nov 9, 2015 at 9:08 AM, Marcel Reutegger  wrote:
> Welcome to the team, Vikas!
>
> Regards
>  Marcel
>
> On 06/11/15 14:08, "Michael Dürig" wrote:
>
>>Hi,
>>
>>Please welcome Vikas as a new committer and PMC member of the Apache
>>Jackrabbit project. The Jackrabbit PMC recently decided to offer Vikas
>>committership based on his contributions. I'm happy to announce that he
>>accepted the offer and that all the related administrative work has now
>>been taken care of.
>>
>>Welcome to the team, Vikas!
>>
>>Michael
>


  1   2   >