Re: In relation to https://issues.apache.org/jira/browse/OAK-9319

2022-03-08 Thread Torgeir Veimo
This is a long-outstanding request that apache doesn't seem to want
to fully support. There are workarounds, though.

Here's what I've written in the past;

Oak doesn't guarantee the count to be accurate; it might be an
estimate. You can get the estimate by sorting by jcr:score. Just
append order by [jcr:score] to your query.
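That workaround can be sketched against the plain JCR API. The query statement and node type below are illustrative only, and getSize() may still return -1 if the underlying index cannot supply an estimate:

```java
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

class EstimatedCount {
    // Returns Oak's size estimate for a query; ordering on jcr:score is
    // what makes getSize() return an estimate rather than -1.
    static long estimate(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "SELECT * FROM [nt:file] ORDER BY [jcr:score]",
                Query.JCR_SQL2);
        return q.execute().getRows().getSize();
    }
}
```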

To make sure you have a correct count, you really need to iterate over
all elements and count yourself. One might not agree with the design
decisions leading to requiring such a workaround, but that's where
it's at. Here's an example method that takes offset and page size into
account when loading a paged query result with the correct total count:

private QueryResults<Asset> toAssets(NodeIterator iterator, int offset,
        int pageSize, Session session) throws RepositoryException {

    // structure holding entries to return, total count, and some other properties
    QueryResults<Asset> assets = new QueryResults<>();
    List<Asset> entries = new ArrayList<>();
    int totalCount = 0;

    // pageSize < 1 means "no limit"
    int count = (pageSize < 1 ? Integer.MIN_VALUE : 0);

    // skip rows before the requested offset, but keep counting them
    while (iterator.hasNext() && offset-- > 0) {
        iterator.nextNode();
        ++totalCount;
    }

    // only load the items that we're requesting
    while (iterator.hasNext() && count++ < pageSize) {
        Node node = iterator.nextNode();
        entries.add(toAsset(node, session));
        ++totalCount;
    }

    // count remaining items
    while (iterator.hasNext()) {
        iterator.nextNode();
        ++totalCount;
    }

    assets.setTotalCount(totalCount);
    assets.setEntries(entries);
    return assets;
}
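The three-phase counting logic is easy to get subtly wrong, so here it is extracted over a plain Iterator so it can be checked without a repository. PageResult and Paging are hypothetical stand-ins, not Oak or JCR classes:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical holder for one page of results plus the exact total.
final class PageResult {
    final List<String> entries;
    final int totalCount;
    PageResult(List<String> entries, int totalCount) {
        this.entries = entries;
        this.totalCount = totalCount;
    }
}

final class Paging {
    static PageResult page(Iterator<String> iterator, int offset, int pageSize) {
        List<String> entries = new ArrayList<>();
        int totalCount = 0;
        // pageSize < 1 is treated as "no limit", as in the original method
        int count = (pageSize < 1 ? Integer.MIN_VALUE : 0);

        // phase 1: skip rows before the offset, counting them
        while (iterator.hasNext() && offset-- > 0) {
            iterator.next();
            ++totalCount;
        }
        // phase 2: load one page worth of rows
        while (iterator.hasNext() && count++ < pageSize) {
            entries.add(iterator.next());
            ++totalCount;
        }
        // phase 3: drain the rest to obtain the exact total
        while (iterator.hasNext()) {
            iterator.next();
            ++totalCount;
        }
        return new PageResult(entries, totalCount);
    }
}
```

With offset 1 and page size 2 over five rows this loads only the two requested entries while still reporting a total of five.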

On Wed, 9 Mar 2022 at 06:25, Vishnu Aggarwal wrote:
>
> Hello Team,
>
> The use case is to draw pagination in UI for all the nodes fetched on the
> basis of a search query. For that, we fetch 10 records on one page. But
> also, we need to get the total number of records matching our query.
>
> Currently, there doesn't seem to be a way to get the count directly for a
> query, as we do in an RDBMS with SELECT COUNT(*) FROM <table> WHERE <condition>.
> What is being done right now is to get all the nodes in memory and
> calculate the size which would be costly in terms of processing as the
> number of nodes will increase.
>
> There was a ticket created for this
> https://issues.apache.org/jira/browse/OAK-9319. Any update for that? Or any
> workaround/help to get the count of nodes in quick time response.
>
> Regards,
> Vishnu Aggarwal



-- 
-Tor


Re: Guava version update

2021-11-19 Thread Torgeir Veimo
Michal, if you need an immediate fix you can use a shaded oak
dependency where the guava and lucene dependencies are included and
shaded, so as not to interfere with other dependencies in your project.
See

https://github.com/tveimo/oak-lucene-shaded

Use as

buildscript {
  repositories {
    maven { url "https://jitpack.io" }
  }
}

implementation 'com.github.tveimo:oak-mega-shaded:1.40.0'

On Fri, 19 Nov 2021 at 19:21, Julian Reschke  wrote:
>
> Am 19.11.2021 um 10:00 schrieb Konrad Windszus:
> > Hi Michał,
> > look at https://issues.apache.org/jira/browse/OAK-7182 for details on why
> > upgrading is not that simple
> >
> > Konrad
>
> Right. It is a complex issue, and we may be able to make some progress
> in the next few months.
>
> The main issue here is that we need to first get rid of all
> Guava-related signatures in exported APIs.
>
> Best regards, Julian
>


-- 
-Tor


Re: changelog

2020-10-23 Thread Torgeir Veimo
Can you show some code? There's a real lack of code showing how to do
GC without using the default osgi configuration.

On Fri, 23 Oct 2020 at 19:08, Marco Piovesana  wrote:
>
> We use Oak alone so we handle the garbage collection; we used a specific
> constructor for the MarkSweepGarbageCollector object that does not exist
> anymore (now it has different parameters, not really a big deal).
>
> Marco.
>
> On Fri, Oct 23, 2020 at 10:58 AM Julian Reschke 
> wrote:
>
> > Am 23.10.2020 um 10:22 schrieb Marco Piovesana:
> > > Right, didn't think of jira for that, thanks Julian. Would that work also
> > > for breaking changes? Do you guys have a convention for the jira issues
> > to
> > > highlight it?
> > > I'm updating from the 1.14 to the latest stable 1.32, and there was just
> > > one very easy breaking change.
> > > ...
> >
> > It would include all changes; "breaking" changes do not have special
> > treatment in Jira.
> >
> > That said: I wasn't aware of a breaking change. Can you elaborate?
> >
> > Best regards, Julian
> >



-- 
-Tor


Re: [GitHub] [jackrabbit-oak] larsgrefer opened a new pull request #226: OAK-9082 Remove unnecessary (un)boxing in oak-store-composite

2020-05-20 Thread Torgeir Veimo
Maybe it would be an idea to have a separate mailing list for the pull-requests?

On Thu, 21 May 2020 at 04:24, GitBox  wrote:
>
>
> larsgrefer opened a new pull request #226:
> URL: https://github.com/apache/jackrabbit-oak/pull/226
>
>
>
>
>
> 
> This is an automated message from the Apache Git Service.
> To respond to the message, please log on to GitHub and use the
> URL above to go to the specific comment.
>
> For queries about this service, please contact Infrastructure at:
> us...@infra.apache.org
>
>


-- 
-Tor


Re: Jackrabbit Lucene Upgrade ?

2020-05-12 Thread Torgeir Veimo
The oak-lucene module exposes an API that is Oak specific and doesn't
contain any lucene package names, but it uses lucene internally.
Shading the lucene jar translates the package names, so instead of
names like org.apache.lucene they become prefix.org.apache.lucene and
so on. The oak-lucene implementing classes likewise change their
imports to import prefix.org.apache.lucene.

So any other code in your project can now use a newer version of
lucene, since no other jars in your project contain classes with
package names under org.apache.lucene.
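With the maven-shade-plugin this translation is done with a relocation. This is only a minimal sketch, not the actual oak-lucene-shaded build file; the prefix is whatever you choose:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- rewrites org.apache.lucene.* to prefix.org.apache.lucene.*
             in both the bundled classes and their call sites -->
        <pattern>org.apache.lucene</pattern>
        <shadedPattern>prefix.org.apache.lucene</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```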

On Wed, 13 May 2020 at 15:30, KÖLL Claus  wrote:
>
> You are right it was requested years ago .. I think it's work but not 
> impossible .. hopefully :-)
>
> Maybe i do not understand your idea .. What is the plan with a shaded jar
> file ?
> How would it help to upgrade to a newer version ?
>
> thanks
> claus
>
> >I understand what you'd like to do, but there's a reason it hasn't
> >happened yet. It was requested many years ago, and it's not easy.
>


-- 
-Tor


Re: Jackrabbit Lucene Upgrade ?

2020-05-12 Thread Torgeir Veimo
I understand what you'd like to do, but there's a reason it hasn't
happened yet. It was requested many years ago, and it's not easy.

On Tue, 12 May 2020 at 21:28, KÖLL Claus  wrote:
>
> Hi !
>
> If I understand you right, my problem is not using 2 different versions
> of lucene in the same project.
> My goal is to update the lucene library in jackrabbit itself.
>
> This also means maybe a migration path or a rebuild (that would be ok for me
> personally) to a newer lucene index inside the jackrabbit repository.
>
> greets
> claus
>


-- 
-Tor


Re: Jackrabbit Lucene Upgrade ?

2020-05-12 Thread Torgeir Veimo
You can have your own shaded version of the lucene 3.6 jars. We're
doing just that with the oak-lucene artifact, as we use elasticsearch,
which uses a much more recent version of lucene in the same
project. I can't give exact details on how to set up the maven
artifact for the old jackrabbit version. Here's the pom file for the
oak-lucene-shaded artifact for oak at least:
https://github.com/tveimo/oak-lucene-shaded

On Tue, 12 May 2020 at 20:56, KÖLL Claus  wrote:
>
> Hi @ all..
>
>
>
> I hope some „old“ jackrabbit developers will participate in this discussion.
>
> We are using jackrabbit as a plain dms for years now. It is a central
> component in our infrastructure and we are really satisfied with it.
>
>
>
> We are not planning to migrate to jackrabbit oak as we have no requirements
> for new features.
>
> I know that jackrabbit is in maintenance mode, but 3rd-party libraries will
> still be upgraded to the newest versions (primarily thanks to Julian).
>
>
>
> We will use jackrabbit surely for the next years, but one thing worries me.
> The strong integration of lucene in jackrabbit makes it really difficult to
> upgrade to a newer version.
>
> As you all know we are using lucene 3.6, and it is by now really
> outdated.
>
> I’m no lucene expert, but I have tried to upgrade to version 4 as a first
> step.
>
>
>
> As there are many api changes it’s not really easy.
>
>
>
> Maybe some jackrabbit developers who have written the code can give some
> estimates of what it would mean to upgrade to a newer version.
>
>
>
> Thanks for any input !
>
>
>
> greets
>
> Claus
>
>
>
>



-- 
-Tor


Re: getSize() return -1 with oak

2019-05-02 Thread Torgeir Veimo
There might be a faster way to do it with the new faceting support,
but you need lucene indexes, and I'm not sure how to configure it.
Someone else might have some advice.

On Thu, 2 May 2019 at 17:38, zhouxu  wrote:
>
> Thanks for your reply! The amount of data we need to count is extraordinarily
> large, well over 10,000.
> Does an OutOfMemoryError occur when the amount of data is too large?
>
>
>
> --
> Sent from: http://jackrabbit.510166.n4.nabble.com/Jackrabbit-Dev-f523400.html



-- 
-Tor


Re: getSize() return -1 with oak

2019-04-29 Thread Torgeir Veimo
Oak doesn't guarantee the count to be accurate; it might be an
estimate. You can get the estimate by sorting by jcr:score. Just
append order by [jcr:score] to your query.

To make sure you have a correct count, you really need to iterate over
all elements and count yourself. One might not agree with the design
decisions leading to requiring such a workaround, but that's where
it's at. Here's an example method that takes offset and page size into
account when loading a paged query result with the correct total count:

private QueryResults<Asset> toAssets(NodeIterator iterator, int offset,
        int pageSize, Session session) throws RepositoryException {

    // structure holding entries to return, total count, and some other properties
    QueryResults<Asset> assets = new QueryResults<>();
    List<Asset> entries = new ArrayList<>();
    int totalCount = 0;

    // pageSize < 1 means "no limit"
    int count = (pageSize < 1 ? Integer.MIN_VALUE : 0);

    // skip rows before the requested offset, but keep counting them
    while (iterator.hasNext() && offset-- > 0) {
        iterator.nextNode();
        ++totalCount;
    }

    // only load the items that we're requesting
    while (iterator.hasNext() && count++ < pageSize) {
        Node node = iterator.nextNode();
        entries.add(toAsset(node, session));
        ++totalCount;
    }

    // count remaining items
    while (iterator.hasNext()) {
        iterator.nextNode();
        ++totalCount;
    }

    assets.setTotalCount(totalCount);
    assets.setEntries(entries);
    return assets;
}

On Mon, 29 Apr 2019 at 17:27, zhouxu  wrote:
>
> We use oak1.10.2 inside a web application to store documents
>
> How can we program statistics such as :
> the total number of documents
> compute the number of documents per property values for a given property ?
>
> We tried this code, but .getSize() returns -1. What should I do?
>
> The code is as follows:
>
> QueryManager qm = session.getWorkspace().getQueryManager();
> Query q = qm.createQuery("SELECT * FROM [nt:file] WHERE LOCALNAME() LIKE
> '%.txt' and [\"jcr:createdBy\"] = 'anonymous'", Query.JCR_SQL2);
> QueryResult qr = q.execute();
> long stat = qr.getRows().getSize();
>
>
>
> --
> Sent from: http://jackrabbit.510166.n4.nabble.com/Jackrabbit-Dev-f523400.html



--
-Tor


Re: Problem in configuring more than Jackrabbit-oak server for the same data.

2018-03-26 Thread Torgeir Veimo
You need to use a MongoDB backend if you want multiple oak instances
accessing the same data. The segment store (the default) only allows a
single instance.
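A minimal construction sketch along the lines of the Oak documentation of that era; host, port and database name are placeholders, and the DocumentMK.Builder API shown is the 1.x one:

```java
import javax.jcr.Repository;

import com.mongodb.DB;
import com.mongodb.MongoClient;
import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

class MongoRepo {
    // Several Oak instances can point at the same MongoDB database;
    // the DocumentNodeStore handles cluster coordination between them.
    static Repository create() {
        DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
        DocumentNodeStore ns = new DocumentMK.Builder()
                .setMongoDB(db)
                .getNodeStore();
        return new Jcr(new Oak(ns)).createRepository();
    }
}
```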

On 22 March 2018 at 00:31, Sudhir Kumar  wrote:

> Hello All,
>
>
>
> Currently I am using Jackrabbit-OAK 1.6.1 version. We are using the web
> version, So deploying the war file to the tomcat server.
>
>
>
> We are using default FileDataStore for the repository.
>
>
>
> We are planning to have multiple jackrabbit-oak server  so as to handle
> the load. We are not able to connect to the same repository from multiple
> server.
>
>
>
> Please guide me how can I achieve this. Any help is highly appreciated.
>
>
>
> This is very urgent for me to have a solution on how to have multiple
> jackrabbit-oak server.
>
>
>
>
>
> Regards,
>
> Sudhir
>
>
>
>


-- 
-Tor


Re: Fwd: dump content of segment tar files

2018-01-30 Thread Torgeir Veimo
offline compactation. Comparing with the backup from the day before it
seems there's an extra few tar files. I was tempted to just add those files
to the pre-compactation repo directory, but it's about 50gb so it's a bit
hard to duplicate / move around to try out. Also, the naming of the files
is slightly different pre-compactation.

It would be handy if oak-run didn't go ahead with modifying files if it
might run out of disk space.

from the 29th, before compacting

-rw-r--r-- 1 tomcat tomcat 270022656 Jan 28 06:56 data00958a.tar

-rw-r--r-- 1 tomcat tomcat 269849600 Jan 28 14:34 data00959a.tar

-rw-r--r-- 1 tomcat tomcat 270107648 Jan 28 19:01 data00960a.tar

-rw-r--r-- 1 tomcat tomcat 253981696 Jan 30 03:42 data00961a.tar

-rw-r--r-- 1 tomcat tomcat 252685312 Jan 29 02:20 data00961a.tar.bak

-rw-r--r-- 1 tomcat tomcat    529408 Jan 30 03:44 data00962a.tar

additional files in the jan 30th repo, after compacting

-rw-r--r-- 1 tomcat tomcat   5035520 Jan 30 01:16 data00960b.tar

-rw-r--r-- 1 tomcat tomcat   1405952 Jan 30 01:16 data00961b.tar

-rw-r--r-- 1 tomcat tomcat  26518528 Jan 30 01:16 data00962b.tar

-rw-r--r-- 1 tomcat tomcat   4634112 Jan 30 01:16 data00963b.tar

-rw-r--r-- 1 tomcat tomcat  32503296 Jan 30 01:16 data00964b.tar

-rw-r--r-- 1 tomcat tomcat  13696512 Jan 30 01:16 data00965b.tar

-rw-r--r-- 1 tomcat tomcat   4169216 Jan 30 01:16 data00966b.tar

-rw-r--r-- 1 tomcat tomcat  36781056 Jan 30 01:16 data00967b.tar

-rw-r--r-- 1 tomcat tomcat 269238784 Jan 30 01:15 data00968a.tar

-rw-r--r-- 1 tomcat tomcat 269567488 Jan 30 01:16 data00969a.tar

-rw-r--r-- 1 tomcat tomcat 113282048 Jan 30 01:16 data00970a.tar

-rw-r--r-- 1 tomcat tomcat    996352 Jan 30 02:11 data00971a.tar


On 30 January 2018 at 20:53, Michael Dürig <mdue...@apache.org> wrote:

>
> Hi,
>
> Unfortunately there is currently no OOTB tooling apart from oak-run check
> followed by a manual roll back to repair a repository.
>
> What were you doing with oak-run while the disk ran full?
>
> Michael
>
>
> On 30.01.18 11:14, Torgeir Veimo wrote:
>
>> Is there a tool to inspect the content of segment tar files?
>>
>> I've had a case of oak-run corrupting a repository due to the disk going
>> full, and need to see if there's any data for the last 24 hours that i can
>> get back from the segment files remaining.
>>
>>
>>


-- 
-Tor


Fwd: dump content of segment tar files

2018-01-30 Thread Torgeir Veimo
Is there a tool to inspect the content of segment tar files?

I've had a case of oak-run corrupting a repository due to the disk going
full, and need to see if there's any data for the last 24 hours that i can
get back from the segment files remaining.


-- 
-Tor


Re: Consider making Oak 1.8 an Oak 2.0

2017-12-06 Thread Torgeir Veimo
Upgrading lucene to version 6 would probably warrant using 2.0, but that's
not ready yet for 1.8?

On 6 December 2017 at 10:39, Thomas Mueller 
wrote:

> I vote for 1.8. I don't see any big changes that would justify version
> 2.0. The modularization (moving code around) is an ongoing process, I don't
> think this is "fixed", and shouldn't have a big impact on users.
>
>


-- 
-Tor


Re: chroot-like content segregation?

2017-09-21 Thread Torgeir Veimo
Could you implement a JcrDocumentStore which relays to an underlying JCR
repository with subpath jailing for this purpose? Catching it at any other
level seems to lead to complications and special cases.

On 22 September 2017 at 01:13, Bertrand Delacretaz 
wrote:

> Hi,
>
> I'm presenting next week at https://adapt.to on creating multi-tenant
> HTTP request processing / rendering farms with Sling, showing a mix of
> Sling-based experiments and theoretical considerations on what would
> help creating such farms.
>
> Having chroot-style [1] user segregation at the repository level would
> help: after opening a session as a member of the jail group "foo",
> /jails/foo becomes my new root, blocking me from accessing anything
> above that and transparently mapping my repository root to /jails/foo.
>
> Access control can of course help implementing this, but having the
> path mapping to transparently jail the user or group in their own
> subtree makes things much easier at the application level.
>
> Has anyone already played with something like this?
> Any prototypes or experiments worth mentioning?
>
> -Bertrand
>
> [1] https://linux.die.net/man/2/chroot
>



-- 
-Tor


Re: Document indexing design of Oak

2017-03-20 Thread Torgeir Veimo
"The index definitions nodes have following properties

type - It determines the type of index. For e.g. it can be property,
lucene, solr etc. Based on the type, IndexUpdate would look for an IndexEditor
of the given type from the registered IndexEditorProvider"

If I, as a casual developer, consult the documentation to see what is allowed
to be specified for type, I don't need to see 'etc'; I need an exact list
of allowed types.



On 20 March 2017 at 23:11, Chetan Mehrotra 
wrote:

> Hi Team,
>
> As part of OAK-5917, OAK-5946 I am trying to document current Oak
> indexing design at  [1].
>
> Would be helpful if team can review it or add any topics to be covered
> in OAK-5946. Also feel free to tweak and enhance it directly in svn
>
> I hope to complete this work by this week and would ping again for
> final review before publishing it
>
> Chetan Mehrotra
> [1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-
> doc/src/site/markdown/query/indexing.md
>



-- 
-Tor


Re: trying to get more info on IllegalStateException when migrating

2017-02-06 Thread Torgeir Veimo
Ok, I'll create an issue.

While I've got your attention: what is the replacement for
SegmentStore.close() in 1.6? I am unable to restart webapps since the
segment tar store seems to still be in use.

On 6 February 2017 at 21:03, Michael Dürig <mdue...@apache.org> wrote:

>
> Hi,
>
> Ouch, I see. To work around this it should be possible to temporarily
> rename the directory on your end.
>
> Could you also create an issue on the Oak issue tracker [1] for this? I
> guess the current behaviour is too much driven by our internal use case.
>
> Michael
>
> [1] https://issues.apache.org/jira/browse/OAK
>
>
> On 6.2.17 10:10 , Torgeir Veimo wrote:
>
>> I just added some log statements. It seems that the upgrade process
>> appends
>> /segmentstore to the path I specify, while the oak directory only contains
>> tar files, no subdirectories.
>>
>> log.info("builder path: {}", builder.directory.getPath());
>> 06.02.2017 19:07:59.699 [main] *INFO*
>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore - builder path:
>> /Users/tor/content/karriere/oak/segmentstore
>>
>> On 6 February 2017 at 18:54, Michael Dürig <mdue...@apache.org> wrote:
>>
>>
>>>
>>> On 6.2.17 9:47 , Torgeir Veimo wrote:
>>>
>>> That is very odd. The /Users/tor/content/karriere/oak path definitely
>>>> contains an old segment repository. I'm currently accessing it using the
>>>> deprecated org.apache.jackrabbit.oak.plugins.segment code in 1.6.0
>>>> until
>>>> I
>>>> can get it migrated;
>>>>
>>>> segmentStore = FileStore.builder(new File(oakRepositoryPath))
>>>>   .withMaxFileSize(256).build();
>>>> nodeStore = SegmentNodeStore.builder(segmentStore)
>>>>   .build();
>>>>
>>>>
>>> In that case there might be something wrong with the migration. Tomek
>>> Rekawek might be able to help on that side.
>>>
>>> Michael
>>>
>>>
>>>
>>> On 6 February 2017 at 18:30, Michael Dürig <mdue...@apache.org> wrote:
>>>>
>>>>
>>>> Hi,
>>>>>
>>>>> This is the exception you get when trying to open a newer storage
>>>>> format
>>>>> with an older version of the code (newer versions of the code will be
>>>>> more
>>>>> helpful in the wording of the error).
>>>>>
>>>>> Apparently
>>>>>
>>>>> segment-old:/Users/tor/content/karriere/oak ~/content/karriere/oak-16
>>>>>
>>>>>>
>>>>>>
>>>>> points to a migrated store already. Migrated to Oak Segment Tar, that
>>>>> is.
>>>>>
>>>>> Michael
>>>>>
>>>>>
>>>>> On 2.2.17 3:50 , Torgeir Veimo wrote:
>>>>>
>>>>> Am running oak-upgrade with the follow parameters and output. How do I
>>>>>
>>>>>> determine what is the exact problem?
>>>>>>
>>>>>>
>>>>>> Torgeirs-MacBook-Pro:jackrabbit-oak-1.5.18 tor$ java -jar
>>>>>> ./oak-upgrade/target/oak-upgrade-1.5.18.jar --copy-binaries
>>>>>> segment-old:/Users/tor/content/karriere/oak ~/content/karriere/oak-16
>>>>>>
>>>>>> 03.02.2017 00:39:21.037 [main] *INFO*
>>>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
>>>>>> copyVersions parameter set to 1970-01-01
>>>>>>
>>>>>> 03.02.2017 00:39:21.040 [main] *INFO*
>>>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
>>>>>> copyOrphanedVersions parameter set to 1970-01-01
>>>>>>
>>>>>> 03.02.2017 00:39:21.040 [main] *INFO*
>>>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
>>>>>> Cache
>>>>>> size: 256 MB
>>>>>>
>>>>>> 03.02.2017 00:39:21.042 [main] *INFO*
>>>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments -
>>>>>> Source:
>>>>>> SEGMENT[segment-old:/Users/tor/content/karriere/oak]
>>>>>>
>>>>>> 03.02.2017 00:39:21.044 [main] *INFO*
>>>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments -
>>>>>> Destination:
>>>>>> SEGMENT_TAR[/Users/tor/content/karriere/o

Re: trying to get more info on IllegalStateException when migrating

2017-02-06 Thread Torgeir Veimo
I just added some log statements. It seems that the upgrade process appends
/segmentstore to the path I specify, while the oak directory only contains
tar files, no subdirectories.

log.info("builder path: {}", builder.directory.getPath());
06.02.2017 19:07:59.699 [main] *INFO*
org.apache.jackrabbit.oak.plugins.segment.file.FileStore - builder path:
/Users/tor/content/karriere/oak/segmentstore

On 6 February 2017 at 18:54, Michael Dürig <mdue...@apache.org> wrote:

>
>
> On 6.2.17 9:47 , Torgeir Veimo wrote:
>
>> That is very odd. The /Users/tor/content/karriere/oak path definitely
>> contains an old segment repository. I'm currently accessing it using the
>> deprecated org.apache.jackrabbit.oak.plugins.segment code in 1.6.0 until
>> I
>> can get it migrated;
>>
>> segmentStore = FileStore.builder(new File(oakRepositoryPath))
>>   .withMaxFileSize(256).build();
>> nodeStore = SegmentNodeStore.builder(segmentStore)
>>   .build();
>>
>
> In that case there might be something wrong with the migration. Tomek
> Rekawek might be able to help on that side.
>
> Michael
>
>
>
>> On 6 February 2017 at 18:30, Michael Dürig <mdue...@apache.org> wrote:
>>
>>
>>> Hi,
>>>
>>> This is the exception you get when trying to open a newer storage format
>>> with an older version of the code (newer versions of the code will be
>>> more
>>> helpful in the wording of the error).
>>>
>>> Apparently
>>>
>>> segment-old:/Users/tor/content/karriere/oak ~/content/karriere/oak-16
>>>>
>>>
>>> points to a migrated store already. Migrated to Oak Segment Tar, that is.
>>>
>>> Michael
>>>
>>>
>>> On 2.2.17 3:50 , Torgeir Veimo wrote:
>>>
>>> Am running oak-upgrade with the follow parameters and output. How do I
>>>> determine what is the exact problem?
>>>>
>>>>
>>>> Torgeirs-MacBook-Pro:jackrabbit-oak-1.5.18 tor$ java -jar
>>>> ./oak-upgrade/target/oak-upgrade-1.5.18.jar --copy-binaries
>>>> segment-old:/Users/tor/content/karriere/oak ~/content/karriere/oak-16
>>>>
>>>> 03.02.2017 00:39:21.037 [main] *INFO*
>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
>>>> copyVersions parameter set to 1970-01-01
>>>>
>>>> 03.02.2017 00:39:21.040 [main] *INFO*
>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
>>>> copyOrphanedVersions parameter set to 1970-01-01
>>>>
>>>> 03.02.2017 00:39:21.040 [main] *INFO*
>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions - Cache
>>>> size: 256 MB
>>>>
>>>> 03.02.2017 00:39:21.042 [main] *INFO*
>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments - Source:
>>>> SEGMENT[segment-old:/Users/tor/content/karriere/oak]
>>>>
>>>> 03.02.2017 00:39:21.044 [main] *INFO*
>>>>  org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments -
>>>> Destination:
>>>> SEGMENT_TAR[/Users/tor/content/karriere/oak-16]
>>>>
>>>> Exception in thread "main" java.lang.IllegalStateException
>>>>
>>>> at com.google.common.base.Preconditions.checkState(Precondition
>>>> s.java:134)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<
>>>> init>(FileStore.java:403)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<
>>>> init>(FileStore.java:92)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$
>>>> ReadOnlyStore.(FileStore.java:1449)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$
>>>> ReadOnlyStore.(FileStore.java:1446)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$
>>>> Builder.buildReadOnly(FileStore.java:393)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.upgrade.cli.node.SegmentFactory.ha
>>>> sExternalBlobReferences(SegmentFactory.java:119)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.upgrade.cli.node.StoreFactory.hasE
>>>> xternalBlobReferences(StoreFactory.java:67)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments.
>>>> srcUsesEmbeddedDatastore(StoreArguments.java:113)
>>>>
>>>> at
>>>> org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(
>>>> OakUpgrade.java:64)
>>>>
>>>> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpg
>>>> rade.java:48)
>>>>
>>>>
>>>> --
>>>> -Tor
>>>>
>>>>
>>>>
>>
>>


-- 
-Tor


trying to get more info on IllegalStateException when migrating

2017-02-02 Thread Torgeir Veimo
Am running oak-upgrade with the follow parameters and output. How do I
determine what is the exact problem?


Torgeirs-MacBook-Pro:jackrabbit-oak-1.5.18 tor$ java -jar
./oak-upgrade/target/oak-upgrade-1.5.18.jar --copy-binaries
segment-old:/Users/tor/content/karriere/oak ~/content/karriere/oak-16

03.02.2017 00:39:21.037 [main] *INFO*
 org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
copyVersions parameter set to 1970-01-01

03.02.2017 00:39:21.040 [main] *INFO*
 org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions -
copyOrphanedVersions parameter set to 1970-01-01

03.02.2017 00:39:21.040 [main] *INFO*
 org.apache.jackrabbit.oak.upgrade.cli.parser.MigrationOptions - Cache
size: 256 MB

03.02.2017 00:39:21.042 [main] *INFO*
 org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments - Source:
SEGMENT[segment-old:/Users/tor/content/karriere/oak]

03.02.2017 00:39:21.044 [main] *INFO*
 org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments - Destination:
SEGMENT_TAR[/Users/tor/content/karriere/oak-16]

Exception in thread "main" java.lang.IllegalStateException

at com.google.common.base.Preconditions.checkState(Preconditions.java:134)

at
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:403)

at
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:92)

at
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.<init>(FileStore.java:1449)

at
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.<init>(FileStore.java:1446)

at
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$Builder.buildReadOnly(FileStore.java:393)

at
org.apache.jackrabbit.oak.upgrade.cli.node.SegmentFactory.hasExternalBlobReferences(SegmentFactory.java:119)

at
org.apache.jackrabbit.oak.upgrade.cli.node.StoreFactory.hasExternalBlobReferences(StoreFactory.java:67)

at
org.apache.jackrabbit.oak.upgrade.cli.parser.StoreArguments.srcUsesEmbeddedDatastore(StoreArguments.java:113)

at
org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:64)

at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(OakUpgrade.java:48)


--
-Tor


http://jackrabbit.apache.org/oak/docs/construct.html out of sync with 1.6.0

2017-02-02 Thread Torgeir Veimo
It looks like the documentation is not in sync with the 1.6.0 codebase, e.g.

http://jackrabbit.apache.org/oak/docs/construct.html

still refers to constructing the repository with the constructor of the
SegmentNodeStore class.
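For comparison, a sketch of what construction looks like against the 1.6 oak-segment-tar API; the path and maximum file size are placeholders:

```java
import java.io.File;
import javax.jcr.Repository;

import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.apache.jackrabbit.oak.segment.SegmentNodeStore;
import org.apache.jackrabbit.oak.segment.SegmentNodeStoreBuilders;
import org.apache.jackrabbit.oak.segment.file.FileStore;
import org.apache.jackrabbit.oak.segment.file.FileStoreBuilder;

class TarRepo {
    // Builds a SegmentNodeStore via the builders that replaced the
    // deprecated SegmentNodeStore constructor.
    static Repository create() throws Exception {
        FileStore fs = FileStoreBuilder.fileStoreBuilder(new File("repository"))
                .withMaxFileSize(256)
                .build();
        SegmentNodeStore ns = SegmentNodeStoreBuilders.builder(fs).build();
        return new Jcr(new Oak(ns)).createRepository();
    }
}
```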


-- 
-Tor


Re: guava ByteStreams.equal()

2016-12-02 Thread Torgeir Veimo
Ok, filed https://issues.apache.org/jira/browse/OAK-5215 with a patch. It
makes it possible to run with guava up to version 19.0. There are a few
additional changes required to run with version 20.0.

On 1 December 2016 at 23:58, Alex Parvulescu <alex.parvule...@gmail.com>
wrote:

> Hi,
>
> It's not a bad idea, I see we use this method only in 2 places [0] and [1],
> and this change would not even need to have the guava version updated on
> oak as far as I see it.
> It would be good if you could create an issue and attach a patch for
> review.
>
> best,
> alex
>
>
> [0]
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-
> core/src/main/java/org/apache/jackrabbit/oak/plugins/memory/
> AbstractBlob.java#L68
> [1]
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-
> upgrade/src/test/java/org/apache/jackrabbit/oak/upgrade/blob/
> LengthCachingDataStoreTest.java#L109
>
>
>
> On Thu, Dec 1, 2016 at 2:26 PM, Torgeir Veimo <torgeir.ve...@gmail.com>
> wrote:
>
> > The method
> >
> > ByteStreams.equal(InputSupplier, InputSupplier)
> >
> > is deprecated in guava 17 and removed in guava 18. Would it be an idea to
> > replace the call to it by using the
> >
> > ByteSource.contentEquals(ByteSource)
> >
> > as suggested in guava 17?
> >
> > This would make it easier to deploy in a non-osgi environment when using
> > guava 18.0+.
> > --
> > -Tor
> >
>



-- 
-Tor


guava ByteStreams.equal()

2016-12-01 Thread Torgeir Veimo
The method

ByteStreams.equal(InputSupplier, InputSupplier)

is deprecated in guava 17 and removed in guava 18. Would it be an idea to
replace the call to it by using the

ByteSource.contentEquals(ByteSource)

as suggested in guava 17?

This would make it easier to deploy in a non-osgi environment when using
guava 18.0+.
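A sketch of the suggested replacement call (pure Guava; ByteSource and its contentEquals were introduced around Guava 14):

```java
import java.io.IOException;

import com.google.common.io.ByteSource;

class StreamCompare {
    // ByteSource.contentEquals as the replacement for the removed
    // ByteStreams.equal(InputSupplier, InputSupplier)
    static boolean sameContent(byte[] a, byte[] b) throws IOException {
        return ByteSource.wrap(a).contentEquals(ByteSource.wrap(b));
    }
}
```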
-- 
-Tor


Re: oak-lucene shaded

2016-11-25 Thread Torgeir Veimo
The module imports the source from the lucene index plugin, modifies the
imports to use the org.shaded package name, and then compiles and includes
the lucene index plugin in the uber-jar itself.

You can see which classes get included in the resulting jar; it also
renames the service files so that they match the shaded lucene codecs, so
that initialisation works properly.

I wasn't suggesting oak should adopt this approach at this time; it's
merely a solution for those that need to combine oak with other code
(usually elasticsearch) in a non-osgi environment (usually spring).


On 25 November 2016 at 18:56, Chetan Mehrotra <chetan.mehro...@gmail.com>
wrote:

> On Fri, Nov 25, 2016 at 2:11 PM, Torgeir Veimo <torgeir.ve...@gmail.com>
> wrote:
> > Which other spi impl depends
> > on the lucene libraries being exposed?
>
> There are currently 2 defined in
> org.apache.jackrabbit.oak.plugins.index.lucene.spi package
>
> Chetan Mehrotra
>



-- 
-Tor


Re: oak-lucene shaded

2016-11-25 Thread Torgeir Veimo
Yes, I agree this is for special deployments. Which other SPI impl depends
on the lucene libraries being exposed?

On 25 November 2016 at 14:32, Chetan Mehrotra <chetan.mehro...@gmail.com>
wrote:

> Hi Torgeir,
>
> We would not be able to shade Lucene classes as they are exported and
> meant to be used by certain SPI implementations. So as of now there is
> no solution for using a different Lucene version in the non-OSGi world
>
>
> Chetan Mehrotra
>
>
> On Wed, Nov 23, 2016 at 7:15 PM, Torgeir Veimo <torgeir.ve...@gmail.com>
> wrote:
> > Second version: this pom file can be put in a separate directory as a
> > self-contained maven artifact and includes oak-lucene remotely.
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> >
> > <project xmlns="http://maven.apache.org/POM/4.0.0"
> >          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> >          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> >          http://maven.apache.org/maven-v4_0_0.xsd">
> >   <modelVersion>4.0.0</modelVersion>
> >
> >   <groupId>no.karriere</groupId>
> >   <version>0.1-SNAPSHOT</version>
> >   <artifactId>oak-lucene-shaded</artifactId>
> >   <name>Oak Lucene (shaded)</name>
> >   <description>Oak Lucene integration subproject</description>
> >
> >   <properties>
> >     <!-- element names reconstructed; the list archive stripped the tags,
> >          only the values 1.5, 4.7.1 and 1.4.6 survive -->
> >     <oak.version>1.5</oak.version>
> >     <lucene.version>4.7.1</lucene.version>
> >     <tika.version>1.4.6</tika.version>
> >   </properties>
> >
> >   <build>
> >     <plugins>
> >       <plugin>
> >         <groupId>org.apache.maven.plugins</groupId>
> >         <artifactId>maven-source-plugin</artifactId>
> >         <version>3.0.1</version>
> >         <executions>
> >           <execution>
> >             <id>generate-sources-for-shade-plugin</id>
> >             <phase>package</phase>
> >             <goals>
> >               <goal>jar-no-fork</goal>
> >             </goals>
> >           </execution>
> >         </executions>
> >       </plugin>
> >       <plugin>
> >         <groupId>org.apache.maven.plugins</groupId>
> >         <artifactId>maven-shade-plugin</artifactId>
> >         <version>3.0.0-SNAPSHOT</version>
> >         <executions>
> >           <execution>
> >             <phase>package</phase>
> >             <goals>
> >               <goal>shade</goal>
> >             </goals>
> >             <configuration>
> >               <createDependencyReducedPom>false</createDependencyReducedPom>
> >               <createSourcesJar>true</createSourcesJar>
> >               <shadeSourcesContent>true</shadeSourcesContent>
> >               <transformers>
> >                 <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
> >               </transformers>
> >               <relocations>
> >                 <relocation>
> >                   <pattern>org.apache.lucene</pattern>
> >                   <shadedPattern>org.shaded.apache.lucene</shadedPattern>
> >                 </relocation>
> >                 <relocation>
> >                   <pattern>org.tartarus.snowball</pattern>
> >                   <shadedPattern>org.shaded.tartarus.snowball</shadedPattern>
> >                 </relocation>
> >               </relocations>
> >               <artifactSet>
> >                 <excludes>
> >                   <exclude>org.apache.jackrabbit:oak-core</exclude>
> >                   <exclude>org.apache.jackrabbit:oak-commons</exclude>
> >                   <exclude>org.apache.jackrabbit:oak-blob</exclude>
> >                   <exclude>com.google.guava:guava</exclude>
> >                   <exclude>commons-codec:commons-codec</exclude>
> >                   <exclude>commons-io:commons-io</exclude>
> >                   <exclude>javax.jcr:jcr</exclude>
> >                   <exclude>org.apache.jackrabbit:jackrabbit-api</exclude>
> >                   <exclude>org.apache.jackrabbit:jackrabbit-jcr-commons</exclude>
> >                   <exclude>org.apache.tika:tika-core</exclude>
> >                   <exclude>org.slf4j:slf4j-api</exclude>
> >                 </excludes>
> >               </artifactSet>
> >             </configuration>
> >           </execution>
> >         </executions>
> >       </plugin>
> >     </plugins>
> >   </build>
> >
> >   <dependencies>
> >     <dependency>
> >       <groupId>org.apache.jackrabbit</groupId>
> >       <artifactId>oak-core</artifactId>
> >       <version>${oak.version}</version>
> >     </dependency>
> >     <dependency>
> >       <groupId>org.apache.jackrabbit</groupId>
> >       <artifactId>oak-lucene</artifactId>
> >       <version>${oak.version}</version>
> >     </dependency>
> >     <dependency>
> >       <groupId>org.apache.tika</groupId>
> >       <artifactId>tika-core</artifactId>
> >       <version>${tika.version}</version>
> >     </dependency>
> >
> >     <dependency>
> >       <groupId>org.apache.lucene</groupId>
> >       <artifactId>lucene-core</artifactId>
> >       <version>${lucene.version}</version>
> >     </dependency>
> >     <dependency>
> >       <groupId>org.apache.lucene</groupId>
> >       <artifactId>lucene-analyzers-common</artifactId>
> >       <version>${lucen

Re: oak-lucene shaded

2016-11-23 Thread Torgeir Veimo
Second version: this pom file can be put in a separate directory as a
self-contained Maven artifact and includes oak-lucene remotely.





http://maven.apache.org/POM/4.0.0; xmlns:xsi="
http://www.w3.org/2001/XMLSchema-instance; xsi:schemaLocation="
http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd
">
4.0.0

no.karriere
0.1-SNAPSHOT
oak-lucene-shaded
Oak Lucene (shaded)
Oak Lucene integration subproject


1.5
4.7.1
1.4.6






org.apache.maven.plugins
maven-source-plugin
3.0.1


generate-sources-for-shade-plugin
package

jar-no-fork





org.apache.maven.plugins
maven-shade-plugin
3.0.0-SNAPSHOT


package

shade





false
true
true





org.apache.lucene

org.shaded.apache.lucene


org.tartarus.snowball

org.shaded.tartarus.snowball





org.apache.jackrabbit:oak-core

org.apache.jackrabbit:oak-commons

org.apache.jackrabbit:oak-blob

com.google.guava:guava

commons-codec:commons-codec
commons-io:commons-io
javax.jcr:jcr

org.apache.jackrabbit:jackrabbit-api

org.apache.jackrabbit:jackrabbit-jcr-commons

org.apache.tika:tika-core
org.slf4j:slf4j-api











org.apache.jackrabbit
oak-core
${oak.version}


org.apache.jackrabbit
oak-lucene
${oak.version}


org.apache.tika
tika-core
${tika.version}




org.apache.lucene
lucene-core
${lucene.version}


org.apache.lucene
lucene-analyzers-common
${lucene.version}


org.apache.lucene
lucene-queryparser
${lucene.version}


org.apache.lucene
lucene-queries
${lucene.version}


org.apache.lucene
lucene-suggest
${lucene.version}


org.apache.lucene
lucene-highlighter
${lucene.version}


org.apache.lucene
lucene-memory
${lucene.version}


org.apache.lucene
lucene-misc
${lucene.version}


org.apache.lucene
lucene-facet
${lucene.version}



org.apache.tika
tika-parsers
${tika.version}
test


commons-logging
commons-logging







org.apache
Apache snapshots
http://repository.apache.org/content/repositories/snapshots


true






On 15 November 2016 at 11:12, Torgeir Veimo <torgeir.ve...@gmail.com> wrote:

> I'm in need of running oak in a non-osgi environment with a more recent
> version of lucene already on the classpath, so I've experimented with using
> the maven shade plugin to embed a shaded lucene jar inside the oak-lucene
> jar.
>
> I'm not very familiar with the shade plugin, so there might be better ways
> of doing this, but here's a modified pom.xml that will build an artifact
> that can be included with the id org.apache.jackrabbit:oak-lucene-shaded
>
> This might be useful for someone.
>
> 
>
> 
>
> http://maven.apache.org/POM/4.0.0; xmlns:xsi="
> http://www.w3.org/2001/XMLSchema-instance; xsi:schemaLocation="http://
> maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd &quo

oak-lucene shaded

2016-11-14 Thread Torgeir Veimo
I'm in need of running oak in a non-osgi environment with a more recent
version of lucene already on the classpath, so I've experimented with using
the maven shade plugin to embed a shaded lucene jar inside the oak-lucene
jar.

I'm not very familiar with the shade plugin, so there might be better ways
of doing this, but here's a modified pom.xml that will build an artifact
that can be included with the id org.apache.jackrabbit:oak-lucene-shaded

This might be useful for someone.





http://maven.apache.org/POM/4.0.0; xmlns:xsi="
http://www.w3.org/2001/XMLSchema-instance; xsi:schemaLocation="
http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd
">
4.0.0


org.apache.jackrabbit
oak-parent
1.4.6
../oak-parent/pom.xml


oak-lucene-shaded
Oak Lucene (shaded)

Oak Lucene integration subproject


1.5






org.apache.maven.plugins
maven-source-plugin


generate-sources-for-shade-plugin
package

jar-no-fork





org.apache.maven.plugins
maven-shade-plugin
2.4.3


package

shade




false
true
true
true



org.apache.lucene

org.shaded.apache.lucene


org.tartarus.snowball

org.shaded.tartarus.snowball





org.apache.jackrabbit:oak-core

org.apache.jackrabbit:oak-commons

org.apache.jackrabbit:oak-blob

com.google.guava:guava

commons-codec:commons-codec
commons-io:commons-io
javax.jcr:jcr

org.apache.jackrabbit:jackrabbit-api

org.apache.jackrabbit:jackrabbit-jcr-commons

org.apache.tika:tika-core
org.slf4j:slf4j-api












org.osgi
org.osgi.core
provided


org.osgi
org.osgi.compendium
provided


biz.aQute.bnd
bndlib
provided


org.apache.felix
org.apache.felix.scr.annotations
provided



org.apache.jackrabbit
oak-core
${project.version}


org.apache.tika
tika-core
${tika.version}




org.apache.lucene
lucene-core
${lucene.version}


org.apache.lucene
lucene-analyzers-common
${lucene.version}


org.apache.lucene
lucene-queryparser
${lucene.version}


org.apache.lucene
lucene-queries
${lucene.version}


org.apache.lucene
lucene-suggest
${lucene.version}


org.apache.lucene
lucene-highlighter
${lucene.version}


org.apache.lucene
lucene-memory
${lucene.version}


org.apache.lucene
lucene-misc
${lucene.version}


org.apache.lucene
lucene-facet
${lucene.version}




org.slf4j
slf4j-api




com.google.code.findbugs
jsr305




ch.qos.logback
logback-classic
test


junit
junit
test


org.mongodb
mongo-java-driver
test


org.apache.jackrabbit
oak-core
${project.version}
tests
test


org.apache.jackrabbit
oak-jcr
${project.version}
test


org.apache.jackrabbit
   

Re: oak-upgrade problems migrating from Jackrabbit 2 to Oak

2016-09-22 Thread Torgeir Veimo
Maybe you can work around it by upgrading using tarmk, then copying to a
mongodb repository when it's all done.

On 23 September 2016 at 00:08, Robert Haycock <
robert.hayc...@artificial-solutions.com> wrote:

> Thanks Tomek,
>
> Getting closer!
>
> Looks like a setting somewhere needs increasing...
>
> Exception in thread "main" java.lang.RuntimeException:
> javax.jcr.RepositoryException: Failed to copy content
> at com.google.common.io.Closer.rethrow(Closer.java:149)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> migrate(OakUpgrade.java:58)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.main(
> OakUpgrade.java:42)
> Caused by: javax.jcr.RepositoryException: Failed to copy content
> at org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.
> copy(RepositoryUpgrade.java:551)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> upgrade(OakUpgrade.java:65)
> at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.
> migrate(OakUpgrade.java:53)
> ... 1 more
> Caused by: org.bson.BsonSerializationException: Size 24184261 is larger
> than MaxDocumentSize 16793600.
> at org.bson.BsonBinaryWriter.backpatchSize(
> BsonBinaryWriter.java:367)
> at org.bson.BsonBinaryWriter.doWriteEndDocument(
> BsonBinaryWriter.java:122)
> at org.bson.AbstractBsonWriter.writeEndDocument(
> AbstractBsonWriter.java:293)
> at com.mongodb.DBObjectCodec.encodeMap(DBObjectCodec.java:222)
> at com.mongodb.DBObjectCodec.writeValue(DBObjectCodec.java:196)
> at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:128)
> at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:61)
> at org.bson.codecs.BsonDocumentWrapperCodec.encode(
> BsonDocumentWrapperCodec.java:63)
> at org.bson.codecs.BsonDocumentWrapperCodec.encode(
> BsonDocumentWrapperCodec.java:29)
> at com.mongodb.connection.UpdateCommandMessage.writeTheWrites(
> UpdateCommandMessage.java:84)
> at com.mongodb.connection.UpdateCommandMessage.writeTheWrites(
> UpdateCommandMessage.java:42)
> at com.mongodb.connection.BaseWriteCommandMessage.
> encodeMessageBodyWithMetadata(BaseWriteCommandMessage.java:129)
> at com.mongodb.connection.RequestMessage.encodeWithMetadata(
> RequestMessage.java:160)
> at com.mongodb.connection.WriteCommandProtocol.sendMessage(
> WriteCommandProtocol.java:212)
> at com.mongodb.connection.WriteCommandProtocol.execute(
> WriteCommandProtocol.java:101)
> at com.mongodb.connection.UpdateCommandProtocol.execute(
> UpdateCommandProtocol.java:64)
> at com.mongodb.connection.UpdateCommandProtocol.execute(
> UpdateCommandProtocol.java:37)
> at com.mongodb.connection.DefaultServer$
> DefaultServerProtocolExecutor.execute(DefaultServer.java:159)
> at com.mongodb.connection.DefaultServerConnection.executeProtocol(
> DefaultServerConnection.java:286)
> at com.mongodb.connection.DefaultServerConnection.updateCommand(
> DefaultServerConnection.java:140)
> at com.mongodb.operation.MixedBulkWriteOperation$Run$3.
> executeWriteCommandProtocol(MixedBulkWriteOperation.java:480)
> at com.mongodb.operation.MixedBulkWriteOperation$Run$
> RunExecutor.execute(MixedBulkWriteOperation.java:646)
> at com.mongodb.operation.MixedBulkWriteOperation$Run.execute(
> MixedBulkWriteOperation.java:399)
> at com.mongodb.operation.MixedBulkWriteOperation$1.
> call(MixedBulkWriteOperation.java:179)
> at com.mongodb.operation.MixedBulkWriteOperation$1.
> call(MixedBulkWriteOperation.java:168)
> at com.mongodb.operation.OperationHelper.withConnectionSource(
> OperationHelper.java:230)
> at com.mongodb.operation.OperationHelper.withConnection(
> OperationHelper.java:221)
> at com.mongodb.operation.MixedBulkWriteOperation.execute(
> MixedBulkWriteOperation.java:168)
> at com.mongodb.operation.MixedBulkWriteOperation.execute(
> MixedBulkWriteOperation.java:74)
> at com.mongodb.Mongo.execute(Mongo.java:781)
> at com.mongodb.Mongo$2.execute(Mongo.java:764)
> at com.mongodb.DBCollection.executeBulkWriteOperation(
> DBCollection.java:2195)
> at com.mongodb.DBCollection.executeBulkWriteOperation(
> DBCollection.java:2188)
> at com.mongodb.BulkWriteOperation.execute(
> BulkWriteOperation.java:121)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.sendBulkUpdate(MongoDocumentStore.java:1088)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.bulkUpdate(MongoDocumentStore.java:989)
> at org.apache.jackrabbit.oak.plugins.document.mongo.
> MongoDocumentStore.createOrUpdate(MongoDocumentStore.java:927)
> at org.apache.jackrabbit.oak.plugins.document.util.
> LeaseCheckDocumentStoreWrapper.createOrUpdate(
> LeaseCheckDocumentStoreWrapper.java:135)
> at 

Re: Fwd: new atomicCounter support (OAK-2220)

2016-01-12 Thread Torgeir Veimo
Are there any plans to introduce a cluster-safe mechanism for providing
unique counter values?

Most applications that use a sequence don't require strict
sequentiality, but rather a unique number that is easier for humans to
memorize / write than UUIDs. So a type similar to MongoDB's ObjectId type
would be sufficient:

https://docs.mongodb.org/manual/reference/object-id/
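A generator in that spirit can be sketched with the JDK alone (the class and names below are mine, not an Oak API): a timestamp, a random per-process component, and an atomic counter make collisions across uncoordinated cluster nodes practically impossible:

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicInteger;

public final class ShortId {

    // 5 random bytes chosen once per process, like MongoDB's ObjectId.
    private static final byte[] PROCESS = new byte[5];
    private static final AtomicInteger COUNTER =
            new AtomicInteger(new SecureRandom().nextInt());

    static {
        new SecureRandom().nextBytes(PROCESS);
    }

    // 12 bytes total: 4-byte epoch seconds + 5 process-random bytes +
    // 3-byte counter, rendered as 24 hex characters.
    public static String next() {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt((int) (System.currentTimeMillis() / 1000L));
        buf.put(PROCESS);
        int c = COUNTER.getAndIncrement();
        buf.put((byte) (c >>> 16)).put((byte) (c >>> 8)).put((byte) c);
        StringBuilder sb = new StringBuilder(24);
        for (byte b : buf.array()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

The values sort roughly by creation time (leading timestamp bytes) and are far shorter to read out than a UUID, though they are unique rather than sequential.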



On 4 January 2016 at 20:35, Davide Giannella <dav...@apache.org> wrote:

> On 21/12/2015 14:17, Torgeir Veimo wrote:
> > Can someone confirm if the atomic counter support code is suitable for
> > sequences?
> >
> I think on Segment it can safely be used but not on a mongo cluster.
>
> What the atomic counter ensures is that the amount of
> increments/decrements is consistent from a persistence POV no matter the
> number of concurrent operations you're pushing in.
>
> While the sequence use case is more like: give me the next available
> value. it's definitely a good starting point on top of which you can
> build a sequence mechanism.
>
> if we speak of single instance Segment only, you can opt for an
> application side lock in which you put in the increments and fetch of
> the value.  Something like:
>
> public static long nextSeq(String p, Session s) {
>   // ... fetching of the node by path `p`
>   // initiate application side lock
>   session.refresh(); // fetch whatever may come from other MVCC
>   counter.setProperty("oak:increment", 1);
>   session.save();
>   long seq = counter.getProperty(...).getLong();
>   // release lock
>   return seq;
> }
>
> As said, on segment ONLY, it should work fairly fine.
>
> It won't work on a mongo clustered instance as the cluster nodes
> alignment is asynchronous and because of OAK-2472. So to be put in
> words, you may have the same sequence returned by two different cluster
> nodes.
>
> HTH
> Davide
>
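The pattern Davide describes — serialise the increment-and-read behind an application-side lock — can be factored out like this (a sketch; the class is mine, and with Oak the supplier would refresh the session, write oak:increment, save, and read oak:counter back):

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.LongSupplier;

public final class LockedSequence {

    private final ReentrantLock lock = new ReentrantLock();
    private final LongSupplier incrementAndRead;

    public LockedSequence(LongSupplier incrementAndRead) {
        this.incrementAndRead = incrementAndRead;
    }

    // Serialises callers so no two threads interleave the increment and
    // the read -- only safe on a single Segment instance, as noted above.
    public long next() {
        lock.lock();
        try {
            return incrementAndRead.getAsLong();
        } finally {
            lock.unlock();
        }
    }
}
```

Used with an in-memory counter, `new LockedSequence(counter::incrementAndGet)` hands out strictly increasing values; across MongoDB cluster nodes the caveat about asynchronous alignment and OAK-2472 still applies.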



-- 
-Tor


Fwd: new atomicCounter support (OAK-2220)

2015-12-21 Thread Torgeir Veimo
Can someone confirm if the atomic counter support code is suitable for
sequences?

https://jackrabbit.apache.org/oak/docs/apidocs/org/apache/jackrabbit/oak/plugins/atomic/AtomicCounterEditor.html

If my reading of the spec is correct, separate sessions will see their
results from the increment immediately, isolated from other sessions
incrementing the same counter, so the result of an increment can
safely be used as a sequence number as long as the counter is only ever
incremented in any code?

On 13 November 2015 at 14:40, Torgeir Veimo <torgeir.ve...@gmail.com> wrote:

> Am trying to use the new atomicSupport feature, and am wondering if I'm
> using it the correct way;
>
> My counter is initialised with
>
> Node counter = JcrUtils.getOrCreateByPath("/testCounter",
> "oak:Unstructured", session);
> if (counter.isNew()) {
>   counter.addMixin("mix:atomicCounter");
>   session.save();
> }
>
> then when fetching a new value
>
> counter.setProperty("oak:increment", 1);
> session.save();
> long value = counter.getProperty("oak:counter").getLong();
>
> What happens if the same code is called from a different thread / session
> after the session.save() but before the getProperty() call? Will the value
> remain unchanged in this thread / session?
>
> --
> -Tor
>



-- 
-Tor



-- 
-Tor


Re: New Oak Example - Standalone Runnable Example based on Spring Boot

2015-12-03 Thread Torgeir Veimo
Do you have a similar example of how to configure this with an actual
embedded OSGi container running?

I'd imagine it would be nice to be able to shield all Oak dependencies
(e.g. Lucene) from other dependencies a webapp uses.

On 3 December 2015 at 18:50, Chetan Mehrotra 
wrote:

> Hi Team,
>
> For past sometime I was working on a new example to demonstrate usage
> and setup of Oak as part of standalone application (OAK-3687). It
> enables a user to start fully configured Oak in short time and get
> going. For more details refer to readme at [1]
>
> Main objectives with this example app are
>
> 1. How to configure and setup Oak in embedded mode
>
> 2. Quick setup with support for both Segment (default) and Mongo
>
> 3. Makes use of in built OSGi support without requiring to run in
> actual OSGi container. Thus makes it easier to embed Oak in normal
> webapps or other standalone apps
>
> 4, Integrated with Felix WebConsole and Script Console
>
> Kindly give the example a try and provide your valuable feedback!
>
> Chetan Mehrotra
> [1]
> https://github.com/apache/jackrabbit-oak/tree/trunk/oak-examples/standalone
>



-- 
-Tor


Re: JVM usage

2015-10-16 Thread Torgeir Veimo
Does your webapp restart during this time?

On 17 October 2015 at 03:14, Jim.Tully  wrote:
> This is probably something that we are doing incorrectly, but it has me 
> scratching my head.
>
> We are running Oak embedded in a web application.  We construct a repository 
> at startup using fairly standard construction:
>
>
> DocumentNodeStore ns= new 
> DocumentMK.Builder().setMongoDB(createMongoDB()).getNodeStore();
>
>
> Oak oak = new Oak(ns);
>
>
> LuceneIndexProvider provider = new LuceneIndexProvider();
>
> Jcr jcr = new Jcr(oak).with((QueryIndexProvider) provider).with((Observer) 
> provider)
>
> .with(new LuceneIndexEditorProvider()).withAsyncIndexing();
>
> repository = jcr.createRepository();
>
> Looking at the JVM memory using JVisualVM, we see that memory usage increases 
> over time.  Using heap dumps, we've determined that oak object is constantly 
> growing in terms of  memory footprint.
>
> Obviously the object will never be garbage collected, but I'm trying to track 
> down why it is growing over time.  Our interactions with Oak all follow the 
> same pattern:
> Session session = repository.login(credentials, null);
> ... do some work
> session.logout();
>
> Any thoughts on possible reasons for Oak to keep increasing in size?
>
> Thanks,
>
> Jim



-- 
-Tor


Re: Recipe for using Oak in standalone environments

2015-08-19 Thread Torgeir Veimo
Hi,

better documentation is always welcome.

For those using a Spring environment, it would be nice if there was a
version of the oak-lucene module that used jar shading to avoid
leaking the embedded Lucene 4.7 classes, which can cause conflicts
with other Lucene versions that might be required in a project.


On 18 August 2015 at 22:22, Chetan Mehrotra chetan.mehro...@gmail.com wrote:
 Hi,

 Of late I have seen quite a few queries from people trying to use Oak
 in a non-OSGi environment, like embedding the repository in a webapp deployed
 on Tomcat, or just in a standalone application. Our current documented
 way [1] is very limited: a repository constructed in such a way does
 not expose the full power of the Oak stack, and the user also has to know many
 internal setup details of Oak to get it working correctly.

 Quite a bit of setup logic in Oak is now dependent on OSGi
 configuration. Trying to setup Oak without using those would not
 provide a stable and performant setup. Instead of that if we can have
 a way to reuse all the OSGi based setup support and still enable users
 to use Oak in non OSGi env then that would provide a more stable setup
 approach.

 Recipe
 ==

 For some time now I have been working on the oak-pojosr module [2]. This
 module is now stable and can be used to set up Oak with all the OSGi
 support in a non-OSGi world, like a webapp. For an end user the steps
 required would be

 1. Create a config file for enabling various parts of Oak

 {
 "org.apache.felix.jaas.Configuration.factory-LoginModuleImpl": {
 "jaas.controlFlag": "required",
 "jaas.classname":
 "org.apache.jackrabbit.oak.security.authentication.user.LoginModuleImpl",
 "jaas.ranking": 100
 },
 ...,
 "org.apache.jackrabbit.oak.jcr.osgi.RepositoryManager": {},
 "org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService": {
 "mongouri": "mongodb://${mongodb.host}:${mongodb.port}",
 "db": "${mongodb.name}"
 }
 }

 2. Add dependency to oak-pojosr and thus include various transitive
 dependencies like Felix Connect, SCR, ConfigAdmin etc

 3. Construct a repository instance

 import org.apache.jackrabbit.commons.JcrUtils;

 Map<String, String> config = new HashMap<String, String>();
 config.put("org.apache.jackrabbit.repository.home", "/path/to/repo");
 config.put("org.apache.jackrabbit.oak.repository.configFile",
 "/path/to/oak-config.json");

 Repository repository = JcrUtils.getRepository(config);

 That's all! This would construct a full Oak stack based on the OSGi config,
 with all Lucene and Solr support usable.

 Examples
 

 WebApp
 

 I have adapted the existing Jackrabbit Webapp module to work with Oak
 and be constructed based on oak-pojosr [3]. You can check out the app
 and just run

 mvn jetty:run

 Access the webui at http://localhost:8080 and create a repository as
 per UI. It currently has following features

 1. Repository is configurable via a JSON file copied to the 'oak' folder (default)

 2. Felix WebConsole is integrated - Allows developer to view OSGi
 state and config etc
 Check /osgi/system/console

 3. Felix Script Console integrated to get programmatic access to the repository

 4. All Oak MBeans registered and can be used by the user to perform
 maintenance tasks

 Spring Boot
 

 Clay has been working on an Oak based application [4] which uses Spring
 Boot [7]. The fork of the same at [5] is now using pojosr to configure
 a repository to be used in Spring [6]. In addition, the Felix
 WebConsole etc. would also work

 To try it out, check out the application and build it. Then run the following
 command

 java -jar target/com.meta64.mobile-0.0.1-SNAPSHOT.jar --jcrHome=oak 
 --jcrAdminPassword=password --aeskey=password --server.port=8990 
 --spring.config.location=classpath:/application.properties,classpath:/application-dev.properties

 And then access the app at 8990 port

 Proposal
 ===

 Do share your feedback on the above proposed approach, in particular the
 following aspect:

 Q - Should we make oak-pojosr based setup as one of the
 recommended/supported approach for configuring Oak in non OSGi env

 Chetan Mehrotra
 [1] http://jackrabbit.apache.org/oak/docs/construct.html
 [2] https://github.com/apache/jackrabbit-oak/tree/trunk/oak-pojosr
 [3] https://github.com/apache/jackrabbit-oak/tree/trunk/oak-examples/webapp
 [4] https://github.com/Clay-Ferguson/meta64
 [5] https://github.com/chetanmeh/meta64/tree/oak-pojosr
 [6] 
 https://github.com/chetanmeh/meta64/blob/oak-pojosr/src/main/java/com/meta64/mobile/repo/OakRepository.java#L218
 [7] http://projects.spring.io/spring-boot/



-- 
-Tor


Re: JCR + MongoDb + Lucene Holy Grail Finally Found

2015-08-06 Thread Torgeir Veimo
On 5 August 2015 at 18:43, Bertrand Delacretaz bdelacre...@apache.org wrote:
 And on top of that, as you say, when one needs to integrate
 legacy/ugly code OSGi can be a lifesaver.

Actually, OSGi is a requirement if integrating oak-lucene with any
code that uses lucene, as oak-lucene is hard coded to lucene 4.7.1,
and even includes lucene classes directly. I'd actually label
oak-lucene the 'legacy' component here.


-- 
-Tor


Re: [jira] [Updated] (OAK-2736) Oak instance does not close the executors created upon ContentRepository creation

2015-07-20 Thread Torgeir Veimo
This issue was resolved then reopened. Was there any more activity
towards getting it properly resolved?

On 20 July 2015 at 20:05, Davide Giannella (JIRA) j...@apache.org wrote:

  [ 
 https://issues.apache.org/jira/browse/OAK-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
  ]

 Davide Giannella updated OAK-2736:
 --
 Fix Version/s: (was: 1.3.3)
1.3.4

 Bulk move to 1.3.4

 Oak instance does not close the executors created upon ContentRepository 
 creation
 -

 Key: OAK-2736
 URL: https://issues.apache.org/jira/browse/OAK-2736
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: CI, Jenkins
 Fix For: 1.3.4

 Attachments: OAK-2736-2.patch, OAK-2736.patch


 Oak.createContentRepository does not close the executors it creates upon 
 close. It should close the executor if it was created by itself and not 
 passed in from outside
 Also see recent [thread|http://markmail.org/thread/rryydj7vpua5qbub].



 --
 This message was sent by Atlassian JIRA
 (v6.3.4#6332)


Fwd: using mix:lockable throwing Unresolved conflicts in oak 1.2.2

2015-05-28 Thread Torgeir Veimo
I have some code which has been working flawlessly for a long time for
getting sequence numbers, but with oak 1.2.2 it's started throwing
exceptions. I am wondering if there's a way to use the
org.apache.jackrabbit.util.Locked class so that it will not throw
exceptions when used with Oak, given the different session isolation
configuration it has compared to Jackrabbit 2.8.*?

javax.jcr.RepositoryException: Unable to unlock node /ka:system/ka:counter
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.unlock(NodeDelegate.java:837)
at 
org.apache.jackrabbit.oak.jcr.lock.LockManagerImpl$8.perform(LockManagerImpl.java:176)
at 
org.apache.jackrabbit.oak.jcr.lock.LockManagerImpl$8.perform(LockManagerImpl.java:170)
at 
org.apache.jackrabbit.oak.jcr.lock.LockOperation.perform(LockOperation.java:68)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:216)
at 
org.apache.jackrabbit.oak.jcr.lock.LockManagerImpl.perform(LockManagerImpl.java:214)
at 
org.apache.jackrabbit.oak.jcr.lock.LockManagerImpl.unlock(LockManagerImpl.java:170)
at org.apache.jackrabbit.util.Locked.runAndUnlock(Locked.java:280)
at org.apache.jackrabbit.util.Locked.with(Locked.java:195)
at org.apache.jackrabbit.util.Locked.with(Locked.java:124)
at org.apache.jackrabbit.util.Locked.with(Locked.java:103)
at no.xx. 
content.services.repository.RepositoryService.getNewSerialNumber(RepositoryService.java:230)

[...]

then

Caused by: org.apache.jackrabbit.oak.api.CommitFailedException:
OakState0001: Unresolved conflicts in /ka:system/ka:counter
at 
org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
at 
org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyChanged(ConflictValidator.java:60)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyChanged(CompositeEditor.java:91)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyChanged(EditorDiff.java:93)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:596)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:456)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at 
org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
at 
org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
at 
org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:61)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:61)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:405)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:428)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:484)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:162)
at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:313)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:338)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.unlock(NodeDelegate.java:831)

... 114 more

13:19:22,066 WARN  o.a.j.o.j.s.SessionContext.perform() l: 397
[127.0.0.1] Failed to unlock a session scoped lock
javax.jcr.lock.LockException: Node /ka:system/ka:counter is not locked
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.unlock(NodeDelegate.java:825)
at 
org.apache.jackrabbit.oak.jcr.session.SessionContext$1.perform(SessionContext.java:395)
at 
org.apache.jackrabbit.oak.jcr.session.SessionContext$1.perform(SessionContext.java:387)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:216)
at 
org.apache.jackrabbit.oak.jcr.session.SessionContext.unlockAllSessionScopedLocks(SessionContext.java:387)
at 
org.apache.jackrabbit.oak.jcr.session.SessionContext.dispose(SessionContext.java:369)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$10.perform(SessionImpl.java:481)
at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$10.perform(SessionImpl.java:478)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:216)
at 

Re: builder must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder

2015-05-05 Thread Torgeir Veimo
Excellent, this was just what I needed!

I had to add a null check on .withBlobStore(blobStore) and
builder.setBlobStore(blobStore) so that it didn't fail with an NPE when no
blob store was configured, but otherwise it worked flawlessly in
converting my 800MB test repository.


On 5 May 2015 at 19:56, Julian Sedding jsedd...@gmail.com wrote:
 Hi Torgeir

 Take a look at OAK-2643[0], that may help. You can copy the repository
 on the NodeStore level, similar to the upgrade scenario.

 Regards
 Julian

 [0] https://issues.apache.org/jira/browse/OAK-2643

 On Tue, May 5, 2015 at 2:20 AM, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 Had a look in the source, and it appears this is due to the merge()
 method only being available in the DocumentRootBuilder, not in the
 NodeBuilder interface?

 Is there any other way to move a repository from tar storage to
 mongodb without regenerating UUID / identifier values?

 On 4 May 2015 at 23:26, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 I didn't have much luck on the users list with this problem. Is there
 any work towards upgrading from tarmk to documentmk easier?

 I'm trying to move a repository from tarmk to mongodb, and am using
 the oak-run restore command, but am getting an exception;

 spazzo:oak-run torgeir$ java -jar ./target/oak-run-1.2.1.jar restore
 mongodb://localhost/oak ~/oak-repository-backup-20150427a

 Apache Jackrabbit Oak 1.2.1
 Exception in thread main java.lang.IllegalArgumentException: builder
 must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder
 at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.asDocumentRootBuilder(DocumentNodeStore.java:2232)
 at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1490)
 [...]

 Looking at the mailing list it might be due to OAK-2049. I've tried
 running the node count script in oak console to find the offending
 node, but it detects no errors.

 http://oak-dev.jackrabbit.apache.narkive.com/KVUViR6K/cannot-backup-and-restore-aem-tar-repository

 Are there any other things I need to fix in the repository? I've run
 the check, compact and then backup commands on the repository prior to
 trying to restore it to mongodb.

 --
 -Tor



 --
 -Tor



-- 
-Tor


Fwd: builder must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder

2015-05-04 Thread Torgeir Veimo
I didn't have much luck on the users list with this problem. Is there
any work towards upgrading from tarmk to documentmk easier?

I'm trying to move a repository from tarmk to mongodb, and am using
the oak-run restore command, but am getting an exception;

spazzo:oak-run torgeir$ java -jar ./target/oak-run-1.2.1.jar restore
mongodb://localhost/oak ~/oak-repository-backup-20150427a

Apache Jackrabbit Oak 1.2.1
Exception in thread main java.lang.IllegalArgumentException: builder
must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.asDocumentRootBuilder(DocumentNodeStore.java:2232)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1490)
[...]

Looking at the mailing list it might be due to OAK-2049. I've tried
running the node count script in oak console to find the offending
node, but it detects no errors.

http://oak-dev.jackrabbit.apache.narkive.com/KVUViR6K/cannot-backup-and-restore-aem-tar-repository

Are there any other things I need to fix in the repository? I've run
the check, compact and then backup commands on the repository prior to
trying to restore it to mongodb.

-- 
-Tor


Re: builder must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder

2015-05-04 Thread Torgeir Veimo
Had a look in the source, and it appears this is due to the merge()
method only being available in the DocumentRootBuilder, not in the
NodeBuilder interface?

Is there any other way to move a repository from tar storage to
mongodb without regenerating UUID / identifier values?

On 4 May 2015 at 23:26, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 I didn't have much luck on the users list with this problem. Is there
 any work towards making the upgrade from tarmk to documentmk easier?

 I'm trying to move a repository from tarmk to mongodb, and am using
 the oak-run restore command, but am getting an exception;

 spazzo:oak-run torgeir$ java -jar ./target/oak-run-1.2.1.jar restore
 mongodb://localhost/oak ~/oak-repository-backup-20150427a

 Apache Jackrabbit Oak 1.2.1
 Exception in thread "main" java.lang.IllegalArgumentException: builder
 must be a org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder
 at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.asDocumentRootBuilder(DocumentNodeStore.java:2232)
 at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1490)
 [...]

 Looking at the mailing list it might be due to OAK-2049. I've tried
 running the node count script in oak console to find the offending
 node, but it detects no errors.

 http://oak-dev.jackrabbit.apache.narkive.com/KVUViR6K/cannot-backup-and-restore-aem-tar-repository

 Are there any other things I need to fix in the repository? I've run
 the check, compact and then backup commands on the repository prior to
 trying to restore it to mongodb.

 --
 -Tor



-- 
-Tor


Re: working lucene fulltext index

2015-03-10 Thread Torgeir Veimo
Thank you! This example helped me iron out the errors in my index configuration!

It would be good to have a bit more example code online for these things.

On 6 March 2015 at 04:16, Chetan Mehrotra chetan.mehro...@gmail.com wrote:
 Hi Torgeir,

 Sorry for the delay here, as I got stuck with other issues. I tried your
 approach and it looks like you had a typo in your index definition:

 -  .setProperty("isRegExp", true)
 +  .setProperty("isRegexp", true)
    .setProperty("nodeScopeIndex", true);

 I tried to create a standalone example which you can give a try to see
 lucene index in action [1]

 Let me know if you still face any issue

 Chetan Mehrotra
 [1] https://gist.github.com/chetanmeh/c1ccc4fa588ed1af467b


 On Wed, Feb 25, 2015 at 7:26 PM, Torgeir Veimo torgeir.ve...@gmail.com 
 wrote:
 Sorted out my lucene version issues, so not getting that exception any
 more, but still not getting any query results. Still seeing multiple
 of these in the logs;

 23:55:14,288 TRACE lucene.IndexDefinition.collectIndexRules() - line
 519 [0:0:0:0:0:0:0:1] - Found rule 'IndexRule: ka:asset' for NodeType
 'ka:asset'
 23:55:14,288 TRACE lucene.IndexDefinition.collectIndexRules() - line
 535 [0:0:0:0:0:0:0:1] - Registering rule 'IndexRule: ka:asset' for
 name 'ka:asset'

 On 25 February 2015 at 16:49, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 I tried without the async: async property on the lucene index, on an
 empty repository, and am seeing an exception.

 Any idea on how I can try to find the cause of this?

 I assume if I tried to run with the lucene index on disk instead of in
 the segment store, I might avoid this, but the documentation doesn't
 really outline how to do this in much detail.

 16:44:09,437 INFO  index.IndexUpdate.enter() - line 110 [] -
 Reindexing will be performed for following indexes:
 [/oak:index/ka:owner, /oak:index/positionref, /oak:index/targetId,
 /oak:index/uuid, /oak:index/ka:id, /oak:index/mail,
 /oak:index/ka:tags, /oak:index/active, /oak:index/ka:applicationState,
 /oak:index/parentTargetId, /oak:index/reference, /oak:index/ka:uid,
 /oak:index/ka:rememberme, /oak:index/ka:state, /oak:index/ka:serial,
 /oak:index/ka:assetType, /oak:index/lucene, /oak:index/ka:series,
 /oak:index/ka:principal, /oak:index/affiliation, /oak:index/ka:expire,
 /oak:index/companyref, /oak:index/title, /oak:index/lastCommentDate,
 /oak:index/ka:subscriptionFrequency, /oak:index/nodetype]
 16:44:09,547 WARN  support.AbstractApplicationContext.refresh() - line
 486 [] - Exception encountered during context initialization -
 cancelling refresh attempt
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'assetOwnerPermission': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: no.karriere.content.dao.AssetRepository
 no.karriere.content.authorization.permissions.AbstractPermission.assetRepository;
 nested exception is
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'assetRepository': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: no.karriere.content.dao.jcr.MediaHelper
 no.karriere.content.dao.jcr.JcrAssetRepository.mediaHelper; nested
 exception is org.springframework.beans.factory.BeanCreationException:
 Error creating bean with name 'mediaHelper': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field:
 no.karriere.content.services.repository.RepositoryService
 no.karriere.content.dao.jcr.MediaHelper.repositoryService; nested
 exception is org.springframework.beans.factory.BeanCreationException:
 Error creating bean with name 'repositoryService': Injection of
 autowired dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: javax.jcr.Repository
 no.karriere.content.services.repository.RepositoryService.oakRepository;
 nested exception is
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'getRepository' defined in class path resource
 [no/karriere/content/dao/jcr/repository/RepositoryConfiguration.class]:
 Instantiation of bean failed; nested exception is
 org.springframework.beans.factory.BeanDefinitionStoreException:
 Factory method [public javax.jcr.Repository
 no.karriere.content.dao.jcr.repository.RepositoryConfiguration.getRepository()
 throws no.karriere.content.exception.ContentException] threw
 exception; nested exception is java.lang.AbstractMethodError:
 org.apache.lucene.store.IndexOutput.getChecksum()J
 at 
 org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:298

Re: working lucene fulltext index

2015-02-25 Thread Torgeir Veimo
Sorted out my lucene version issues, so not getting that exception any
more, but still not getting any query results. Still seeing multiple
of these in the logs;

23:55:14,288 TRACE lucene.IndexDefinition.collectIndexRules() - line
519 [0:0:0:0:0:0:0:1] - Found rule 'IndexRule: ka:asset' for NodeType
'ka:asset'
23:55:14,288 TRACE lucene.IndexDefinition.collectIndexRules() - line
535 [0:0:0:0:0:0:0:1] - Registering rule 'IndexRule: ka:asset' for
name 'ka:asset'

On 25 February 2015 at 16:49, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 I tried without the async: async property on the lucene index, on an
 empty repository, and am seeing an exception.

 Any idea on how I can try to find the cause of this?

 I assume if I tried to run with the lucene index on disk instead of in
 the segment store, I might avoid this, but the documentation doesn't
 really outline how to do this in much detail.

 16:44:09,437 INFO  index.IndexUpdate.enter() - line 110 [] -
 Reindexing will be performed for following indexes:
 [/oak:index/ka:owner, /oak:index/positionref, /oak:index/targetId,
 /oak:index/uuid, /oak:index/ka:id, /oak:index/mail,
 /oak:index/ka:tags, /oak:index/active, /oak:index/ka:applicationState,
 /oak:index/parentTargetId, /oak:index/reference, /oak:index/ka:uid,
 /oak:index/ka:rememberme, /oak:index/ka:state, /oak:index/ka:serial,
 /oak:index/ka:assetType, /oak:index/lucene, /oak:index/ka:series,
 /oak:index/ka:principal, /oak:index/affiliation, /oak:index/ka:expire,
 /oak:index/companyref, /oak:index/title, /oak:index/lastCommentDate,
 /oak:index/ka:subscriptionFrequency, /oak:index/nodetype]
 16:44:09,547 WARN  support.AbstractApplicationContext.refresh() - line
 486 [] - Exception encountered during context initialization -
 cancelling refresh attempt
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'assetOwnerPermission': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: no.karriere.content.dao.AssetRepository
 no.karriere.content.authorization.permissions.AbstractPermission.assetRepository;
 nested exception is
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'assetRepository': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: no.karriere.content.dao.jcr.MediaHelper
 no.karriere.content.dao.jcr.JcrAssetRepository.mediaHelper; nested
 exception is org.springframework.beans.factory.BeanCreationException:
 Error creating bean with name 'mediaHelper': Injection of autowired
 dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field:
 no.karriere.content.services.repository.RepositoryService
 no.karriere.content.dao.jcr.MediaHelper.repositoryService; nested
 exception is org.springframework.beans.factory.BeanCreationException:
 Error creating bean with name 'repositoryService': Injection of
 autowired dependencies failed; nested exception is
 org.springframework.beans.factory.BeanCreationException: Could not
 autowire field: javax.jcr.Repository
 no.karriere.content.services.repository.RepositoryService.oakRepository;
 nested exception is
 org.springframework.beans.factory.BeanCreationException: Error
 creating bean with name 'getRepository' defined in class path resource
 [no/karriere/content/dao/jcr/repository/RepositoryConfiguration.class]:
 Instantiation of bean failed; nested exception is
 org.springframework.beans.factory.BeanDefinitionStoreException:
 Factory method [public javax.jcr.Repository
 no.karriere.content.dao.jcr.repository.RepositoryConfiguration.getRepository()
 throws no.karriere.content.exception.ContentException] threw
 exception; nested exception is java.lang.AbstractMethodError:
 org.apache.lucene.store.IndexOutput.getChecksum()J
 at 
 org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:298)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1148)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)

 [ lots of nested spring stuff]

 Caused by: org.springframework.beans.factory.BeanDefinitionStoreException:
 Factory method [public javax.jcr.Repository
 no.karriere.content.dao.jcr.repository.RepositoryConfiguration.getRepository()
 throws no.karriere.content.exception.ContentException] threw
 exception; nested exception is java.lang.AbstractMethodError:
 org.apache.lucene.store.IndexOutput.getChecksum()J

working lucene fulltext index

2015-02-24 Thread Torgeir Veimo
I'm having trouble getting a working lucene fulltext index with oak
1.1.6. Have posted to the user ml, but with little response, so am
hoping to be excused by posting here.

My repository is configured with the following setup and
InitialContent impl. Still, I am getting no results when using a
simple query such as "select * from [nt:base] where
contains(*,'torgeir')".

I am looking very hard, but there are absolutely no Java code examples
for configuring lucene indexing anywhere that I can compare my setup
with. Can someone share some simple code for how to do this?

NodeBuilder index = IndexUtils.getOrCreateOakIndex(builder);
index.child("lucene")
        .setProperty("jcr:primaryType", "oak:QueryIndexDefinition", Type.NAME)
        .setProperty("compatVersion", 2)
        .setProperty("type", "lucene")
        .setProperty("async", "async")
        .setProperty("reindex", true)
        .child("indexRules")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .child("nt:base")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .child("properties")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .child("allProps")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .setProperty("name", ".*")
        .setProperty("isRegExp", true)
        .setProperty("nodeScopeIndex", true);


SegmentStore segmentStore = new FileStore(new File(oakRepositoryPath), 256);
NodeStore nodeStore = new SegmentNodeStore(segmentStore);
Oak oak = new Oak(nodeStore);
LuceneIndexProvider provider = new LuceneIndexProvider();
Repository oakRepository = new Jcr(oak)
        .with(new LocalInitialContent())
        .with((QueryIndexProvider) provider)
        .with((Observer) provider)
        .with(new LuceneIndexEditorProvider())
        .with(new LuceneInitializerHelper("lucene", (Set<String>) null))
        .withAsyncIndexing()
        .createRepository();


I am getting log messages like

21:58:51,075 WARN  lucene.IndexDefinition.collectIndexRules() - line
505 [0:0:0:0:0:0:0:1] - IndexRule node does not have orderable
children in [Lucene Index : genericlucene(/oak:index/lucene)]
21:58:51,076 WARN
lucene.IndexDefinition$IndexingRule.collectPropConfigs() - line 730
[0:0:0:0:0:0:0:1] - Properties node for [IndexRule: nt:base] does not
have orderable children in [Lucene Index :
genericlucene(/oak:index/lucene)]

even though the indexes are configured with nt:unstructured node types.


-- 
-Tor
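
[Editor's note: the following is an illustrative sketch, not code from the
thread. Once the repository from the post above is built, the fulltext query
can be issued through the standard JCR query API; "oakRepository" is the
variable from the post, and the admin credentials are an assumption. With
async indexing enabled, results can lag a commit by several seconds.]

```java
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

// Sketch: run the fulltext query from the post via the plain JCR API.
Session session = oakRepository.login(
        new SimpleCredentials("admin", "admin".toCharArray()));
try {
    QueryManager qm = session.getWorkspace().getQueryManager();
    Query query = qm.createQuery(
            "select * from [nt:base] where contains(*, 'torgeir')",
            Query.JCR_SQL2);
    // Iterate the result; with an async index, query again after a short
    // wait if nothing shows up immediately after a commit.
    NodeIterator nodes = query.execute().getNodes();
    while (nodes.hasNext()) {
        System.out.println(nodes.nextNode().getPath());
    }
} finally {
    session.logout();
}
```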


Re: working lucene fulltext index

2015-02-24 Thread Torgeir Veimo
Thank you for your reply!

Setting the :childOrder helps reduce the warning logs. I am still
getting these log entries a lot;

16:38:00,016 TRACE lucene.IndexDefinition.collectIndexRules() - line
519 [] - Found rule 'IndexRule: ka:asset' for NodeType 'ka:asset'
16:38:00,016 TRACE lucene.IndexDefinition.collectIndexRules() - line
535 [] - Registering rule 'IndexRule: ka:asset' for name 'ka:asset'

I am unsure why these keep getting repeated; could it be that the
index configuration fails and is retried the next time I make a query?

I am now using this code to configure;

NodeBuilder index = IndexUtils.getOrCreateOakIndex(builder);
index.child("lucene")
        .setProperty("jcr:primaryType", "oak:QueryIndexDefinition", Type.NAME)
        .setProperty("compatVersion", 2)
        .setProperty("type", "lucene")
        .setProperty("async", "async")
        .setProperty("reindex", true)
        .child("indexRules")
        .setProperty(":childOrder", ImmutableSet.of("ka:asset"), Type.STRINGS)
        .child("ka:asset")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .child("properties")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .setProperty(":childOrder", ImmutableSet.of("allProps"), Type.STRINGS)
        .child("allProps")
        .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
        .setProperty("name", ".*")
        .setProperty("isRegExp", true)
        .setProperty("nodeScopeIndex", true);

select * from [ka:asset] where lower(*) like '%admin%'

yields two entries

select * from [ka:asset] where contains(*,'admin')

yields none.

I am wondering if the setup of the repository is correct? I assume
that with the default LuceneIndexProvider() constructor, it will use
lucene indexes stored as segments in the segment store?


On 25 February 2015 at 10:46, Chetan Mehrotra chetan.mehro...@gmail.com wrote:
 Hi Torgeir,

 By default the Lucene index would be updated every 5 sec. So are you
 performing query immediately after adding the content? If thats the
 case you can remove setting async property at least for your
 testcase to get Lucene index triggered immediately after commit

 21:58:51,075 WARN  lucene.IndexDefinition.collectIndexRules() - line
 505 [0:0:0:0:0:0:0:1] - IndexRule node does not have orderable
 children in [Lucene Index : genericlucene(/oak:index/lucene)]
 21:58:51,076 WARN

 You can create the index definition via JCR API. Basically the
 orderable children are detected via presence of ':childOrder' hidden
 property. When you set a nodetype to nt:unstructured then oak-jcr
 would set this property automatically to record the ordering. However
 if you do it via NodeStore API then that has to be done manually.

 Chetan Mehrotra



-- 
-Tor
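
[Editor's note: Chetan's suggestion above -- defining the index through the
JCR API so that oak-jcr records the hidden ':childOrder' property
automatically -- could look roughly like the following. This is a hedged
sketch, not code from the thread; it assumes an admin JCR Session named
"session" and mirrors the node and property names used in the post.]

```java
import javax.jcr.Node;
import javax.jcr.Session;

// Sketch: create the Lucene index definition via the JCR API. Because
// oak-jcr maintains ordering for nt:unstructured children itself, the
// "does not have orderable children" warnings should not appear.
Node oakIndex = session.getRootNode().getNode("oak:index");
Node lucene = oakIndex.addNode("lucene", "oak:QueryIndexDefinition");
lucene.setProperty("compatVersion", 2);
lucene.setProperty("type", "lucene");
lucene.setProperty("async", "async");
lucene.setProperty("reindex", true);
Node rule = lucene.addNode("indexRules", "nt:unstructured")
        .addNode("ka:asset", "nt:unstructured");
Node allProps = rule.addNode("properties", "nt:unstructured")
        .addNode("allProps", "nt:unstructured");
allProps.setProperty("name", ".*");
// Note the lowercase 'e': per the typo fix elsewhere in this thread,
// Oak expects "isRegexp", not "isRegExp".
allProps.setProperty("isRegexp", true);
allProps.setProperty("nodeScopeIndex", true);
session.save();
```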


Re: working lucene fulltext index

2015-02-24 Thread Torgeir Veimo
(IndexWriter.java:3111)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:913)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:984)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:954)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:151)
at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:191)
at 
org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74)
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:56)
at 
org.apache.jackrabbit.oak.plugins.index.IndexUpdate.enter(IndexUpdate.java:116)
at 
org.apache.jackrabbit.oak.spi.commit.VisibleEditor.enter(VisibleEditor.java:57)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:49)
at 
org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:397)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:428)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:484)
at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:162)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:45)
at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:518)
at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:202)
at 
no.karriere.content.dao.jcr.repository.RepositoryConfiguration.getRepository(RepositoryConfiguration.java:93)




On 25 February 2015 at 16:41, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 Thank you for your reply!

 Setting the :childOrder helps reduce the warning logs. I am still
 getting these log entries a lot;

 16:38:00,016 TRACE lucene.IndexDefinition.collectIndexRules() - line
 519 [] - Found rule 'IndexRule: ka:asset' for NodeType 'ka:asset'
 16:38:00,016 TRACE lucene.IndexDefinition.collectIndexRules() - line
 535 [] - Registering rule 'IndexRule: ka:asset' for name 'ka:asset'

 I am unsure why these keep getting repeated; could it be that the
 index configuration fails and is retried the next time I make a query?

 I am now using this code to configure;

 NodeBuilder index = IndexUtils.getOrCreateOakIndex(builder);
 index.child("lucene")
         .setProperty("jcr:primaryType", "oak:QueryIndexDefinition", Type.NAME)
         .setProperty("compatVersion", 2)
         .setProperty("type", "lucene")
         .setProperty("async", "async")
         .setProperty("reindex", true)
         .child("indexRules")
         .setProperty(":childOrder", ImmutableSet.of("ka:asset"), Type.STRINGS)
         .child("ka:asset")
         .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
         .child("properties")
         .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
         .setProperty(":childOrder", ImmutableSet.of("allProps"), Type.STRINGS)
         .child("allProps")
         .setProperty("jcr:primaryType", "nt:unstructured", Type.NAME)
         .setProperty("name", ".*")
         .setProperty("isRegExp", true)
         .setProperty("nodeScopeIndex", true);

 select * from [ka:asset] where lower(*) like '%admin%'

 yields two entries

 select * from [ka:asset] where contains(*,'admin')

 yields none.

 I am wondering if the setup of the repository is correct? I assume
 that with the default LuceneIndexProvider() constructor, it will use
 lucene indexes stored as segments in the segment store?


 On 25 February 2015 at 10:46, Chetan Mehrotra chetan.mehro...@gmail.com 
 wrote:
 Hi Torgeir,

 By default the Lucene index would be updated every 5 sec. So are you
 performing query immediately after adding the content? If thats the
 case you can remove setting async property at least for your
 testcase to get Lucene index triggered immediately after commit

 21:58:51,075 WARN  lucene.IndexDefinition.collectIndexRules() - line
 505 [0:0:0:0:0:0:0:1] - IndexRule node does not have orderable
 children in [Lucene Index : genericlucene(/oak:index/lucene)]
 21:58:51,076 WARN

 You can create the index definition via JCR API. Basically the
 orderable children are detected via presence of ':childOrder' hidden
 property. When you set a nodetype to nt:unstructured then oak-jcr
 would set this property automatically to record the ordering. However
 if you do it via NodeStore API then that has to be done manually.

 Chetan Mehrotra



 --
 -Tor



-- 
-Tor


Re: how do you enable lucene search without OSGi or repository.xml?

2015-02-17 Thread Torgeir Veimo
This might be a bit of a late reply, but you can use "select * from
[nt:base] where lower(*) like '%string%'" to search in multiple
properties.
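
[Editor's note: a sketch of how that workaround might be wired up, not code
from the thread. It assumes an existing JCR Session named "session"; the
path and search string are placeholders.]

```java
import javax.jcr.NodeIterator;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

// Sketch: substring search over any property under a path, without a
// fulltext index. lower(*) like '%...%' generally cannot use an index and
// may traverse many nodes, so prefer contains() once a fulltext index works.
QueryManager qm = session.getWorkspace().getQueryManager();
Query q = qm.createQuery(
        "select * from [nt:base] where isdescendantnode('/searchpath') "
        + "and lower(*) like '%string%'",
        Query.JCR_SQL2);
for (NodeIterator it = q.execute().getNodes(); it.hasNext(); ) {
    System.out.println(it.nextNode().getPath());
}
```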

On 23 October 2014 at 02:02, Adrien Lamoureux
lamoureux.adr...@gmail.com wrote:
 Thanks Davide,

 But it turns out that the contains() does not work (using "" seems to
 work fine instead of using []).

 So this works:

 statement = "select * from [nt:unstructured] as theNode where
 ISDESCENDANTNODE('/searchpath') and myproperty like '%teststring%'"

 This does not:

 statement = "select * from [nt:unstructured] as theNode where
 ISDESCENDANTNODE('/searchpath') and contains(theNode.*,'teststring')";



 My goal is to search for any node under a certain path, and on any property
 containing a string..,  myproperty like '%X%'  is cumbersome to use.

 Am I using contains improperly?

 Adrien

 On Wed, Oct 22, 2014 at 1:17 AM, Davide Giannella dav...@apache.org wrote:

 On 21/10/2014 22:08, Adrien Lamoureux wrote:
  ...
  String statement = "select * from \"nt:unstructured\" as theNode where
  ISDESCENDANTNODE('/searchpath') and contains(theNode.*,'teststring')";
  ...
 Shouldn't it be

 SELECT * FROM [nt:unstructured] AS theNode
 WHERE ISDESCENDANTNODE('...')
 AND CONTAINS(...)

 notice the '[]' rather than the '""'.

 D.






-- 
-Tor


support for jcr 2.1 Node.addNodeAutoNamed()

2014-12-15 Thread Torgeir Veimo
Is there any work done on adding support for JCR 2.1
Node.addNodeAutoNamed() in oak?


-- 
-Tor


Re: [VOTE] Release Apache Jackrabbit Oak 1.0.0

2014-05-12 Thread Torgeir Veimo
Is there to be an oak-user mailing list, or should one use the
jackrabbit users mailing list for discussion on deployments?

On 9 May 2014 12:07, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 A candidate for the Jackrabbit Oak 1.0.0 release is available at:

 https://dist.apache.org/repos/dist/dev/jackrabbit/oak/1.0.0/

 The release candidate is a zip archive of the sources in:

 https://svn.apache.org/repos/asf/jackrabbit/oak/tags/jackrabbit-oak-1.0.0/

 The SHA1 checksum of the archive is 52ec6735ad1fcbbcdf1a94408d303b0a5305389b.

 A staged Maven repository is available for review at:

 
 https://repository.apache.org/content/repositories/orgapachejackrabbit-1016

 The command for running automated checks against this release candidate is:

 $ sh check-release.sh oak 1.0.0 52ec6735ad1fcbbcdf1a94408d303b0a5305389b

 Please vote on releasing this package as Apache Jackrabbit Oak 1.0.0.
 The vote is open for the next 72 hours and passes if a majority of at
 least three +1 Jackrabbit PMC votes are cast.

 [ ] +1 Release this package as Apache Jackrabbit Oak 1.0.0
 [ ] -1 Do not release this package because...

 My vote is +1.

 BR,

 Jukka Zitting



-- 
-Tor


Re: Roadmap for Jackrabbit 2.x and 3.0

2014-01-16 Thread Torgeir Veimo
Is there a guide for how to migrate from jackrabbit 2.* to oak?

On 16 January 2014 18:51, Michael Dürig mdue...@apache.org wrote:


 On 15.1.14 7:35 , Jukka Zitting wrote:

 Hi,


 a) Upgrade Jackrabbit Classic to use Lucene 4. As discussed earlier
 (http://markmail.org/message/nv5jeeoda7qe5qen) this is pretty hard,
 and it's questionable whether the benefits are worth the effort.


 -0, too little benefit for the effort it would take.



 b) Downgrade Oak to use Lucene 3. This should be doable with not much
 effort, as the Lucene integration in Oak is much simpler than in
 Jackrabbit Classic. It might even be possible to make oak-lucene
 version-independent, so it would work with both Lucene 3 and 4.


 -1, people will start bugging us about upgrading to Lucene 4 as soon as Oak
 is out.



 c) Ship the jackrabbit deployment packages without Lucene integration
 for Oak. This would allow people to start playing with Oak in their
 existing deployments, but require some deployment changes for full Oak
 functionality.


 +0, not sure how much this degrades the actual value of the deployment
 packages though.



 d) Use the class rewriting tricks in something like the Shade plugin
 [1] to be able to include both Lucene 3 *and* 4 in the same deployment
 packages. I'm not sure if this is even possible with Lucene, or how
 much effort it would require.

 e) Use a custom classloader setup to load the correct version of
 Lucene depending on the selected Jackrabbit mode.


 -10^12, and spend the rest of our lives debugging all kinds of weird class
 loading issues in each and every deployment container ;-)



 f) Adjust the Jackrabbit deployment packages to use an embedded OSGi
 container, and use it to selectively deploy the required
 implementation components, including the correct version of Lucene.


 +1 if we have a strong argument for going with the combined deployment
 option.



 g) Or as a last resort, abandon the idea of a joint deployment
 package. Jackrabbit Classic and Oak would be shipped in separate
 deployment artifacts.


 +1 for its simplicity otherwise.

 Michael



 I'm thinking of trying to implement one or two of these alternatives
 within the next few weeks, and cut Jackrabbit 2.8 based on that work
 and including something like Oak 0.16 as a beta feature. Assuming that
 approach works and Oak stabilizes as planned, we could then follow up
 with Jackrabbit 3.0 fairly soon after 2.8.

 [1] http://maven.apache.org/plugins/maven-shade-plugin/

 BR,

 Jukka Zitting





-- 
-Tor


Re: Upgrade to Lucene 4 ?

2013-10-09 Thread Torgeir Veimo
How about using an external elasticsearch or solr instance instead of
lucene for your indexing needs?

On 9 October 2013 18:21, Cédric Damioli cdami...@apache.org wrote:
 Hi,

 Ok I see. Thanks for your feedback.
 I most likely won't dive into this task alone, given that there are some really
 tricky parts.
 On the other hand, Oak is currently not production-ready, so I have to stick
 to JR 2.x for now.

 I'll probably look at a way to isolate Jackrabbit+Lucene in a separate
 ClassLoader to be able to run Lucene 4 code in the same application. BTW,
 did someone already achieve this (apart from OSGI-based solutions) ?

 Regards,
 Cédric


 Le 09/10/2013 09:07, Marcel Reutegger a écrit :

 Hi Cédric,

 most of the development effort currently goes into Apache
 Oak, which will replace Jackrabbit 2.x at some point. This means
 Jackrabbit 2.x is pretty much in maintenance mode.

 I don't want to discourage you from doing this, but I will probably
 not have the time to help on this...

 Regards
   Marcel

 -Original Message-
 From: Cédric Damioli [mailto:cdami...@apache.org]
 Sent: Montag, 7. Oktober 2013 17:59
 To: dev@jackrabbit.apache.org
 Subject: Upgrade to Lucene 4 ?

 Hi,

 I'm thinking about the potential migration of Jackrabbit to Lucene 4.

 This represents quite a huge work, as many Lucene core API have
 completely changed between 3.x and 4.x
 A quick test by replacing Lucene 3.x by Lucene 4.x shows almost 1000
 compilation problems.

 So before beginning this, I was first wondering if it is even worth
 doing it.
 The main benefits are obviously Lucene 4 new features and performance
 optimizations.
 The main drawback is IMHO that for a given application, a Jackrabbit
 migration would also involve the migration of all Lucene-related code if
 any. Existing repositories should also be reindexed during migration.

 So what do people think about this ?

 And of course, if it seems a good idea, is there some committers around
 here volunteering to help me to do the job ? ;)

 Best regards,



-- 
-Tor


Re: jackrabbit small issue : java.lang.OutOfMemoryError: PermGen

2013-04-04 Thread Torgeir Veimo
This is with jackrabbit 2.6.0? There's no such property on
org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager in
2.5.2 at least.

On Thu, Apr 4, 2013 at 10:10 PM, Bernardo D'Auria
bernardo.dau...@uc3m.es wrote:
 Hi to everybody ,
 I'm currently developing a web server application in Tomcat that uses
 jackrabbit.
 I've found a small issue that took me some time to solve and I would like to
 tell you about it,
 I do not know if it deserves to be identified as a small bug. You'll tell
 me.

 I was frequently deploying and undeploying my application while developing
 and after 4 or 5 deployment repetitions Tomcat was issuing the following
 error:
java.lang.OutOfMemoryError: PermGen
 In addition, Tomcat was revealing leaks in my undeployed application.

 I fixed the issue by adding the line
 <param name="shutdownOnClose" value="true"/>
 to the repository.xml file, as shown in this part of the XML:

 <PersistenceManager
 class="org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager">
   <param name="url"
 value="jdbc:derby:${rep.home}/version/db;create=true"/>
   <param name="schemaObjectPrefix" value="version_"/>
   <param name="shutdownOnClose" value="true"/>
 </PersistenceManager>

 The problem was due to the fact that the thread derby.antiGC was not
 stopped.

 The issue is that the repository.xml was automatically generated by
 Jackrabbit.
 I would expect it should already include the shutdownOnClose set to true.

 The Jackrabbit versions I was using were different; the latest is 2.6.0 (and
 earlier ones before that).
 I tried both with Tomcat 6 and 7

 Best regards,

   Bernardo

 --
 Departamento de Estadística
 Universidad Carlos III de Madrid
 Avda Universidad, 30
 28911 Leganes (Madrid)
 Spain

 email: bernardo.dau...@uc3m.es
 home: www.est.uc3m.es/bdauria
 tel: + 34.91.624.8804
 fax: + 34.91.624.9177




-- 
-Tor


Re: jackrabbit small issue : java.lang.OutOfMemoryError: PermGen

2013-04-04 Thread Torgeir Veimo
Can't really see such a property for jackrabbit 2.6.0 either, for the
class in question?

On Thu, Apr 4, 2013 at 10:18 PM, Torgeir Veimo torgeir.ve...@gmail.com wrote:
 This is with jackrabbit 2.6.0? There's no such property on
 org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager in
 2.5.2 at least.

 On Thu, Apr 4, 2013 at 10:10 PM, Bernardo D'Auria
 bernardo.dau...@uc3m.es wrote:




 --
 -Tor



-- 
-Tor


Re: Use SOLR 3.x with JCR 2.x

2012-05-07 Thread Torgeir Veimo
On 7 May 2012 14:08, Peri Subrahmanya peri.subrahma...@gmail.com wrote:
 I wanted to get some direction on using SOLR as an external indexing tool
 with JCR 2.x. I know JCR comes with Lucene, but for some custom needs we
 would like to use SOLR as the indexing module in JCR. Please advise.

I assume you would be doing your queries outside of the JCR API as
well in this case? If so, you could probably just index content
manually with SOLR using an event callback when data is modified in
the repository?
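An event-callback indexer along these lines could be sketched as follows. This is illustrative only: it uses the standard JCR observation API plus a modern SolrJ client (the thread discusses Solr 3.x, whose client classes differ), and the Solr URL, core name, and field names are assumptions:

```java
import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical listener that mirrors newly added nodes into an external
// Solr index, bypassing Jackrabbit's built-in Lucene index entirely.
public class SolrIndexingListener implements EventListener {

    private final Session session;  // long-lived session used for observation
    private final SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/content").build();

    public SolrIndexingListener(Session session) {
        this.session = session;
    }

    // Registration would look roughly like:
    // session.getWorkspace().getObservationManager().addEventListener(
    //         listener, Event.NODE_ADDED, "/", true, null, null, false);

    @Override
    public void onEvent(EventIterator events) {
        try {
            while (events.hasNext()) {
                Event event = events.nextEvent();
                Node node = session.getNode(event.getPath());
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", node.getIdentifier());
                doc.addField("path", node.getPath());
                solr.add(doc);
            }
            solr.commit();
        } catch (Exception e) {
            // A real implementation would also handle NODE_REMOVED events
            // (deleting from the index) and retry transient Solr failures.
        }
    }
}
```

Queries would then go straight to Solr rather than through the JCR query API, which is the trade-off this approach implies.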


-- 
-Tor


Re: Jackrabbit 2.4.2 release plan

2012-04-20 Thread Torgeir Veimo
On 20 April 2012 19:33, Alex Parvulescu alex.parvule...@gmail.com wrote:
 Hi,

 We can start preparing for the 2.4.2 release early next week.

 So far, all the issues that are marked 2.4.2 have already been
 backported. This is what is already included [0].

Any hope to get https://issues.apache.org/jira/browse/JCR-3242
(upgrade to Lucene 3.5/3.6) in this release?

-- 
-Tor


Re: [VOTE] Release Apache Jackrabbit 2.4.1

2012-03-29 Thread Torgeir Veimo
+1

Confirmed https://issues.apache.org/jira/browse/JCR-2836 fixed my
issues with hanging threads.

-- 
-Tor


Re: [jr3] Codename

2012-02-27 Thread Torgeir Veimo
On 27 February 2012 23:15, Angela Schreiber anch...@adobe.com wrote:
 or any other codename

 such as for example leaving the 'leporidae' for another animal
 family...

Blackrabbit?

-- 
-Tor (non-dev-member)


Re: [VOTE] Release Apache Jackrabbit 2.4.0

2012-02-04 Thread Torgeir Veimo
On 4 February 2012 01:16, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Fri, Feb 3, 2012 at 3:06 PM, Torgeir Veimo torg...@netenviron.com wrote:
 On 3 February 2012 22:53, Jukka Zitting jukka.zitt...@gmail.com wrote:
 A candidate for the Jackrabbit 2.4.0 release is available at:

 Feb 4, 2012 12:04:30 AM org.apache.catalina.loader.WebappClassLoader
 clearReferencesThreads
 SEVERE: The web application [/main] appears to have started a thread
 named [Timer-13] but has failed to stop it. This is very likely to
 create a memory leak.
 Feb 4, 2012 12:04:30 AM org.apache.catalina.loader.WebappClassLoader
 clearReferencesThreads
 SEVERE: The web application [/main] appears to have started a thread
 named [DynamicPooledExecutor] but has failed to stop it. This is very
 likely to create a memory leak.
 Feb 4, 2012 12:04:32 AM org.apache.catalina.startup.HostConfig deployWAR

 Any hope to get these issues fixed?

 Is there an issue filed for this? A patch?

 Sounds like something we should do for 2.4.1.

I thought it was fixed in 2.3.0
(https://issues.apache.org/jira/browse/JCR-2836) but it appears it
isn't.


-- 
-Tor


Re: [VOTE] Release Apache Jackrabbit 2.4.0

2012-02-03 Thread Torgeir Veimo
On 3 February 2012 22:53, Jukka Zitting jukka.zitt...@gmail.com wrote:
 A candidate for the Jackrabbit 2.4.0 release is available at:

Feb 4, 2012 12:04:30 AM org.apache.catalina.loader.WebappClassLoader
clearReferencesThreads
SEVERE: The web application [/main] appears to have started a thread
named [Timer-13] but has failed to stop it. This is very likely to
create a memory leak.
Feb 4, 2012 12:04:30 AM org.apache.catalina.loader.WebappClassLoader
clearReferencesThreads
SEVERE: The web application [/main] appears to have started a thread
named [DynamicPooledExecutor] but has failed to stop it. This is very
likely to create a memory leak.
Feb 4, 2012 12:04:32 AM org.apache.catalina.startup.HostConfig deployWAR


Any hope to get these issues fixed?

-- 
-Tor


Re: Jackrabbit 2.4 release plan

2012-01-31 Thread Torgeir Veimo
On 1 February 2012 00:07, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 On Mon, Jan 16, 2012 at 6:46 PM, Jukka Zitting jukka.zitt...@gmail.com 
 wrote:
 I think we should postpone cutting the 2.4.0 release two weeks ahead
 to the end of January.

 OK, I think it's now time to start making the release. I already
 started drafting the release notes, and will still go through the
 issue tracker for any pending tasks that we may have missed.

 I won't cut the release candidate before tomorrow at the earliest, so
 if you still have some improvements that really should be in 2.4 but
 aren't committed yet, please let me know and I'll see what we can do
 to get them included. Once the 2.4.0 release is out, we'll switch the
 2.4 branch to maintenance mode and target all new features to the
 unstable 2.5.x release series.

Any hope for support for Lucene versions newer than 3.3?


-- 
-Tor


[jira] Updated: (JCR-2490) jackrabbit wrongly think nodetype is changed on nodetype re-registration

2010-04-14 Thread Torgeir Veimo (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torgeir Veimo updated JCR-2490:
---

Affects Version/s: 2.0.0
   (was: 2.0-beta6)

 jackrabbit wrongly think nodetype is changed on nodetype re-registration
 

 Key: JCR-2490
 URL: https://issues.apache.org/jira/browse/JCR-2490
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: nodetype
Affects Versions: 2.0.0
 Environment: Mac, derby pm, jdk 1.6, tomcat 6.0.24
Reporter: Torgeir Veimo

 When trying node type re-registration with jackrabbit 2.0, it wrongly detects 
 a nodetype as having changed, with non-trivial changes. Example nodetype 
 definition;
 [nen:profile]  mix:referenceable mixin orderable
 - nen:dn (string)
 - nen:cn (string)
 - * (string)
 + * multiple
 Exception on nodetype re-registration;
 javax.jcr.RepositoryException: The following nodetype change contains
 non-trivial changes.Up until now only trivial changes are supported.
 (see javadoc for org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff):
 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff[
nodeTypeName={http://netenviron.com/nen/1.0}profile,
mixinFlagDiff=NONE,
supertypesDiff=NONE,
propertyDifferences=[

 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={http://netenviron.com/nen/1.0}dn,
 type=TRIVIAL, operation=MODIFIED],

 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={http://netenviron.com/nen/1.0}cn,
 type=TRIVIAL, operation=MODIFIED],

 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={}*,
 type=TRIVIAL, operation=MODIFIED]
],
childNodeDifferences=[

 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$ChildNodeDefDiff[itemName={}*,
 type=MAJOR, operation=REMOVED],

 org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$ChildNodeDefDiff[itemName={}*,
 type=TRIVIAL, operation=ADDED]
]
 ]
at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.reregisterNodeType(NodeTypeRegistry.java:442)
at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.reregisterNodeType(NodeTypeRegistry.java:363)
at 
 org.apache.jackrabbit.core.nodetype.NodeTypeManagerImpl.registerNodeTypes(NodeTypeManagerImpl.java:589)
at 
 org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:118)
at 
 com.netenviron.content.manager.SessionManager.checkRepositorySchema(SessionManager.java:355)
at 
 com.netenviron.content.manager.SessionManager.afterPropertiesSet(SessionManager.java:199)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1288)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1257)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:438)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:383)
at java.security.AccessController.doPrivileged(Native Method)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:353)
at 
 org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:245)
at 
 org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:169)
at 
 org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:242)
at 
 org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
at 
 org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:269)
at 
 org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1172)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:940)
at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:437

[jira] Created: (JCR-2606) option to specify in query if empty values are sorted first or last

2010-04-14 Thread Torgeir Veimo (JIRA)
option to specify in query if empty values are sorted first or last
---

 Key: JCR-2606
 URL: https://issues.apache.org/jira/browse/JCR-2606
 Project: Jackrabbit Content Repository
  Issue Type: New Feature
  Components: query
Affects Versions: 2.0.0
Reporter: Torgeir Veimo
Priority: Minor


It would be of great value if there was a way to sort on a property and 
specify whether nodes with empty / missing property values are put first or 
last in the result.

In SQL, the standard syntax for this is: 

order by ... [asc | desc] [nulls [first | last]] ,... 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: jackrabbit wrongly finds changes in schema

2010-02-10 Thread Torgeir Veimo
On 10 February 2010 21:15, Marcel Reutegger marcel.reuteg...@gmx.net wrote:
 Hi,

 that sounds like a bug. can you please file a jira issue?

Ok, created https://issues.apache.org/jira/browse/JCR-2490 .

-- 
-Tor


[jira] Created: (JCR-2490) jackrabbit wrongly think nodetype is changed on nodetype re-registration

2010-02-10 Thread Torgeir Veimo (JIRA)
jackrabbit wrongly think nodetype is changed on nodetype re-registration


 Key: JCR-2490
 URL: https://issues.apache.org/jira/browse/JCR-2490
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: nodetype
Affects Versions: 2.0-beta6
 Environment: Mac, derby pm, jdk 1.6, tomcat 6.0.24
Reporter: Torgeir Veimo


When trying node type re-registration with jackrabbit 2.0, it wrongly detects a 
nodetype as having changed, with non-trivial changes. Example nodetype 
definition;

[nen:profile]  mix:referenceable mixin orderable
- nen:dn (string)
- nen:cn (string)
- * (string)
+ * multiple

Exception on nodetype re-registration;

javax.jcr.RepositoryException: The following nodetype change contains
non-trivial changes.Up until now only trivial changes are supported.
(see javadoc for org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff):
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff[
   nodeTypeName={http://netenviron.com/nen/1.0}profile,
   mixinFlagDiff=NONE,
   supertypesDiff=NONE,
   propertyDifferences=[
   
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={http://netenviron.com/nen/1.0}dn,
type=TRIVIAL, operation=MODIFIED],
   
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={http://netenviron.com/nen/1.0}cn,
type=TRIVIAL, operation=MODIFIED],
   
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$PropDefDiff[itemName={}*,
type=TRIVIAL, operation=MODIFIED]
   ],
   childNodeDifferences=[
   
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$ChildNodeDefDiff[itemName={}*,
type=MAJOR, operation=REMOVED],
   
org.apache.jackrabbit.core.nodetype.NodeTypeDefDiff$ChildNodeDefDiff[itemName={}*,
type=TRIVIAL, operation=ADDED]
   ]
]

   at 
org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.reregisterNodeType(NodeTypeRegistry.java:442)
   at 
org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.reregisterNodeType(NodeTypeRegistry.java:363)
   at 
org.apache.jackrabbit.core.nodetype.NodeTypeManagerImpl.registerNodeTypes(NodeTypeManagerImpl.java:589)
   at 
org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:118)
   at 
com.netenviron.content.manager.SessionManager.checkRepositorySchema(SessionManager.java:355)
   at 
com.netenviron.content.manager.SessionManager.afterPropertiesSet(SessionManager.java:199)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1288)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1257)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:438)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:383)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:353)
   at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:245)
   at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:169)
   at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:242)
   at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
   at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:269)
   at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1172)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:940)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:437)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:383)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:353

Re: Jackrabbit 3: repository requirements

2010-02-09 Thread Torgeir Veimo
On 10 February 2010 01:55, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Hi,

 Now that Jackrabbit 2.0 is out and the major JCR 2.0 feature work is
 done, it's time to start looking ahead at Jackrabbit 3. We've talked
 about this a bit already at Day and I'll be posting a summary of our
 ideas for further discussion, but before that I'd like to frame the
 discussion by getting a better picture of the range of requirements
 we'll be having for Jackrabbit 3.

 So, please let us know what you expect your repositories to look like
 within the next five or so years. I'm especially interested in answers
 to the following questions:
[...]
 Features:

A feature completely missing from the suggestion list is the ability to
do non-trivial changes to the repository schema in place, i.e. without
exporting and reimporting content.

Is there any work done in this area?

Also, it would be good to have an open source variant of the CRX TAR PM.

-- 
-Tor


Re: Jackrabbit web app deployment issue

2010-01-25 Thread Torgeir Veimo
2010/1/25 Bart van der Schans b.vandersch...@onehippo.com:
 On Mon, Jan 25, 2010 at 12:10 PM,  george.sib...@bt.com wrote:
 The new Tomcat 6.0.24 release has some support to kill threads when
 the application fails to stop them:
 http://tomcat.apache.org/tomcat-6.0-doc/changelog.html

Too bad it doesn't work with Jackrabbit 2.0, which still leaks;

Jan 25, 2010 11:21:11 PM org.apache.catalina.loader.WebappClassLoader
clearReferencesThreads
SEVERE: A web application appears to have started a thread named
[Timer-2] but has failed to stop it. This is very likely to create a
memory leak.
Jan 25, 2010 11:21:11 PM org.apache.catalina.loader.WebappClassLoader
clearReferencesThreads
WARNING: Failed to terminate thread named [Timer-2]
java.lang.NoSuchFieldException: target
at java.lang.Class.getDeclaredField(Class.java:1882)
at 
org.apache.catalina.loader.WebappClassLoader.clearReferencesThreads(WebappClassLoader.java:1958)
at 
org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1707)
at 
org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1622)
at org.apache.catalina.loader.WebappLoader.stop(WebappLoader.java:710)
at 
org.apache.catalina.core.StandardContext.stop(StandardContext.java:4649)
at 
org.apache.catalina.core.ContainerBase.removeChild(ContainerBase.java:924)
at 
org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1121)
at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1342)
at 
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:303)
at 
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at 
org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1337)
at 
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601)
at 
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
at 
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
at java.lang.Thread.run(Thread.java:637)

-- 
-Tor


Re: [VOTE] Release Apache Jackrabbit 2.0-beta5

2009-12-17 Thread Torgeir Veimo
2009/12/18 Tobias Bocanegra tri...@day.com:
 [X] +1 Release this package as Apache Jackrabbit 2.0-beta5

The javadoc link in the left column menu when you browse the webpage
for the web application still points to JCR 1.0.

-- 
-Tor


Re: Jackrabbit 2.0 release plan

2009-05-20 Thread Torgeir Veimo
2009/5/20 Jukka Zitting jukka.zitt...@gmail.com:
 Now that the trunk is reasonably stable again, I'm planning to cut the
 first 2.0 milestone release next week.

Will a 1.5 repository be two-way binary compatible with a 2.0 repository?

-- 
-Tor


Re: Removing old persistence managers in Jackrabbit 2.0

2009-05-05 Thread Torgeir Veimo
+1
I really need such a tool. Having some external dump format just to
move from one DB server to another with the same PM setup would also be
useful.
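For what it's worth, a crude dump of this kind can be sketched with JCR's standard system-view export/import (illustrative only; as the thread notes, version histories do not survive re-import):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import javax.jcr.ImportUUIDBehavior;
import javax.jcr.Session;

// Sketch of the export/import "dump" approach: a system-view XML export
// from one repository, re-imported into another with a different PM setup.
public final class RepositoryDump {

    private RepositoryDump() {}

    public static void export(Session src, String absPath, String file)
            throws Exception {
        try (FileOutputStream out = new FileOutputStream(file)) {
            // skipBinary=false keeps binary properties,
            // noRecurse=false exports the whole subtree
            src.exportSystemView(absPath, out, false, false);
        }
    }

    public static void load(Session dst, String parentPath, String file)
            throws Exception {
        try (FileInputStream in = new FileInputStream(file)) {
            dst.importXML(parentPath, in,
                    ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW);
            dst.save();
        }
    }
}
```

A dedicated migration tool would still be needed for version storage, which is exactly the gap discussed below.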

2009/5/6 Alexander Klimetschek aklim...@day.com

 On Tue, May 5, 2009 at 5:33 PM, Jukka Zitting jukka.zitt...@gmail.com
 wrote:
  I would like to drop all our non-bundle persistence managers before we
  release Jackrabbit 2.0.

 +1 (but only if we resolve the issue below)

  Besides forcing people to upgrade to the recommended setup

 Well, upgrading from a non-bundle persistence manager to a bundle one
 (or from any pm to a different one) is currently not possible. You can
 do a system-view XML export of the whole repo, but as we know, the
 versions can not be reimported back. Should we offer a PM migration
 tool? We did this for our commercial CRX repo at Day, so it is
 basically possible and not much work (but I am not sure if we can
 donate this tool directly).

 WDYT?

 Regards,
 Alex

 --
 Alexander Klimetschek
 alexander.klimetsc...@day.com




-- 
-Tor


Re: Query regarding JCR-Oracle Universal Content Management

2009-04-27 Thread Torgeir Veimo
This mailing list is about developing the jackrabbit JCR reference
implementation. If you need assistance with an Oracle product, I suggest
that you call their support department.

Most likely you can use JCR to connect to Oracle UCM if you have the Oracle
content server JCR repository adapter package.

2009/4/27 Vivek Sundaram vivek.sunda...@oracle.com

  I am trying to use Oracle Universal Content Management with JCR-API.

 Is it feasible?

 I didn't find any relevant documents on the internet regarding this.




-- 
-Tor


Re: JCR 2.0

2009-04-21 Thread Torgeir Veimo
What about soft references? They would be easy to do without waiting for
Jackrabbit 2.0.

2009/4/21 Jukka Zitting jukka.zitt...@gmail.com

 Hi,

 On Tue, Apr 21, 2009 at 3:29 PM, Julian Reschke julian.resc...@gmx.de
 wrote:
  Now that JSR-283 is in public review, do we have a plan how to get the
  Jackrabbit trunk implement the JCR 2.0 API?

 Not yet a written plan, but I'd like to branch Jackrabbit 1.6 in a few
 weeks after which we can replace the current JCR 1.0 dependency in
 trunk with JCR 2.0 and start working towards Jackrabbit 2.0.

 Much of the JCR 2.0 implementation work has already been taking place
 since last summer using the org.apache.jackrabbit.api.jsr283
 extensions in jackrabbit-api.

 BR,

 Jukka Zitting




-- 
-Tor


Re: Incubating Chemistry (Was: IP clearance for the Chemistry contribution)

2009-04-21 Thread Torgeir Veimo
It is a play on the letters; CheMIStry.

2009/4/22 Serge Huber shub...@jahia.com


 I might be jumping the gun a little, but if this can help, here are a few
 name suggestions :

 Apache Plug
 Apache Content Plug
 Apache Jackplug
 Apache Sea Miss (ok it's an awful word game :)) (or could be written
 Seamis, or Seemis)
 Apache Startle (Jackrabbit synonym)

 But I must admit the name Chemistry is quite good, even if potentially
 misleading.

 Regards,
  Serge Huber.

 On 21 avr. 09, at 16:04, Jukka Zitting wrote:

  Hi,

 On Tue, Apr 21, 2009 at 12:12 AM, Jukka Zitting jukka.zitt...@gmail.com
 wrote:

 I'll now ping the gene...@incubator.apache.org list for some early
 comments on the proposal.


 See http://thread.gmane.org/gmane.comp.apache.incubator.general/21727
 for the Incubator thread.

 There are some concerns over the name Chemistry and I actually found
 an existing Java project called Chemistry Development Kit
 (http://apps.sourceforge.net/mediawiki/cdk/), so as painful as it is
 we may still need to reconsider the project name.

 BR,

 Jukka Zitting





-- 
-Tor


Re: Jira notifications (Was: [jira] Commented: (JCRRMI-21) RMI: Unable to register NodeTypes)

2009-04-17 Thread Torgeir Veimo
True, but it's both informal and formal development discussions on the list.
Jira messages goes in the second category. I merely follow the informal
discussions.

2009/4/17 Jukka Zitting jukka.zitt...@gmail.com

 Hi,

 On Fri, Apr 17, 2009 at 12:08 PM, Torgeir Veimo torg...@pobox.com wrote:
  Is there any chance the jira messages could go on the commit mailing
 list?

 That's possible, though see below.

  I assume those how follow the commit list would amount to the same people
  that are interested in jira issues mails?

 On the other hand, I would assume that everyone on dev@ would be
 interested in the Jira issues, as that's where much of the development
 discussion occurs.

 The way I see it, dev@ is for things that should happen and commits@
 is for things that actually did happen with our codebase (and web
 site). As such, Jira notifications IMHO belong on d...@.

 That's just me though, so if enough people disagree then we can of
 course change the Jira settings.

 BR,

 Jukka Zitting




-- 
-Tor


Re: [JCR 2.0] Proposed final draft of JSR-283 released

2009-04-01 Thread Torgeir Veimo
Is there a document somewhere detailing the enhancements relative to
JSR-170?

2009/4/1 Alexander Klimetschek aklim...@day.com

 Hello JCR community,

 if you haven't noticed it yet, the proposed final draft of JSR-283,
 the successor of JSR-170, ie. Java Content Repository version 2.0, is
 out. You'll find it here:

 http://jcp.org/aboutJava/communityprocess/pfd/jsr283/index.html

 Please note that the download button link is currently broken, the
 correct one is here:


 https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_JCP-Site/en_US/-/USD/viewproductdetail-start?productref=content_repository-2.0-pfd-oth-js...@cds-cds_jcp

 There are quite a few changes since the public review, so it's a good
 read :-) And as far as I know, the final version of the spec will most
 likely be identical to the final draft.

 Regards,
 Alex

 --
 Alexander Klimetschek
 alexander.klimetsc...@day.com



Re: Regarding propfindtype request ...

2009-03-23 Thread Torgeir Veimo
If you want a webdav frontend to connect to your custom database schema, you
cannot override the abstract debdav servlet, it will simply break in
numerous ways. Either implement a custom backend to jackrabbit, or simply
use a different webdav servlet. I'd suggest using the simple webdav servlet
that comes with tomcat, which uses jndi to lookup files and directories, and
which can easily be modified to lookup stuff in your custom schema.
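For reference, enabling Tomcat's bundled WebDAV servlet is a matter of web.xml configuration; a hedged sketch (the URL pattern and parameter values are illustrative, and a custom schema would require subclassing the servlet's resource lookup):

```xml
<!-- Hypothetical web.xml fragment: exposing content via Tomcat's
     built-in WebDAV servlet instead of Jackrabbit's abstract one. -->
<servlet>
  <servlet-name>webdav</servlet-name>
  <servlet-class>org.apache.catalina.servlets.WebdavServlet</servlet-class>
  <init-param>
    <param-name>listings</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>readonly</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>webdav</servlet-name>
  <url-pattern>/webdav/*</url-pattern>
</servlet-mapping>
```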

2009/3/23 imadhusudhanan madhusudhana...@gmail.com



  Forwarded Mail 
 From : imadhusudhanan madhusudhana...@gmail.com
 To : users us...@jackrabbit.apache.org
 Date :Mon, 23 Mar 2009 00:05:23 -0700
 Subject : Regarding propfindtype request ...
  Forwarded Mail 

 Dear All,

 I'm customizing Jackrabbit to use a MySQL server with the table
 structures mentioned below:

 DavResourceLocator
 +----------------+--------------+------+-----+---------+-------+
 | Field          | Type         | Null | Key | Default | Extra |
 +----------------+--------------+------+-----+---------+-------+
 | RESOURCE_ID    | bigint(19)   | NO   | PRI | NULL    |       |
 | DOC_FOLDER_ID  | bigint(19)   | NO   |     | NULL    |       |
 | AUTHOR_USER_ID | varchar(100) | NO   |     | NULL    |       |
 | RESOURCE_NAME  | varchar(255) | NO   |     | NULL    |       |
 | RESOURCE_TYPE  | varchar(60)  | NO   |     | NULL    |       |
 +----------------+--------------+------+-----+---------+-------+

 DavResourceMapping
 +---------------------+------------+------+-----+---------+-------+
 | Field               | Type       | Null | Key | Default | Extra |
 +---------------------+------------+------+-----+---------+-------+
 | RESOURCE_MAPPING_ID | bigint(19) | NO   | PRI | NULL    |       |
 | RESOURCE_ID         | bigint(19) | NO   |     | NULL    |       |
 | AUTHOR_USER_ID      | bigint(19) | NO   |     | NULL    |       |
 | PARENT_RESOURCE_ID  | bigint(19) | NO   |     | NULL    |       |
 +---------------------+------------+------+-----+---------+-------+


 The first table will be used to store a node, whether it is nt:file or
 nt:folder, and the second the mapping between nodes, i.e. which nt:file
 node is present in which nt:folder node. What I did is override the
 AbstractWebdavServlet to fetch data from my DB and deliver it using the
 JCR objects such as MultiStatus and MultiStatusResponse for propfind-type
 requests.
 While doing so, initially when I connect to webdav through cadaver I get
 the propfind type as '0', which is PROPFIND_BY_PROPERTY in DavConstants.
 For a subsequent propfind I get the propFindType as '0' again, followed by
 an "Error 404 <RESOURCE-NAME>" kind of response. Kindly let me know if
 there's anything I missed ...





Re: CMIS / Camaieu

2009-02-26 Thread Torgeir Veimo


What about CheMIStry? After all, it's about mixing various types of  
systems here.


--
Torgeir Veimo
torg...@pobox.com






Re: Jackrabbit board report draft

2008-12-12 Thread Torgeir Veimo


On 12 Dec 2008, at 23:55, Jukka Zitting wrote:


Claus Köll joined the Jackrabbit team as a committer and PMC member.

 The slump in community activity over late summer seems to be gone and
 we're back to normal levels of mailing list and commit activity.


How is the ratio of Day developers to non-Day developers for Jackrabbit  
and Sling now? I think getting more external committers was one of the  
requirements to get Sling out of incubator status?



--
Torgeir Veimo
torg...@pobox.com






Re: [VOTE] Release Apache Jackrabbit 1.5.0

2008-12-05 Thread Torgeir Veimo


On 5 Dec 2008, at 21:37, Marcel Reutegger wrote:


Torgeir Veimo wrote:


On 2 Dec 2008, at 23:45, Jukka Zitting wrote:


* Simple Google-style query language. The new GQL query syntax
  makes it very easy to express simple full text queries.



How do I do a full text search, i.e. searching for something in _all_
attributes, with this new syntax? Or is this not possible atm?


you simply type in a term. GQL will translate that into a
jcr:contains() on the context nodes. Though I'm not sure if that's what
you want...



You mean content nodes?

I was hoping that there was a way to search any attribute. I can of  
course do some hacks whereby I concatenate the property values of all  
the attributes and put the string into a 'catchall' property, but I  
think such a search would be better handled by the indexing system.
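The catchall hack described above could be sketched like this; a self-contained illustration in which a plain map stands in for a node's string properties (all names hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CatchallExample {

    // Build the value of a hypothetical "catchall" property by joining
    // every string property of a node, so a single jcr:contains() on
    // this one property effectively searches all attributes.
    static String buildCatchall(Map<String, String> properties) {
        StringBuilder sb = new StringBuilder();
        for (String value : properties.values()) {
            if (sb.length() > 0) {
                sb.append(' ');
            }
            sb.append(value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("title", "Annual report");
        props.put("author", "Bernardo");
        // The catchall value would be set on the node before save()
        System.out.println(buildCatchall(props));
    }
}
```

The downside, as noted, is that the concatenation must be recomputed on every property change, which is why doing it in the indexing layer would be cleaner.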



--
Torgeir Veimo
[EMAIL PROTECTED]






Re: jcr-cmis sandbox

2008-12-02 Thread Torgeir Veimo


On 2 Dec 2008, at 20:04, Dominique Pfister wrote:


-- + rest
 + ws



Just as an observation, I think it's insane having two different  
protocols for this standard. It sounds like two factions in the  
standards group that could never agree.


--
Torgeir Veimo
[EMAIL PROTECTED]






Re: Apache Jackrabbit 1.5.0 build 2

2008-10-24 Thread Torgeir Veimo


On 24 Oct 2008, at 17:26, Thomas Müller wrote:


Upgrading to Jackrabbit 1.5
---



Also, it seems a proper login is now required. Could this maybe be  
added to the release notes with an example?


And it seems a fairly recent version of commons io is required.

--
Torgeir Veimo
[EMAIL PROTECTED]






Re: Apache Jackrabbit 1.5.0 build 2

2008-10-23 Thread Torgeir Veimo


On 21 Oct 2008, at 10:24, Jukka Zitting wrote:


Hi,

I've now rolled the second preview build of the upcoming 1.5.0
release. This build is for testing and preview purposes, and should
only be discussed here on [EMAIL PROTECTED] You can find the 1.5.0-b2 sources 
and
binaries as a source archive and a staged maven repository at:


Shouldn't these be named *-pre2?


Upgrading to Jackrabbit 1.5
---

TODO


Using an existing repository.xml which doesn't declare any DTD, I  
get these parsing warnings:


10:41:24,029 WARN  ConfigurationErrorHandler  - Warning parsing the  
configuration at line 3 using system id null:  
org.xml.sax.SAXParseException: Document root element Repository,  
must match DOCTYPE root null.
10:41:24,030 WARN  ConfigurationErrorHandler  - Warning parsing the  
configuration at line 3 using system id null:  
org.xml.sax.SAXParseException: Document is invalid: no grammar found.
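For what it's worth, the warnings should go away if repository.xml declares the Jackrabbit DTD up front, along these lines (the exact public identifier and version here are an assumption and should be adjusted to match the release):

```xml
<?xml version="1.0"?>
<!DOCTYPE Repository
    PUBLIC "-//The Apache Software Foundation//DTD Jackrabbit 1.5//EN"
    "http://jackrabbit.apache.org/dtd/repository-1.5.dtd">
```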


But otherwise the repository is fully usable.

--
Torgeir Veimo
[EMAIL PROTECTED]






[jira] Updated: (JCR-1808) deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception

2008-10-14 Thread Torgeir Veimo (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torgeir Veimo updated JCR-1808:
---

Attachment: repository.xml

repository used when creating repository with 1.4.5. 

 deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception
 ---

 Key: JCR-1808
 URL: https://issues.apache.org/jira/browse/JCR-1808
 Project: Jackrabbit
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 1.5.0
 Environment: tomcat 6.0.14, jdk 1.5 OS X, derby persistence
Reporter: Torgeir Veimo
Priority: Blocker
 Attachments: log.txt, repository.xml


 Trying out the new jackrabbit build 1.5.0-b1 with an existing repository 
 created with jackrabbit 1.4.*, throws exception on startup. Log output 
 included.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1808) deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception

2008-10-14 Thread Torgeir Veimo (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12639635#action_12639635
 ] 

Torgeir Veimo commented on JCR-1808:


It appears the repository.xml I'm using with JR 1.5 is different from the one 
used to create the 1.4.X repository. The new one declares the bundle 
persistence manager. Changing back to the old persistence manager in the 1.5 
repository.xml makes things work again:

<PersistenceManager class="org.apache.jackrabbit.core.state.db.DerbyPersistenceManager">
  <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
---
<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.DerbyPersistenceManager">
  <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>



 deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception
 ---

 Key: JCR-1808
 URL: https://issues.apache.org/jira/browse/JCR-1808
 Project: Jackrabbit
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 1.5.0
 Environment: tomcat 6.0.14, jdk 1.5 OS X, derby persistence
Reporter: Torgeir Veimo
Priority: Blocker
 Attachments: log.txt, repository.xml


 Trying out the new jackrabbit build 1.5.0-b1 with an existing repository 
 created with jackrabbit 1.4.*, throws exception on startup. Log output 
 included.




[jira] Commented: (JCR-1808) deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception

2008-10-14 Thread Torgeir Veimo (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12639641#action_12639641
 ] 

Torgeir Veimo commented on JCR-1808:


My impression was that the repository.xml file was only taken into account when 
creating the repository, not on each subsequent instantiation.

 deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception
 ---

 Key: JCR-1808
 URL: https://issues.apache.org/jira/browse/JCR-1808
 Project: Jackrabbit
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 1.5.0
 Environment: tomcat 6.0.14, jdk 1.5 OS X, derby persistence
Reporter: Torgeir Veimo
Priority: Blocker
 Attachments: log.txt, repository.xml


 Trying out the new jackrabbit build 1.5.0-b1 with an existing repository 
 created with jackrabbit 1.4.*, throws exception on startup. Log output 
 included.




Re: Apache Jackrabbit 1.5.0 build 1

2008-10-13 Thread Torgeir Veimo


On 13 Oct 2008, at 17:30, Jukka Zitting wrote:


You can
find the 1.5.0-b1 sources and binaries as a source archive and a
staged maven repository at:

   http://people.apache.org/~jukka/jackrabbit/



Can't see any binaries there, will they be made available later?

--
Torgeir Veimo
[EMAIL PROTECTED]






Re: Trunk is now 1.6-SNAPSHOT

2008-10-13 Thread Torgeir Veimo


On 13 Oct 2008, at 19:26, Jukka Zitting wrote:


Now that 1.5 is branched, I have upgraded the version number in trunk
to 1.6-SNAPSHOT.


What exciting new features will be worked on in the near future in trunk?

--
Torgeir Veimo
[EMAIL PROTECTED]






Re: Apache Jackrabbit 1.5.0 build 1

2008-10-13 Thread Torgeir Veimo
 
	at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeUpdate(Unknown Source)
	at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.checkSchema(BundleDbPersistenceManager.java:438)
	... 57 more

--
Torgeir Veimo
[EMAIL PROTECTED]






[jira] Created: (JCR-1808) deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception

2008-10-13 Thread Torgeir Veimo (JIRA)
deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception
---

 Key: JCR-1808
 URL: https://issues.apache.org/jira/browse/JCR-1808
 Project: Jackrabbit
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 1.5.0
 Environment: tomcat 6.0.14, jdk 1.5 OS X, derby persistence
Reporter: Torgeir Veimo
Priority: Blocker
 Attachments: log.txt

Trying out the new jackrabbit build 1.5.0-b1 with an existing repository 
created with jackrabbit 1.4.*, throws exception on startup. Log output included.




[jira] Updated: (JCR-1808) deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception

2008-10-13 Thread Torgeir Veimo (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torgeir Veimo updated JCR-1808:
---

Attachment: log.txt

stacktrace

 deploying 1.5.0-b1 with existing DB from 1.4.6 throws exception
 ---

 Key: JCR-1808
 URL: https://issues.apache.org/jira/browse/JCR-1808
 Project: Jackrabbit
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 1.5.0
 Environment: tomcat 6.0.14, jdk 1.5 OS X, derby persistence
Reporter: Torgeir Veimo
Priority: Blocker
 Attachments: log.txt


 Trying out the new jackrabbit build 1.5.0-b1 with an existing repository 
 created with jackrabbit 1.4.*, throws exception on startup. Log output 
 included.




Re: Jackrabbit 1.5 release plan

2008-10-09 Thread Torgeir Veimo


On 10 Oct 2008, at 00:40, Jukka Zitting wrote:


Since there are no major issues pending for 1.



Is the new user management feature documented somewhere, except for  
the jsr-283 draft?


--
Torgeir Veimo
[EMAIL PROTECTED]






Re: how to Connect to the Repository

2008-09-22 Thread Torgeir Veimo


On 23 Sep 2008, at 12:53, xing007008 wrote:


Yes. Now, I need to write the class ExistDBRespository();
Who can give me the demo?



The demo is called Jackrabbit.

--
Torgeir Veimo
[EMAIL PROTECTED]






Re: Link to the venue for today's Amsterdam meetup (wiki down)

2008-04-08 Thread Torgeir Veimo


On 8 Apr 2008, at 06:58, Bertrand Delacretaz wrote:


The wiki is down ATM, if people are looking for the venue it's
Pakhuis de Zwijger, links:



Can we get a rudimentary transcript for those who weren't able to  
attend?


--
Torgeir Veimo
[EMAIL PROTECTED]





Re: Jackrabbit roadmap

2008-01-11 Thread Torgeir Veimo


On 12 Jan 2008, at 03:23, Jukka Zitting wrote:


 * Node type management



Based on this proposal? https://issues.apache.org/jira/browse/JCR-1171

There haven't been any more comments on that issue for a while.

--
Torgeir Veimo
[EMAIL PROTECTED]





Re: Jackrabbit roadmap

2008-01-11 Thread Torgeir Veimo


On 12 Jan 2008, at 12:35, Torgeir Veimo wrote:


On 12 Jan 2008, at 03:23, Jukka Zitting wrote:


* Node type management


Sorry, I meant: * Built-in access control


Based on this proposal? https://issues.apache.org/jira/browse/JCR-1171

There haven't been any more comments on that issue for a while.


--
Torgeir Veimo
[EMAIL PROTECTED]





Re: AccessManager Help?

2007-08-22 Thread Torgeir Veimo
On Wed, 2007-08-22 at 08:46 -0700, Ashley Martens wrote:
 I'm trying to modify a custom AccessManager by having it check
 properties of the node that is being accessed. However, I can't get to
 the Node from the AccessManager. I know this sounds stupid but how do
 you give the AccessManager a Session?

Either maintain a system session for this purpose, or use a cache with
uuids as keys for those nodes that are protected. Search the mailing
list for examples.
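A minimal sketch of the second suggestion, a cache keyed by node UUID that a custom AccessManager could consult without needing its own Session (all class and method names here are hypothetical, not Jackrabbit API):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative permission cache keyed by node UUID. Whatever code maintains
// the protected-node list (e.g. a system session plus an observation
// listener) populates it; the AccessManager only reads from it.
public class ProtectedNodeCache {

    private final Map<String, Set<String>> allowedUsersByUuid = new ConcurrentHashMap<>();

    // Record which users may access a protected node.
    public void put(String uuid, Set<String> allowedUsers) {
        allowedUsersByUuid.put(uuid, allowedUsers);
    }

    // Called from the AccessManager: a node absent from the cache is treated
    // as unprotected and therefore accessible.
    public boolean isGranted(String uuid, String userId) {
        Set<String> allowed = allowedUsersByUuid.get(uuid);
        return allowed == null || allowed.contains(userId);
    }

    public static void main(String[] args) {
        ProtectedNodeCache cache = new ProtectedNodeCache();
        cache.put("deadbeef-uuid", Set.of("alice"));
        System.out.println(cache.isGranted("deadbeef-uuid", "alice"));
    }
}
```

The trade-off is that the cache must be invalidated whenever the protection properties change, which is what the observation listener would be for.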

-- 
-Tor



graffito mapping dtd

2007-04-25 Thread Torgeir Veimo
Can this one be made accessible on a public URL now that it's part of  
jackrabbit?


--
Torgeir Veimo
[EMAIL PROTECTED]





Re: Jackrabbit in ApacheCon Europe

2007-01-03 Thread Torgeir Veimo


On 3 Jan 2007, at 11:56, Jukka Zitting wrote:


I'd also like to know if there's interest enough for a potential
half-day tutorial on building JCR applications with Jackrabbit.


My company would be interested at least.

--
Torgeir Veimo
[EMAIL PROTECTED]





jcr taglib: setting node attribute with object doesn't work

2006-12-06 Thread Torgeir Veimo
If I use the properties tag and set a node attribute with a java object,
it doesn't work, since the setNode() method in PropertyTag.java takes a
string as an argument. 

There's no category for the jcr taglib in jira?

-- 
Torgeir Veimo [EMAIL PROTECTED]



Re: jcr taglib: setting node attribute with object doesn't work

2006-12-06 Thread Torgeir Veimo
On Wed, 2006-12-06 at 16:31 -0300, Edgar Poce wrote:
 
 Maybe we should update the taglib to work with jsp 2.0. 

That would be easier, since I use EL extensively in my pages. I'm not
sure how to instruct the page compiler that the tag wants to evaluate
the attribute value itself.



-- 
Torgeir Veimo [EMAIL PROTECTED]



[jira] Created: (JCR-659) import of multivalue properties with single value results in incorrect property creation

2006-12-03 Thread Torgeir Veimo (JIRA)
import of multivalue properties with single value results in incorrect property 
creation


 Key: JCR-659
 URL: http://issues.apache.org/jira/browse/JCR-659
 Project: Jackrabbit
  Issue Type: Bug
  Components: core
Affects Versions: 1.1
 Environment: Linux, Tomcat 5.5.20, JDK 1.6.0rc1, both xml file 
repository and derby db repository.
Reporter: Torgeir Veimo


When importing a file exported with system view, a value of a multivalued 
property is stored as a singlevalue property. The bug seems to be that for some 
reason, even if PropDef.isMultiple() is true for a given property, no 
ValueFormatException is thrown when setting the property as single value.

Workaround:

It works if I change PropInfo.apply() line 136 to 

if (va.length == 1 && !def.isMultiple()) {
...
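For clarity, the effect of the patched condition in isolation, as a plain-Java reconstruction (the class name is made up):

```java
// Illustrative reconstruction of the decision in PropInfo.apply(): a value
// array of length one should only be stored as a single-value property when
// the definition is NOT multi-valued. The unpatched code ignored the
// definition, so multi-valued definitions with one value went wrong.
public class MultiValueCheck {

    // true  -> store as a single-value property
    // false -> store as a multi-valued property
    static boolean storeAsSingleValue(int valueCount, boolean defIsMultiple) {
        return valueCount == 1 && !defIsMultiple;
    }

    public static void main(String[] args) {
        // One value, but the definition says 'multiple': keep it multi-valued.
        System.out.println(storeAsSingleValue(1, true));
    }
}
```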



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: multivalue import problem

2006-12-01 Thread Torgeir Veimo
On Fri, 2006-12-01 at 19:41 +, Torgeir Veimo wrote:
 On Fri, 2006-12-01 at 18:08 +, Torgeir Veimo wrote:
  On Fri, 2006-12-01 at 16:21 +, Torgeir Veimo wrote:
   I've got a node type def like this;
   
   [nen:protected] > mix:referenceable mixin orderable
   - nen:owner (string) mandatory multiple
   + nen:ace(nen:ace)=nen:ace multiple
   
   I've verified that this definition is in fact effective in the
   repository;
  
  I'm exporting with session.exportSystemView() and importing with
  session.importXML().
  
  The export file contains fragments such as 
  
  <sv:property sv:name="nen:owner" sv:type="String"><sv:value>authenticated</sv:value></sv:property>
 
 
 I tried adding some debug statements into PropInfo.apply(), and I get
 output such as 

It works if I change PropInfo.apply() line 136 to 

if (va.length == 1 && !def.isMultiple()) {
...

So it seems that for some reason, even if PropDef.isMultiple() is true
for a given property, no ValueFormatException is thrown when setting the
property as single value.

-- 
Torgeir Veimo [EMAIL PROTECTED]



[jira] Commented: (JCR-600) Repository does not release all resources on shutdown

2006-11-29 Thread Torgeir Veimo (JIRA)
[ 
http://issues.apache.org/jira/browse/JCR-600?page=comments#action_12454371 ] 

Torgeir Veimo commented on JCR-600:
---

I've done a bit more testing and it does in fact seem to work properly now, so 
I guess this issue can be resolved as fixed.

 Repository does not release all resources on shutdown
 -

 Key: JCR-600
 URL: http://issues.apache.org/jira/browse/JCR-600
 Project: Jackrabbit
  Issue Type: Bug
  Components: core
Affects Versions: 1.0, 1.0.1, 1.1
Reporter: Marcel Reutegger
 Assigned To: Marcel Reutegger
Priority: Minor
 Fix For: 1.1.1

 Attachments: Screenshot_1.png, Screenshot_2.png


 When Jackrabbit is shutdown some java.util.Timer threads are still running in 
 the background even though no tasks are scheduled. This prevents the GC from 
 collecting the classes when Jackrabbit is redeployed within a web application.





Re: [VOTE] Release Apache Jackrabbit 1.1.1

2006-11-29 Thread Torgeir Veimo

 [x] +1 Release the packages as Apache Jackrabbit 1.1.1

-- 
Torgeir Veimo [EMAIL PROTECTED]



maven build problem of svn source

2006-11-28 Thread Torgeir Veimo
I'm still having problems building the current svn source. The javacc
compiler seems to have encoding issues. Does anyone more familiar with
maven have an idea what is wrong? I've tried setting my LANG env, but I
don't think javacc picks that up in any case.

[EMAIL PROTECTED] jackrabbit]$ /usr/java/maven-1.1-beta-3/bin/maven 
 __  __
|  \/  |__ _Apache__ ___
| |\/| / _` \ V / -_) ' \  ~ intelligent projects ~
|_|  |_\__,_|\_/\___|_||_|  v. 1.1-beta-3

DEPRECATED: the default goal should be specified in the build section
of project.xml instead of maven.xml
build:start:

java:prepare-filesystem:
Running post goal: java:prepare-filesystem
jackrabbit:generate-sql-parser:
javacc:
[echo] jjtree
grammar: /home/torgeir/java/src/jackrabbit/src/main/javacc/sql/JCRSQL.jjt
javacc:jjtree-generate:
[echo] javaccPackageName:org.apache.jackrabbit.core.query.sql
[echo] jjtreePackageName:org.apache.jackrabbit.core.query.sql
-OUTPUT_DIRECTORY=/home/torgeir/java/src/jackrabbit/target/generated-src/main/org/apache/jackrabbit/core/query/sql
Creating
directory: 
/home/torgeir/java/src/jackrabbit/target/generated-src/main/org/apache/jackrabbit/core/query/sql
Java Compiler Compiler Version 3.2 (Tree Builder)
(type jjtree with no arguments for help)
Reading from
file /home/torgeir/java/src/jackrabbit/src/main/javacc/sql/JCRSQL.jjt . . .
Annotated grammar generated successfully
in 
/home/torgeir/java/src/jackrabbit/target/generated-src/main/org/apache/jackrabbit/core/query/sql/JCRSQL.jj

[echo] 
/home/torgeir/java/src/jackrabbit/target/generated-src/main/org/apache/jackrabbit/core/query/sql/JCRSQL.jj

javacc:javacc-generate:
Java Compiler Compiler Version 3.2 (Parser Generator)
(type javacc with no arguments for help)
Reading from
file 
/home/torgeir/java/src/jackrabbit/target/generated-src/main/org/apache/jackrabbit/core/query/sql/JCRSQL.jj
 . . .
Note: UNICODE_INPUT option is specified. Please make sure you create the
parser/lexer usig a Reader with the correct character encoding.
Parser generated successfully.



BUILD FAILED
File.. file:/home/torgeir/java/src/jackrabbit/maven.xml
Element... ant:delete
Line.. 160
Column 21
/home/torgeir/java/src/jackrabbit/target/generated-src/main/java/org/apache/jackrabbit/core/query/sql
 not found.
Total time   : 3 seconds 
Finished at  : Tuesday, November 28, 2006 9:35:22 AM GMT


-- 
Torgeir Veimo [EMAIL PROTECTED]


