[jira] [Created] (JCR-3520) Indexing configuration dtd outdated

2013-02-13 Thread fabrizio giustina (JIRA)
fabrizio giustina created JCR-3520:
--

 Summary: Indexing configuration dtd outdated
 Key: JCR-3520
 URL: https://issues.apache.org/jira/browse/JCR-3520
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: indexing
Affects Versions: 2.4, 2.6
Reporter: fabrizio giustina


http://jackrabbit.apache.org/dtd/indexing-configuration-1.2.dtd looks outdated: 
some of the examples found in 
http://wiki.apache.org/jackrabbit/IndexingConfiguration are not valid according 
to this DTD.

For example, this config (from the samples):

  <aggregate primaryType="nt:file">
    <include>*</include>
    <include>*/*</include>
    <include>*/*/*</include>
  </aggregate>

is marked as invalid because, according to the DTD, include-property is 
required:

The content of element type "aggregate" is incomplete, it must match 
"(include*,include-property)".


Also, the attributes added in JCR-2989 are not valid according to the DTD:

  <aggregate primaryType="nt:folder" recursive="true" recursiveLimit="10">
    <include-property>dummy</include-property>
  </aggregate>

- Attribute "recursiveLimit" must be declared for element type "aggregate".
- Attribute "recursive" must be declared for element type "aggregate".
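
For reference, errors like these can be reproduced with a plain DTD-validating parse of the configuration file. A minimal sketch (the file name is a placeholder; this is only to show how the messages above are obtained):

{code}
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXParseException;

public class ValidateIndexingConfig {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setValidating(true); // validate against the DTD declared in the DOCTYPE
        DocumentBuilder builder = factory.newDocumentBuilder();
        builder.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { System.out.println("WARNING: " + e.getMessage()); }
            public void error(SAXParseException e) { System.out.println("ERROR: " + e.getMessage()); }
            public void fatalError(SAXParseException e) throws SAXParseException { throw e; }
        });
        // placeholder path: an indexing configuration whose DOCTYPE points at the 1.2 DTD
        builder.parse("indexing_configuration.xml");
    }
}
{code}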



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (JCR-3375) Excerpts not available with xpath queries filtered by path in JR 2.4

2012-07-04 Thread fabrizio giustina (JIRA)
fabrizio giustina created JCR-3375:
--

 Summary: Excerpts not available with xpath queries filtered by 
path in JR 2.4
 Key: JCR-3375
 URL: https://issues.apache.org/jira/browse/JCR-3375
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Reporter: fabrizio giustina
Priority: Minor


Moving from jackrabbit 2.2 to 2.4, some of the queries where I was retrieving an 
excerpt stopped working (same list of results, but no excerpts returned).

After some debugging I found out that with jackrabbit 2.4 excerpts are only 
returned when the xpath queries are not filtered by path, e.g. using:

//*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 'keyword') )  )] 
 order by  @jcr:score descending

I can successfully extract excerpts, while using:

//something//*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 
'keyword') )  )]  order by  @jcr:score descending
or
/jcr:root//*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 
'keyword') )  )]  order by  @jcr:score descending
(or anything different from //*)
... doesn't give any excerpt in jackrabbit 2.4, while it works with 2.2 (the exact 
same configuration, tested by switching several times between the two 
versions).
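
For reference, the excerpts are read from the rep:excerpt column of the query result, roughly as in the following minimal sketch (it assumes an open JCR Session named session; the node type and keyword are placeholders):

{code}
import javax.jcr.Session;
import javax.jcr.Value;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class ExcerptExample {
    static void printExcerpts(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
            "//*[(@jcr:primaryType='my:nodetype') and jcr:contains(., 'keyword')]"
                + "/rep:excerpt(.) order by @jcr:score descending",
            Query.XPATH);
        RowIterator rows = query.execute().getRows();
        while (rows.hasNext()) {
            Row row = rows.nextRow();
            Value excerpt = row.getValue("rep:excerpt(.)"); // null when no excerpt is produced
            System.out.println(excerpt == null ? "(no excerpt)" : excerpt.getString());
        }
    }
}
{code}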

Is this intended? I couldn't find any note about similar limits or related 
changes in JR 2.4.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (JCR-3375) Excerpts not available with xpath queries filtered by path in JR 2.4

2012-07-04 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-3375:
---

Affects Version/s: 2.4

 Excerpts not available with xpath queries filtered by path in JR 2.4
 

 Key: JCR-3375
 URL: https://issues.apache.org/jira/browse/JCR-3375
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.4
Reporter: fabrizio giustina
Priority: Minor

 Moving from jackrabbit 2.2 to 2.4, some of the queries where I was retrieving 
 an excerpt stopped working (same list of results, but no excerpts returned).
 After some debugging I found out that with jackrabbit 2.4 excerpts are only 
 returned when the xpath queries are not filtered by path, e.g. using:
 //*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 'keyword') )  
 )]  order by  @jcr:score descending
 I can successfully extract excerpts, while using:
 //something//*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 
 'keyword') )  )]  order by  @jcr:score descending
 or
 /jcr:root//*[( (@jcr:primaryType='my:nodetype')  and  ( jcr:contains(., 
 'keyword') )  )]  order by  @jcr:score descending
 (or anything different from //*)
 ... doesn't give any excerpt in jackrabbit 2.4, while it works with 2.2 (the exact 
 same configuration, tested by switching several times between the two 
 versions).
 Is this intended? I couldn't find any note about similar limits or related 
 changes in JR 2.4.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: how to disable full-text indexing on jcr:data

2012-05-24 Thread Fabrizio Giustina
Hi,

On Thu, May 3, 2012 at 1:50 PM, Peri Subrahmanya
peri.subrahma...@gmail.com wrote:
 To answer your question: if you want to turn off indexing on the data, you
 could try something like this.

 In repository.xml, under the workspace tag, add the following:
  <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
    <param name="textFilterClasses" value=""/>
    ...
  </SearchIndex>

maybe someone could confirm this?
If I am not wrong, the implementation of the textFilterClasses parameter in 
jackrabbit 2.4.1 is the following...

public void setTextFilterClasses(String filterClasses) {
    log.warn("The textFilterClasses configuration parameter has"
            + " been deprecated, and the configured value will"
            + " be ignored: {}", filterClasses);
}


thanks
fabrizio


Re: InMemPersistenceManager deprecated?

2011-02-10 Thread Fabrizio Giustina
Hi,
I found out the solution by myself some time after my question: yes,
there is already a better replacement, which is the standard bundle
filesystem persistence manager.

There is a way to set it up using a memory filesystem... it looks like
it's not really well documented, but after some digging into the
jackrabbit sources I found out that setting the obscure
blobFSBlockSize property to 1 does the trick (I'm not sure what the
value is supposed to mean, but I found in the code that a value of 1 makes
the persistence manager use the filesystem configured in the xml
config instead of a local filesystem).

This is a full working config that uses an in-memory BundleFsPersistenceManager:

<Workspace name="${wsp.name}">
  <FileSystem class="org.apache.jackrabbit.core.fs.mem.MemoryFileSystem">
  </FileSystem>
  <PersistenceManager
      class="org.apache.jackrabbit.core.persistence.bundle.BundleFsPersistenceManager">
    <param name="blobFSBlockSize" value="1"/> <!-- store in memory -->
  </PersistenceManager>
  <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
    [...]
    <param name="directoryManagerClass"
           value="org.apache.jackrabbit.core.query.lucene.directory.RAMDirectoryManager"/>
    <FileSystem class="org.apache.jackrabbit.core.fs.mem.MemoryFileSystem">
    </FileSystem>
  </SearchIndex>
</Workspace>
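
And this is roughly how I boot it in tests (a minimal sketch; paths and credentials are placeholders, assuming the config above is saved as repository.xml):

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.core.TransientRepository;

public class InMemoryRepositoryTest {
    public static void main(String[] args) throws Exception {
        // the repository home is still created on disk, but with the config above
        // the workspace data and the search index are kept in memory
        Repository repository = new TransientRepository("repository.xml", "target/repository");
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            session.getRootNode().addNode("test");
            session.save();
        } finally {
            session.logout();
        }
    }
}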


cheers
fabrizio



2011/2/10 Grégory Joseph gregory.jos...@magnolia-cms.com:
 Hi guys,

 Same question as Fabrizio below - is there an alternative, or any plan to 
 have a non-deprecated InMemPersistenceManager ?
 (As long as it works, I shouldn't care much about the deprecatedness, but 
 it's just clogging the logs, so I'd rather be sure I can really shunt those 
 logs down during tests)

 Cheers,

 -g

 On 27 Dec 2010, at 20:35, Fabrizio Giustina wrote:

 Hi,
 after the deprecation of non-bundle persistence managers in 2.2 also
 the memory-only implementation
 (org.apache.jackrabbit.core.persistence.mem.InMemPersistenceManager)
 has been deprecated, and its usage leads to a few warnings in
 the log.

 I am currently using the in-memory pm mainly for testing, and I can't
 see any alternative, non-deprecated implementation in the 2.2
 release... is the removal of the memory PM intentional? Any plan for
 adding a bundle implementation?


 thanks
 fabrizio





[jira] Updated: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Attachment: JCR-2622-tests_and_patch.diff

patch and updated tests against current trunk (rev 1054005)

 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Attachment: (was: ISOLatin1AccentLowerCaseTest.java)

 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2011-01-01 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12976424#action_12976424
 ] 

fabrizio giustina commented on JCR-2622:


I tried to trace back the change that broke index analyzers in jackrabbit > 2.0.x, 
and it turned out to be the optimization from JCR-2505 (rev. 915718).

The patch in JCR-2505 added a reusableTokenStream method to 
org.apache.jackrabbit.core.query.lucene.JackrabbitAnalyzer which seemed to 
speed up tests, but it actually breaks them :/
I'm not sure such an optimization is even needed, since the abstract base 
org.apache.lucene.analysis.Analyzer class already implements 
reusableTokenStream as an alias for the default tokenStream method.

The attached patch fixes the bug in trunk by simply removing the 
optimization. The patch also contains a testcase that shows the issue.

Please, can anybody commit the patch to trunk and to the 2.1/2.2 branches?


 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Affects Version/s: 2.2.0
   2.1.1
   2.1.2
   2.1.3
   Status: Patch Available  (was: Open)

patch attached, updating affects versions.

 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.3, 2.1.2, 2.1.1, 2.1.0, 2.2.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2622) Configured index analizers not working in jackrabbit 2.1 and 2.2

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Summary: Configured index analizers not working in jackrabbit 2.1 and 2.2  
(was: Configured index analizer doesn't really work in 2.1.0?)

 Configured index analizers not working in jackrabbit 2.1 and 2.2
 

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2622) Index analizers that extends StandardAnalyzer need to implement reusableTokenStream() since jackrabbit 2.1

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

 Index analizers that extends StandardAnalyzer need to implement 
 reusableTokenStream() since jackrabbit 2.1
 --

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2622) Index analizers that extends StandardAnalyzer need to implement reusableTokenStream() since jackrabbit 2.1

2011-01-01 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Summary: Index analizers that extends StandardAnalyzer need to implement 
reusableTokenStream() since jackrabbit 2.1  (was: Configured index analizers 
not working in jackrabbit 2.1 and 2.2)

Looks like I spoke too soon: after a deeper analysis I found out the problem 
can be fixed in the analyzer class and doesn't require a fix in jackrabbit 
itself.

The change in JCR-2505 actually broke index analyzers that don't implement the 
reusableTokenStream() method properly: 
any analyzer that extends org.apache.lucene.analysis.standard.StandardAnalyzer 
was working properly in jackrabbit 2.0, which used the tokenStream() method 
only. But since jackrabbit 2.1 such analyzers cannot rely on the superclass 
implementation of reusableTokenStream() and have to implement that method 
properly.

The correct solution is probably not to extend StandardAnalyzer anymore (its 
reusableTokenStream method is not overridable due to its use of private fields) 
but to extend a plain org.apache.lucene.analysis.Analyzer and reimplement the 
tokenStream method from scratch.

So the problem looks like a bug in all the analyzers I was using, but in a part 
that had never been used by jackrabbit before the change in version 2.1... the 
issue can be closed.
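
For illustration, a rewrite along those lines would look roughly like this (a hypothetical sketch against the Lucene 2.x-era API, not the actual class I'm using):

{code}
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.ISOLatin1AccentFilter;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class ItalianSnowballAnalyzer extends Analyzer {

    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
        // build the token chain by hand instead of delegating to StandardAnalyzer;
        // the base Analyzer's reusableTokenStream() simply delegates to this method
        TokenStream stream = new StandardTokenizer(reader);
        stream = new StandardFilter(stream);
        stream = new LowerCaseFilter(stream);
        return new ISOLatin1AccentFilter(stream);
    }
}
{code}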


 Index analizers that extends StandardAnalyzer need to implement 
 reusableTokenStream() since jackrabbit 2.1
 --

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: JCR-2622-tests_and_patch.diff


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



InMemPersistenceManager deprecated?

2010-12-27 Thread Fabrizio Giustina
Hi,
after the deprecation of non-bundle persistence managers in 2.2 also
the memory-only implementation
(org.apache.jackrabbit.core.persistence.mem.InMemPersistenceManager)
has been deprecated, and its usage leads to a few warnings in
the log.

I am currently using the in-memory pm mainly for testing, and I can't
see any alternative, non-deprecated implementation in the 2.2
release... is the removal of the memory PM intentional? Any plan for
adding a bundle implementation?


thanks
fabrizio


[jira] Updated: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2010-12-07 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2622:
---

Attachment: ISOLatin1AccentLowerCaseTest.java

 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: ISOLatin1AccentLowerCaseTest.java


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2010-12-07 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12968749#action_12968749
 ] 

fabrizio giustina commented on JCR-2622:



The attached ISOLatin1AccentLowerCaseTest.java is a simple unit test that 
uses an analyzer to filter accented chars. The test works fine in jackrabbit 
2.0 (the configured analyzer is actually used) but fails in newer versions, 
where the analyzer appears to be ignored (tested on trunk a couple of months ago).

I got no feedback on this report, which has been open since May. Is anybody else using 
this feature? Any hint on what may have changed in jackrabbit 2.1, so that I 
could try investigating a fix?



 Configured index analizer doesn't really work in 2.1.0?
 ---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: fabrizio giustina
Priority: Critical
 Attachments: ISOLatin1AccentLowerCaseTest.java


 I just tried migrating an existing project which was using jackrabbit 2.0.0 
 to 2.1.0.
 We have an index analyzer configured which filters accented chars: 
 {code}
 public class ItalianSnowballAnalyzer extends StandardAnalyzer
 {
 @Override
 public TokenStream tokenStream(String fieldName, Reader reader)
 {
 return new ISOLatin1AccentFilter(new 
 LowerCaseFilter((super.tokenStream(fieldName, reader))));
 }
 }
 {code}
 The project has a good number of unit tests, an xml is loaded in a 
 memory-only jackrabbit repository and several queries are checked against 
 expected results.
 After migrating to 2.1.0 none of the tests that relied on the index analyzer 
 work anymore; for example, searching for "test" no longer finds nodes 
 containing "tèst".
 Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
 configuration/code or other libraries at all). Rolling back to the 2.0.0 
 dependency is enough to make all the tests work again.
 I've checked the changes in 2.1 but I couldn't find any apparently related 
 change. Also note that I was already using the patch in JCR-2504 also before 
 (configuration loading works fine in the unpatched 2.1). Another point is 
 that the configured IndexAnalyzer still gets actually called during our tests 
 (checked in debug mode).
 Any idea?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2732) ParseException in xpath query using an escaped string in jackrabbit 2.x (works in 1.6)

2010-12-07 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2732:
---

Attachment: ColonBracketSearchTest.java

 ParseException in xpath query using an escaped string in jackrabbit 2.x 
 (works in 1.6)
 --

 Key: JCR-2732
 URL: https://issues.apache.org/jira/browse/JCR-2732
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query, xpath
Affects Versions: 2.0.0, 2.1.0, 2.1.1
Reporter: fabrizio giustina
 Attachments: ColonBracketSearchTest.java


 For a particular sequence of chars, ":)" or ":(", jackrabbit 2.x seems to 
 break even when the xpath statement is properly escaped.
 Looks like the way I escape the ":)" or ":(" sequence used to work fine 
 in jackrabbit 1.6, but produces a parsing error in jackrabbit 2.x.
 The following query, with a space between ":" and ")", works fine in any 
 version of jackrabbit:
 {code}
 //*[jcr:contains(@title, '\: \)')]
 {code}
 This one, without any space, works only in jackrabbit 1.6:
 {code}
 //*[jcr:contains(@title, '\:\)')]
 {code}
 in 2.x the result is a ParseException: Cannot parse '\:\\)': Encountered  
 ) )
 Has anything changed in how xpath queries must be escaped in 2.x, or is this a 
 bug?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-2732) ParseException in xpath query using an escaped string in jackrabbit 2.x (works in 1.6)

2010-12-07 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12968752#action_12968752
 ] 

fabrizio giustina commented on JCR-2732:


Added a unit test: it works properly in 1.x versions but fails on 2.x with the 
error described.

 ParseException in xpath query using an escaped string in jackrabbit 2.x 
 (works in 1.6)
 --

 Key: JCR-2732
 URL: https://issues.apache.org/jira/browse/JCR-2732
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query, xpath
Affects Versions: 2.0.0, 2.1.0, 2.1.1
Reporter: fabrizio giustina
 Attachments: ColonBracketSearchTest.java


 For a particular sequence of chars, ":)" or ":(", jackrabbit 2.x seems to 
 break even when the xpath statement is properly escaped.
 Looks like the way I escape the ":)" or ":(" sequence used to work fine 
 in jackrabbit 1.6, but produces a parsing error in jackrabbit 2.x.
 The following query, with a space between ":" and ")", works fine in any 
 version of jackrabbit:
 {code}
 //*[jcr:contains(@title, '\: \)')]
 {code}
 This one, without any space, works only in jackrabbit 1.6:
 {code}
 //*[jcr:contains(@title, '\:\)')]
 {code}
 in 2.x the result is a ParseException: Cannot parse '\:\\)': Encountered  
 ) )
 Has anything changed in how xpath queries must be escaped in 2.x, or is this a 
 bug?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-2732) ParseException in xpath query using an escaped string in jackrabbit 2.x (works in 1.6)

2010-08-29 Thread fabrizio giustina (JIRA)
ParseException in xpath query using an escaped string in jackrabbit 2.x (works 
in 1.6)
--

 Key: JCR-2732
 URL: https://issues.apache.org/jira/browse/JCR-2732
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query, xpath
Affects Versions: 2.1.1, 2.1.0, 2.0.0
Reporter: fabrizio giustina


For a particular sequence of chars, ":)" or ":(", jackrabbit 2.x seems to break 
even when the xpath statement is properly escaped.

Looks like the way I escape the ":)" or ":(" sequence used to work fine in 
jackrabbit 1.6, but produces a parsing error in jackrabbit 2.x.

The following query, with a space between ":" and ")", works fine in any 
version of jackrabbit:
{code}
//*[jcr:contains(@title, '\: \)')]
{code}


This one, without any space, works only in jackrabbit 1.6:
{code}
//*[jcr:contains(@title, '\:\)')]
{code}

in 2.x the result is a ParseException: Cannot parse '\:\\)': Encountered  ) 
)

Has anything changed in how xpath queries must be escaped in 2.x, or is this a 
bug?



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-2733) RepositoryException in xpath query with the OR keyword

2010-08-29 Thread fabrizio giustina (JIRA)
RepositoryException in xpath query with the OR keyword


 Key: JCR-2733
 URL: https://issues.apache.org/jira/browse/JCR-2733
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query, xpath
Affects Versions: 2.1.1, 2.1.0, 1.6.2
Reporter: fabrizio giustina


Any string literal starting with "OR ", e.g.:
//*[jcr:contains(@title, 'OR ME')]

ends up in a javax.jcr.RepositoryException:
Exception building query: org.apache.lucene.queryParser.ParseException: Cannot 
parse 'OR ME': Encountered "<OR> "OR "" at line 1, column 0.


I see this is due to the OR keyword being interpreted by Lucene, as described in 
http://stackoverflow.com/questions/1311304/keyword-or-and-search-in-lucene

Is this expected? Shouldn't jackrabbit escape the input string before creating 
the lucene query, since this is implementation-specific and OR should not AFAIK 
be a reserved word in properly delimited string literals in xpath queries?
Also note that the similar AND keyword doesn't cause any problem.

In order to fix it I must check any input string starting with "OR " and either 
add quotes or lowercase it:
//*[jcr:contains(@title, '"OR ME"')]
//*[jcr:contains(@title, 'or ME')]
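
For illustration, the workaround boils down to something like this hypothetical helper (not part of jackrabbit):

{code}
// Quote the search term when it starts with the Lucene "OR" keyword so it is
// treated as a phrase instead of an operator (lowercasing would work as well).
public static String guardOrKeyword(String term) {
    if (term.startsWith("OR ")) {
        return "\"" + term + "\"";
    }
    return term;
}
{code}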



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-2622) Configured index analizer doesn't really work in 2.1.0?

2010-05-05 Thread fabrizio giustina (JIRA)
Configured index analizer doesn't really work in 2.1.0?
---

 Key: JCR-2622
 URL: https://issues.apache.org/jira/browse/JCR-2622
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: fabrizio giustina
Priority: Critical


I just tried migrating an existing project which was using jackrabbit 2.0.0 to 
2.1.0.

We have an index analyzer configured which filters accented chars: 

{code}
public class ItalianSnowballAnalyzer extends StandardAnalyzer
{

@Override
public TokenStream tokenStream(String fieldName, Reader reader)
{
return new ISOLatin1AccentFilter(new 
LowerCaseFilter((super.tokenStream(fieldName, reader))));
}

}
{code}

The project has a good number of unit tests, an xml is loaded in a memory-only 
jackrabbit repository and several queries are checked against expected results.
After migrating to 2.1.0 none of the tests that relied on the index analyzer 
work anymore; for example, searching for "test" no longer finds nodes 
containing "tèst".

Upgrading to jackrabbit 2.1.0 is the only change made (no changes in the 
configuration/code or other libraries at all). Rolling back to the 2.0.0 
dependency is enough to make all the tests work again.
I've checked the changes in 2.1 but I couldn't find any apparently related 
change. Also note that I was already using the patch in JCR-2504 also before 
(configuration loading works fine in the unpatched 2.1). Another point is that 
the configured IndexAnalyzer still gets actually called during our tests 
(checked in debug mode).

Any idea?



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2504) Allow indexingConfiguration to be loaded from the classpath

2010-04-03 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2504:
---

Fix Version/s: 2.1.0

tagging with 2.1.0 as fix version, to not forget the patch for the 2.1 release

 Allow indexingConfiguration to be loaded from the classpath
 ---

 Key: JCR-2504
 URL: https://issues.apache.org/jira/browse/JCR-2504
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: indexing
Affects Versions: 1.6.1, 2.0.0
Reporter: fabrizio giustina
 Fix For: 2.1.0

 Attachments: indexingConfiguration_classpath.diff


 The indexingConfiguration attribute in the SearchIndex configuration 
 (http://wiki.apache.org/jackrabbit/IndexingConfiguration) currently requires 
 an absolute filesystem path.
 It would be nice if SearchIndex would also accept a file available in the 
 classpath... although you can use variables like ${wsp.home} or similar there 
 are many scenarios where a classpath resource would help (for example when 
 creating a new workspace the directory structure is automatically created by 
 jackrabbit and doesn't need to be already available but the indexing 
 configuration file does).
 I am attaching a simple patch to SearchIndex that tries to load the file from 
 the classpath if it has not been found. Priority is given to the old 
 behavior (file before classpath), so it's fully backward compatible.
 Diff has been generated against trunk, it would be nice to have this patch 
 also on the 2.0 branch.
  
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Jackrabbit 2.1.0 release plan

2010-04-03 Thread Fabrizio Giustina
Hi Jukka,

On Tue, Mar 9, 2010 at 12:56 PM, Jukka Zitting jukka.zitt...@gmail.com wrote:
 Please use the 2.1.0 version in Jira to tag any issues you'd like to
 see fixed in time for this release.

Any chance to see https://issues.apache.org/jira/browse/JCR-2504 (or
the older https://issues.apache.org/jira/browse/JCR-1861) in this
release?

JCR-2504 is a recent patch for allowing indexing configuration to be
loaded from the classpath, very handy if you need to embed and
distribute a configuration file within your own application.
The patch in JCR-1861 is pretty old but extends the handling of
classpath resources to any jackrabbit configuration file... I think
it's a really useful feature for several situations, so it's a pity to
see the patch standing there since jackrabbit 1.4.


thanks
fabrizio


[jira] Updated: (JCR-2504) Allow indexingConfiguration to be loaded from the classpath

2010-03-04 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2504:
---

Status: Patch Available  (was: Open)

see attached indexingConfiguration_classpath.diff

 Allow indexingConfiguration to be loaded from the classpath
 ---

 Key: JCR-2504
 URL: https://issues.apache.org/jira/browse/JCR-2504
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: indexing
Affects Versions: 2.0.0, 1.6.1
Reporter: fabrizio giustina
 Attachments: indexingConfiguration_classpath.diff


 The indexingConfiguration attribute in the SearchIndex configuration 
 (http://wiki.apache.org/jackrabbit/IndexingConfiguration) currently requires 
 an absolute filesystem path.
 It would be nice if SearchIndex would also accept a file available in the 
 classpath... although you can use variables like ${wsp.home} or similar there 
 are many scenarios where a classpath resource would help (for example when 
 creating a new workspace the directory structure is automatically created by 
 jackrabbit and doesn't need to be already available but the indexing 
 configuration file does).
 I am attaching a simple patch to SearchIndex that tries to load the file from 
 the classpath if it has not been found. Priority is given to the old 
 behavior (file before classpath), so it's fully backward compatible.
 Diff has been generated against trunk, it would be nice to have this patch 
 also on the 2.0 branch.
  
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-2504) Allow indexingConfiguration to be loaded from the classpath

2010-02-21 Thread fabrizio giustina (JIRA)
Allow indexingConfiguration to be loaded from the classpath
---

 Key: JCR-2504
 URL: https://issues.apache.org/jira/browse/JCR-2504
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: indexing
Affects Versions: 2.0.0, 1.6.1
Reporter: fabrizio giustina
 Attachments: indexingConfiguration_classpath.diff

The indexingConfiguration attribute in the SearchIndex configuration 
(http://wiki.apache.org/jackrabbit/IndexingConfiguration) currently requires an 
absolute filesystem path.

It would be nice if SearchIndex would also accept a file available in the 
classpath... although you can use variables like ${wsp.home} or similar there 
are many scenarios where a classpath resource would help (for example when 
creating a new workspace the directory structure is automatically created by 
jackrabbit and doesn't need to be already available but the indexing 
configuration file does).

I am attaching a simple patch to SearchIndex that tries to load the file from 
the classpath if it has not been found. Priority is given to the old 
behavior (file before classpath), so it's fully backward compatible.
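
Conceptually, the fallback in the patch is along these lines (an illustrative sketch only; the names are not the ones used in the actual diff):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

public class IndexingConfigLoader {

    // Old behavior first (file path), then fall back to the classpath.
    public InputStream open(String indexingConfiguration) throws IOException {
        File file = new File(indexingConfiguration);
        if (file.exists()) {
            return new FileInputStream(file);
        }
        InputStream in = getClass().getClassLoader().getResourceAsStream(indexingConfiguration);
        if (in == null) {
            throw new FileNotFoundException("Indexing configuration not found: " + indexingConfiguration);
        }
        return in;
    }
}
{code}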

Diff has been generated against trunk, it would be nice to have this patch also 
on the 2.0 branch.
 
 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-2504) Allow indexingConfiguration to be loaded from the classpath

2010-02-21 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-2504:
---

Attachment: indexingConfiguration_classpath.diff

SearchIndex patch

 Allow indexingConfiguration to be loaded from the classpath
 ---

 Key: JCR-2504
 URL: https://issues.apache.org/jira/browse/JCR-2504
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: indexing
Affects Versions: 1.6.1, 2.0.0
Reporter: fabrizio giustina
 Attachments: indexingConfiguration_classpath.diff


 The indexingConfiguration attribute in the SearchIndex configuration 
 (http://wiki.apache.org/jackrabbit/IndexingConfiguration) currently requires 
 an absolute filesystem path.
 It would be nice if SearchIndex would also accept a file available in the 
 classpath... although you can use variables like ${wsp.home} or similar there 
 are many scenarios where a classpath resource would help (for example when 
 creating a new workspace the directory structure is automatically created by 
 jackrabbit and doesn't need to be already available but the indexing 
 configuration file does).
 I am attaching a simple patch to SearchIndex that tries to load the file from 
 the classpath if it has not been found. Priority is given to the old 
 behavior (file before classpath), so it's fully backward compatible.
 Diff has been generated against trunk, it would be nice to have this patch 
 also on the 2.0 branch.
  
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1952) DOMException: NAMESPACE_ERR thrown when exporting document view

2009-02-15 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12673676#action_12673676
 ] 

fabrizio giustina commented on JCR-1952:


Just a question: what has been changed in the xml serialization code after 
1.4.x, and why? Was the change related to the problem with the java 1.4 
serializer you cited above (if so, is there a jira for that)?
Export in 1.4 seems to work fine in these situations, so maybe rolling back to the 
old code could be an option...



 DOMException: NAMESPACE_ERR thrown when exporting document view
 ---

 Key: JCR-1952
 URL: https://issues.apache.org/jira/browse/JCR-1952
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: xml
Affects Versions: 1.5.2
Reporter: Lóránt Pintér
 Attachments: MANIFEST.MF


 When I try to export some nodes with ExportDocumentView I get a DOMException 
 with Jackrabbit 1.5.2. Version 1.4.6 works fine. Xerces version was 2.8.1.
 Code:
 Document document = documentBuilder.newDocument();
 Element exportElement = (Element) 
 document.appendChild(document.createElement("Export"));
 Result result = new DOMResult(exportElement);
 TransformerHandler transformerHandler = 
 saxTransformerFactory.newTransformerHandler();
 transformerHandler.setResult(result);
 session.exportDocumentView(workflowNode.getPath(), transformerHandler, true, 
 false);
 Exception:
 org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or 
 change an object in a way which is incorrect with regard to namespaces.
   at org.apache.xerces.dom.CoreDocumentImpl.checkDOMNSErr(Unknown Source)
   at org.apache.xerces.dom.AttrNSImpl.setName(Unknown Source)
   at org.apache.xerces.dom.AttrNSImpl.init(Unknown Source)
   at org.apache.xerces.dom.CoreDocumentImpl.createAttributeNS(Unknown 
 Source)
   at org.apache.xerces.dom.ElementImpl.setAttributeNS(Unknown Source)
   at 
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.startElement(SAX2DOM.java:194)
   at 
 com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.closeStartTag(ToXMLSAXHandler.java:204)
   at 
 com.sun.org.apache.xml.internal.serializer.ToSAXHandler.flushPending(ToSAXHandler.java:277)
   at 
 com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.startElement(ToXMLSAXHandler.java:646)
   at 
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerHandlerImpl.startElement(TransformerHandlerImpl.java:263)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.startElement(Exporter.java:438)
   at 
 org.apache.jackrabbit.commons.xml.DocumentViewExporter.exportNode(DocumentViewExporter.java:76)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNode(Exporter.java:298)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNodes(Exporter.java:214)
   at 
 org.apache.jackrabbit.commons.xml.DocumentViewExporter.exportNode(DocumentViewExporter.java:77)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNode(Exporter.java:295)
   at org.apache.jackrabbit.commons.xml.Exporter.export(Exporter.java:144)
   at 
 org.apache.jackrabbit.commons.AbstractSession.export(AbstractSession.java:461)
   at 
 org.apache.jackrabbit.commons.AbstractSession.exportDocumentView(AbstractSession.java:241)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1952) DOMException: NAMESPACE_ERR thrown when exporting document view

2009-02-14 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12673534#action_12673534
 ] 

fabrizio giustina commented on JCR-1952:


I saw the same problem with duplicate namespaces when running an application 
under WAS 6.1 (jackrabbit 1.5.0-1.5.2).
The webapp was bundling xalan 2.1.0 and xerces 2.8.1. Removing xalan causes the 
webapp to break in this case, so I can't confirm whether removing it would fix the 
export problem. With jackrabbit 1.4.6 everything works fine.


 DOMException: NAMESPACE_ERR thrown when exporting document view
 ---

 Key: JCR-1952
 URL: https://issues.apache.org/jira/browse/JCR-1952
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: xml
Affects Versions: 1.5.2
Reporter: Lóránt Pintér
 Attachments: MANIFEST.MF


 When I try to export some nodes with ExportDocumentView I get a DOMException 
 with Jackrabbit 1.5.2. Version 1.4.6 works fine. Xerces version was 2.8.1.
 Code:
 Document document = documentBuilder.newDocument();
 Element exportElement = (Element) 
 document.appendChild(document.createElement("Export"));
 Result result = new DOMResult(exportElement);
 TransformerHandler transformerHandler = 
 saxTransformerFactory.newTransformerHandler();
 transformerHandler.setResult(result);
 session.exportDocumentView(workflowNode.getPath(), transformerHandler, true, 
 false);
 Exception:
 org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or 
 change an object in a way which is incorrect with regard to namespaces.
   at org.apache.xerces.dom.CoreDocumentImpl.checkDOMNSErr(Unknown Source)
   at org.apache.xerces.dom.AttrNSImpl.setName(Unknown Source)
   at org.apache.xerces.dom.AttrNSImpl.init(Unknown Source)
   at org.apache.xerces.dom.CoreDocumentImpl.createAttributeNS(Unknown 
 Source)
   at org.apache.xerces.dom.ElementImpl.setAttributeNS(Unknown Source)
   at 
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.startElement(SAX2DOM.java:194)
   at 
 com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.closeStartTag(ToXMLSAXHandler.java:204)
   at 
 com.sun.org.apache.xml.internal.serializer.ToSAXHandler.flushPending(ToSAXHandler.java:277)
   at 
 com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.startElement(ToXMLSAXHandler.java:646)
   at 
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerHandlerImpl.startElement(TransformerHandlerImpl.java:263)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.startElement(Exporter.java:438)
   at 
 org.apache.jackrabbit.commons.xml.DocumentViewExporter.exportNode(DocumentViewExporter.java:76)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNode(Exporter.java:298)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNodes(Exporter.java:214)
   at 
 org.apache.jackrabbit.commons.xml.DocumentViewExporter.exportNode(DocumentViewExporter.java:77)
   at 
 org.apache.jackrabbit.commons.xml.Exporter.exportNode(Exporter.java:295)
   at org.apache.jackrabbit.commons.xml.Exporter.export(Exporter.java:144)
   at 
 org.apache.jackrabbit.commons.AbstractSession.export(AbstractSession.java:461)
   at 
 org.apache.jackrabbit.commons.AbstractSession.exportDocumentView(AbstractSession.java:241)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-1747) org.apache.jackrabbit.core.query.lucene.SearchIndex with in-memory lucene index

2008-09-21 Thread fabrizio giustina (JIRA)
org.apache.jackrabbit.core.query.lucene.SearchIndex with in-memory lucene index
---

 Key: JCR-1747
 URL: https://issues.apache.org/jira/browse/JCR-1747
 Project: Jackrabbit
  Issue Type: Improvement
  Components: indexing
Affects Versions: 1.4
Reporter: fabrizio giustina
Priority: Minor


If I'm not wrong, there is currently no way to configure SearchIndex to use a 
memory-only lucene index.

Since you can configure a repository using an 
org.apache.jackrabbit.core.state.mem.InMemPersistenceManager, it makes sense 
for the search index to offer a similar configuration.
MultiIndex and PersistentIndex currently always use an 
org.apache.lucene.store.FSDirectory; they should be refactored to 
allow switching to an org.apache.lucene.store.RAMDirectory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1694) System properties does not get replaced in a Cluster configuration

2008-08-11 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12621530#action_12621530
 ] 

fabrizio giustina commented on JCR-1694:


I wasn't aware of the org.apache.jackrabbit.core.cluster.node_id magic 
property, but since you can use system variables everywhere in the config file 
with a standard mechanism (except here, at the moment), IMHO it would be better 
to handle it consistently.
Since it's more generic, I think the way system properties are replaced using 
${variables} is easier and more self-explanatory. This could also fully replace 
the org.apache.jackrabbit.core.cluster.node_id property, by just setting the 
default to ${org.apache.jackrabbit.core.cluster.node_id}. My vote is to add the 
generic replacement of variables in the cluster config too.
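
Just to make the suggestion concrete, the kind of replacement I mean is roughly this (an illustrative helper, not jackrabbit's actual config parser):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariableInterpolator {

    private static final Pattern VARIABLE = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace each ${name} with the matching system property, leaving
    // unresolved placeholders untouched.
    public static String interpolate(String value) {
        Matcher matcher = VARIABLE.matcher(value);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            String replacement = System.getProperty(matcher.group(1), matcher.group(0));
            matcher.appendReplacement(result, Matcher.quoteReplacement(replacement));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
{code}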

 System properties does not get replaced in a Cluster configuration
 --

 Key: JCR-1694
 URL: https://issues.apache.org/jira/browse/JCR-1694
 Project: Jackrabbit
  Issue Type: Bug
  Components: config
Affects Versions: core 1.4.5
Reporter: fabrizio giustina
 Attachments: JCR-1694-fix.diff, JCR-1694-testcase.diff


 Since JCR-1304 has been added to jackrabbit 1.4 I guess this should be 
 reported as a bug...
 Still not debugged deeply, but if I try to configure a Cluster using:
 <Cluster id="${server}" syncDelay="10">
 after setting a "server" system property I expect to have the cluster 
 initialized properly using the value of that property... I just realized that 
 my cluster node gets initialized with the literal value "${server}" instead 
 :(
 Cluster config is a very good place to use system properties, since all 
 the configuration is usually identical between cluster nodes while the id 
 property must be different...
 Is there anything I missed/did wrong in my configuration?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1644) make NamespaceContext#getPrefix(java.lang.String) iterative instead of recursive

2008-07-30 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12618259#action_12618259
 ] 

fabrizio giustina commented on JCR-1644:


I can also confirm it definitely was a stack overflow, and that with the patch 
(without changing any other setting) I am able to successfully import the file.
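
For readers following along, the iterative rewrite described in the issue below is conceptually something like this (a hypothetical, simplified sketch; the real NamespaceContext keeps more state):

{code}
import java.util.HashMap;
import java.util.Map;

class SimpleNamespaceContext {

    private final SimpleNamespaceContext parent;
    private final Map<String, String> uriToPrefix = new HashMap<String, String>();

    SimpleNamespaceContext(SimpleNamespaceContext parent) {
        this.parent = parent;
    }

    void declarePrefix(String prefix, String uri) {
        uriToPrefix.put(uri, prefix);
    }

    // Walk up the parent chain with a loop instead of recursion, so deeply
    // nested XML cannot overflow the call stack.
    String getPrefix(String uri) {
        for (SimpleNamespaceContext context = this; context != null; context = context.parent) {
            String prefix = context.uriToPrefix.get(uri);
            if (prefix != null) {
                return prefix;
            }
        }
        return null;
    }
}
{code}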


 make NamespaceContext#getPrefix(java.lang.String) iterative instead of 
 recursive
 

 Key: JCR-1644
 URL: https://issues.apache.org/jira/browse/JCR-1644
 Project: Jackrabbit
  Issue Type: Improvement
  Components: jackrabbit-core, xml
Affects Versions: core 1.4.5
Reporter: Philippe Marschall
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: JCR-1644-patch.diff, NamespaceContext.java, 
 NamespaceContext.java.patch


 Currently the method 
 org.apache.jackrabbit.core.xml.NamespaceContext#getPrefix(java.lang.String) 
 uses recursion. For very large XML files (50 MB Magnolia website exports) 
 this causes a stack overflow. The method can easily be rewritten using 
 iteration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1644) make NamespaceContext#getPrefix(java.lang.String) iterative instead of recursive

2008-07-29 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12617715#action_12617715
 ] 

fabrizio giustina commented on JCR-1644:


thanks Jukka

 What on earth are you doing with 5000 levels of nested elements?

well, as said, that is what I got when trying to create a simple testcase to 
reproduce the problem; in a real case I can reliably reproduce it with a 100MB 
system view file with only (just checked) approximately 30-40 levels of 
nesting... probably the problem is triggered by some other factor I couldn't 
reproduce in a test.


 make NamespaceContext#getPrefix(java.lang.String) iterative instead of 
 recursive
 

 Key: JCR-1644
 URL: https://issues.apache.org/jira/browse/JCR-1644
 Project: Jackrabbit
  Issue Type: Improvement
  Components: jackrabbit-core, xml
Affects Versions: core 1.4.5
Reporter: Philippe Marschall
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: JCR-1644-patch.diff, NamespaceContext.java, 
 NamespaceContext.java.patch


 Currently the method 
 org.apache.jackrabbit.core.xml.NamespaceContext#getPrefix(java.lang.String) 
 uses recursion. For very large XML files (50 MB Magnolia website exports) 
 this causes a stack overflow. The method can easily be rewritten using 
 iteration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-1693) JNDIDatabaseJournal doesn't work with oracle schema (or: unable to use OracleDatabaseJournal with a jndi datasource)

2008-07-27 Thread fabrizio giustina (JIRA)
JNDIDatabaseJournal doesn't work with oracle schema (or: unable to use 
OracleDatabaseJournal with a jndi datasource)
--

 Key: JCR-1693
 URL: https://issues.apache.org/jira/browse/JCR-1693
 Project: Jackrabbit
  Issue Type: Bug
  Components: clustering
Affects Versions: core 1.4.5
Reporter: fabrizio giustina


Database journal works fine on oracle when using the OracleDatabaseJournal 
implementation, but when you need to use a JNDI datasource you actually have to 
use org.apache.jackrabbit.core.journal.JNDIDatabaseJournal, which doesn't work 
well with the oracle schema.

With the following configuration:
<Cluster id="node1" syncDelay="10">
  <Journal class="org.apache.jackrabbit.core.journal.JNDIDatabaseJournal">
    <param name="schema" value="oracle"/>

jackrabbit crashes at startup with a not very meaningful SQL error. 
Investigating the problem, I see that the oracle.ddl file contains a 
tablespace variable that is replaced only by the OracleDatabaseJournal 
implementation.

As a workaround users can create a different ddl without the tablespace 
variable, but this should probably work out of the box.

WDYT about one of the following solutions?
- make the base DatabaseJournal implementation support a JNDI datasource just 
like the PersistenceManagers do (without a specific configuration property, by 
specifying a JNDI location in the url property; a rough sketch follows below)
- move the replacement of the tablespace variable (and maybe add a generic 
replacement of *any* parameter found in the DatabaseJournal configuration) to 
the main DatabaseJournal implementation. This could be handy and it would make 
the OracleDatabaseJournal extension unnecessary, but I see that at the moment 
there could be a problem with the MsSql implementation, since it prepends "on " 
to the tablespace name only when it is not set to an empty string.
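
For illustration only, a minimal sketch of how a journal could treat a JNDI 
name in the url property as a DataSource lookup (the prefix check and the 
helper name are assumptions for the example, not existing Jackrabbit API):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class JndiAwareConnectionFactory {
        // If the url looks like a JNDI name, look up a DataSource;
        // otherwise fall back to a plain JDBC connection.
        static Connection getConnection(String url, String user, String password)
                throws Exception {
            if (url.startsWith("java:")) {
                DataSource ds = (DataSource) new InitialContext().lookup(url);
                return ds.getConnection();
            }
            return DriverManager.getConnection(url, user, password);
        }
    }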







-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1644) make NamespaceContext#getPrefix(java.lang.String) iterative instead of recursive

2008-07-27 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12617250#action_12617250
 ] 

fabrizio giustina commented on JCR-1644:


In order to successfully import a large xml file I had to patch both getPrefix() 
and getURI() to use iteration as suggested by Philippe; there was no other way 
to make jackrabbit handle it...

Can I suggest changing this improvement to a bug, "Unable to import large xml 
files"?
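
For illustration, a minimal sketch of the iterative lookup idea (a simplified 
stand-in, not the real org.apache.jackrabbit.core.xml.NamespaceContext class):

    import java.util.HashMap;
    import java.util.Map;

    class SimpleNamespaceContext {
        private final SimpleNamespaceContext parent;
        private final Map<String, String> prefixToUri = new HashMap<String, String>();

        SimpleNamespaceContext(SimpleNamespaceContext parent) {
            this.parent = parent;
        }

        void declare(String prefix, String uri) {
            prefixToUri.put(prefix, uri);
        }

        // Walk up the parent chain in a loop instead of recursing, so a deep
        // chain of contexts cannot overflow the call stack.
        String getURI(String prefix) {
            for (SimpleNamespaceContext ctx = this; ctx != null; ctx = ctx.parent) {
                String uri = ctx.prefixToUri.get(prefix);
                if (uri != null) {
                    return uri;
                }
            }
            return null;
        }
    }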

 make NamespaceContext#getPrefix(java.lang.String) iterative instead of 
 recursive
 

 Key: JCR-1644
 URL: https://issues.apache.org/jira/browse/JCR-1644
 Project: Jackrabbit
  Issue Type: Improvement
  Components: jackrabbit-core
Reporter: Philippe Marschall
Priority: Minor
 Attachments: NamespaceContext.java, NamespaceContext.java.patch


 Currently the method 
 org.apache.jackrabbit.core.xml.NamespaceContext#getPrefix(java.lang.String) 
 uses recursion. For very large XML files (50 MB Magnolia website exports) 
 this causes a stack overflow. The method can easily be rewritten using 
 iteration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-1694) System properties does not get replaced in a Cluster configuration

2008-07-27 Thread fabrizio giustina (JIRA)
System properties does not get replaced in a Cluster configuration
--

 Key: JCR-1694
 URL: https://issues.apache.org/jira/browse/JCR-1694
 Project: Jackrabbit
  Issue Type: Bug
  Components: config
Affects Versions: core 1.4.5
Reporter: fabrizio giustina


Since JCR-1304 has been added to jackrabbit 1.4 I guess this should be reported 
as a bug...

Still not debugged deeply, but if I try to configure a Cluster using:
<Cluster id="${server}" syncDelay="10">

after setting a "server" system property, I expect the cluster to be 
initialized with the value of that property... I just realized that my cluster 
node gets initialized with the literal value ${server} instead :(

Cluster config is a very good place to use system properties, since all the 
configuration is usually identical between cluster nodes while the id property 
must be different...

Is there anything I missed/did wrong in my configuration?


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-1694) System properties does not get replaced in a Cluster configuration

2008-07-27 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-1694:
---

Attachment: JCR-1694-testcase.diff

failing testcase

 System properties does not get replaced in a Cluster configuration
 --

 Key: JCR-1694
 URL: https://issues.apache.org/jira/browse/JCR-1694
 Project: Jackrabbit
  Issue Type: Bug
  Components: config
Affects Versions: core 1.4.5
Reporter: fabrizio giustina
 Attachments: JCR-1694-testcase.diff


 Since JCR-1304 has been added to jackrabbit 1.4 I guess this should be 
 reported as a bug...
 Still not debugged deeply, but if I try to configure a Cluster using:
 <Cluster id="${server}" syncDelay="10">
 after setting a server system property I expect to have the cluster 
 initialized properly using the value of such property... I just realized that 
 my cluster node gets initialized with the final value of ${server} instead 
 :(
 Cluster config is a very good place where to use system properties, since all 
 the configuration is usually identical between cluster nodes while the id 
 property must be different...
 Is there anything I missed/did wrong in my configuration?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-1694) System properties does not get replaced in a Cluster configuration

2008-07-27 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-1694:
---

Attachment: JCR-1694-fix.diff

testcase + fix for parsing system properties in cluster id and syncDelay 
properties

 System properties does not get replaced in a Cluster configuration
 --

 Key: JCR-1694
 URL: https://issues.apache.org/jira/browse/JCR-1694
 Project: Jackrabbit
  Issue Type: Bug
  Components: config
Affects Versions: core 1.4.5
Reporter: fabrizio giustina
 Attachments: JCR-1694-fix.diff, JCR-1694-testcase.diff


 Since JCR-1304 has been added to jackrabbit 1.4 I guess this should be 
 reported as a bug...
 Still not debugged deeply, but if I try to configure a Cluster using:
 <Cluster id="${server}" syncDelay="10">
 after setting a server system property I expect to have the cluster 
 initialized properly using the value of such property... I just realized that 
 my cluster node gets initialized with the final value of ${server} instead 
 :(
 Cluster config is a very good place where to use system properties, since all 
 the configuration is usually identical between cluster nodes while the id 
 property must be different...
 Is there anything I missed/did wrong in my configuration?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1694) System properties does not get replaced in a Cluster configuration

2008-07-27 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12617280#action_12617280
 ] 

fabrizio giustina commented on JCR-1694:


(if the patch looks good it would be great to have it merged to the 1.4 branch 
for the upcoming 1.4.6 release, thanks)

 System properties does not get replaced in a Cluster configuration
 --

 Key: JCR-1694
 URL: https://issues.apache.org/jira/browse/JCR-1694
 Project: Jackrabbit
  Issue Type: Bug
  Components: config
Affects Versions: core 1.4.5
Reporter: fabrizio giustina
 Attachments: JCR-1694-fix.diff, JCR-1694-testcase.diff


 Since JCR-1304 has been added to jackrabbit 1.4 I guess this should be 
 reported as a bug...
 Still not debugged deeply, but if I try to configure a Cluster using:
 <Cluster id="${server}" syncDelay="10">
 after setting a server system property I expect to have the cluster 
 initialized properly using the value of such property... I just realized that 
 my cluster node gets initialized with the final value of ${server} instead 
 :(
 Cluster config is a very good place where to use system properties, since all 
 the configuration is usually identical between cluster nodes while the id 
 property must be different...
 Is there anything I missed/did wrong in my configuration?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1644) make NamespaceContext#getPrefix(java.lang.String) iterative instead of recursive

2008-06-15 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12605163#action_12605163
 ] 

fabrizio giustina commented on JCR-1644:


I got the StackOverflowError too while trying to import large xml files with 
jackrabbit 1.4.5:

java.lang.StackOverflowError
    at java.util.HashMap.get(HashMap.java:305)
    at org.apache.jackrabbit.core.xml.NamespaceContext.getURI(NamespaceContext.java:93)
    at org.apache.jackrabbit.core.xml.NamespaceContext.getURI(NamespaceContext.java:97)
    at org.apache.jackrabbit.core.xml.NamespaceContext.getURI(NamespaceContext.java:97)
[...]

looks more like a bug than an improvement...

 make NamespaceContext#getPrefix(java.lang.String) iterative instead of 
 recursive
 

 Key: JCR-1644
 URL: https://issues.apache.org/jira/browse/JCR-1644
 Project: Jackrabbit
  Issue Type: Improvement
  Components: jackrabbit-core
Reporter: Philippe Marschall
Priority: Minor
 Attachments: NamespaceContext.java, NamespaceContext.java.patch


 Currently the method 
 org.apache.jackrabbit.core.xml.NamespaceContext#getPrefix(java.lang.String) 
 uses recursion. For very large XML files (50 MB Magnolia website exports) 
 this causes a stack overflow. The method can easily be rewritten using 
 iteration.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1562) JNDI data sources with various PersistenceManager: wrong default values

2008-05-06 Thread fabrizio giustina (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12594507#action_12594507
 ] 

fabrizio giustina commented on JCR-1562:


I noticed that the schema also gets a default value, for example in the oracle 
implementation:

if (getSchema() == null) {
    setSchema("oracle");
}


shouldn't that be removed too?



 JNDI data sources with various PersistenceManager: wrong default values
 ---

 Key: JCR-1562
 URL: https://issues.apache.org/jira/browse/JCR-1562
 Project: Jackrabbit
  Issue Type: Bug
Affects Versions: core 1.4.2
Reporter: Sven Rieckhoff
 Attachments: bundleDbNoDefaultUserPassword.txt


 With JCR-1305 Jackrabbit supports creating a connection through a JNDI 
 Datasource without configuring user and password. This works for some but 
 not all provided PersistenceManagers. Some of them - like the Oracle-specific 
 BundleDBPersistenceManager - set default values for user and password if 
 none are provided in the jackrabbit config. This way it's impossible to use 
 such PersistenceManagers with a plain JNDI DS.
 This concerns the following BundleDbPersistenceManagers: 
 OraclePersistenceManager, DerbyPersistenceManager, H2PersistenceManager.
 There also might be other PMs (perhaps some special 
 SimpleDbPersistenceManagers) with similar behaviour.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-1084) Maintan a stable ordering of properties in xml export

2007-11-02 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-1084:
---

Attachment: JCR-1084.diff

Very simple patch against the current 1.4 trunk.
This small change keeps the order of properties stable in exported xmls by 
sorting them alphabetically. I've checked that this doesn't break any existing 
unit test.


 Maintan a stable ordering of properties in xml export
 -

 Key: JCR-1084
 URL: https://issues.apache.org/jira/browse/JCR-1084
 Project: Jackrabbit
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: fabrizio giustina
Priority: Minor
 Attachments: JCR-1084.diff


 When exporting to xml (system view, not tested with document view) the order 
 of properties is not consistent.
 This is not an issue with the jcr specification, since the order of 
 properties is undefined, but keeping the same (whatever) order in xml export 
 could be useful.
 At this moment if you try running a few import-export-import-export 
 roundtrips you will notice that the exported xml often changes. This is an 
 example of the differences you can see:
   <sv:property sv:name="jcr:uuid" sv:type="String">
     <sv:value>59357999-b4fb-45cd-8111-59277caf14b7</sv:value>
   </sv:property>
 +  <sv:property sv:name="title" sv:type="String">
 +    <sv:value>test</sv:value>
 +  </sv:property>
   <sv:property sv:name="visible" sv:type="String">
     <sv:value>true</sv:value>
   </sv:property>
 -  <sv:property sv:name="title" sv:type="String">
 -    <sv:value>test</sv:value>
 -  </sv:property>
 If you need to diff two exported files this can be pretty annoying: you have 
 no clear way to tell whether something has really changed or not.
 I would propose to keep the ordering consistent between exports: an easy way 
 could be sorting properties alphabetically during export.
 This behavior has been tested on a recent jackrabbit build from trunk 
 (1.4-SNAPSHOT)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-1084) Maintan a stable ordering of properties in xml export

2007-08-24 Thread fabrizio giustina (JIRA)
Maintan a stable ordering of properties in xml export
-

 Key: JCR-1084
 URL: https://issues.apache.org/jira/browse/JCR-1084
 Project: Jackrabbit
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: fabrizio giustina
Priority: Minor


When exporting to xml (system view, not tested with document view) the order of 
properties is not consistent.
This is not an issue with the jcr specification, since the order of properties 
is undefined, but keeping the same (whatever) order in xml export could be 
useful.

At this moment if you try running a few import-export-import-export 
roundtrips you will notice that the exported xml often changes. This is an 
example of the differences you can see:

  <sv:property sv:name="jcr:uuid" sv:type="String">
    <sv:value>59357999-b4fb-45cd-8111-59277caf14b7</sv:value>
  </sv:property>
+  <sv:property sv:name="title" sv:type="String">
+    <sv:value>test</sv:value>
+  </sv:property>
  <sv:property sv:name="visible" sv:type="String">
    <sv:value>true</sv:value>
  </sv:property>
-  <sv:property sv:name="title" sv:type="String">
-    <sv:value>test</sv:value>
-  </sv:property>

If you need to diff two exported files this can be pretty annoying: you have no 
clear way to tell whether something has really changed or not.
I would propose to keep the ordering consistent between exports: an easy way 
could be sorting properties alphabetically during export.

This behavior has been tested on a recent jackrabbit build from trunk 
(1.4-SNAPSHOT)
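
For illustration only, a minimal sketch of the "sort before writing" idea (a 
stand-alone helper, not the attached JCR-1084.diff):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class StableExportOrder {
        // Return the property names in plain alphabetical order, so repeated
        // exports of the same node always emit them in the same sequence.
        static List<String> sortedPropertyNames(List<String> names) {
            List<String> sorted = new ArrayList<String>(names);
            Collections.sort(sorted);
            return sorted;
        }
    }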





-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-788) Upgrade to Lucene 2.2

2007-07-17 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-788:
--

Attachment: lucene_2.2.0_patch.diff

The attached patch upgrades the lucene dependency to 2.2.0 (pom update plus a 
few additional methods added in FilteredTermPositions).
Tests run fine after the patch; you should be able to apply it without 
problems.

 Upgrade to Lucene 2.2
 -

 Key: JCR-788
 URL: https://issues.apache.org/jira/browse/JCR-788
 Project: Jackrabbit
  Issue Type: Improvement
  Components: query
Reporter: Marcel Reutegger
Priority: Minor
 Fix For: 1.4

 Attachments: lucene_2.2.0_patch.diff, patch.txt


 Lucene 2.1 contains a number of useful enhancements, which could be beneficial 
 to jackrabbit:
 - less locking on index updates -> fewer IO calls
 - introduces FieldSelector -> allows jackrabbit to only load required fields

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Spec compliance regarding child node types of nt:frozenNode

2007-03-05 Thread Fabrizio Giustina

On 3/5/07, Cédric Damioli [EMAIL PROTECTED] wrote:

When checking in a versionable Node A, with a child B
(OnParentVersion=COPY) of whatever nodetype, the corresponding
nt:frozenNode has a child node B of type nt:frozenNode and not of
its initial nodetype.


I recently noticed this too and, if I am not wrong, the behavior of
jackrabbit 1.0.x was different.
Only in recent builds (surely with 1.2.x, not tested on 1.1.x) do child
nodes seem to inherit the nt:frozenNode nodetype, while in 1.0 they
keep the original nodetype. I couldn't find any trace of this change
in the release notes, so I am not sure it is intentional...
The specs are not really clear about that... does anybody know the reason
for this change, or could it simply be a bug?

fabrizio


[jira] Updated: (JCR-764) PdfTextFilter may leave parsed document open in case of errors

2007-02-23 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-764:
--

Attachment: textfilter_close.diff

simple patch which adds an additional cleanup on exceptions.

 PdfTextFilter may leave parsed document open in case of errors
 --

 Key: JCR-764
 URL: https://issues.apache.org/jira/browse/JCR-764
 Project: Jackrabbit
  Issue Type: Bug
Affects Versions: 1.2.2
Reporter: fabrizio giustina
Priority: Trivial
 Attachments: textfilter_close.diff


 In case of errors in a parsed PDF document jackrabbit may fail to properly 
 close the parsed document. PDFBox will write a stack trace to system out at 
 finalize time to warn against this.
 this is the resulting log:
 WARN org.apache.jackrabbit.core.query.LazyReader LazyReader.java(read:82) 
 20.02.2007 15:42:50 exception initializing reader 
 org.apache.jackrabbit.core.query.PdfTextFilter$1: java.io.IOException: Error: 
 Expected hex number, actual=' 2'
 java.lang.Throwable: Warning: You did not close the PDF Document
at org.pdfbox.cos.COSDocument.finalize(COSDocument.java:384)
at java.lang.ref.Finalizer.invokeFinalizeMethod(Native Method)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:83)
at java.lang.ref.Finalizer.access$100(Finalizer.java:14)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:160)
 this may happen because the parse() method at
 parser = new PDFParser(new BufferedInputStream(in));
 parser.parse();
 immediately creates a document, but it can throw an exception while 
 processing the file.
 PdfTextFilter should check if parser still holds a document and close it 
 appropriately.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-764) PdfTextFilter may leave parsed document open in case of errors

2007-02-23 Thread fabrizio giustina (JIRA)
PdfTextFilter may leave parsed document open in case of errors
--

 Key: JCR-764
 URL: https://issues.apache.org/jira/browse/JCR-764
 Project: Jackrabbit
  Issue Type: Bug
Affects Versions: 1.2.2
Reporter: fabrizio giustina
Priority: Trivial
 Attachments: textfilter_close.diff

In case of errors in a parsed PDF document jackrabbit may fail to properly 
close the parsed document. PDFBox will write a stack trace to system out at 
finalize time to warn against this.

this is the resulting log:

WARN org.apache.jackrabbit.core.query.LazyReader LazyReader.java(read:82) 
20.02.2007 15:42:50 exception initializing reader 
org.apache.jackrabbit.core.query.PdfTextFilter$1: java.io.IOException: Error: 
Expected hex number, actual=' 2'
java.lang.Throwable: Warning: You did not close the PDF Document
   at org.pdfbox.cos.COSDocument.finalize(COSDocument.java:384)
   at java.lang.ref.Finalizer.invokeFinalizeMethod(Native Method)
   at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:83)
   at java.lang.ref.Finalizer.access$100(Finalizer.java:14)
   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:160)


this may happen because the parse() method at

parser = new PDFParser(new BufferedInputStream(in));
parser.parse();

immediately creates a document, but it can throw an exception while processing 
the file.
PdfTextFilter should check if parser still holds a document and close it 
appropriately.
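
For illustration only, a sketch of the "close on error" idea (this is not the 
attached textfilter_close.diff; it uses the PDFBox 0.7.x-era classes named in 
the log above, and the exact accessor names should be treated as assumptions):

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.pdfbox.cos.COSDocument;
    import org.pdfbox.pdfparser.PDFParser;

    public class SafePdfParse {
        static COSDocument parseAndCloseOnError(InputStream in) throws IOException {
            PDFParser parser = new PDFParser(new BufferedInputStream(in));
            try {
                parser.parse();
            } catch (IOException e) {
                try {
                    COSDocument doc = parser.getDocument();
                    if (doc != null) {
                        doc.close();   // release the half-parsed document before rethrowing
                    }
                } catch (IOException ignored) {
                    // cleanup is best-effort; the original parse error is what matters
                }
                throw e;
            }
            return parser.getDocument();
        }
    }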



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-765) DatabasePersistenceManager: don't log exceptions for each statement when a connection needs to be reestablished

2007-02-23 Thread fabrizio giustina (JIRA)
DatabasePersistenceManager: don't log exceptions for each statement when a 
connection needs to be reestablished
---

 Key: JCR-765
 URL: https://issues.apache.org/jira/browse/JCR-765
 Project: Jackrabbit
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: fabrizio giustina
 Fix For: 1.2.3


This is just a cosmetic fix: when reestablishConnection() is called in 
DatabasePersistenceManager all the statements are closed, but if an error 
occurs two exceptions are logged for each statement.
Since reestablishConnection() is only called after an exception has already 
been caught, and its only purpose is to clean up the existing connection and 
open a new one, it is pretty common that the connection is already invalid and 
that closing each statement will throw an exception.

For example, if the connection has been broken due to a network problem, 
DatabasePersistenceManager will log *40* exceptions (2 for each statement) 
before trying to reestablish the connection, and that's pretty annoying 
(especially if you use a mail appender for log4j).
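
For illustration only, a sketch of closing a statement quietly during recovery 
(a generic helper, not the attached statementclose.diff):

    import java.sql.SQLException;
    import java.sql.Statement;

    public class QuietClose {
        // Close a statement without logging: during connection recovery the
        // connection is most likely already broken, so a failure to close is
        // expected and not worth an error log entry.
        static void closeSilently(Statement stmt) {
            if (stmt == null) {
                return;
            }
            try {
                stmt.close();
            } catch (SQLException ignored) {
                // expected when the underlying connection is gone
            }
        }
    }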


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (JCR-765) DatabasePersistenceManager: don't log exceptions for each statement when a connection needs to be reestablished

2007-02-23 Thread fabrizio giustina (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fabrizio giustina updated JCR-765:
--

Attachment: statementclose.diff

simple patch that removes the logException() call at statement close during 
reestablishConnection()

 DatabasePersistenceManager: don't log exceptions for each statement when a 
 connection needs to be reestablished
 ---

 Key: JCR-765
 URL: https://issues.apache.org/jira/browse/JCR-765
 Project: Jackrabbit
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: fabrizio giustina
 Fix For: 1.2.3

 Attachments: statementclose.diff


 This is just a cosmetic fix: when reestablishConnection() is called in 
 DatabasePersistenceManager all the statements are closed, but if an error 
 occurs two exceptions are logged for each statement.
 Since reestablishConnection() is only called after an exception has already 
 been caught, and its only purpose is to clean up the existing connection and 
 open a new one, it is pretty common that the connection is already invalid 
 and that closing each statement will throw an exception.
 For example, if the connection has been broken due to a network problem, 
 DatabasePersistenceManager will log *40* exceptions (2 for each statement) 
 before trying to reestablish the connection, and that's pretty annoying 
 (especially if you use a mail appender for log4j).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-760) Default blob size for mysql ddl too small

2007-02-21 Thread fabrizio giustina (JIRA)
Default blob size for mysql ddl too small
-

 Key: JCR-760
 URL: https://issues.apache.org/jira/browse/JCR-760
 Project: Jackrabbit
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: fabrizio giustina
Priority: Minor


the default datatype for:
NODE.NODE_DATA
PROP.PROP_DATA
REFS.REFS_DATA 
in the mysql ddl is BLOB, which is pretty small compared to the default size in 
other dbs.

When playing with a (not very large) jackrabbit repo using mysql for 
persistence I easily got data truncation errors on both NODE.NODE_DATA and 
PROP.PROP_DATA columns. The same issue has been reported in the past by other 
users.
Although anyone could easily create a custom ddl with larger fields, it would 
be nice to increase the blob size in the mysql ddl embedded in jackrabbit, in 
order to avoid this kind of problem for new users (you usually learn this the 
hard way, when the number of nodes in your repository starts to grow and 
jackrabbit starts throwing errors :/).
Changing BLOB to MEDIUMBLOB would make the default size for mysql more similar 
to the one in other dbs, without critically increasing the used space...



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [VOTE] Release Apache Jackrabbit 1.2.2

2007-02-19 Thread Fabrizio Giustina

On 2/16/07, Jukka Zitting [EMAIL PROTECTED] wrote:

Please vote on releasing these packages as Apache Jackrabbit 1.2.2.


a non binding +1 from a user, hoping that devs will cast their
positive vote soon ;)

cheers
fabrizio


[jira] Commented: (JCR-332) maven2 pom contribution

2006-07-16 Thread fabrizio giustina (JIRA)
[ 
http://issues.apache.org/jira/browse/JCR-332?page=comments#action_12421432 ] 

fabrizio giustina commented on JCR-332:
---

Hi Jukka,
there is now a maven plugin for m1 to m2 pom conversion. Still in the sandbox, 
but you might find it useful for quick conversion of all the remaining poms... 
you will have to build it from sources at
https://svn.apache.org/repos/asf/maven/sandbox/plugins/maven-maven1-plugin/

(you can then run mvn maven1:convert in order to obtain a base m2 pom)

 maven2 pom contribution
 ---

 Key: JCR-332
 URL: http://issues.apache.org/jira/browse/JCR-332
 Project: Jackrabbit
  Issue Type: New Feature
  Components: maven
Affects Versions: 1.0, 1.0.1, 0.9
Reporter: fabrizio giustina
 Assigned To: Jukka Zitting
Priority: Minor
 Fix For: 1.1

 Attachments: pom.xml, pom.xml


 If you are interested in migrating to maven2 (or adding optional maven 2 
 build scripts) this is a full maven 2 pom.xml for the main jackrabbit jar.
 All the xpath/javacc stuff, previously done in maven.xml, was pretty painful 
 to reproduce in maven2... the attached pom exactly reproduces the m1 build by 
 using the maven2 javacc plugin + a couple of antrun executions.
 Test configuration is not yet complete; I think it would be a lot better to 
 reproduce the previous behaviour (init tests run first) without any 
 customization (maybe using a single junit test suite with setUp tasks). Also, 
 the custom packaging goals added to maven.xml (which can be done easily in m2 
 by using the assembly plugin) are not reproduced yet either.
 If there is interest, I can also provide poms for the contribution projects 
 (that will be easy, the only complex pom is the main one).

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (JCR-401) contrib/bdb-persistence: update berkeleydb version

2006-04-15 Thread fabrizio giustina (JIRA)
contrib/bdb-persistence: update berkeleydb version
--

 Key: JCR-401
 URL: http://issues.apache.org/jira/browse/JCR-401
 Project: Jackrabbit
Type: Improvement

  Components: contrib PMs  
Versions: 1.0
Reporter: fabrizio giustina
Priority: Minor


berkeleydb dependency should be updated to 2.0.83, already available at 
ibiblio. At this moment project.xml lists 1.7.1, which is very old.
There are no code changes required, and the PM works correctly with berkeleydb 
2.0.83

<dependency>
  <groupId>berkeleydb</groupId>
  <artifactId>je</artifactId>
  <version>2.0.83</version>
  <type>jar</type>
</dependency>

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (JCR-402) release module jackrabbit-bdb 1.0

2006-04-15 Thread fabrizio giustina (JIRA)
release module jackrabbit-bdb 1.0
-

 Key: JCR-402
 URL: http://issues.apache.org/jira/browse/JCR-402
 Project: Jackrabbit
Type: Wish

  Components: contrib PMs  
Versions: 1.0
Reporter: fabrizio giustina


The berkeleydb persistence manager in contrib is stable and has been used 
successfully by several users.
It may be worth an official release, after committing the two trivial fixes in:
 JCR-298 missing blob remove
 JCR-401 update berkeleydb version in project.xml


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira