[jira] [Commented] (PYLUCENE-27) JCC should be able to create sdist archives

2013-10-31 Thread Andi Vajda (JIRA)

[ 
https://issues.apache.org/jira/browse/PYLUCENE-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810843#comment-13810843
 ] 

Andi Vajda commented on PYLUCENE-27:


I have no idea how to do this or if this is even possible (I assume so).
A patch implementing this would be more than welcome.

 JCC should be able to create sdist archives
 ---

 Key: PYLUCENE-27
 URL: https://issues.apache.org/jira/browse/PYLUCENE-27
 Project: PyLucene
  Issue Type: Wish
 Environment: jcc-svn-head
Reporter: Martin

 I was not able to create a complete source distribution (complete in the 
 sense that one is able to compile and install the desired wrapper from it).
 I've tried the following calls:
   python -m jcc --jar foo --egg-info --extra-setup-arg sdist
 and
   python -m jcc --jar foo --extra-setup-arg sdist
 Both create archives containing only the egg-info and setup.py, but no source 
 code at all.
 I really need this feature for my testing environment with tox, since tox 
 depends heavily on the sdist feature.
 thanks,
 best,
 Martin



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-5217:
--

Assignee: Steve Rowe

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5217:
---

Attachment: LUCENE-5217.patch

Patch, hopefully complete.  

In addition to all tests passing in the Maven build after {{ant 
get-maven-poms}}, {{generate-maven-artifacts}} and {{precommit}} both pass.

I'm running {{ant validate-maven-dependencies}} and {{ant nightly-smoke}} now, 
and if no problems surface, I'll commit to trunk.  I plan on letting it soak 
for a few days before backporting to branch_4x. 

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810005#comment-13810005
 ] 

Steve Rowe edited comment on LUCENE-5217 at 10/31/13 7:47 AM:
--

Patch, hopefully complete.  

In addition to all tests passing in the Maven build after {{ant 
get-maven-poms}}, {{generate-maven-artifacts}} and {{precommit}} both pass.

I'm running {{ant validate-maven-dependencies}} and {{ant nightly-smoke}} now, 
and if no problems surface, I'll commit to trunk.  I plan on letting it soak 
for a few days before backporting to branch_4x. 


was (Author: steve_rowe):
Patch, hopefully complete.  

In addition to all tests passing in the Maven build after {{ant 
get-maven-poms}}, {{generate-maven-artifacts}}and {{precommit}} all pass.

I'm running {{ant validate-maven-dependencies}} {{ant nightly-smoke}} now, and 
if no problems surface, I'll commit to trunk.  I plan on letting it soak for a 
few days before backporting to branch_4x. 

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5217:
---

Attachment: LUCENE-5217.patch

A last-minute change made before posting the previous version of the patch - 
intended to fix a problem turned up by {{validate-maven-dependencies}}, namely 
that {{jetty-start}} was depended on but has no checksum in {{solr/licenses/}} 
- broke other stuff.  That's fixed in this version of the patch.

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch, 
 LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5084) new field type - EnumField

2013-10-31 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir updated SOLR-5084:
-

Attachment: Solr-5084.trunk.patch

 new field type - EnumField
 --

 Key: SOLR-5084
 URL: https://issues.apache.org/jira/browse/SOLR-5084
 Project: Solr
  Issue Type: New Feature
Reporter: Elran Dvir
Assignee: Erick Erickson
 Attachments: Solr-5084.patch, Solr-5084.patch, Solr-5084.patch, 
 Solr-5084.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, enumsConfig.xml, 
 schema_example.xml


 We have encountered a use case in our system where we have a few fields 
 (Severity, Risk, etc.) with a closed set of values, where the sort order for 
 these values is pre-determined but not lexicographic (Critical is higher than 
 High). Generically, this is very close to how enums work.
 To implement this, I have prototyped a new type of field: EnumField, where 
 the inputs are a closed, predefined set of strings in a special configuration 
 file (similar to currency.xml).
 The code is based on 4.2.1.
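
A minimal sketch of the core idea, with hypothetical labels and ordinals (this 
is not the attached patch): a closed set of values whose sort order comes from 
configuration rather than from the strings themselves.

{noformat}
import java.util.LinkedHashMap;
import java.util.Map;

// Closed set of labels whose sort order is defined by configuration, not by
// lexicographic order. Labels and ordinals here are illustrative.
public class SeverityEnumSketch {
    private static final Map<String, Integer> ORDER = new LinkedHashMap<String, Integer>();
    static {
        // Configured order: Low < Medium < High < Critical
        ORDER.put("Low", 0);
        ORDER.put("Medium", 1);
        ORDER.put("High", 2);
        ORDER.put("Critical", 3);
    }

    // Index and sort on the integer; display the original label.
    static int toOrdinal(String label) {
        Integer ord = ORDER.get(label);
        if (ord == null) {
            throw new IllegalArgumentException("Unknown enum value: " + label);
        }
        return ord;
    }

    public static void main(String[] args) {
        // "Critical" sorts above "High" even though it is lexicographically smaller.
        System.out.println(toOrdinal("Critical") > toOrdinal("High"));  // true
    }
}
{noformat}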



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 427 - Failure

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/427/

All tests passed

Build Log:
[...truncated 30731 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/build.xml:429:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/build.xml:60:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/build.xml:257:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/build.xml:566:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/common-build.xml:2203:
 Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/build/docs/changes/jiraVersionList.json

Total time: 122 minutes 16 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5316) Taxonomy tree traversing improvement

2013-10-31 Thread Gilad Barkai (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810087#comment-13810087
 ] 

Gilad Barkai commented on LUCENE-5316:
--

I like the {{null}} for ordinals with no siblings. Made the change, and now I'm 
chasing all the NPEs it caused; hope to get a new patch up soon.

As for allowing the taxonomy to report the depth of each dimension - that's 
trickier. Obviously, that's a temporary state, as any flat dimension can become 
non-flat. Also, figuring this out during search (more precisely, once per 
opening of the taxonomy reader) is O(taxonomy size) with the current 
implementation.

Perhaps, if we're willing to invest a little more time during indexing, we 
could roll up and tell the parents (say, in an incremental numeric field 
update) how deep their children go?
In such a case, we could benefit not only for flat dimensions, but whenever an 
ordinal has no grandchildren. Investing during indexing would make it an O(1) 
operation.

 Taxonomy tree traversing improvement
 

 Key: LUCENE-5316
 URL: https://issues.apache.org/jira/browse/LUCENE-5316
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Gilad Barkai
Priority: Minor
 Attachments: LUCENE-5316.patch


 Taxonomy traversal is done today using the {{ParallelTaxonomyArrays}} - in 
 particular, two taxonomy-size {{int}} arrays which hold, for each ordinal, 
 its (array #1) youngest child and (array #2) older sibling.
 This is a compact way of holding the tree information in memory, but it's not 
 perfect:
 * Large (8 bytes per ordinal in memory)
 * Exposes internal implementation
 * Using these arrays for tree traversal is not straightforward
 * Loses reference locality while traversing (the arrays are accessed at 
 increasing entries only, but those entries may be distant from one another)
 * In NRT, a reopen is always (not just in the worst case) O(taxonomy size)
 This issue is about making the traversal easier and the code more readable, 
 and opening it up for future improvements (i.e. memory footprint and NRT 
 cost) - without changing any of the internals. 
 A later issue (or issues) could be opened to address those gaps once this one 
 is done.
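
For readers unfamiliar with the two arrays described above, a minimal 
traversal sketch follows; the array names and the -1 sentinel are illustrative 
and not the actual ParallelTaxonomyArrays API.

{noformat}
// Sketch of the traversal pattern implied by the children/siblings arrays:
// children[ord] holds the youngest child of ord, siblings[ord] holds the
// next older sibling of ord, and -1 marks "none".
public class TaxonomyTraversalSketch {
    static final int NO_ORDINAL = -1;

    static void visitSubtree(int ordinal, int[] children, int[] siblings) {
        // Start at the youngest child, then follow the sibling chain.
        for (int child = children[ordinal]; child != NO_ORDINAL; child = siblings[child]) {
            System.out.println("visiting ordinal " + child);
            visitSubtree(child, children, siblings);
        }
    }

    public static void main(String[] args) {
        // Tiny taxonomy: 0 = root, 1 and 2 are children of root, 3 is a child of 1.
        int[] children = {2, 3, NO_ORDINAL, NO_ORDINAL};           // youngest child per ordinal
        int[] siblings = {NO_ORDINAL, NO_ORDINAL, 1, NO_ORDINAL};  // older sibling per ordinal
        visitSubtree(0, children, siblings);                       // prints 2, 1, 3
    }
}
{noformat}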



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5401) In Solr's ResourceLoader, add a check for @Deprecated annotation in the plugin/analysis/... class loading code, so we print a warning in the log if a deprecated factory

2013-10-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810086#comment-13810086
 ] 

Uwe Schindler commented on SOLR-5401:
-

[~jkrupan]: I will open another issue to make something for capturing log 
output from tests available in the solr-testframework package. Once that is 
done, we can add the check for this issue. For now, the whole thing works and 
already prints some warnings with the default Solr config, which should be 
cleaned up to be deprecation-free (see SOLR-5404).

 In Solr's ResourceLoader, add a check for @Deprecated annotation in the 
 plugin/analysis/... class loading code, so we print a warning in the log if a 
 deprecated factory class is used
 --

 Key: SOLR-5401
 URL: https://issues.apache.org/jira/browse/SOLR-5401
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.6, 4.5
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.6, 5.0

 Attachments: SOLR-5401.patch


 While converting an antique 3.6 schema.xml to Solr 4.5, I noticed that some 
 factories were deprecated in 3.x and were no longer available in 4.x (e.g. 
 solr._Language_PorterStemFilterFactory). If the user had been given a notice 
 earlier, this could have been prevented and the user would have upgraded 
 sooner.
 In fact the factories were @Deprecated in 3.6, but the Solr loader does not 
 print any warning. My proposal is to add some simple code to 
 SolrResourceLoader so that it prints a warning about the deprecated class 
 whenever a configuration setting loads a class carrying the @Deprecated 
 annotation. That way we can prevent this problem in the future.
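
A minimal sketch of the kind of check proposed here (not the committed patch; 
class and method names are illustrative): warn whenever a resolved 
plugin/analysis class carries @Deprecated.

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Warn when a resolved plugin/analysis class carries @Deprecated. Names are
// illustrative; the real check would live in SolrResourceLoader.
public class DeprecationCheckSketch {
    private static final Logger log = LoggerFactory.getLogger(DeprecationCheckSketch.class);

    static <T> Class<? extends T> checkForDeprecation(Class<? extends T> clazz) {
        // @Deprecated has RUNTIME retention, so it is visible via reflection.
        if (clazz.isAnnotationPresent(Deprecated.class)) {
            log.warn("Solr loaded a deprecated plugin/analysis class [{}]. "
                + "Please consult documentation how to replace it accordingly.", clazz.getName());
        }
        return clazz;
    }
}
{noformat}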



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5404) Fix solr example config to no longer use deprecated stuff

2013-10-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-5404:


Affects Version/s: 4.5

 Fix solr example config to no longer use deprecated stuff
 -

 Key: SOLR-5404
 URL: https://issues.apache.org/jira/browse/SOLR-5404
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.5
Reporter: Uwe Schindler

 After committing SOLR-5401 to branch_4x, I noticed that the example prints 
 the following warnings on startup:
 {noformat}
 16:09:39 WARN SolrResourceLoader
 Solr loaded a deprecated plugin/analysis class 
 [solr.JsonUpdateRequestHandler]. Please consult documentation how to replace 
 it accordingly.
 16:09:39 WARN SolrResourceLoader
 Solr loaded a deprecated plugin/analysis class [solr.CSVRequestHandler]. 
 Please consult documentation how to replace it accordingly.
 {noformat}
 We should fix this in the example config.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5084) new field type - EnumField

2013-10-31 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir updated SOLR-5084:
-

Attachment: Solr-5084.trunk.patch

 new field type - EnumField
 --

 Key: SOLR-5084
 URL: https://issues.apache.org/jira/browse/SOLR-5084
 Project: Solr
  Issue Type: New Feature
Reporter: Elran Dvir
Assignee: Erick Erickson
 Attachments: Solr-5084.patch, Solr-5084.patch, Solr-5084.patch, 
 Solr-5084.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, enumsConfig.xml, 
 schema_example.xml


 We have encountered a use case in our system where we have a few fields 
 (Severity, Risk, etc.) with a closed set of values, where the sort order for 
 these values is pre-determined but not lexicographic (Critical is higher than 
 High). Generically, this is very close to how enums work.
 To implement this, I have prototyped a new type of field: EnumField, where 
 the inputs are a closed, predefined set of strings in a special configuration 
 file (similar to currency.xml).
 The code is based on 4.2.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5084) new field type - EnumField

2013-10-31 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810092#comment-13810092
 ] 

Elran Dvir commented on SOLR-5084:
--

Attached a new patch that fixes our test issues. Added putConfig(conf1, 
zkClient, solrhome, "enumsConfig.xml"); in buildZooKeeper in 
LeaderElectionIntegrationTest.
There are still test errors, but I don't think they're related to our patch 
(thread leaked from SUITE scope at org.apache.solr.servlet.SolrRequestParserTest).

Thanks.

 new field type - EnumField
 --

 Key: SOLR-5084
 URL: https://issues.apache.org/jira/browse/SOLR-5084
 Project: Solr
  Issue Type: New Feature
Reporter: Elran Dvir
Assignee: Erick Erickson
 Attachments: Solr-5084.patch, Solr-5084.patch, Solr-5084.patch, 
 Solr-5084.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, Solr-5084.trunk.patch, 
 Solr-5084.trunk.patch, Solr-5084.trunk.patch, enumsConfig.xml, 
 schema_example.xml


 We have encountered a use case in our system where we have a few fields 
 (Severity, Risk, etc.) with a closed set of values, where the sort order for 
 these values is pre-determined but not lexicographic (Critical is higher than 
 High). Generically, this is very close to how enums work.
 To implement this, I have prototyped a new type of field: EnumField, where 
 the inputs are a closed, predefined set of strings in a special configuration 
 file (similar to currency.xml).
 The code is based on 4.2.1.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5353) Enhance CoreAdmin api to split a route key's documents from an index

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810099#comment-13810099
 ] 

ASF subversion and git services commented on SOLR-5353:
---

Commit 1537430 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1537430 ]

SOLR-5353: Enhance CoreAdmin api to split a route key's documents from an index 
and leave behind all other documents

 Enhance CoreAdmin api to split a route key's documents from an index
 

 Key: SOLR-5353
 URL: https://issues.apache.org/jira/browse/SOLR-5353
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5353-allow-single-range.patch, SOLR-5353.patch


 Allow a split key to be passed in to the CoreAdmin SPLIT action so that we 
 can split only a particular route key's documents out of the index.
 E.g. consider an index containing documents belonging to two route keys with 
 hash ranges A!=[12,15] and B!=[13,17]. We want to split out all documents 
 having route key 'A!' while leaving behind any documents having route key 
 'B!', even though some documents with 'B!' fall into the hash range of 'A!'.
 This is different from what was achieved in SOLR-5338, because that issue 
 splits all documents belonging to the hash range of a given route key. Since 
 multiple keys can have overlapping hash ranges and we were splitting into the 
 same collection, we had no choice but to move all documents belonging to the 
 hash range into the new shard.
 In this particular issue, we are trying to migrate documents to a different 
 collection, and therefore we can leave documents having other route keys 
 behind.
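
A minimal sketch of the selection rule described above, under the assumption 
that route keys use the compositeId 'A!' prefix form; this is an illustration, 
not the SOLR-5353 implementation.

{noformat}
// A document is split out only if its compositeId route key (the part up to
// and including '!') equals the requested key, regardless of which documents
// happen to hash into that key's range. Illustrative helper only.
public class RouteKeyMatchSketch {
    static boolean matchesRouteKey(String docId, String routeKey) {
        int sep = docId.indexOf('!');
        if (sep < 0) {
            return false;                              // document has no route key
        }
        return docId.substring(0, sep + 1).equals(routeKey);
    }

    public static void main(String[] args) {
        System.out.println(matchesRouteKey("A!doc1", "A!"));  // true  -> split out
        System.out.println(matchesRouteKey("B!doc7", "A!"));  // false -> left behind
    }
}
{noformat}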



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5353) Enhance CoreAdmin api to split a route key's documents from an index

2013-10-31 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5353.
-

Resolution: Fixed

 Enhance CoreAdmin api to split a route key's documents from an index
 

 Key: SOLR-5353
 URL: https://issues.apache.org/jira/browse/SOLR-5353
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5353-allow-single-range.patch, SOLR-5353.patch


 Allow a split key to be passed in to the CoreAdmin SPLIT action so that we 
 can split only a particular route key's documents out of the index.
 E.g. consider an index containing documents belonging to two route keys with 
 hash ranges A!=[12,15] and B!=[13,17]. We want to split out all documents 
 having route key 'A!' while leaving behind any documents having route key 
 'B!', even though some documents with 'B!' fall into the hash range of 'A!'.
 This is different from what was achieved in SOLR-5338, because that issue 
 splits all documents belonging to the hash range of a given route key. Since 
 multiple keys can have overlapping hash ranges and we were splitting into the 
 same collection, we had no choice but to move all documents belonging to the 
 hash range into the new shard.
 In this particular issue, we are trying to migrate documents to a different 
 collection, and therefore we can leave documents having other route keys 
 behind.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5401) In Solr's ResourceLoader, add a check for @Deprecated annotation in the plugin/analysis/... class loading code, so we print a warning in the log if a deprecated factory

2013-10-31 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810107#comment-13810107
 ] 

Dawid Weiss commented on SOLR-5401:
---

It would be better to add some form of callback listener instead of capturing 
the whole thing. Solr logs are *huge*, so capturing them in memory will very 
likely OOM on many machines...

 In Solr's ResourceLoader, add a check for @Deprecated annotation in the 
 plugin/analysis/... class loading code, so we print a warning in the log if a 
 deprecated factory class is used
 --

 Key: SOLR-5401
 URL: https://issues.apache.org/jira/browse/SOLR-5401
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.6, 4.5
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.6, 5.0

 Attachments: SOLR-5401.patch


 While converting an antique 3.6 schema.xml to Solr 4.5, I noticed that some 
 factories were deprecated in 3.x and were no longer available in 4.x (e.g. 
 solr._Language_PorterStemFilterFactory). If the user had been given a notice 
 earlier, this could have been prevented and the user would have upgraded 
 sooner.
 In fact the factories were @Deprecated in 3.6, but the Solr loader does not 
 print any warning. My proposal is to add some simple code to 
 SolrResourceLoader so that it prints a warning about the deprecated class 
 whenever a configuration setting loads a class carrying the @Deprecated 
 annotation. That way we can prevent this problem in the future.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5316) Taxonomy tree traversing improvement

2013-10-31 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810132#comment-13810132
 ] 

Michael McCandless commented on LUCENE-5316:


bq. Investing during indexing will make it an O(1) operation.

Right, I think it can be very simple for starters: a per-dim boolean isFlat 
that we set to false as soon as we see any grandchildren added under that dim 
(any CategoryPath with more than two components).  We could similarly record 
whether a dim was ever multi-valued, but I'm not sure how we can take 
advantage of that.  We'd need to persist this somewhere...
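
A minimal sketch of the bookkeeping suggested in this comment, with 
illustrative names and without the persistence step:

{noformat}
import java.util.HashMap;
import java.util.Map;

// A dimension stays "flat" until a category path with more than two
// components (dim/child/grandchild) is indexed under it. Names are
// illustrative; persisting the map is left out.
public class DimFlatnessSketch {
    private final Map<String, Boolean> flatByDim = new HashMap<String, Boolean>();

    void onCategoryPathAdded(String[] components) {
        if (components.length == 0) {
            return;
        }
        String dim = components[0];
        if (components.length > 2) {
            flatByDim.put(dim, false);            // saw a grandchild: not flat
        } else if (!flatByDim.containsKey(dim)) {
            flatByDim.put(dim, true);             // first sighting of this dimension
        }
    }

    boolean isFlat(String dim) {
        return Boolean.TRUE.equals(flatByDim.get(dim));
    }
}
{noformat}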

 Taxonomy tree traversing improvement
 

 Key: LUCENE-5316
 URL: https://issues.apache.org/jira/browse/LUCENE-5316
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Gilad Barkai
Priority: Minor
 Attachments: LUCENE-5316.patch


 Taxonomy traversal is done today using the {{ParallelTaxonomyArrays}} - in 
 particular, two taxonomy-size {{int}} arrays which hold, for each ordinal, 
 its (array #1) youngest child and (array #2) older sibling.
 This is a compact way of holding the tree information in memory, but it's not 
 perfect:
 * Large (8 bytes per ordinal in memory)
 * Exposes internal implementation
 * Using these arrays for tree traversal is not straightforward
 * Loses reference locality while traversing (the arrays are accessed at 
 increasing entries only, but those entries may be distant from one another)
 * In NRT, a reopen is always (not just in the worst case) O(taxonomy size)
 This issue is about making the traversal easier and the code more readable, 
 and opening it up for future improvements (i.e. memory footprint and NRT 
 cost) - without changing any of the internals. 
 A later issue (or issues) could be opened to address those gaps once this one 
 is done.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5407) Strange error condition with cloud replication not working quite right

2013-10-31 Thread Nathan Neulinger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810192#comment-13810192
 ] 

Nathan Neulinger commented on SOLR-5407:


After some further investigation, it seems like this might be related to 
SOLR-5325, which was fixed in 4.5.1. We haven't upgraded yet, but have it 
scheduled.

I also raised the zk tick to 5000 and increased the timeout to 40 seconds, 
just in case that helps. 

 Strange error condition with cloud replication not working quite right
 --

 Key: SOLR-5407
 URL: https://issues.apache.org/jira/browse/SOLR-5407
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Nathan Neulinger
  Labels: cloud, replication

 I have a cloud deployment of 4.5 on EC2. The architecture is 3 dedicated ZK 
 nodes and a pair of Solr nodes.  I'll apologize in advance that this error 
 report is not going to have a lot of detail; I'm really hoping that the 
 scenario/description will trigger some likely explanation.
 The situation I got into was that the server had decided to fail over, so my 
 app servers were all talking to what should have been the primary for most of 
 the shards/collections, but which actually was the replica.
 Here's where it gets odd - no errors were being returned to the client code 
 for any of the searches or document updates, and the current primary server 
 was definitely receiving all of the updates - even though they were being 
 submitted to the inactive/replica node. (Clients were talking to solr-p1, 
 which was not primary at the time, and writes were being passed through to 
 solr-r1, which was primary at the time.)
 All sounds good so far, right? Except that the replica server at the time, 
 through which the writes were passing, never got any of those content 
 updates. It had an old, unmodified copy of the index. 
 I restarted solr-p1 (the replica at the time) - no change in behavior. 
 The behavior did not change until I killed and restarted the current primary 
 (solr-r1) to force it to fail over.
 At that point, everything was happy again and working properly. 
 Until this morning, when one of the developers provisioned a new collection, 
 which happened to put its primary on solr-r1. Again, clients were all 
 pointing at solr-p1. The developer reported that documents were going into 
 the index, but were not visible on the replica server. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5308) Split all documents of a route key into another collection

2013-10-31 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5308:


Attachment: SOLR-5308.patch

Removed unnecessary logging added to aid in debugging.

 Split all documents of a route key into another collection
 --

 Key: SOLR-5308
 URL: https://issues.apache.org/jira/browse/SOLR-5308
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, 
 SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch


 Enable SolrCloud users to split out a set of documents from a source 
 collection into another collection.
 This will be useful in multi-tenant environments. This feature will make it 
 possible to split a tenant out of a collection and put it into its own 
 collection, which can be scaled separately.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5205) MoreLikeThis doesn't escape shard queries

2013-10-31 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810236#comment-13810236
 ] 

Steve Molloy commented on SOLR-5205:


On top of only addressing the id part of the query, this patch may have 
undesired effects on queries that are not distributed and on queries using the 
MoreLikeThisHandler. Basically, the original issue arises because distributed 
queries send the string representation of the query to the shards, and that 
string representation cannot be parsed as-is because characters are not 
escaped. I'm posting a patch that changes the toString behaviour of term 
queries to produce parsable output, so it can be used in distributed search 
without changing the actual values in the query object.
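
For illustration of the underlying escaping problem (this is not the attached 
patch, which changes toString() of term queries instead), a small SolrJ 
example using ClientUtils.escapeQueryChars; the id value is a placeholder.

{noformat}
import org.apache.solr.client.solrj.util.ClientUtils;

// Shows why an unescaped id (e.g. a URL) cannot be sent to shards as-is: the
// query parser trips over the special characters. This only demonstrates the
// escaping step.
public class EscapeIdExample {
    public static void main(String[] args) {
        String id = "http://example.com/page?x=1&y=2";
        // Colons, slashes, '?' and '&' come back backslash-escaped, so the
        // resulting query string can be re-parsed by a shard.
        String shardQuery = "id:" + ClientUtils.escapeQueryChars(id);
        System.out.println(shardQuery);
    }
}
{noformat}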

 MoreLikeThis doesn't escape shard queries
 -

 Key: SOLR-5205
 URL: https://issues.apache.org/jira/browse/SOLR-5205
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 4.4
Reporter: Markus Jelsma
 Fix For: 4.6

 Attachments: SOLR-5205-trunk.patch, SOLR-5205.patch


 MoreLikeThis does not support Lucene special characters in IDs in 
 distributed search. IDs containing special characters, such as URLs, need to 
 be escaped in the first place. They are then unescaped and sent to the shards 
 in an unescaped form, causing an org.apache.solr.search.SyntaxError exception.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5205) MoreLikeThis doesn't escape shard queries

2013-10-31 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-5205:
---

Attachment: SOLR-5205.patch

Patch to produce string representations that can be parsed.

 MoreLikeThis doesn't escape shard queries
 -

 Key: SOLR-5205
 URL: https://issues.apache.org/jira/browse/SOLR-5205
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 4.4
Reporter: Markus Jelsma
 Fix For: 4.6

 Attachments: SOLR-5205-trunk.patch, SOLR-5205.patch


 MoreLikeThis does not support Lucene special characters in IDs in 
 distributed search. IDs containing special characters, such as URLs, need to 
 be escaped in the first place. They are then unescaped and sent to the shards 
 in an unescaped form, causing an org.apache.solr.search.SyntaxError exception.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5408) Collapsing Query Parser does not respect multiple Sort fields

2013-10-31 Thread Brandon Chapman (JIRA)
Brandon Chapman created SOLR-5408:
-

 Summary: Collapsing Query Parser does not respect multiple Sort 
fields
 Key: SOLR-5408
 URL: https://issues.apache.org/jira/browse/SOLR-5408
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Brandon Chapman
Priority: Critical


When using the collapsing query parser, only the last sort field appears to be 
used.

http://172.18.0.10:8080/solr/product/select_eng?sort=score%20desc,name_sort_eng%20desc&qf=name_eng^3+brand^2+categories_term_eng+sku+upc+promoTag+model+related_terms_eng&pf2=name_eng^2&defType=edismax&rows=12&pf=name_eng~5^3&start=0&q=ipad&boost=sqrt(popularity)&qt=/select_eng&fq=productType:MERCHANDISE&fq=merchant:bestbuycanada&fq=(*:*+AND+-all_all_suppressed_b_ovly:[*+TO+*]+AND+-rbc_all_suppressed_b_ovly:[*+TO+*]+AND+-rbc_cpx_suppressed_b_ovly:[*+TO+*])+OR+(all_all_suppressed_b_ovly:false+AND+-rbc_all_suppressed_b_ovly:[*+TO+*]+AND+-rbc_cpx_suppressed_b_ovly:[*+TO+*])+OR+(rbc_all_suppressed_b_ovly:false+AND+-rbc_cpx_suppressed_b_ovly:[*+TO+*])+OR+(rbc_cpx_suppressed_b_ovly:false)&fq=translations:eng&fl=psid,name_eng,score&debug=true&debugQuery=true&fq={!collapse+field%3DgroupId+nullPolicy%3Dexpand}


<result name="response" numFound="5927" start="0" maxScore="5.6674457">
  <doc>
    <str name="psid">3002010250210</str>
    <str name="name_eng">
      ZOTAC ZBOX nano XS AD13 Plus All-In-One PC (AMD E2-1800/2GB RAM/64GB SSD)
    </str>
    <float name="score">0.41423172</float>
  </doc>


The same query without using the collapsing query parser produces the expected 
result.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 967 - Failure!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/967/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 9855 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20131031_135629_299.syserr
   [junit4]  JVM J0: stderr (verbatim) 
   [junit4] java(186,0x13b3c9000) malloc: *** error for object 0x13b3b67d0: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4]  JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=D378CF6B9BD1D448 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dtests.disableHdfs=true -Dfile.encoding=US-ASCII 
-classpath 

[jira] [Updated] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5374:
---

Attachment: SOLR-5374.patch

OK, it seems like things are working in distributed mode now... a few more 
cleanups and it will be ready to commit.

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the meantime -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per-document basis (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified), and ensure 
 that the corresponding document update was processed only if the new 
 version is greater than the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version is.)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, which would make it impossible to 
 allow multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with a user-defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.
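
A minimal sketch of the acceptance rule described above, with illustrative 
names; the actual patch implements this in an UpdateProcessor using RTG 
lookups.

{noformat}
// An update carrying an external per-document version is applied only if that
// version is greater than the version currently indexed for the same document.
public class DocBasedVersionSketch {
    static boolean shouldApplyUpdate(Long currentlyIndexedVersion, long newVersion) {
        if (currentlyIndexedVersion == null) {
            return true;                              // no existing document: accept
        }
        return newVersion > currentlyIndexedVersion;  // strictly greater wins
    }

    public static void main(String[] args) {
        System.out.println(shouldApplyUpdate(102L, 101));  // false: 102 already indexed
        System.out.println(shouldApplyUpdate(99L, 101));   // true:  99 < 101
    }
}
{noformat}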



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5392) extend solrj apis to cover collection management

2013-10-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5392:
--

Attachment: SOLR-5392.patch

This patch fixes the param name for the collection config set - caught by the 
random testing that sometimes uses two config sets.

 extend solrj apis to cover collection management
 

 Key: SOLR-5392
 URL: https://issues.apache.org/jira/browse/SOLR-5392
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.5
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Attachments: 
 0001-SOLR-5392.-extend-solrj-apis-to-cover-collection-man.patch, 
 SOLR-5392.patch


 It would be useful to extend solrj APIs to cover collection management calls: 
 https://cwiki.apache.org/confluence/display/solr/Collections+API 
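
For context, a sketch of the kind of raw, by-path call one would make from 
SolrJ today for collection management; host, collection, and config names are 
placeholders. The issue asks for first-class API methods that replace this 
pattern.

{noformat}
import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

// Hitting the Collections API by path with raw parameters.
public class CreateCollectionSketch {
    public static void main(String[] args) throws SolrServerException, IOException {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        try {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("action", "CREATE");
            params.set("name", "mycollection");
            params.set("numShards", 2);
            params.set("collection.configName", "myconf");
            QueryRequest request = new QueryRequest(params);
            request.setPath("/admin/collections");
            System.out.println(server.request(request));
        } finally {
            server.shutdown();
        }
    }
}
{noformat}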



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 967 - Failure!

2013-10-31 Thread Dawid Weiss
   [junit4]  JVM J0: stderr (verbatim) 
   [junit4] java(186,0x13b3c9000) malloc: *** error for object
0x13b3b67d0: pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4]  JVM J0: EOF 


On Thu, Oct 31, 2013 at 3:10 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/967/
 Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

 All tests passed

 Build Log:
 [...truncated 9855 lines...]
[junit4] JVM J0: stderr was not empty, see: 
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20131031_135629_299.syserr
[junit4]  JVM J0: stderr (verbatim) 
[junit4] java(186,0x13b3c9000) malloc: *** error for object 0x13b3b67d0: 
 pointer being freed was not allocated
[junit4] *** set a breakpoint in malloc_error_break to debug
[junit4]  JVM J0: EOF 

 [...truncated 1 lines...]
[junit4] ERROR: JVM J0 ended with an exception, command line: 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/bin/java 
 -XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
 -XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
 -Dtests.prefix=tests -Dtests.seed=D378CF6B9BD1D448 -Xmx512M -Dtests.iters= 
 -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
 -Dtests.postingsformat=random -Dtests.docvaluesformat=random 
 -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
 -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
 -Dtests.cleanthreads=perClass 
 -Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
  -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
 -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
 -Djava.io.tmpdir=. 
 -Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
  
 -Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
  -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
 -Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
  -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
 -Djava.awt.headless=true -Dtests.disableHdfs=true -Dfile.encoding=US-ASCII 
 -classpath 
 

[jira] [Commented] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810330#comment-13810330
 ] 

ASF subversion and git services commented on LUCENE-5217:
-

Commit 1537528 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1537528 ]

LUCENE-5217: Maven config: get dependencies from Ant+Ivy; disable transitive 
dependency resolution for all depended-on artifacts by putting an exclusion for 
each transitive dependency in the dependencyManagement section of the 
grandparent POM

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch, 
 LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810335#comment-13810335
 ] 

Steve Rowe commented on LUCENE-5217:


{{validate-maven-dependencies}} and {{nightly-smoke}} both passed.  Committed 
to trunk.  I'll wait a few days before committing to branch_4x.

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch, 
 LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5217:
---

Fix Version/s: 5.0
   4.6

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch, 
 LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5217) disable transitive dependencies in maven config

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810334#comment-13810334
 ] 

ASF subversion and git services commented on LUCENE-5217:
-

Commit 1537530 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1537530 ]

LUCENE-5217: changes entry

 disable transitive dependencies in maven config
 ---

 Key: LUCENE-5217
 URL: https://issues.apache.org/jira/browse/LUCENE-5217
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Steve Rowe
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5217.patch, LUCENE-5217.patch, LUCENE-5217.patch, 
 LUCENE-5217.patch


 Our Ivy configuration does this: each dependency is specified explicitly, so 
 we know what will happen. Unfortunately, the Maven setup is not configured 
 the same way.
 Instead, the Maven setup is configured to download the internet, and it 
 excludes certain things specifically.
 This is really hard to configure and maintain: we added a 
 'validate-maven-dependencies' target that tries to fail on any extra jars, 
 but all it really does is run a license check after Maven runs. It wouldn't 
 find unnecessary dependencies being dragged in if something else in Lucene 
 was using them and thus they had a license file.
 Since Maven supports wildcard exclusions (MNG-3832), we can disable this 
 transitive resolution completely.
 We should do this, so that the Maven configuration is an exact parallel of 
 the Ivy one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5409) core.properties file is not removed.

2013-10-31 Thread Doug Ericson (JIRA)
Doug Ericson created SOLR-5409:
--

 Summary: core.properties file is not removed.
 Key: SOLR-5409
 URL: https://issues.apache.org/jira/browse/SOLR-5409
 Project: Solr
  Issue Type: Bug
Reporter: Doug Ericson


The core.properties file is renamed to core.properties.unloaded when a core is 
unloaded. However, if the core is created again, a new core.properties file is 
created. This can put the core in a state where it cannot be re-created 
without removing the core.properties file.

Steps to reproduce using the web admin UI:
# Create a core
# Unload the core
# Create the core again
# Unload the core
# Create the core again

Expected Results:
The core should be created after the last step.

Observed Results:
The last step fails because core.properties already exists: it was not renamed 
to core.properties.unloaded during the second unload, since that file already 
exists. This leaves the core in an in-between state - unloaded, but unable to 
be re-created.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5409) core.properties file is not removed.

2013-10-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5409:
--

Fix Version/s: 5.0
   4.6

 core.properties file is not removed.
 

 Key: SOLR-5409
 URL: https://issues.apache.org/jira/browse/SOLR-5409
 Project: Solr
  Issue Type: Bug
Reporter: Doug Ericson
 Fix For: 4.6, 5.0


 The core.properties file is renamed to core.properties.unloaded when a core 
 is unloaded. However, if the core is created again, a new core.properties 
 file is created. This can put the core in a state where it cannot be 
 re-created without removing the core.properties file.
 Steps to reproduce using the web admin UI:
 # Create a core
 # Unload the core
 # Create the core again
 # Unload the core
 # Create the core again
 Expected Results:
 The core should be created after the last step.
 Observed Results:
 The last step fails because core.properties already exists: it was not 
 renamed to core.properties.unloaded during the second unload, since that file 
 already exists. This leaves the core in an in-between state - unloaded, but 
 unable to be re-created.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2013-10-31 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-5205:


Attachment: patch.txt

Paul,
  Thank you for your feedback.  Updated version attached.

1) Some recursion tests were scattered throughout, but I added several more in 
a new section devoted to this.
2) Fixed the indentation (I think)

As for the AND query... the simplest hack I can think of that uses only 
SpanQuery and Query would be the following. 

Have a simple parser that takes something like this:

sq1 AND sq2 AND NOT sq3

Create a filter wrapper around a BooleanQuery that reflects the above for 
document retrieval, and then create a SpanOr query (from sq1 and sq2) for the 
colorization/concordancing (see the sketch at the end of this comment).  The 
return value would be a pair of SpanQuery and Filter (where the filter could 
be null).  Or, if doc retrieval were the only goal, return the original 
BooleanQuery.  

My first attempt wouldn't allow grouping, but that should be easy enough to 
add.  By grouping, of course, I mean:

sq1 AND sq2 AND NOT (sq3 AND sq4)
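
A rough sketch of that hack with stock Lucene classes (illustrative only, not 
from the patch; sq1/sq2/sq3 stand for already-built SpanQueries):

  import org.apache.lucene.search.BooleanClause.Occur;
  import org.apache.lucene.search.BooleanQuery;
  import org.apache.lucene.search.Filter;
  import org.apache.lucene.search.QueryWrapperFilter;
  import org.apache.lucene.search.spans.SpanOrQuery;
  import org.apache.lucene.search.spans.SpanQuery;

  public class SpanAndHack {
    /** The pair described above: spans for windows, filter for retrieval. */
    public static class SpanPlusFilter {
      public final SpanQuery spans;  // for colorization/concordancing
      public final Filter filter;    // for document retrieval (may be null)
      public SpanPlusFilter(SpanQuery spans, Filter filter) {
        this.spans = spans;
        this.filter = filter;
      }
    }

    /** "sq1 AND sq2 AND NOT sq3" as a SpanOr plus a Boolean filter. */
    public static SpanPlusFilter build(SpanQuery sq1, SpanQuery sq2, SpanQuery sq3) {
      BooleanQuery bq = new BooleanQuery();   // decides which documents match
      bq.add(sq1, Occur.MUST);
      bq.add(sq2, Occur.MUST);
      bq.add(sq3, Occur.MUST_NOT);
      // SpanOr over the positive clauses drives the span windows
      return new SpanPlusFilter(new SpanOrQuery(sq1, sq2), new QueryWrapperFilter(bq));
    }
  }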

As an integration question: I just came across the test/dev code in the test 
branch of oal.queryparser.flexible.spans.  Is anyone working on that currently?  
Is there an easy way to add my functionality to that framework? 

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.6

 Attachments: SpanQueryParser_v1.patch.gz, patch.txt


 This parser includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: "jakarta apache"
 * phrase with slop: "jakarta apache"~3
 * default "or" clause: jakarta apache
 * grouping "or" clause: (jakarta apache)
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require "in order" for phrases with slop with the \~ operator: 
 "jakarta apache"\~3
 * Can specify "not near": "fever bieber"!\~3,10 ::
 find "fever" but not if "bieber" appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find "jakarta" within 3 words of "apache", and that hit has to be within 
 four words before "lucene"
 * Can also use \[\] for single level phrasal queries instead of "" as in: 
 \[jakarta apache\]
 * Can use "or" grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find "apache" and then either "lucene" or "solr" within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
 words of "ap*che" and that hit has to be within ten words of something like 
 "solr" or that lucene regex.
 In combination with a QueryFilter, has been very useful for concordance tasks 
 and for analytical search.  SpanQueries, of course, can also be used as a 
 Query for regular search via IndexSearcher.
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 I'm happy to throw this in the Sandbox, if desired.
 Any and all feedback is welcome.  Thank you.
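
As a rough reference, the recursive bracket example above 
(\[\[jakarta apache\]~3 lucene\]\~4) corresponds to hand-built SpanQueries 
along these lines (a sketch only; the field name is made up and the in-order 
flags are approximations):

  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.spans.SpanNearQuery;
  import org.apache.lucene.search.spans.SpanQuery;
  import org.apache.lucene.search.spans.SpanTermQuery;

  public class BracketSyntaxSketch {
    public static SpanQuery example() {
      SpanQuery jakarta = new SpanTermQuery(new Term("text", "jakarta"));
      SpanQuery apache  = new SpanTermQuery(new Term("text", "apache"));
      SpanQuery lucene  = new SpanTermQuery(new Term("text", "lucene"));
      // inner: jakarta within 3 words of apache
      SpanQuery inner = new SpanNearQuery(new SpanQuery[] { jakarta, apache }, 3, false);
      // outer: that hit within 4 words before lucene
      return new SpanNearQuery(new SpanQuery[] { inner, lucene }, 4, true);
    }
  }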



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5205) MoreLikeThis doesn't escape shard queries

2013-10-31 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810445#comment-13810445
 ] 

Steve Molloy commented on SOLR-5205:


Strike the part trying to avoid code duplication in ClientUtils by calling 
ToStringUtils; solrj cannot see ToStringUtils in its dependencies. :( 
(Should there be a common utility package available for lucene, solr & solrj?)
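
For the escaping itself, something along these lines on the shard-request side 
is what's meant (a sketch; the field name and id value are made up):

  import org.apache.solr.client.solrj.util.ClientUtils;

  public class MltShardQuerySketch {
    public static void main(String[] args) {
      // An id containing Lucene query syntax characters, e.g. a URL
      String id = "http://example.com/page?x=1&y=2";
      // Escape before embedding the id in the query sent to the shards;
      // otherwise the shard-side parser trips over the special characters.
      String q = "id:" + ClientUtils.escapeQueryChars(id);
      System.out.println(q);
    }
  }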

 MoreLikeThis doesn't escape shard queries
 -

 Key: SOLR-5205
 URL: https://issues.apache.org/jira/browse/SOLR-5205
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 4.4
Reporter: Markus Jelsma
 Fix For: 4.6

 Attachments: SOLR-5205-trunk.patch, SOLR-5205.patch


 MoreLikeThis does not support Lucene special characters in IDs in distributed 
 search. IDs containing special characters, such as URLs, need to be escaped 
 in the first place. They are then unescaped and sent to the shards in an 
 unescaped form, causing the org.apache.solr.search.SyntaxError exception.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5410) Solr wrapper for the SpanQueryParser in LUCENE-5205

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created SOLR-5410:
--

 Summary: Solr wrapper for the SpanQueryParser in LUCENE-5205
 Key: SOLR-5410
 URL: https://issues.apache.org/jira/browse/SOLR-5410
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson


This is a simple Solr wrapper around the SpanQueryParser submitted in 
[LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].

Dependent on  [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2013-10-31 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-5205:


Description: 
This parser includes functionality from:

* Classic QueryParser: most of its syntax
* SurroundQueryParser: recursive parsing for near and not clauses.
* ComplexPhraseQueryParser: can handle near queries that include multiterms 
(wildcard, fuzzy, regex, prefix),
* AnalyzingQueryParser: has an option to analyze multiterms.


Same as classic syntax:
* term: test 
* fuzzy: roam~0.8, roam~2
* wildcard: te?t, test*, t*st
* regex: /\[mb\]oat/
* phrase: jakarta apache
* phrase with slop: jakarta apache~3
* default or clause: jakarta apache
* grouping or clause: (jakarta apache)
 
Main additions in SpanQueryParser syntax vs. classic syntax:
* Can require in order for phrases with slop with the \~ operator: jakarta 
apache\~3
* Can specify not near: fever bieber!\~3,10 ::
find fever but not if bieber appears within 3 words before or 10 words 
after it.
* Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta apache\]~3 
lucene\]\~4 :: 
find jakarta within 3 words of apache, and that hit has to be within 
four words before lucene
* Can also use \[\] for single level phrasal queries instead of  as in: 
\[jakarta apache\]
* Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
:: find apache and then either lucene or solr within three words.
* Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
* Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
/l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
words of ap*che and that hit has to be within ten words of something like 
solr or that lucene regex.

In combination with a QueryFilter, has been very useful for concordance tasks 
(see also LUCENE-5317 and LUCENE-5318) and for analytical search.  SpanQueries, 
of course, can also be used as a Query for regular search via IndexSearcher.

Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.

Most of the documentation is in the javadoc for SpanQueryParser.

I'm happy to throw this in the Sandbox, if desired.

Any and all feedback is welcome.  Thank you.

  was:
This parser includes functionality from:

* Classic QueryParser: most of its syntax
* SurroundQueryParser: recursive parsing for near and not clauses.
* ComplexPhraseQueryParser: can handle near queries that include multiterms 
(wildcard, fuzzy, regex, prefix),
* AnalyzingQueryParser: has an option to analyze multiterms.


Same as classic syntax:
* term: test 
* fuzzy: roam~0.8, roam~2
* wildcard: te?t, test*, t*st
* regex: /\[mb\]oat/
* phrase: jakarta apache
* phrase with slop: jakarta apache~3
* default or clause: jakarta apache
* grouping or clause: (jakarta apache)
 
Main additions in SpanQueryParser syntax vs. classic syntax:
* Can require in order for phrases with slop with the \~ operator: jakarta 
apache\~3
* Can specify not near: fever bieber!\~3,10 ::
find fever but not if bieber appears within 3 words before or 10 words 
after it.
* Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta apache\]~3 
lucene\]\~4 :: 
find jakarta within 3 words of apache, and that hit has to be within 
four words before lucene
* Can also use \[\] for single level phrasal queries instead of  as in: 
\[jakarta apache\]
* Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
:: find apache and then either lucene or solr within three words.
* Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
* Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
/l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
words of ap*che and that hit has to be within ten words of something like 
solr or that lucene regex.

In combination with a QueryFilter, has been very useful for concordance tasks 
and for analytical search.  SpanQueries, of course, can also be used as a Query 
for regular search via IndexSearcher.

Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.

Most of the documentation is in the javadoc for SpanQueryParser.

I'm happy to throw this in the Sandbox, if desired.

Any and all feedback is welcome.  Thank you.


 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.6

 Attachments: SpanQueryParser_v1.patch.gz, patch.txt


 This parser includes functionality from:
 * Classic QueryParser: most of its syntax
 * 

[jira] [Created] (PYLUCENE-27) JCC should be able to create sdist archives

2013-10-31 Thread Martin (JIRA)
Martin created PYLUCENE-27:
--

 Summary: JCC should be able to create sdist archives
 Key: PYLUCENE-27
 URL: https://issues.apache.org/jira/browse/PYLUCENE-27
 Project: PyLucene
  Issue Type: Wish
 Environment: jcc-svn-head
Reporter: Martin


I was not able to create a complete (in terms one is able to compile and 
install the desired wrapper) source distribution.

I've tried following calls:
  python -m jcc --jar foo  --egg-info --extra-setup-arg sdist
and
 python -m jcc --jar foo --extra-setup-arg sdist

Both create archives only containing the egg-info and setup.py but no source 
code at all.

I really need this feature for my testing environment with tox, since this 
heavily depends on the sdist feature.

thanks,
best,
Martin



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (SOLR-5410) Solr wrapper for the SpanQueryParser in LUCENE-5205

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5410:
---

Attachment: Solr_SpanQueryParser.zip

 Solr wrapper for the SpanQueryParser in LUCENE-5205
 ---

 Key: SOLR-5410
 URL: https://issues.apache.org/jira/browse/SOLR-5410
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_SpanQueryParser.zip


 This is a simple Solr wrapper around the SpanQueryParser submitted in 
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].
 Dependent on  
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5411) Keyword in Context Search / Concordance Search: Solr wrapper for the code in LUCENE-5317

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5411:
---

Attachment: Solr_concordance.zip

 Keyword in Context  Search / Concordance Search: Solr wrapper for the code in 
 LUCENE-5317
 -

 Key: SOLR-5411
 URL: https://issues.apache.org/jira/browse/SOLR-5411
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance.zip


 Keyword in Context  Search / Concordance Search: Solr wrapper for the code in 
 LUCENE-5317
 This is a simple RequestHandler wrapper around ConcordanceSearcher submitted 
 in [LUCENE-5317|https://issues.apache.org/jira/i#browse/LUCENE-5317].  Does 
 have some minimal support for SolrCloud.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5411) Keyword in Context Search / Concordance Search: Solr wrapper for the code in LUCENE-5317

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created SOLR-5411:
--

 Summary: Keyword in Context  Search / Concordance Search: Solr 
wrapper for the code in LUCENE-5317
 Key: SOLR-5411
 URL: https://issues.apache.org/jira/browse/SOLR-5411
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance.zip

Keyword in Context  Search / Concordance Search: Solr wrapper for the code in 
LUCENE-5317

This is a simple RequestHandler wrapper around ConcordanceSearcher submitted in 
[LUCENE-5317|https://issues.apache.org/jira/i#browse/LUCENE-5317].  Does have 
some minimal support for SolrCloud.

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5412) TermVariants from fuzzy and/or span search

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created SOLR-5412:
--

 Summary: TermVariants from fuzzy and/or span search 
 Key: SOLR-5412
 URL: https://issues.apache.org/jira/browse/SOLR-5412
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_termvariants.zip

This is a  request handler wrapper around components in ConcordanceSearcher 
submitted in [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]. 
 It quantifies the variation in term matches of a fuzzy search, and optionally 
returns group counts for other categorical fields or class values in a document.
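
The core of quantifying the term variation can be pictured with stock Lucene, 
e.g. expanding a FuzzyQuery against the index and collecting the terms it 
matched (a sketch only; the field, term and edit distance are made up, and the 
Solr handler itself is in the attached zip):

  import java.util.HashSet;
  import java.util.Set;

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.FuzzyQuery;
  import org.apache.lucene.search.MultiTermQuery;
  import org.apache.lucene.search.Query;

  public class FuzzyVariantsSketch {
    /** Returns the concrete index terms a fuzzy query expands to. */
    public static Set<Term> variants(IndexReader reader, String field, String text)
        throws Exception {
      FuzzyQuery fq = new FuzzyQuery(new Term(field, text), 1);
      fq.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
      Query rewritten = fq.rewrite(reader);   // expands against the actual index
      Set<Term> terms = new HashSet<Term>();
      rewritten.extractTerms(terms);          // the variant terms that matched
      return terms;
    }
  }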

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5412) TermVariants from fuzzy and/or span search

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5412:
---

Attachment: Solr_termvariants.zip

 TermVariants from fuzzy and/or span search 
 ---

 Key: SOLR-5412
 URL: https://issues.apache.org/jira/browse/SOLR-5412
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_termvariants.zip


 This is a  request handler wrapper around components in ConcordanceSearcher 
 submitted in 
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].  It 
 quantifies the variation in term matches of a fuzzy search, and optionally 
 returns group counts for other categorical fields or class values in a 
 document.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5412) TermVariants from fuzzy and/or span search

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5412:
---

Attachment: mo2.jpg
mo1.jpg

 TermVariants from fuzzy and/or span search 
 ---

 Key: SOLR-5412
 URL: https://issues.apache.org/jira/browse/SOLR-5412
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_termvariants.zip, mo1.jpg, mo2.jpg


 This is a  request handler wrapper around components in ConcordanceSearcher 
 submitted in 
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].  It 
 quantifies the variation in term matches of a fuzzy search, and optionally 
 returns group counts for other categorical fields or class values in a 
 document.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5412) TermVariants from fuzzy and/or span search

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5412:
---

Description: 
This is a  request handler wrapper around components in ConcordanceSearcher 
submitted in [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]. 
 It quantifies the variation in term matches of a fuzzy search, and optionally 
returns group counts for other categorical fields or class values in a document.

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!

A Lucene-based implementation is also forthcoming, see Tim Allison's code.

  was:
This is a  request handler wrapper around components in ConcordanceSearcher 
submitted in [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]. 
 It quantifies the variation in term matches of a fuzzy search, and optionally 
returns group counts for other categorical fields or class values in a document.

***Following Yonik's Law*** 
This is patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!



 TermVariants from fuzzy and/or span search 
 ---

 Key: SOLR-5412
 URL: https://issues.apache.org/jira/browse/SOLR-5412
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_termvariants.zip, mo1.jpg, mo2.jpg


 This is a  request handler wrapper around components in ConcordanceSearcher 
 submitted in 
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].  It 
 quantifies the variation in term matches of a fuzzy search, and optionally 
 returns group counts for other categorical fields or class values in a 
 document.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!
 A Lucene-based implementation is also forthcoming, see Tim Allison's code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5412) TermVariants from fuzzy and/or span search

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5412:
---

Description: 
This is a  request handler wrapper around components in ConcordanceSearcher 
submitted in [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]. 
 It quantifies the variation in term matches of a fuzzy search, and optionally 
returns group counts for other categorical fields or class values in a document.

It does do some distributed tf*idf calculations for fewer than 25 shards.

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!

A Lucene-based implementation is also forthcoming, see Tim Allison's code.

  was:
This is a  request handler wrapper around components in ConcordanceSearcher 
submitted in [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205]. 
 It quantifies the variation in term matches of a fuzzy search, and optionally 
returns group counts for other categorical fields or class values in a document.

***Following Yonik's Law*** 
This is patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!

A Lucene-based implementation is also forthcoming, see Tim Allison's code.


 TermVariants from fuzzy and/or span search 
 ---

 Key: SOLR-5412
 URL: https://issues.apache.org/jira/browse/SOLR-5412
 Project: Solr
  Issue Type: New Feature
Reporter: Jason R Robinson
 Attachments: Solr_termvariants.zip, mo1.jpg, mo2.jpg


 This is a  request handler wrapper around components in ConcordanceSearcher 
 submitted in 
 [LUCENE-5205|https://issues.apache.org/jira/i#browse/LUCENE-5205].  It 
 quantifies the variation in term matches of a fuzzy search, and optionally 
 returns group counts for other categorical fields or class values in a 
 document.
 It does do some distributed tf*idf calculations for fewer than 25 shards.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!
 A Lucene-based implementation is also forthcoming, see Tim Allison's code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5319) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated LUCENE-5319:
-

Description: 
This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
submitted in [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318]. 
 It reanalyzes the concordance windows and ranks keywords wrt the target value 
of the concordance search.  

Does have some minimal support for SolrCloud, including distributed tf*idf.

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!


  was:
This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
submitted in [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318]. 
 Does have some minimal support for SolrCloud, including distributed tf*idf.

***Following Yonik's Law*** 
This is patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!



 Keywords in concordance windows, Solr Wrapper for LUCENE-5318
 -

 Key: LUCENE-5319
 URL: https://issues.apache.org/jira/browse/LUCENE-5319
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson

 This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
 submitted in 
 [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318].  It 
 reanalyzes the concordance windows and ranks keywords wrt the target value of 
 the concordance search.  
 Does have some minimal support for SolrCloud, including distributed tf*idf.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5319) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created LUCENE-5319:


 Summary: Keywords in concordance windows, Solr Wrapper for 
LUCENE-5318
 Key: LUCENE-5319
 URL: https://issues.apache.org/jira/browse/LUCENE-5319
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson


This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
submitted in [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318]. 
 Does have some minimal support for SolrCloud, including distributed tf*idf.

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5319) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated LUCENE-5319:
-

Attachment: Solr_concordance_cooccurrence.zip

 Keywords in concordance windows, Solr Wrapper for LUCENE-5318
 -

 Key: LUCENE-5319
 URL: https://issues.apache.org/jira/browse/LUCENE-5319
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance_cooccurrence.zip


 This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
 submitted in 
 [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318].  It 
 reanalyzes the concordance windows and ranks keywords wrt the target value of 
 the concordance search.  
 Does have some minimal support for SolrCloud, including distributed tf*idf.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5413) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created SOLR-5413:
--

 Summary: Keywords in concordance windows, Solr Wrapper for 
LUCENE-5318
 Key: SOLR-5413
 URL: https://issues.apache.org/jira/browse/SOLR-5413
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance_cooccurrence.zip




This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
submitted in LUCENE-5318. It reanalyzes the concordance windows and ranks 
keywords wrt the target value of the concordance search. 

Does have some minimal support for SolrCloud, including distributed tf*idf.

**Following Yonik's Law** 
This patch is more of a placeholder for a much more polished draft. Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5319) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson resolved LUCENE-5319.
--

Resolution: Invalid

Should have been a Solr submission.

 Keywords in concordance windows, Solr Wrapper for LUCENE-5318
 -

 Key: LUCENE-5319
 URL: https://issues.apache.org/jira/browse/LUCENE-5319
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance_cooccurrence.zip


 This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
 submitted in 
 [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318].  It 
 reanalyzes the concordance windows and ranks keywords wrt the target value of 
 the concordance search.  
 Does have some minimal support for SolrCloud, including distributed tf*idf.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5413) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5413:
---

Attachment: Solr_concordance_cooccurrence.zip

 Keywords in concordance windows, Solr Wrapper for LUCENE-5318
 -

 Key: SOLR-5413
 URL: https://issues.apache.org/jira/browse/SOLR-5413
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance_cooccurrence.zip


 This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
 submitted in LUCENE-5318. It reanalyzes the concordance windows and ranks 
 keywords wrt the target value of the concordance search. 
 Does have some minimal support for SolrCloud, including distributed tf*idf.
 **Following Yonik's Law** 
 This is patch is more of a placeholder for a much more polished draft. Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5414) Category-based TermVectors

2013-10-31 Thread Jason R Robinson (JIRA)
Jason R Robinson created SOLR-5414:
--

 Summary: Category-based TermVectors
 Key: SOLR-5414
 URL: https://issues.apache.org/jira/browse/SOLR-5414
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson


This is a simple RequestHandler that extracts significant Terms from 
TermVectors wrt some categorical or class value.  Think keywords per class.  
Minimal SolrCloud support is available including distributed tf*idf.
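
The per-document piece of that idea looks roughly like this in plain Lucene: 
pull a document's term vector and accumulate frequencies into a per-class map 
(a sketch; the scoring of significant terms, tf*idf and the distributed part 
are not shown, and the names are illustrative):

  import java.io.IOException;
  import java.util.Map;

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.Terms;
  import org.apache.lucene.index.TermsEnum;
  import org.apache.lucene.util.BytesRef;

  public class CategoryTermCounts {
    /** Adds one doc's term-vector frequencies into the category's term counts. */
    public static void addDoc(IndexReader reader, int docId, String field,
                              Map<String, Long> counts) throws IOException {
      Terms vector = reader.getTermVector(docId, field);
      if (vector == null) {
        return; // field was not indexed with term vectors
      }
      TermsEnum te = vector.iterator(null);
      BytesRef term;
      while ((term = te.next()) != null) {
        String t = term.utf8ToString();
        Long prev = counts.get(t);
        counts.put(t, (prev == null ? 0L : prev) + te.totalTermFreq());
      }
    }
  }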

***Following Yonik's Law*** 
This patch is more of a placeholder for a much more polished draft.  Among 
other things, test scripts and javadocs are forthcoming!




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-5319) Keywords in concordance windows, Solr Wrapper for LUCENE-5318

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson closed LUCENE-5319.



This should have been in Solr.

 Keywords in concordance windows, Solr Wrapper for LUCENE-5318
 -

 Key: LUCENE-5319
 URL: https://issues.apache.org/jira/browse/LUCENE-5319
 Project: Lucene - Core
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_concordance_cooccurrence.zip


 This is a simple RequestHandler wrapper around ConcordanceCooccurSearcher 
 submitted in 
 [LUCENE-5318|https://issues.apache.org/jira/i#browse/LUCENE-5318].  It 
 reanalyzes the concordance windows and ranks keywords wrt the target value of 
 the concordance search.  
 Does have some minimal support for SolrCloud, including distributed tf*idf.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5414) Category-based TermVectors

2013-10-31 Thread Jason R Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason R Robinson updated SOLR-5414:
---

Attachment: Solr_categoryterms.zip

 Category-based TermVectors
 --

 Key: SOLR-5414
 URL: https://issues.apache.org/jira/browse/SOLR-5414
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.5
Reporter: Jason R Robinson
 Attachments: Solr_categoryterms.zip


 This is a simple RequestHandler that extracts significant Terms from 
 TermVectors wrt some categorical or class value.  Think keywords per class.  
 Minimal SolrCloud support is available including distributed tf*idf.
 ***Following Yonik's Law*** 
 This is patch is more of a placeholder for a much more polished draft.  Among 
 other things, test scripts and javadocs are forthcoming!



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5374:
---

Attachment: SOLR-5374.patch

OK, here's the final patch with everything cleaned up, and with a new stress 
test and optimizations to use the fieldcache/docvalues when possible instead of 
always going to stored fields.

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch, SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the mean time -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per document basis, (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified) and ensure 
 that the corresponding document update was processed only if the new 
 version is greater than the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version is)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, that would make it impossible to allow 
 multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with user defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.
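
From the client side, usage would look roughly like this (a sketch; it assumes 
the new processor is wired into the update chain and tracks a hypothetical 
my_external_version_l field, and the URL is a placeholder):

  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class ExternalVersionSketch {
    public static void main(String[] args) throws Exception {
      HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc1");
      doc.addField("my_external_version_l", 101L); // version assigned externally
      solr.add(doc);                               // accepted if 101 > indexed version (e.g. 99)

      SolrInputDocument stale = new SolrInputDocument();
      stale.addField("id", "doc1");
      stale.addField("my_external_version_l", 100L); // lower than the 101 already indexed
      solr.add(stale);                               // should be dropped by the processor

      solr.commit();
      solr.shutdown();
    }
  }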



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5402) SolrCloud 4.5 bulk add errors in cloud setup

2013-10-31 Thread Michael Tracey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810551#comment-13810551
 ] 

Michael Tracey commented on SOLR-5402:
--

I can confirm the same thing: it errors with a single document.  That's the 
last test I did before rolling my infrastructure back to 4.4.  I can index 
thousands of documents at a time without issue against a single server, but 
when SolrCloud 4.5.1 tries to sync two nodes, it fails (same errors as above).

 SolrCloud 4.5 bulk add errors in cloud setup
 

 Key: SOLR-5402
 URL: https://issues.apache.org/jira/browse/SOLR-5402
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5, 4.5.1
Reporter: Sai Gadde
 Fix For: 4.6


 We use out-of-the-box Solr 4.5.1, no customization done. If we merge documents 
 via SolrJ to a single server, it works perfectly fine.
 But as soon as we add another node to the cloud, we get the following error 
 while merging documents. We merge about 500 at a time using SolrJ. These 500 
 documents in total are about a few MB (1-3) in size.
 This is the error we are getting on the server (10.10.10.116 - the IP is 
 irrelevant, just for clarity) where merging is happening. 10.10.10.119 is the 
 new node here. This server gets a RemoteSolrException
 shard update error StdNode: 
 http://10.10.10.119:8980/solr/mycore/:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:
  Illegal to have multiple roots (start tag in epilog?).
  at [row,col {unknown-source}]: [1,12468]
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:425)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
   at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:401)
   at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:1)
   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
 On the other server 10.10.10.119 we get following error
 org.apache.solr.common.SolrException: Illegal to have multiple roots (start 
 tag in epilog?).
  at [row,col {unknown-source}]: [1,12468]
   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:176)
   at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
   at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
   at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
   at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
   at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
   at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
   at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
   at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
   at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
   at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
   at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
   at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
   at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
   at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: com.ctc.wstx.exc.WstxParsingException: 

[jira] [Created] (SOLR-5415) A core that failed to load should not unregister from ZK

2013-10-31 Thread Noble Paul (JIRA)
Noble Paul created SOLR-5415:


 Summary: A core that failed to load should not unregister from ZK
 Key: SOLR-5415
 URL: https://issues.apache.org/jira/browse/SOLR-5415
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul


If a core fails to load because of
* a bug in a new version of Solr
* or some config error

it immediately unregisters itself from ZK. This does not give the user a chance 
to rectify the error and restart the core/node. So if a core fails to load, it 
should not be unregistered from ZK unless an explicit core unload or a 
DELETEREPLICA collection command is invoked.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Artifacts-trunk - Build # 2439 - Failure

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-trunk/2439/

No tests ran.

Build Log:
[...truncated 11616 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/build.xml:518:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/common-build.xml:1504:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/tools/custom-tasks.xml:122:
 Malformed module dependency from 
'lucene-analyzers-phonetic.internal.test.dependencies': 
'lucene/build/analysis/common/lucene-analyzers-common-5.0-2013-10-31_18-52-24.jar'

Total time: 7 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-5415) A core that failed to load should not unregister from ZK

2013-10-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5415:
-

Attachment: SOLR-5415.patch

fix w/o tests

 A core that failed to load should not unregister from ZK
 

 Key: SOLR-5415
 URL: https://issues.apache.org/jira/browse/SOLR-5415
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5415.patch


 If a core fails to load because of
 * a bug in a new version of Solr
 * or some config error
 it immediately unregisters itself from ZK. This does not give the user a 
 chance to rectify the error and restart the core/node . So if a core fails to 
 load  it should not be unregistered from ZK unless an explicit core unload or 
 a DELETEREPLICA collection command is inoked



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Artifacts-trunk - Build # 2439 - Failure

2013-10-31 Thread Steve Rowe
This is caused by LUCENE-5217. 

Looks like the internal dependency finder is overly strict about jar versions; 
I'll relax that to allow Jenkins to succeed.

Steve

On Oct 31, 2013, at 3:01 PM, Apache Jenkins Server jenk...@builds.apache.org 
wrote:

 Build: https://builds.apache.org/job/Lucene-Artifacts-trunk/2439/
 
 No tests ran.
 
 Build Log:
 [...truncated 11616 lines...]
 BUILD FAILED
 /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/build.xml:518:
  The following error occurred while executing this line:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/common-build.xml:1504:
  The following error occurred while executing this line:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Artifacts-trunk/lucene/tools/custom-tasks.xml:122:
  Malformed module dependency from 
 'lucene-analyzers-phonetic.internal.test.dependencies': 
 'lucene/build/analysis/common/lucene-analyzers-common-5.0-2013-10-31_18-52-24.jar'
 
 Total time: 7 minutes 4 seconds
 Build step 'Invoke Ant' marked build as failure
 Archiving artifacts
 Publishing Javadoc
 Email was triggered for: Failure
 Sending email for trigger: Failure
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810572#comment-13810572
 ] 

ASF subversion and git services commented on SOLR-5374:
---

Commit 1537587 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1537587 ]

SOLR-5374: user version update processor

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch, SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the mean time -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per document basis, (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified) and ensure 
 that the corresponding document update was processed only if the new 
 version is greater then the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, that would make it impossible to allow 
 multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with user defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810602#comment-13810602
 ] 

ASF subversion and git services commented on SOLR-5374:
---

Commit 1537597 from [~yo...@apache.org] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1537597 ]

SOLR-5374: user version update processor

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch, SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the mean time -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per document basis, (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified) and ensure 
 that the corresponding document update was processed only if the new 
 version is greater then the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, that would make it impossible to allow 
 multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with user defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-5374.


   Resolution: Fixed
Fix Version/s: 5.0
   4.6

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.6, 5.0

 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch, SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the mean time -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per document basis, (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified) and ensure 
 that the corresponding document update was processed only if the new 
 version is greater then the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, that would make it impossible to allow 
 multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with user defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0) - Build # 8023 - Failure!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8023/
Java: 32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0 -XnoOpt

All tests passed

Build Log:
[...truncated 25993 lines...]
-check-forbidden-java-apis:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.6
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.6
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
the classpath!
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
which executor it is associated with]
[forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
(TestDocBasedVersionConstraints.java:361)
[forbidden-apis] Scanned 2515 (and 1373 related) class file(s) for forbidden 
API invocations (in 1.22s), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:286: Check for 
forbidden API calls failed, see log.

Total time: 81 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0 -XnoOpt
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

HdfsDirectory Implementation

2013-10-31 Thread Josh Clum
Hello,

I refactored out the HDFS directory implementation from Solr to use in my
own project and was surprised to see how it performed. I'm using both
the HdfsDirectory class and the
HdfsDirectoryFactory class.

On my local machine, when using the cache there was a significant speed-up.
It was small enough that each file making up the Lucene index (12 docs) fit
into one block inside the cache.

When running it on a multi-node cluster on AWS, the performance pulling back
1031 docs with the cache was not that much better than without. According
to my log statements, the cache was being hit every time, but the
difference between this and my local setup was that there were several
blocks per file.

When setting up the cache I used the default settings as specified in
HdfsDirectoryFactory.

Any ideas on how to speed up searches? Should I change the block size? Is
there something that Blur does to put a wrapper around the cache?

ON A MULTI NODE CLUSTER
Number of documents in directory[1031]
Try #1 - Total execution time: 3776
Try #2 - Total execution time: 2995
Try #3 - Total execution time: 2683
Try #4 - Total execution time: 2301
Try #5 - Total execution time: 2174
Try #6 - Total execution time: 2253
Try #7 - Total execution time: 2184
Try #8 - Total execution time: 2087
Try #9 - Total execution time: 2157
Try #10 - Total execution time: 2089
Cached try #1 - Total execution time: 2065
Cached try #2 - Total execution time: 2298
Cached try #3 - Total execution time: 2398
Cached try #4 - Total execution time: 2421
Cached try #5 - Total execution time: 2080
Cached try #6 - Total execution time: 2060
Cached try #7 - Total execution time: 2285
Cached try #8 - Total execution time: 2048
Cached try #9 - Total execution time: 2087
Cached try #10 - Total execution time: 2106

ON MY LOCAL
Number of documents in directory[12]
Try #1 - Total execution time: 627
Try #2 - Total execution time: 620
Try #3 - Total execution time: 637
Try #4 - Total execution time: 535
Try #5 - Total execution time: 486
Try #6 - Total execution time: 527
Try #7 - Total execution time: 363
Try #8 - Total execution time: 430
Try #9 - Total execution time: 431
Try #10 - Total execution time: 337
Cached try #1 - Total execution time: 38
Cached try #2 - Total execution time: 38
Cached try #3 - Total execution time: 36
Cached try #4 - Total execution time: 35
Cached try #5 - Total execution time: 135
Cached try #6 - Total execution time: 31
Cached try #7 - Total execution time: 36
Cached try #8 - Total execution time: 30
Cached try #9 - Total execution time: 29
Cached try #10 - Total execution time: 28

Thanks,
Josh
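
A minimal timing harness along these lines keeps one reader and searcher open across tries so that repeated reads can actually hit the cache; FSDirectory below is only a local stand-in for the HdfsDirectory and block-cache wiring discussed above, and the query, hit count, and index path are placeholders:

{code}
import java.io.File;
import java.io.IOException;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SearchTimingHarness {
  public static void main(String[] args) throws IOException {
    // Stand-in: swap in the HDFS-backed Directory under test here.
    Directory dir = FSDirectory.open(new File(args[0]));
    DirectoryReader reader = DirectoryReader.open(dir);
    IndexSearcher searcher = new IndexSearcher(reader);

    System.out.println("Number of documents in directory[" + reader.numDocs() + "]");

    for (int i = 1; i <= 10; i++) {
      long start = System.nanoTime();
      searcher.search(new MatchAllDocsQuery(), 10);
      long elapsedMs = (System.nanoTime() - start) / 1000000L;
      System.out.println("Try #" + i + " - Total execution time: " + elapsedMs);
    }

    reader.close();
    dir.close();
  }
}
{code}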


[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 2125 - Failure

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/2125/

All tests passed

Build Log:
[...truncated 26012 lines...]
-check-forbidden-java-apis:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.6
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.6
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
the classpath!
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
which executor it is associated with]
[forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
(TestDocBasedVersionConstraints.java:361)
[forbidden-apis] Scanned 2515 (and 1373 related) class file(s) for forbidden 
API invocations (in 2.03s), 1 error(s).

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:428:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:67:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build.xml:286:
 Check for forbidden API calls failed, see log.

Total time: 75 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0) - Build # 8023 - Failure!

2013-10-31 Thread Yonik Seeley
Sorry guys, Steve pinged me about this break...
I'll fix...

-Yonik


On Thu, Oct 31, 2013 at 5:10 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8023/
 Java: 32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0 -XnoOpt

 All tests passed

 Build Log:
 [...truncated 25993 lines...]
 -check-forbidden-java-apis:
 [forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.6
 [forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.6
 [forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
 [forbidden-apis] Reading API signatures: 
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/base.txt
 [forbidden-apis] Reading API signatures: 
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
 [forbidden-apis] Loading classes to check...
 [forbidden-apis] Scanning for API signatures and dependencies...
 [forbidden-apis] WARNING: The referenced class 
 'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. 
 Please fix the classpath!
 [forbidden-apis] WARNING: The referenced class 
 'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. 
 Please fix the classpath!
 [forbidden-apis] WARNING: The referenced class 
 'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. 
 Please fix the classpath!
 [forbidden-apis] WARNING: The referenced class 
 'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
 the classpath!
 [forbidden-apis] Forbidden method invocation: 
 java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
 vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
 DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
 which executor it is associated with]
 [forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
 (TestDocBasedVersionConstraints.java:361)
 [forbidden-apis] Scanned 2515 (and 1373 related) class file(s) for forbidden 
 API invocations (in 1.22s), 1 error(s).

 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:286: Check 
 for forbidden API calls failed, see log.

 Total time: 81 minutes 46 seconds
 Build step 'Invoke Ant' marked build as failure
 Description set: Java: 32bit/jrockit-jdk1.6.0_45-R28.2.7-4.1.0 -XnoOpt
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5374) Support user configured doc-centric versioning rules

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810745#comment-13810745
 ] 

ASF subversion and git services commented on SOLR-5374:
---

Commit 1537704 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1537704 ]

SOLR-5374: fix unnamed thread pool
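
The diff itself isn't shown in this notification; as a rough sketch, the pattern the forbidden-apis check asks for replaces the bare factory with a named one (pool size and thread-name prefix below are illustrative):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.solr.util.DefaultSolrThreadFactory;

public class NamedPoolExample {
  public static void main(String[] args) {
    // Forbidden: Executors.newFixedThreadPool(4) spawns vaguely named threads.
    // Preferred: pass a named ThreadFactory so thread dumps identify the executor.
    ExecutorService pool = Executors.newFixedThreadPool(
        4, new DefaultSolrThreadFactory("docBasedVersionConstraints"));
    try {
      pool.submit(new Runnable() {
        @Override
        public void run() {
          System.out.println(Thread.currentThread().getName());
        }
      });
    } finally {
      pool.shutdown();
    }
  }
}
{code}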

 Support user configured doc-centric versioning rules
 

 Key: SOLR-5374
 URL: https://issues.apache.org/jira/browse/SOLR-5374
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.6, 5.0

 Attachments: SOLR-5374.patch, SOLR-5374.patch, SOLR-5374.patch, 
 SOLR-5374.patch, SOLR-5374.patch


 The existing optimistic concurrency features of Solr can be very handy for 
 ensuring that you are only updating/replacing the version of the doc you 
 think you are updating/replacing, w/o the risk of someone else 
 adding/removing the doc in the mean time -- but I've recently encountered 
 some situations where I really wanted to be able to let the client specify an 
 arbitrary version, on a per document basis, (ie: generated by an external 
 system, or perhaps a timestamp of when a file was last modified) and ensure 
 that the corresponding document update was processed only if the new 
 version is greater than the old version -- w/o needing to check exactly 
 which version is currently in Solr.  (ie: If a client wants to index version 
 101 of a doc, that update should fail if version 102 is already in the index, 
 but succeed if the currently indexed version is 99 -- w/o the client needing 
 to ask Solr what the current version is)
 The idea Yonik brought up in SOLR-5298 (letting the client specify a 
 {{\_new\_version\_}} that would be used by the existing optimistic 
 concurrency code to control the assignment of the {{\_version\_}} field for 
 documents) looked like a good direction to go -- but after digging into the 
 way {{\_version\_}} is used internally I realized it requires a uniqueness 
 constraint across all update commands, that would make it impossible to allow 
 multiple independent documents to have the same {{\_version\_}}.
 So instead I've tackled the problem in a different way, using an 
 UpdateProcessor that is configured with a user-defined field to track a 
 DocBasedVersion and uses the RTG logic to figure out if the update is 
 allowed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_45) - Build # 8119 - Failure!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/8119/
Java: 64bit/jdk1.7.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 26355 lines...]
-check-forbidden-java-apis:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.7
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.7
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
the classpath!
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
which executor it is associated with]
[forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
(TestDocBasedVersionConstraints.java:361)
[forbidden-apis] Scanned 2512 (and 1389 related) class file(s) for forbidden 
API invocations (in 1.46s), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:417: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:286: Check 
for forbidden API calls failed, see log.

Total time: 49 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

How do we access CloudSolrServer.RouteResponse?

2013-10-31 Thread Jessica Cheng
Hi,

I need the version number from an add. I added the versions param, and
the version is returned in the RouteResponse, but since that class is
package-private, I'm not able to access it outside the package. How are we
supposed to access this response?

Thanks,
Jessica
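
One workaround, sketched below, is to read the generic NamedList that every SolrJ response exposes instead of casting to the package-private RouteResponse; the ZooKeeper address and the assumption that the versions come back under an "adds" key are illustrative and should be checked against the actual response:

{code}
import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.util.NamedList;

public class VersionFromUpdate {
  public static void main(String[] args) throws SolrServerException, IOException {
    CloudSolrServer server = new CloudSolrServer("localhost:2181");
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    req.setParam("versions", "true");  // ask Solr to echo the assigned _version_ values

    UpdateResponse rsp = req.process(server);

    // Walk the raw response rather than casting to the package-private type.
    NamedList<Object> raw = rsp.getResponse();
    Object adds = raw.get("adds");     // assumed key; print raw to confirm the layout
    System.out.println("raw response: " + raw);
    System.out.println("versions: " + adds);

    server.shutdown();
  }
}
{code}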


[jira] [Updated] (LUCENE-5283) Fail the build if ant test didn't execute any tests (everything filtered out).

2013-10-31 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5283:


Attachment: LUCENE-5283.patch

As a purely intellectual exercise I decided to investigate whether it's 
possible to have a top-level, after-all-the-submodules check for the number 
of executed tests. Ant really isn't suited for multi-module, hierarchical 
project layouts; it'd be so much easier with gradle...

Anyway, the attached patch seems to work. It's terribly hacky and terribly 
ugly, but it does work. Try it from module-level or top-level (lucene or solr, 
I didn't try to make it work at top-top level).

{code}
cd lucene
ant test -Dtests.class=*TestSpellChecker*
...
BUILD SUCCESSFUL
{code}
but:
{code}
ant test -Dtests.class=*foo*
...
BUILD FAILED
C:\Work\lucene-solr-svn\trunk\lucene\common-build.xml:1278: Not even a single 
test was executed (a typo in the filter pattern maybe)?
{code}

Let me know what you think. Should I commit it? In spite of how ugly it is?

 Fail the build if ant test didn't execute any tests (everything filtered out).
 --

 Key: LUCENE-5283
 URL: https://issues.apache.org/jira/browse/LUCENE-5283
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5283.patch, LUCENE-5283.patch


 This should be an optional setting that defaults to 'false' (the build 
 proceeds).



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 943 - Failure!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/943/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 26574 lines...]
-check-forbidden-java-apis:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.6
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.6
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
the classpath!
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
which executor it is associated with]
[forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
(TestDocBasedVersionConstraints.java:361)
[forbidden-apis] Scanned 2515 (and 1376 related) class file(s) for forbidden 
API invocations (in 6.40s), 1 error(s).

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:428: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:67: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build.xml:286: Check for 
forbidden API calls failed, see log.

Total time: 95 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 8024 - Still Failing!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8024/
Java: 32bit/jdk1.6.0_45 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 32455 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:367: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:66: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:135: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 53 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.6.0_45 -server -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1721 - Failure

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1721/

All tests passed

Build Log:
[...truncated 26552 lines...]
-check-forbidden-java-apis:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.6
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.6
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.1
[forbidden-apis] Reading API signatures: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning for API signatures and dependencies...
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.collation.ICUCollationKeyAnalyzer' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProviderFactory' cannot be loaded. Please 
fix the classpath!
[forbidden-apis] WARNING: The referenced class 
'org.apache.lucene.analysis.uima.ae.AEProvider' cannot be loaded. Please fix 
the classpath!
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newFixedThreadPool(int) [Spawns threads with 
vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's 
DefaultSolrThreadFactory) and name threads so that you can tell (by its name) 
which executor it is associated with]
[forbidden-apis]   in org.apache.solr.update.TestDocBasedVersionConstraints 
(TestDocBasedVersionConstraints.java:361)
[forbidden-apis] Scanned 2515 (and 1376 related) class file(s) for forbidden 
API invocations (in 2.73s), 1 error(s).

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:428:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:67:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build.xml:286:
 Check for forbidden API calls failed, see log.

Total time: 75 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_45) - Build # 8120 - Still Failing!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/8120/
Java: 32bit/jdk1.7.0_45 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 32838 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:417: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:356: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:66: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:135: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 54 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_45 -client -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-10-31 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810891#comment-13810891
 ] 

David commented on SOLR-5027:
-

Getting the following error; please advise how to fix it:

3095070 [http-bio-8080-exec-8] ERROR org.apache.solr.core.SolrCore  – 
java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$CollapsingScoreCollector.collect(CollapsingQParserPlugin.java:409)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:910)
at 
org.apache.solr.request.SimpleFacets.parseParams(SimpleFacets.java:219)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:549)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:265)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1008)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

13095072 [http-bio-8080-exec-8] ERROR 
org.apache.solr.servlet.SolrDispatchFilter  – 
null:java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$CollapsingScoreCollector.collect(CollapsingQParserPlugin.java:409)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:910)
at 
org.apache.solr.request.SimpleFacets.parseParams(SimpleFacets.java:219)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:549)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:265)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_45) - Build # 3412 - Still Failing!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3412/
Java: 32bit/jdk1.7.0_45 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 32826 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:417: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:356: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:66: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:135:
 The following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 103 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_45 -client -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-10-31 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810910#comment-13810910
 ] 

David commented on SOLR-5027:
-

Looks like the error only happens on queries where I use tagging.

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups : 17 seconds.
 CollapsingQParserPlugin: 300 milli-seconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:*  The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b109) - Build # 8025 - Still Failing!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8025/
Java: 64bit/jdk1.8.0-ea-b109 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 36038 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:367: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:66: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:135: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 53 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b109 -XX:+UseCompressedOops 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5392) extend solrj apis to cover collection management

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810912#comment-13810912
 ] 

ASF subversion and git services commented on SOLR-5392:
---

Commit 1537787 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1537787 ]

SOLR-5392: Extend solrj apis to cover collection management.

 extend solrj apis to cover collection management
 

 Key: SOLR-5392
 URL: https://issues.apache.org/jira/browse/SOLR-5392
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.5
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Attachments: 
 0001-SOLR-5392.-extend-solrj-apis-to-cover-collection-man.patch, 
 SOLR-5392.patch


 It would be useful to extend solrj APIs to cover collection management calls: 
 https://cwiki.apache.org/confluence/display/solr/Collections+API 
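
For reference, the Collections API can already be driven from SolrJ today with a generic request, which is roughly what typed helpers would wrap; the ZooKeeper address, collection name, and shard counts below are placeholders:

{code}
import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class CreateCollectionRaw {
  public static void main(String[] args) throws SolrServerException, IOException {
    CloudSolrServer server = new CloudSolrServer("localhost:2181");

    // Build the CollectionsHandler call by hand: the action plus its parameters.
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "CREATE");
    params.set("name", "mycollection");
    params.set("numShards", 2);
    params.set("replicationFactor", 1);

    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");

    NamedList<Object> response = server.request(request);
    System.out.println(response);

    server.shutdown();
  }
}
{code}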



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5392) extend solrj apis to cover collection management

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810915#comment-13810915
 ] 

ASF subversion and git services commented on SOLR-5392:
---

Commit 1537790 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1537790 ]

SOLR-5392: Extend solrj apis to cover collection management.

 extend solrj apis to cover collection management
 

 Key: SOLR-5392
 URL: https://issues.apache.org/jira/browse/SOLR-5392
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.5
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Attachments: 
 0001-SOLR-5392.-extend-solrj-apis-to-cover-collection-man.patch, 
 SOLR-5392.patch


 It would be useful to extend solrj APIs to cover collection management calls: 
 https://cwiki.apache.org/confluence/display/solr/Collections+API 



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-10-31 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810924#comment-13810924
 ] 

David commented on SOLR-5027:
-

Here is an example query where I'm getting the error: 

/productQuery?fq=discontinued:falsefq={!tag=manufacturer_string}manufacturer_string:(delta%20OR%20kohler)fq=siteid:82sort=score%20descfacet=truefacet.mincount=1facet.sort=indexstart=0rows=48fl=productid,manufacturer,uniqueFinish,uniqueid,productCompositeid,scorefacet.query={!ex=onSale}onSale:truefacet.query={!ex=rating}rating:[4%20TO%20*]facet.query={!ex=rating}rating:[3%20TO%20*]facet.query={!ex=rating}rating:[2%20TO%20*]facet.query={!ex=rating}rating:[1%20TO%20*]facet.query={!ex=MadeinAmerica_boolean}MadeinAmerica_boolean:yesfacet.query={!ex=inStock}inStock:truefacet.query={!ex=PulloutSpray_string}PulloutSpray_string:yesfacet.query={!ex=HandlesIncluded_string}HandlesIncluded_string:yesfacet.query={!ex=Electronic_string}Electronic_string:yesfacet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[0%20TO%201]facet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[1%20TO%202]facet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[2%20TO%203]facet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[4%20TO%205]facet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[3%20TO%204]facet.query={!ex=FlowRateGPM_numeric}FlowRateGPM_numeric:[5%20TO%20*]facet.query={!ex=ADA_string}ADA_string:yesfacet.query={!ex=WaterSenseCertified_string}WaterSenseCertified_string:yesfacet.query={!ex=WaterfallFaucet_boolean}WaterfallFaucet_boolean:yesfacet.query={!ex=InstallationAvailable_string}InstallationAvailable_string:yesfacet.query={!ex=LowLeadCompliant_string}LowLeadCompliant_string:yesfacet.query={!ex=DrainAssemblyIncluded_string}DrainAssemblyIncluded_string:yesfacet.query={!ex=EscutcheonIncluded_string}EscutcheonIncluded_string:yesfacet.field=NumberOfHandles_numericfacet.field=pricebook_1_fsfacet.field=SpoutReach_numericfacet.field=SpoutHeight_numericfacet.field=FaucetCenters_numericfacet.field=OverallHeight_numericfacet.field=FaucetHoles_numericfacet.field=HandleStyle_stringfacet.field=masterFinish_stringfacet.field={!ex=manufacturer_string}manufacturer_stringfacet.field=HandleMaterial_stringfacet.field=ValveType_stringfacet.field=Theme_stringfacet.field=MountingType_stringqt=/productQueryqf=sku^9.0%20upc^9.1%20keywords_82_txtws^1.9%20uniqueid^9.0%20series^2.8%20productTitle^1.2%20productid^9.0%20manufacturer^4.0%20masterFinish^1.5%20theme^1.1%20categoryNames_82_txt^0.2%20finish^1.4pf=keywords_82_txtws^2.1%20productTitle^1.5%20manufacturer^4.0%20finish^1.9bf=linear(popularity_82_i,1,2)^3.0q.alt=categories_82_is:108503

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups : 17 seconds.
 CollapsingQParserPlugin: 300 milli-seconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:*  The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: 

[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-10-31 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810939#comment-13810939
 ] 

David commented on SOLR-5027:
-

When I take the {!tag} out I don't get the error. It looks like the 
CollapsingQParserPlugin doesn't work with tagging. Can you confirm?

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups : 17 seconds.
 CollapsingQParserPlugin: 300 milli-seconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:*  The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-10-31 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13810941#comment-13810941
 ] 

David commented on SOLR-5027:
-

I have posted this information on Solr User: 
http://lucene.472066.n3.nabble.com/Error-with-CollapsingQParserPlugin-when-trying-to-use-tagging-td4098709.html

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups : 17 seconds.
 CollapsingQParserPlugin: 300 milli-seconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:*  The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 422 - Still Failing

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/422/

All tests passed

Build Log:
[...truncated 32513 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/build.xml:435:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/build.xml:367:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/extra-targets.xml:66:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/extra-targets.xml:135:
 The following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 187 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_45) - Build # 3337 - Failure!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3337/
Java: 32bit/jdk1.7.0_45 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 33141 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:428: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:367: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\extra-targets.xml:66: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\extra-targets.xml:135: 
The following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 105 minutes 54 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_45 -client -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1015: POMs out of sync

2013-10-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1015/

All tests passed

Build Log:
[...truncated 36479 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 593 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b109) - Build # 8028 - Still Failing!

2013-10-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8028/
Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 32479 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:367: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:66: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:135: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
* ./solr/core/src/test/org/apache/solr/cloud/TestDistribDocBasedVersion.java
* ./solr/core/src/test/org/apache/solr/search/TestStressUserVersions.java
* 
./solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java

Total time: 45 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5189) Numeric DocValues Updates

2013-10-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13811057#comment-13811057
 ] 

ASF subversion and git services commented on LUCENE-5189:
-

Commit 1537829 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1537829 ]

LUCENE-5189: add CHANGES

 Numeric DocValues Updates
 -

 Key: LUCENE-5189
 URL: https://issues.apache.org/jira/browse/LUCENE-5189
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/index
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5189-4x.patch, LUCENE-5189-4x.patch, 
 LUCENE-5189-no-lost-updates.patch, LUCENE-5189-segdv.patch, 
 LUCENE-5189-updates-order.patch, LUCENE-5189-updates-order.patch, 
 LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, 
 LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, 
 LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, 
 LUCENE-5189_process_events.patch, LUCENE-5189_process_events.patch


 In LUCENE-4258 we started to work on incremental field updates; however, the 
 amount of changes is immense and hard to follow/consume. The reason is that 
 we targeted postings, stored fields, DV etc., all from the get-go.
 I'd like to start afresh here, with numeric-dv-field updates only. There are 
 a couple of reasons to that:
 * NumericDV fields should be easier to update, if e.g. we write all the 
 values of all the documents in a segment for the updated field (similar to 
 how livedocs work, and previously norms).
 * It's a fairly contained issue, attempting to handle just one data type to 
 update, yet requires many changes to core code which will also be useful for 
 updating other data types.
 * It has value in and of itself, and we don't need to allow updating all the 
 data types in Lucene at once ... we can do that gradually.
 I have a working patch already, which I'll upload next, explaining the 
 changes.
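
As a rough sketch of the API this issue adds (against a trunk/4.6 build; field names, values, and the in-memory directory are arbitrary), updating a numeric docvalues field by term without re-indexing the whole document looks roughly like:

{code}
import java.io.IOException;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NumericDVUpdateSketch {
  public static void main(String[] args) throws IOException {
    Directory dir = new RAMDirectory();
    IndexWriterConfig iwc =
        new IndexWriterConfig(Version.LUCENE_46, new WhitespaceAnalyzer(Version.LUCENE_46));
    IndexWriter writer = new IndexWriter(dir, iwc);

    Document doc = new Document();
    doc.add(new StringField("id", "1", Field.Store.YES));
    doc.add(new NumericDocValuesField("price", 10L));
    writer.addDocument(doc);
    writer.commit();

    // Update only the docvalues field of the matching document.
    writer.updateNumericDocValue(new Term("id", "1"), "price", 20L);

    writer.close();
    dir.close();
  }
}
{code}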



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org