[jira] [Created] (SOLR-5554) Ordering Issue with

2013-12-13 Thread Deepak Mishra (JIRA)
Deepak Mishra created SOLR-5554:
---

 Summary: Ordering Issue with 
 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
Reporter: Deepak Mishra









[jira] [Updated] (SOLR-5554) Ordering Issue with Collapse while using sort field min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


  Description: 
We faced an ordering issue without passing any sorting parameter and with the 
same filters in both queries.

Query1
fq={!collapse field=company_id}

Query2
fq={!collapse field=company_id min=price}

Query3
For debugging Query2, we added the score field in fl=score,offering_id,company_id...
That actually solved the document order issue.

Query4
But when we passed a selective exclude in the facet field of Query3, it returned 
documents in the correct order, but the response contained a NullPointerException 
(not the one in SOLR-5416) and no facets.
facet.field={!ex=samsung}brand
fq={!tag=samsung}(brand:samsung)
The error is
NullPointerException at 
org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
  Environment: Solr 4.6
Affects Version/s: 4.6
  Summary: Ordering Issue with Collapse while using sort field min max  (was: Ordering Issue with )

 Ordering Issue with Collapse while using sort field min max
 ---

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)






[jira] [Updated] (SOLR-5554) Ordering Issue with Collapse while using sort field min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Description: 
We faced an ordering issue without passing any sorting parameter and with the 
same filters in both queries.

Query1
fq={!collapse field=company_id}

Query2
fq={!collapse field=company_id min=price}

Query3
For debugging Query2, we added the score field in fl=score,offering_id,company_id...
That actually solved the document order issue.

Query4
But when we passed a selective exclude in the facet field of Query3, it returned 
documents in the correct order, but the response contained a NullPointerException 
(not the one in SOLR-5416) and no facets.
facet.field={!ex=samsung}brand
fq={!tag=samsung}(brand:samsung)
The error is
NullPointerException at 
org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)

Query5
Removing score from fl in Query4 removes the error.

  was:
We faced an ordering issue without passing any sorting parameter and with the 
same filters in both queries.

Query1
fq={!collapse field=company_id}

Query2
fq={!collapse field=company_id min=price}

Query3
For debugging Query2, we added the score field in fl=score,offering_id,company_id...
That actually solved the document order issue.

Query4
But when we passed a selective exclude in the facet field of Query3, it returned 
documents in the correct order, but the response contained a NullPointerException 
(not the one in SOLR-5416) and no facets.
facet.field={!ex=samsung}brand
fq={!tag=samsung}(brand:samsung)
The error is
NullPointerException at 
org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)


 Ordering Issue with Collapse while using sort field min max
 ---

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.
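
For reference, a sketch of the full requests behind Query2 through Query5 above, 
assuming a default single-core setup and the /select handler (host, core layout, 
and the q parameter are illustrative; the fq, fl, and facet parameters are the 
ones from the report, shown unencoded for readability):

# Query2: collapse on company_id, keeping the min-price document per group
http://localhost:8983/solr/select?q=*:*&fq={!collapse field=company_id min=price}

# Query3: the same collapse, with score added to fl, which restores the expected order
http://localhost:8983/solr/select?q=*:*&fl=score,offering_id,company_id&fq={!collapse field=company_id min=price}

# Query4: adding a tagged filter plus an excluded facet triggers the NullPointerException
http://localhost:8983/solr/select?q=*:*&fl=score,offering_id,company_id&fq={!collapse field=company_id min=price}&fq={!tag=samsung}(brand:samsung)&facet=true&facet.field={!ex=samsung}brand

# Query5: Query4 without score in fl; the error goes away
http://localhost:8983/solr/select?q=*:*&fl=offering_id,company_id&fq={!collapse field=company_id min=price}&fq={!tag=samsung}(brand:samsung)&facet=true&facet.field={!ex=samsung}brand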






[jira] [Updated] (SOLR-5554) Ordering Issue with Collapse while using sort field min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: Query5.txt
Query4.txt
Query3.txt
Query2.txt
Query1.txt
Error On Query4

 Ordering Issue with Collapse while using sort field min max
 ---

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Summary: Ordering Issue when Collapsing using min max  (was: Ordering Issue with Collapse while using sort field min max)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 1088 - Failure!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1088/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([393A505DAD0BF697]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)
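
The failing assertion is QueryEqualityTest's coverage check: the suite fails if a 
registered value-source parser (here, the new "concat" parser from SOLR-3702) is 
never exercised by a dedicated test method. A minimal sketch of the kind of method 
the check expects, assuming the suite's existing assertFuncEquals helper (the 
method name and field names are illustrative, not the committed fix):

    // In org.apache.solr.search.QueryEqualityTest -- a sketch only.
    // assertFuncEquals parses each input as a function query and asserts they
    // are all equal, which also marks the "concat" parser as covered.
    public void testFuncConcat() throws Exception {
      assertFuncEquals("concat(foo_s,bar_s)",
                       "concat(foo_s, bar_s)");
    }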




Build Log:
[...truncated 10026 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 28866 T33 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/J0/./solrtest-QueryEqualityTest-1386919853131
   [junit4]   2 28867 T33 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files/solr/collection1/'
   [junit4]   2 28875 T33 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 28876 T33 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 29052 T33 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_47
   [junit4]   2 29225 T33 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 29226 T33 oass.IndexSchema.readSchema Reading Solr Schema from 
schema15.xml
   [junit4]   2 29263 T33 oass.IndexSchema.readSchema [null] Schema name=test
   [junit4]   2 30153 T33 oass.IndexSchema.readSchema default search field in 
schema is text
   [junit4]   2 30157 T33 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 30160 T33 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 30165 T33 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 30204 T33 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 30205 T33 oasc.SolrResourceLoader.locateSolrHome using system 
property solr.solr.home: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files/solr
   

[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: Query1.txt

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query1.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query5.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: Query5.txt

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 8573 - Failure!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8573/
Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([6103039DAF19D478]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:662)




Build Log:
[...truncated 9703 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 481925 T2084 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test/J0/./solrtest-QueryEqualityTest-1386923402517
   [junit4]   2 481925 T2084 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/'
   [junit4]   2 481926 T2084 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 481926 T2084 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 481978 T2084 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_47
   [junit4]   2 482041 T2084 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 482041 T2084 oass.IndexSchema.readSchema Reading Solr Schema 
from schema15.xml
   [junit4]   2 482049 T2084 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 482467 T2084 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 482470 T2084 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 482472 T2084 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 482475 T2084 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 482484 T2084 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 482484 T2084 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 

[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 2218 - Failure

2013-12-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/2218/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([F318814D889A5F05]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)




Build Log:
[...truncated 9664 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 582702 T1699 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build/solr-core/test/J0/./solrtest-QueryEqualityTest-1386924304150
   [junit4]   2 582703 T1699 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build/solr-core/test-files/solr/collection1/'
   [junit4]   2 582704 T1699 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 582705 T1699 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 582834 T1699 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_47
   [junit4]   2 582920 T1699 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 582921 T1699 oass.IndexSchema.readSchema Reading Solr Schema 
from schema15.xml
   [junit4]   2 582931 T1699 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 583449 T1699 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 583452 T1699 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 583453 T1699 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 583457 T1699 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 583469 T1699 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 583470 T1699 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_45) - Build # 8672 - Failure!

2013-12-13 Thread Shalin Shekhar Mangar
I'm fixing this.

On Fri, Dec 13, 2013 at 1:25 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/8672/
 Java: 64bit/jdk1.7.0_45 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

 Error Message:
 testParserCoverage was run w/o any other method explicitly testing val 
 parser: concat

 Stack Trace:
 java.lang.AssertionError: testParserCoverage was run w/o any other method 
 explicitly testing val parser: concat
 at __randomizedtesting.SeedInfo.seed([AACD3BDDCEDD70D9]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:744)




 Build Log:
 [...truncated 10332 lines...]
[junit4] Suite: org.apache.solr.search.QueryEqualityTest
[junit4]   2 589019 T1850 oas.SolrTestCaseJ4.initCore initCore
[junit4]   2 Creating dataDir: 
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/./solrtest-QueryEqualityTest-1386920348074
[junit4]   2 589019 T1850 oasc.SolrResourceLoader.init new 
 SolrResourceLoader for directory: 
 '/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/'
[junit4]   2 589020 T1850 oasc.SolrResourceLoader.replaceClassLoader 
 Adding 
 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
  to classloader
[junit4]   2 589020 T1850 oasc.SolrResourceLoader.replaceClassLoader 
 Adding 
 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/lib/README'
  to classloader
[junit4]   2 589066 T1850 oasc.SolrConfig.init Using Lucene 
 MatchVersion: LUCENE_50
[junit4]   2 589114 T1850 oasc.SolrConfig.init Loaded SolrConfig: 
 solrconfig.xml
[junit4]   2 589115 T1850 oass.IndexSchema.readSchema Reading Solr Schema 
 from schema15.xml
[junit4]   2 589128 T1850 oass.IndexSchema.readSchema [null] Schema 
 name=test
[junit4]   2 589428 T1850 oass.IndexSchema.readSchema default search 
 field in schema is text
[junit4]   2 589430 T1850 oass.IndexSchema.readSchema unique key field: id
[junit4]   2 589431 T1850 oass.FileExchangeRateProvider.reload Reloading 
 exchange rates from file currency.xml
[junit4]   2 589433 T1850 oass.FileExchangeRateProvider.reload Reloading 
 exchange rates from file currency.xml
[junit4]   2 589441 T1850 

[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query2.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query5.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query1.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: Query5.txt
Query4.txt
Query3.txt
Query2.txt
Query1.txt
Error On Query4

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Error On Query4)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query4.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Updated] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Mishra updated SOLR-5554:


Attachment: (was: Query3.txt)

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced an ordering issue without passing any sorting parameter and with 
 the same filters in both queries.
 Query1
 fq={!collapse field=company_id}
 Query2
 fq={!collapse field=company_id min=price}
 Query3
 For debugging Query2, we added the score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue.
 Query4
 But when we passed a selective exclude in the facet field of Query3, it 
 returned documents in the correct order, but the response contained a 
 NullPointerException (not the one in SOLR-5416) and no facets.
 facet.field={!ex=samsung}brand
 fq={!tag=samsung}(brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query4 removes the error.






[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-12-13 Thread Deepak Mishra (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847343#comment-13847343 ]

Deepak Mishra commented on SOLR-5027:
-

Joel, I created a new JIRA and attached the queries in SOLR-5554

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin*.
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high-performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example, in one performance test of a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups: 17 seconds.
 CollapsingQParserPlugin: 300 milliseconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore: removes docs with a null value in the collapse field (default).
 expand: treats each doc with a null value in the collapse field as a 
 separate group.
 collapse: collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:* The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.
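
 A combined usage sketch (field names are illustrative, not from the patch): 
 collapse on a string field, keep the lowest-priced document per group, and 
 treat docs with no value in the collapse field as their own groups:
 {code}
 fq={!collapse field=group_s min=price_f nullPolicy=expand}
 {code}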






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_60-ea-b01) - Build # 3564 - Failure!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3564/
Java: 64bit/jdk1.7.0_60-ea-b01 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([CFCBFD4CA93239B3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 10062 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 188312 T376 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\.\solrtest-QueryEqualityTest-1386926025369
   [junit4]   2 188316 T376 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\collection1\'
   [junit4]   2 188319 T376 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 188319 T376 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 188450 T376 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_50
   [junit4]   2 188546 T376 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 188548 T376 oass.IndexSchema.readSchema Reading Solr Schema 
from schema15.xml
   [junit4]   2 188572 T376 oass.IndexSchema.readSchema [null] Schema name=test
   [junit4]   2 189188 T376 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 189197 T376 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 189199 T376 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 189206 T376 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 189232 T376 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 189232 T376 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 

Re: mlockall?

2013-12-13 Thread Mikhail Khludnev
On Fri, Dec 13, 2013 at 7:45 AM, Otis Gospodnetic 
otis.gospodne...@gmail.com wrote:


 How come Lucene/Solr don't make any use of that?


indeed searchhub.org/2013/05/21/mlockall-for-all/


-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com
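
For context, mlockall(2) locks a process's pages in RAM so the OS cannot swap 
them out. A minimal sketch of calling it from Java via JNA, in the spirit of 
the linked post (JNA on the classpath and the Linux flag values from 
sys/mman.h are assumptions; error handling omitted):

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class MLockAll {
        // Maps the libc call; returns 0 on success, -1 on failure.
        public interface CLib extends Library {
            CLib INSTANCE = (CLib) Native.loadLibrary("c", CLib.class);
            int mlockall(int flags);
        }

        private static final int MCL_CURRENT = 1; // lock currently mapped pages
        private static final int MCL_FUTURE  = 2; // lock pages mapped later

        public static void main(String[] args) {
            // Needs CAP_IPC_LOCK or a sufficient memlock ulimit to succeed.
            int rc = CLib.INSTANCE.mlockall(MCL_CURRENT | MCL_FUTURE);
            System.out.println(rc == 0 ? "memory locked" : "mlockall failed");
        }
    }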


[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_60-ea-b01) - Build # 8574 - Still Failing!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8574/
Java: 64bit/jdk1.7.0_60-ea-b01 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([9408024A7683358A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 10192 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 310564 T1256 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test/J0/./solrtest-QueryEqualityTest-1386929062989
   [junit4]   2 310564 T1256 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/'
   [junit4]   2 310565 T1256 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 310565 T1256 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 310603 T1256 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_47
   [junit4]   2 310631 T1256 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 310632 T1256 oass.IndexSchema.readSchema Reading Solr Schema 
from schema15.xml
   [junit4]   2 310637 T1256 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 310838 T1256 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 310840 T1256 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 310841 T1256 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 310843 T1256 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 310851 T1256 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 310851 T1256 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 

[jira] [Commented] (SOLR-3702) String concatenation function

2013-12-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847363#comment-13847363
 ] 

ASF subversion and git services commented on SOLR-3702:
---

Commit 1550676 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1550676 ]

SOLR-3702: Reverting commit because it breaks QueryEqualityTest
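The coverage check that failed here is QueryEqualityTest's afterClassParserCoverageTest, which trips whenever a value-source parser is registered without an explicit equality test. A hedged sketch of the kind of method that would satisfy it for concat, using the suite's assertFuncEquals helper (the exact function arguments are assumptions):

{code}
// Hedged sketch, not the committed test. Each registered value-source
// parser needs a method like this in QueryEqualityTest; assertFuncEquals
// asserts that all the given function strings parse to equal queries.
public void testFuncConcat() throws Exception {
  assertFuncEquals("concat(field1,field2)",
                   "concat(field1, field2)");
}
{code}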

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3702) String concatenation function

2013-12-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847365#comment-13847365
 ] 

ASF subversion and git services commented on SOLR-3702:
---

Commit 1550677 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1550677 ]

SOLR-3702: Reverting commit because it breaks QueryEqualityTest

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-3702) String concatenation function

2013-12-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-3702:
-


Re-opening and reverting commit because it breaks QueryEqualityTest.

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4449) Enable backup requests for the internal solr load balancer

2013-12-13 Thread philip hoy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847375#comment-13847375
 ] 

philip hoy commented on SOLR-4449:
--

Otis, it does work in the way that you have suggested: any backup requests are 
only sent after a configurable backupRequestDelay millis, which could of course 
be 0. The number of concurrent requests is limited by maximumConcurrentRequests.

Also, I have recently implemented in-flight request counting and added that to 
the backup request load balancer. Now the load balancer will pick the server 
that is currently handling the fewest requests. 

Currently the backup request load balancer is running live in our production 
environment alongside a standard solr load balancer. I have configured a socket 
timeout of 30 secs and a retry after 15 secs; also worth noting is that we have 
only two replicas of the data. Here are some numbers from yesterday:

Seconds   Count(standard)   Count(with backups)
0.0       279635            281384
5.0       3141              2668
10.0      585               421
15.0      176               209
20.0      145               54
25.0      147               42
30.0      137               79
35.0      22                14
40.0      30                11
45.0      20                37
50.0      17                5
55.0      7                 0
60.0      213               68

As you can see, the numbers do look a little better with backups. However, I 
think that with more replicas of the data one could be more aggressive with the 
retry time without the risk of flooding all the servers, in which case the 
improvement would be more marked. 
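The mechanism itself is small. A minimal, self-contained sketch of the backup-request idea in plain java.util.concurrent (the server names and latencies are made up, and this is not the attached patch):

{code}
import java.util.concurrent.*;

// Hedged sketch of the backup-request pattern described above: fire the
// primary request, and if it has not completed within backupRequestDelay
// millis, fire one backup; take whichever response arrives first.
public class BackupRequestDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    long backupRequestDelay = 100; // millis; could of course be 0
    CompletionService<String> ecs = new ExecutorCompletionService<>(pool);
    ecs.submit(() -> slowQuery("server-1"));            // primary request
    Future<String> first = ecs.poll(backupRequestDelay, TimeUnit.MILLISECONDS);
    if (first == null) {                                // primary is slow
      ecs.submit(() -> slowQuery("server-2"));          // backup request
      first = ecs.take();                               // first of the two
    }
    System.out.println(first.get());
    pool.shutdownNow();                                 // cancel the loser
  }

  static String slowQuery(String server) throws InterruptedException {
    Thread.sleep("server-1".equals(server) ? 500 : 50); // simulated latency
    return "response from " + server;
  }
}
{code}

The delayed second submit is what keeps the extra load bounded: a backup only goes out for requests that are already slower than the configured threshold.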

 Enable backup requests for the internal solr load balancer
 --

 Key: SOLR-4449
 URL: https://issues.apache.org/jira/browse/SOLR-4449
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor
 Attachments: SOLR-4449.patch, SOLR-4449.patch, SOLR-4449.patch, 
 patch-4449.txt, solr-back-request-lb-plugin.jar


 Add the ability to configure the built-in solr load balancer such that it 
 submits a backup request to the next server in the list if the initial 
 request takes too long. Employing such an algorithm could improve the latency 
 of the 9xth percentile albeit at the expense of increasing overall load due 
 to additional requests. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4449) Enable backup requests for the internal solr load balancer

2013-12-13 Thread philip hoy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847375#comment-13847375
 ] 

philip hoy edited comment on SOLR-4449 at 12/13/13 10:47 AM:
-

Otis, it does work in the way that you have suggested: any backup requests are 
only sent after a configurable backupRequestDelay millis, which could of course 
be 0. The number of concurrent requests is limited by maximumConcurrentRequests.

Also, I have recently implemented in-flight request counting and added that to 
the backup request load balancer. Now the load balancer will pick the server 
that is currently handling the fewest requests. 

Currently the backup request load balancer is running live in our production 
environment alongside a standard solr load balancer. I have configured a socket 
timeout of 30 secs and a retry after 15 secs; also worth noting is that we have 
only two replicas of the data. Here are some numbers from yesterday:

||Seconds||  Count(standard)|| Count(with backups)   ||  
|0.0|279635 |281384 |
|5.0|3141   |2668 |
|10.0   |585|421|
|15.0   |176|209|
|20.0   |145|54 |
|25.0   |147|42 |
|30.0   |137|79 |
|35.0   |22 |14 |
|40.0   |30 |11 |
|45.0   |20 |37| 
|50.0   |17 |5 |
|55.0   |7  |0|
|60.0   |213|68|

As you can see, the numbers do look a little better with backups. However, I 
think that with more replicas of the data one could be more aggressive with the 
retry time without the risk of flooding all the servers, in which case the 
improvement would be more marked. 


was (Author: phloy):
Otis, it does work in the way that you have suggested: any backup requests are 
only sent after a configurable backupRequestDelay millis, which could of course 
be 0. The number of concurrent requests is limited by maximumConcurrentRequests.

Also, I have recently implemented in-flight request counting and added that to 
the backup request load balancer. Now the load balancer will pick the server 
that is currently handling the fewest requests. 

Currently the backup request load balancer is running live in our production 
environment alongside a standard solr load balancer. I have configured a socket 
timeout of 30 secs and a retry after 15 secs; also worth noting is that we have 
only two replicas of the data. Here are some numbers from yesterday:

Seconds   Count(standard)   Count(with backups)
0.0       279635            281384
5.0       3141              2668
10.0      585               421
15.0      176               209
20.0      145               54
25.0      147               42
30.0      137               79
35.0      22                14
40.0      30                11
45.0      20                37
50.0      17                5
55.0      7                 0
60.0      213               68

As you can see, the numbers do look a little better with backups. However, I 
think that with more replicas of the data one could be more aggressive with the 
retry time without the risk of flooding all the servers, in which case the 
improvement would be more marked. 

 Enable backup requests for the internal solr load balancer
 --

 Key: SOLR-4449
 URL: https://issues.apache.org/jira/browse/SOLR-4449
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor
 Attachments: SOLR-4449.patch, SOLR-4449.patch, SOLR-4449.patch, 
 patch-4449.txt, solr-back-request-lb-plugin.jar


 Add the ability to configure the built-in solr load balancer such that it 
 submits a backup request to the next server in the list if the initial 
 request takes too long. Employing such an algorithm could improve the latency 
 of the 9xth percentile albeit at the expense of increasing overall load due 
 to additional requests. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_60-ea-b01) - Build # 3486 - Failure!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3486/
Java: 64bit/jdk1.7.0_60-ea-b01 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing val parser: 
concat

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing val parser: concat
at __randomizedtesting.SeedInfo.seed([4213C3E2C0ABCE50]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 10780 lines...]
   [junit4] Suite: org.apache.solr.search.QueryEqualityTest
   [junit4]   2 2673900 T7174 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\solrtest-QueryEqualityTest-1386933602999
   [junit4]   2 2673904 T7174 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test-files\solr\collection1\'
   [junit4]   2 2673904 T7174 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 2673904 T7174 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 2673995 T7174 oasc.SolrConfig.init Using Lucene 
MatchVersion: LUCENE_47
   [junit4]   2 2674051 T7174 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2 2674051 T7174 oass.IndexSchema.readSchema Reading Solr Schema 
from schema15.xml
   [junit4]   2 2674063 T7174 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 2674494 T7174 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 2674496 T7174 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 2674498 T7174 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 2674503 T7174 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2 2674520 T7174 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 2674520 T7174 oasc.SolrResourceLoader.locateSolrHome using 
system property 

[jira] [Created] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread Sushil Bajracharya (JIRA)
Sushil Bajracharya created SOLR-:


 Summary: CloudSolrServer need not declare to throw 
MalformedURLException
 Key: SOLR-
 URL: https://issues.apache.org/jira/browse/SOLR-
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya


Currently CloudSolrServer declares to throw MalformedURLException for some of 
its constructors. This does not seem necessary.
 
Details based on looking through Solr 4.6 release code:
 
CloudSolrServer has the following constructor that declares a checked exception 
MalformedURLException..
{code}
public CloudSolrServer(String zkHost) throws MalformedURLException {
  this.zkHost = zkHost;
  this.myClient = HttpClientUtil.createClient(null);
  this.lbServer = new LBHttpSolrServer(myClient);
  this.lbServer.setRequestWriter(new BinaryRequestWriter());
  this.lbServer.setParser(new BinaryResponseParser());
  this.updatesToLeaders = true;
  shutdownLBHttpSolrServer = true;
}
{code}
 
The only thing that seems capable of throwing MalformedURLException is 
LBHttpSolrServer’s constructor:
 
{code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
  throws MalformedURLException {
this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
{code}
 
which calls ..
 
{code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser, 
String... solrServerUrl)
  throws MalformedURLException {
clientIsInternal = (httpClient == null);
this.parser = parser;
if (httpClient == null) {
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set(HttpClientUtil.PROP_USE_RETRY, false);
  this.httpClient = HttpClientUtil.createClient(params);
} else {
  this.httpClient = httpClient;
}
for (String s : solrServerUrl) {
  ServerWrapper wrapper = new ServerWrapper(makeServer(s)); 
  aliveServers.put(wrapper.getKey(), wrapper);
}
updateAliveList();
  }
{code}
 
which calls ..
 
{code}
protected HttpSolrServer makeServer(String server) throws MalformedURLException {
HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
if (requestWriter != null) {
  s.setRequestWriter(requestWriter);
}
if (queryParams != null) {
  s.setQueryParams(queryParams);
}
return s;
  }
{code}
 
Note that makeServer(String server) above does not need to throw 
MalformedURLException, since the only thing that seems capable of throwing 
MalformedURLException is HttpSolrServer’s constructor (which does not):
 
{code}
public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
  this.baseUrl = baseURL;
  if (baseUrl.endsWith("/")) {
    baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
  }
  if (baseUrl.indexOf('?') >= 0) {
    throw new RuntimeException(
        "Invalid base url for solrj.  The base URL must not contain parameters: "
        + baseUrl);
  }

if (client != null) {
  httpClient = client;
  internalClient = false;
} else {
  internalClient = true;
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
  params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
  params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
  httpClient =  HttpClientUtil.createClient(params);
}

this.parser = parser;
  }
{code}
 
I see nothing above that’d throw MalformedURLException. It throws a 
RuntimeException when the baseUrl does not match a certain pattern; maybe that 
was intended to be a MalformedURLException.
 
It seems like an error or oversight that CloudSolrServer declares to throw 
MalformedURLException for some of its constructors. 

This could be fixed by making LBHttpSolrServer not declare the 
MalformedURLException, and thus other callers to it do not need to do so.
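
Concretely, a hedged sketch of the resulting constructor (mirroring the code quoted above, not the attached patch):

{code}
// Hedged sketch: once LBHttpSolrServer's constructors stop declaring
// MalformedURLException, the throws clause here can simply be dropped.
public CloudSolrServer(String zkHost) {
  this.zkHost = zkHost;
  this.myClient = HttpClientUtil.createClient(null);
  this.lbServer = new LBHttpSolrServer(myClient); // would no longer throw
  this.lbServer.setRequestWriter(new BinaryRequestWriter());
  this.lbServer.setParser(new BinaryResponseParser());
  this.updatesToLeaders = true;
  shutdownLBHttpSolrServer = true;
}
{code}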
 
 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5248) Data Import Handler support for Twitter

2013-12-13 Thread Hasan Emre ERKEK (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hasan Emre ERKEK updated SOLR-5248:
---

Attachment: data-config.xml

example of data-config.xml

 Data Import Handler support for Twitter
 ---

 Key: SOLR-5248
 URL: https://issues.apache.org/jira/browse/SOLR-5248
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Affects Versions: 4.4
Reporter: Hasan Emre ERKEK
Priority: Minor
  Labels: DIH, Twitter
 Attachments: SOLR-5248.patch, data-config.xml


 The Twitter Entity Processor allows indexing the Twitter stream using Solr



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: mlockall?

2013-12-13 Thread Otis Gospodnetic
Hi Mr. Paddy, ;)

Right, right, I meant to say that I know about that blog post but my Q
is:
If mlockall is such a good thing, why not have it in Lucene or Solr?  Or
maybe mlockall is not such a good or simple thing?

Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Fri, Dec 13, 2013 at 5:07 AM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:


 On Fri, Dec 13, 2013 at 7:45 AM, Otis Gospodnetic 
 otis.gospodne...@gmail.com wrote:


 How come Lucene/Solr don't make any use of that?


 indeed searchhub.org/2013/05/21/mlockall-for-all/


 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com
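
For reference, the approach discussed in the post linked above amounts to a single libc call via JNA. A minimal sketch, assuming the JNA library is on the classpath (Linux-only; the process also needs a sufficient memlock limit, e.g. ulimit -l unlimited):

{code}
import com.sun.jna.Native;

// Minimal sketch, assuming JNA is available: pin the JVM's current and
// future pages into RAM so the OS cannot swap them out.
public class MlockAll {
  private static final int MCL_CURRENT = 1; // lock pages mapped now
  private static final int MCL_FUTURE  = 2; // lock pages mapped later

  static { Native.register("c"); } // bind mlockall() below to libc

  private static native int mlockall(int flags);

  public static void main(String[] args) {
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
      System.err.println("mlockall failed, errno=" + Native.getLastError());
    } else {
      System.out.println("memory locked");
    }
  }
}
{code}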



[jira] [Commented] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847559#comment-13847559
 ] 

Joel Bernstein commented on SOLR-5554:
--

I was able to reproduce the ordering issue. It was due to the empty sort= 
param: the code was expecting sort to be left off the request entirely rather 
than sent as an empty sort criteria. I'll resolve this as part of SOLR-5416 and 
add a test for it. In the meantime, leaving the sort parameter off the request 
when it is empty will avoid the ordering issue.

I'm checking on the null pointer issue now.
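
As a client-side stopgap, that amounts to never sending an empty sort. A minimal SolrJ-style sketch of the workaround (the sortSpec variable and the collapse query are illustrative, not from the report):

{code}
// Hedged sketch of the workaround above; sortSpec is a hypothetical value
// computed by the application and may legitimately be empty.
ModifiableSolrParams params = new ModifiableSolrParams();
params.add("q", "*:*");
params.add("fq", "{!collapse field=company_id min=price}");
String sortSpec = ""; // whatever the application computed
if (sortSpec != null && !sortSpec.trim().isEmpty()) {
  params.add("sort", sortSpec); // only send sort when there is one
}
{code}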

 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced the ordering issue without passing any sorting parameter and same 
 filters in both queries.
 Query1
 fq=
 {!collapse field=company_id}
 Query2
 fq=
 {!collapse field=comany_id min=price}
 Query3
 For debugging Query2, we added score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue
 Query4
 But when we passed a selective exclude in the facet field of Query3, it gives 
 documents in the correct order but with a NullPointerException and no facets 
 (not the one in SOLR-5416).
 facet.field=
 {!ex=samsung}
 brand
 fq=
 {!tag=samsung}
 (brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query 4 removes the error



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5554) Ordering Issue when Collapsing using min max

2013-12-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847571#comment-13847571
 ] 

Joel Bernstein commented on SOLR-5554:
--

I was not able to reproduce the NPE with this test:

{code}
params = new ModifiableSolrParams();
params.add("q", "*:*");
params.add("fq", "{!collapse field=group_s nullPolicy=expand min=test_tf}");
params.add("defType", "edismax");
params.add("bf", "field(test_ti)");
params.add("fl", "score,id");
params.add("facet", "true");
params.add("facet.field", "{!ex=g2}group_s");
params.add("fq", "{!tag=g2}group_s:group2");
params.add("fq", "{!tag=t2}term_s:");
assertQ(req(params), "*[count(//doc)=1]",
    "//result/doc[1]/float[@name='id'][.='6.0']");

{code}

Can you change the log level to FINE and attach the entire stack trace?




 Ordering Issue when Collapsing using min max
 

 Key: SOLR-5554
 URL: https://issues.apache.org/jira/browse/SOLR-5554
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.6
 Environment: Solr 4.6
Reporter: Deepak Mishra
 Fix For: 4.6, 5.0

 Attachments: Error On Query4, Query1.txt, Query2.txt, Query3.txt, 
 Query4.txt, Query5.txt


 We faced the ordering issue without passing any sorting parameter and same 
 filters in both queries.
 Query1
 fq=
 {!collapse field=company_id}
 Query2
 fq=
 {!collapse field=comany_id min=price}
 Query3
 For debugging Query2, we added score field in 
 fl=score,offering_id,company_id...
 That actually solved the document order issue
 Query4
 But when we passed a selective exclude in the facet field of Query3, it gives 
 documents in the correct order but with a NullPointerException and no facets 
 (not the one in SOLR-5416).
 facet.field=
 {!ex=samsung}
 brand
 fq=
 {!tag=samsung}
 (brand:samsung)
 The error is
 NullPointerException at 
 org.apache.solr.search.CollapsingQParserPlugin$FloatValueCollapse.collapse(CollapsingQParserPlugin.java:852)
 Query5
 Removing score from fl in Query 4 removes the error



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5416) CollapsingQParserPlugin bug with Tagging

2013-12-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5416:
-

Attachment: SOLR-5416.patch

Resolved an issue with sort order when an empty sort param is provided, 
reported in SOLR-5554.

 CollapsingQParserPlugin bug with Tagging
 

 Key: SOLR-5416
 URL: https://issues.apache.org/jira/browse/SOLR-5416
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.6
Reporter: David
Assignee: Joel Bernstein
  Labels: group, grouping
 Fix For: 5.0, 4.7

 Attachments: CollapsingQParserPlugin.java, SOLR-5416.patch, 
 SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, 
 SOLR-5416.patch, SolrIndexSearcher.java, TestCollapseQParserPlugin.java

   Original Estimate: 48h
  Remaining Estimate: 48h

 Trying to use CollapsingQParserPlugin with facet tagging throws an exception. 
 {code}
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.add("q", "*:*");
  params.add("fq", "{!collapse field=group_s}");
  params.add("defType", "edismax");
  params.add("bf", "field(test_ti)");
  params.add("fq", "{!tag=test_ti}test_ti:5");
  params.add("facet", "true");
  params.add("facet.field", "{!ex=test_ti}test_ti");
  assertQ(req(params), "*[count(//doc)=1]",
      "//doc[./int[@name='test_ti']='5']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5299) Refactor Collector API for parallelism

2013-12-13 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847577#comment-13847577
 ] 

Otis Gospodnetic commented on LUCENE-5299:
--

{quote}
I think this could have real-world applicability, but I don't have evidence yet 
in terms of a high query concurrency benchmark. Let's take as an example a 
32-core server that serves 100 QPS at an average latency of 100ms. You'd expect 
10 search tasks/threads to be active on average. So in theory you have 22 cores 
available for helping out with the search.
{quote}

In other words, somebody bought an overly expensive server? Partial joking 
aside, sure, yes, that can happen. Without looking at the patch, I like the 
idea: why not, if the ability to parallelize improves query latency in such 
situations while not negatively impacting those whose CPU cores are already 
being pushed to the max through query concurrency.
Showing more numbers will help convince everyone. :)


 Refactor Collector API for parallelism
 --

 Key: LUCENE-5299
 URL: https://issues.apache.org/jira/browse/LUCENE-5299
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shikhar Bhushan
 Attachments: LUCENE-5299.patch, LUCENE-5299.patch, LUCENE-5299.patch, 
 LUCENE-5299.patch, LUCENE-5299.patch, benchmarks.txt


 h2. Motivation
 We should be able to scale-up better with Solr/Lucene by utilizing multiple 
 CPU cores, and not have to resort to scaling-out by sharding (with all the 
 associated distributed system pitfalls) when the index size does not warrant 
 it.
 Presently, IndexSearcher has an optional constructor arg for an 
 ExecutorService, which gets used for searching in parallel for call paths 
 where one of the TopDocCollector's is created internally. The 
 per-atomic-reader search happens in parallel and then the 
 TopDocs/TopFieldDocs results are merged with locking around the merge bit.
 However there are some problems with this approach:
 * If arbitary Collector args come into play, we can't parallelize. Note that 
 even if ultimately results are going to a TopDocCollector it may be wrapped 
 inside e.g. a EarlyTerminatingCollector or TimeLimitingCollector or both.
 * The special-casing with parallelism baked on top does not scale, there are 
 many Collector's that could potentially lend themselves to parallelism, and 
 special-casing means the parallelization has to be re-implemented if a 
 different permutation of collectors is to be used.
 h2. Proposal
 A refactoring of collectors that allows for parallelization at the level of 
 the collection protocol. 
 Some requirements that should guide the implementation:
 * easy migration path for collectors that need to remain serial
 * the parallelization should be composable (when collectors wrap other 
 collectors)
 * allow collectors to pick the optimal solution (e.g. there might be memory 
 tradeoffs to be made) by advising the collector about whether a search will 
 be parallelized, so that the serial use-case is not penalized.
 * encourage use of non-blocking constructs and lock-free parallelism, 
 blocking is not advisable for the hot-spot of a search, besides wasting 
 pooled threads.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2013-12-13 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473.patch

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4809) OpenOffice document body is not indexed by SolrCell

2013-12-13 Thread Doug Wegscheid (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847585#comment-13847585
 ] 

Doug Wegscheid commented on SOLR-4809:
--

There is a fix for this from Augusto Camarotti.  I fixed my source, built, 
moved solr-cell-4.6-SNAPSHOT.jar into my binary tree in place of the existing 
solr-cell-4.6.0.jar, and the problem was resolved.

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201312.mbox/%3c529f62e9026f0001d...@tambau.prpb.mpf.gov.br%3E

How do we get the fix applied to the distribution?

 OpenOffice document body is not indexed by SolrCell
 ---

 Key: SOLR-4809
 URL: https://issues.apache.org/jira/browse/SOLR-4809
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 3.6.1, 4.3
Reporter: Jack Krupansky
 Attachments: HelloWorld.docx, HelloWorld.odp, HelloWorld.odt, 
 HelloWorld.txt


 As reported on the solr user mailing list, SolrCell is not indexing document 
 body content for OpenOffice documents.
 I tested with Apache OpenOffice 3.4.1 on Solr 4.3 and 3.6.1, for both 
 OpenWriter (.ODT) and Impress (.ODP).
 The extractOnly option does return the document body text, but Solr does not 
 index the document body text. In my test cases (.ODP and .ODT), all I see for 
 the content attribute in Solr are a few spaces.
 Using the example schema, I indexed HelloWorld.odt using:
 {code}
  curl "http://localhost:8983/solr/update/extract?literal.id=doc-1&uprefix=attr_&commit=true" \
    -F "myfile=@HelloWorld.odt"
 {code}
 It queries as:
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">2</int>
   <lst name="params">
     <str name="indent">true</str>
     <str name="q">id:doc-1</str>
   </lst>
 </lst>
 <result name="response" numFound="1" start="0">
   <doc>
     <str name="id">doc-1</str>
     <arr name="attr_image_count">
       <str>0</str>
     </arr>
     <arr name="attr_editing_cycles">
       <str>1</str>
     </arr>
     <arr name="attr_stream_source_info">
       <str>myfile</str>
     </arr>
     <arr name="attr_meta_save_date">
       <str>2013-05-10T17:15:40.99</str>
     </arr>
     <arr name="attr_dc_subject">
       <str>Hello, World</str>
     </arr>
     <str name="subject">Hello World - subject</str>
     <arr name="attr_dcterms_created">
       <str>2013-05-10T17:11:58.88</str>
     </arr>
     <arr name="attr_date">
       <str>2013-05-10T17:15:40.99</str>
     </arr>
     <arr name="attr_dc_description">
       <str>This is a test of SolrCell using OpenOffice 3.4.1 - OpenWriter.</str>
     </arr>
     <arr name="attr_nbobject">
       <str>0</str>
     </arr>
     <arr name="attr_word_count">
       <str>10</str>
     </arr>
     <arr name="attr_edit_time">
       <str>PT3M44S</str>
     </arr>
     <arr name="attr_meta_paragraph_count">
       <str>4</str>
     </arr>
     <arr name="attr_creation_date">
       <str>2013-05-10T17:11:58.88</str>
     </arr>
     <arr name="title">
       <str>Hello World SolrCell Test - title</str>
     </arr>
     <arr name="attr_object_count">
       <str>0</str>
     </arr>
     <arr name="attr_stream_content_type">
       <str>application/octet-stream</str>
     </arr>
     <arr name="attr_nbimg">
       <str>0</str>
     </arr>
     <str name="description">This is a test of SolrCell using OpenOffice 3.4.1 - OpenWriter.</str>
     <arr name="attr_stream_size">
       <str>8960</str>
     </arr>
     <arr name="attr_meta_object_count">
       <str>0</str>
     </arr>
     <arr name="attr_cp_subject">
       <str>Hello World - subject</str>
     </arr>
     <arr name="attr_stream_name">
       <str>HelloWorld.odt</str>
     </arr>
     <arr name="attr_generator">
       <str>OpenOffice.org/3.4.1$Win32 OpenOffice.org_project/341m1$Build-9593</str>
     </arr>
     <str name="keywords">Hello, World</str>
     <arr name="attr_last_save_date">
       <str>2013-05-10T17:15:40.99</str>
     </arr>
     <arr name="attr_paragraph_count">
       <str>4</str>
     </arr>
     <arr name="attr_dc_title">
       <str>Hello World SolrCell Test - title</str>
     </arr>
     <arr name="attr_dcterms_modified">
       <str>2013-05-10T17:15:40.99</str>
     </arr>
     <arr name="attr_meta_creation_date">
       <str>2013-05-10T17:11:58.88</str>
     </arr>
     <arr name="attr_page_count">
       <str>1</str>
     </arr>
     <arr name="attr_meta_character_count">
       <str>60</str>
     </arr>
     <date name="last_modified">2013-05-10T17:15:40Z</date>
     <arr name="attr_nbtab">
       <str>0</str>
     </arr>
     <arr name="attr_meta_word_count">
       <str>10</str>
     </arr>
     <arr name="attr_meta_table_count">
       <str>0</str>
     </arr>
     <arr name="attr_modified">
       <str>2013-05-10T17:15:40.99</str>
     </arr>
     <arr name="attr_meta_image_count">
       <str>0</str>
     </arr>
     <arr name="attr_xmptpg_npages">
       <str>1</str>
     </arr>
     <arr name="attr_table_count">
       <str>0</str>
     </arr>
     <arr name="attr_nbpara">
       <str>4</str>
     </arr>
     <arr name="attr_character_count">
       <str>60</str>
     </arr>
     <arr name="attr_meta_page_count">
       <str>1</str>

[jira] [Updated] (SOLR-4478) Allow cores to specify a named config set in non-SolrCloud mode

2013-12-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-4478:


Attachment: SOLR-4478-take2.patch

I got some spare cycles, and had another stab at this.

* new ConfigSet object that contains a SolrConfig and IndexSchema
* ConfigSet loading and discovery is dealt with by a ConfigSetService, which 
comes in Cloud, Default and SchemaCaching varieties.
* Config sets are kept in solrhome/configsets by default, but this can be 
configured in solr.xml
* The actual schema and config objects are *not* shared between cores, unless 
the share schema flag is switched on.
* Concurrency in schema sharing is dealt with by using a loading cache from 
Guava.

This ends up tidying up some of the zookeeper/not zookeeper logic in 
CoreContainer as well, which is nice.  Tests are passing so far...
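
For the schema-sharing piece, a hedged sketch of what a Guava loading cache buys here (the class name and value type are stand-ins, not the attached patch):

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Hedged sketch: a LoadingCache keyed by config-set path means concurrent
// core loads for the same config set block on a single parse instead of
// racing. String stands in for the real IndexSchema value type.
public class SchemaCache {
  private final LoadingCache<String, String> schemas = CacheBuilder.newBuilder()
      .build(new CacheLoader<String, String>() {
        @Override
        public String load(String configSetPath) {
          return "parsed schema for " + configSetPath; // parse once per key
        }
      });

  public String get(String configSetPath) {
    return schemas.getUnchecked(configSetPath);
  }
}
{code}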


 Allow cores to specify a named config set in non-SolrCloud mode
 ---

 Key: SOLR-4478
 URL: https://issues.apache.org/jira/browse/SOLR-4478
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2, 5.0
Reporter: Erick Erickson
 Attachments: SOLR-4478-take2.patch, SOLR-4478.patch, SOLR-4478.patch


 Part of moving forward to the new way, after SOLR-4196 etc... I propose an 
 additional parameter specified on the core node in solr.xml or as a 
 parameter in the discovery mode core.properties file, call it configSet, 
 where the value provided is a path to a directory, either absolute or 
 relative. Really, this is as though you copied the conf directory somewhere 
 to be used by more than one core.
 Straw-man: There will be a directory solr_home/configsets which will be the 
 default. If the configSet parameter is, say, myconf, then I'd expect a 
 directory named myconf to exist in solr_home/configsets, which would look 
 something like
 solr_home/configsets/myconf/schema.xml
   solrconfig.xml
   stopwords.txt
   velocity
   velocity/query.vm
 etc.
 If multiple cores used the same configSet, schema, solrconfig etc. would all 
 be shared (i.e. shareSchema=true would be assumed). I don't see a good 
 use-case for _not_ sharing schemas, so I don't propose to allow this to be 
 turned off. Hmmm, what if shareSchema is explicitly set to false in the 
 solr.xml or properties file? I'd guess it should be honored but maybe log a 
 warning?
 Mostly I'm putting this up for comments. I know that there are already 
 thoughts about how this all should work floating around, so before I start 
 any work on this I thought I'd at least get an idea of whether this is the 
 way people are thinking about going.
 Configset can be either a relative or absolute path, if relative it's assumed 
 to be relative to solr_home.
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5552) Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover

2013-12-13 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847614#comment-13847614
 ] 

Timothy Potter commented on SOLR-5552:
--

Here's a first cut at a solution sans unit tests, which relies on a new Slice 
property - last_known_leader_core_url. However I'm open to other suggestions on 
how to solve this issue if someone sees a cleaner way.

During the leader recovery process outlined in the description of this ticket, 
the ShardLeaderElectionContext can use this property as a hint to replicas to 
defer to the previous known leader if it is one of the replicas that is trying 
to recover. Specifically, this patch only applies if all replicas are down 
and the previous known leader is on a live node and is one of the replicas 
trying to recover. This may be too restrictive but it covers this issue nicely 
and minimizes chance of regression for other leader election / recovery cases.

Here are some log messages from the replica as it exits the 
waitForReplicasToComeUp process that show this patch working:



2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO  
solr.cloud.ShardLeaderElectionContext  - Enough replicas found to continue.
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO  
solr.cloud.ShardLeaderElectionContext  - Last known leader is 
http://cloud84:8984/solr/cloud_shard1_replica1/ and I am 
http://cloud85:8985/solr/cloud_shard1_replica2/
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO  
solr.cloud.ShardLeaderElectionContext  - Found previous? true and numDown is 2
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO  
solr.cloud.ShardLeaderElectionContext  - All 2 replicas are down. Choosing to 
let last known leader http://cloud84:8984/solr/cloud_shard1_replica1/ try first 
...
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO  
solr.cloud.ShardLeaderElectionContext  - There may be a better leader candidate 
than us - going back into recovery


The end result was that my shard recovered correctly and the data remained 
consistent between leader and replica. I've also tried this with 3 replicas in 
a Slice and when the last known leader doesn't come back, which works as it did 
previously.

Lastly, I'm not entirely certain I like how the property gets set in the Slice 
constructor. It may be better to set this property in the Overseer? Or even 
store the last_known_leader_core_url in a separate znode, such as 
/collections/COLL/last_known_leader/shardN. I do see some comments in places 
about keeping the leader property on the Slice vs. in the leader Replica so 
maybe that figures into this as well?
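
In outline, the deferral rule described above is a single predicate. A hedged sketch with plain types standing in for Slice/Replica (the names are assumptions, not the attached patch):

{code}
import java.util.Map;
import java.util.Set;

// Hedged sketch of the narrow case the patch targets: defer only when every
// replica is down and the last known leader is a live, recovering replica.
public class LeaderDeferral {
  static boolean shouldDeferToLastKnownLeader(Map<String, String> replicaStates,
                                              Set<String> liveNodes,
                                              String lastKnownLeaderCoreUrl,
                                              String myCoreUrl) {
    boolean allDown = replicaStates.values().stream().allMatch("down"::equals);
    boolean lastLeaderRecovering = lastKnownLeaderCoreUrl != null
        && liveNodes.contains(lastKnownLeaderCoreUrl)          // node is live
        && replicaStates.containsKey(lastKnownLeaderCoreUrl);  // and recovering
    return allDown && lastLeaderRecovering
        && !lastKnownLeaderCoreUrl.equals(myCoreUrl); // let the old leader try first
  }
}
{code}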

 Leader recovery process can select the wrong leader if all replicas for a 
 shard are down and trying to recover
 --

 Key: SOLR-5552
 URL: https://issues.apache.org/jira/browse/SOLR-5552
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
  Labels: leader, recovery

 One particular issue that leads to out-of-sync shards, related to SOLR-4260
 Here's what I know so far, which admittedly isn't much:
 As cloud85 (replica before it crashed) is initializing, it enters the wait 
 process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is 
 expected and a good thing.
 Some short amount of time in the future, cloud84 (leader before it crashed) 
 begins initializing and gets to a point where it adds itself as a possible 
 leader for the shard (by creating a znode under 
 /collections/cloud/leaders_elect/shard1/election), which leads to cloud85 
 being able to return from waitForReplicasToComeUp and try to determine who 
 should be the leader.
 cloud85 then tries to run the SyncStrategy, which can never work because in 
 this scenario the Jetty HTTP listener is not active yet on either node, so 
 all replication work that uses HTTP requests fails on both nodes ... PeerSync 
 treats these failures as indicators that the other replicas in the shard are 
 unavailable (or whatever) and assumes success. Here's the log message:
 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN 
 solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1 
 url=http://cloud85:8985/solr couldn't connect to 
 http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
 The Jetty HTTP listener doesn't start accepting connections until long after 
 this process has completed and already selected the wrong leader.
 From what I can see, we seem to have a leader recovery process that is based 
 partly on HTTP requests to the other nodes, but the HTTP listener on those 
 nodes isn't active yet. We need a leader recovery process that doesn't rely 
 on HTTP requests. Perhaps, leader recovery for a shard w/o a current leader 
 

[jira] [Updated] (SOLR-5552) Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover

2013-12-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-5552:
-

Attachment: SOLR-5552.patch

For branch_4x

 Leader recovery process can select the wrong leader if all replicas for a 
 shard are down and trying to recover
 --

 Key: SOLR-5552
 URL: https://issues.apache.org/jira/browse/SOLR-5552
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
  Labels: leader, recovery
 Attachments: SOLR-5552.patch


 One particular issue that leads to out-of-sync shards, related to SOLR-4260
 Here's what I know so far, which admittedly isn't much:
 As cloud85 (replica before it crashed) is initializing, it enters the wait 
 process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is 
 expected and a good thing.
 Some short amount of time in the future, cloud84 (leader before it crashed) 
 begins initializing and gets to a point where it adds itself as a possible 
 leader for the shard (by creating a znode under 
 /collections/cloud/leaders_elect/shard1/election), which leads to cloud85 
 being able to return from waitForReplicasToComeUp and try to determine who 
 should be the leader.
 cloud85 then tries to run the SyncStrategy, which can never work because in 
 this scenario the Jetty HTTP listener is not active yet on either node, so 
 all replication work that uses HTTP requests fails on both nodes ... PeerSync 
 treats these failures as indicators that the other replicas in the shard are 
 unavailable (or whatever) and assumes success. Here's the log message:
 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN 
 solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1 
 url=http://cloud85:8985/solr couldn't connect to 
 http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
 The Jetty HTTP listener doesn't start accepting connections until long after 
 this process has completed and already selected the wrong leader.
 From what I can see, we seem to have a leader recovery process that is based 
 partly on HTTP requests to the other nodes, but the HTTP listener on those 
 nodes isn't active yet. We need a leader recovery process that doesn't rely 
 on HTTP requests. Perhaps, leader recovery for a shard w/o a current leader 
 may need to work differently than leader election in a shard that has 
 replicas that can respond to HTTP requests? All of what I'm seeing makes 
 perfect sense for leader election when there are active replicas and the 
 current leader fails.
 All this aside, I'm not asserting that this is the only cause for the 
 out-of-sync issues reported in this ticket, but it definitely seems like it 
 could happen in a real cluster.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-5552) Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover

2013-12-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-5552:
-

Comment: was deleted

(was: For branch_4x)

 Leader recovery process can select the wrong leader if all replicas for a 
 shard are down and trying to recover
 --

 Key: SOLR-5552
 URL: https://issues.apache.org/jira/browse/SOLR-5552
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
  Labels: leader, recovery
 Attachments: SOLR-5552.patch


 One particular issue that leads to out-of-sync shards, related to SOLR-4260
 Here's what I know so far, which admittedly isn't much:
 As cloud85 (replica before it crashed) is initializing, it enters the wait 
 process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is 
 expected and a good thing.
 Some short amount of time in the future, cloud84 (leader before it crashed) 
 begins initializing and gets to a point where it adds itself as a possible 
 leader for the shard (by creating a znode under 
 /collections/cloud/leaders_elect/shard1/election), which leads to cloud85 
 being able to return from waitForReplicasToComeUp and try to determine who 
 should be the leader.
 cloud85 then tries to run the SyncStrategy, which can never work because in 
 this scenario the Jetty HTTP listener is not active yet on either node, so 
 all replication work that uses HTTP requests fails on both nodes ... PeerSync 
 treats these failures as indicators that the other replicas in the shard are 
 unavailable (or whatever) and assumes success. Here's the log message:
 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN 
 solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1 
 url=http://cloud85:8985/solr couldn't connect to 
 http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
 The Jetty HTTP listener doesn't start accepting connections until long after 
 this process has completed and already selected the wrong leader.
 From what I can see, we seem to have a leader recovery process that is based 
 partly on HTTP requests to the other nodes, but the HTTP listener on those 
 nodes isn't active yet. We need a leader recovery process that doesn't rely 
 on HTTP requests. Perhaps, leader recovery for a shard w/o a current leader 
 may need to work differently than leader election in a shard that has 
 replicas that can respond to HTTP requests? All of what I'm seeing makes 
 perfect sense for leader election when there are active replicas and the 
 current leader fails.
 All this aside, I'm not asserting that this is the only cause for the 
 out-of-sync issues reported in this ticket, but it definitely seems like it 
 could happen in a real cluster.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned SOLR-:
---

Assignee: Alan Woodward

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-
 URL: https://issues.apache.org/jira/browse/SOLR-
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward

 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
  
 {code}
  
 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:
  
 {code}
   public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
   throws MalformedURLException {
 this(httpClient, new BinaryResponseParser(), solrServerUrl);
   }
 {code}
  
 which calls ..
  
 {code}
   public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser, 
 String... solrServerUrl)
   throws MalformedURLException {
 clientIsInternal = (httpClient == null);
 this.parser = parser;
 if (httpClient == null) {
   ModifiableSolrParams params = new ModifiableSolrParams();
   params.set(HttpClientUtil.PROP_USE_RETRY, false);
   this.httpClient = HttpClientUtil.createClient(params);
 } else {
   this.httpClient = httpClient;
 }
 for (String s : solrServerUrl) {
   ServerWrapper wrapper = new ServerWrapper(makeServer(s)); 
   aliveServers.put(wrapper.getKey(), wrapper);
 }
 updateAliveList();
   }
 {code}
  
 which calls ..
  
 {code}
 protected HttpSolrServer makeServer(String server) throws 
 MalformedURLException {
 HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
 if (requestWriter != null) {
   s.setRequestWriter(requestWriter);
 }
 if (queryParams != null) {
   s.setQueryParams(queryParams);
 }
 return s;
   }
 {code}
  
 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):
  
 {code}
 public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
   this.baseUrl = baseURL;
   if (baseUrl.endsWith("/")) {
     baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
   }
   if (baseUrl.indexOf('?') >= 0) {
     throw new RuntimeException(
         "Invalid base url for solrj.  The base URL must not contain parameters: "
         + baseUrl);
   }
 
 if (client != null) {
   httpClient = client;
   internalClient = false;
 } else {
   internalClient = true;
   ModifiableSolrParams params = new ModifiableSolrParams();
   params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
   params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
   params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
   httpClient =  HttpClientUtil.createClient(params);
 }
 
 this.parser = parser;
   }
 {code}
  
 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.
  
 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, and thus other callers to it do not need to do so.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-5555:


Attachment: SOLR-5555.patch

Here's a simple patch.
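
For readers skimming the digest: the change amounts to dropping the unthrowable
checked exception from the constructor signatures. A minimal sketch of that
shape (illustration only; the attached SOLR-5555.patch is authoritative):

{code}
// was: public CloudSolrServer(String zkHost) throws MalformedURLException
public CloudSolrServer(String zkHost) {
  this.zkHost = zkHost;
  this.myClient = HttpClientUtil.createClient(null);
  this.lbServer = new LBHttpSolrServer(myClient);
  this.lbServer.setRequestWriter(new BinaryRequestWriter());
  this.lbServer.setParser(new BinaryResponseParser());
  this.updatesToLeaders = true;
  shutdownLBHttpSolrServer = true;
}
{code}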

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-5555
 URL: https://issues.apache.org/jira/browse/SOLR-5555
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward
 Attachments: SOLR-5555.patch


 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
 {code}

 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:

 {code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
      throws MalformedURLException {
    this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
 {code}

 which calls ..

 {code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser,
      String... solrServerUrl) throws MalformedURLException {
    clientIsInternal = (httpClient == null);
    this.parser = parser;
    if (httpClient == null) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_USE_RETRY, false);
      this.httpClient = HttpClientUtil.createClient(params);
    } else {
      this.httpClient = httpClient;
    }
    for (String s : solrServerUrl) {
      ServerWrapper wrapper = new ServerWrapper(makeServer(s));
      aliveServers.put(wrapper.getKey(), wrapper);
    }
    updateAliveList();
  }
 {code}

 which calls ..

 {code}
  protected HttpSolrServer makeServer(String server) throws MalformedURLException {
    HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
    if (requestWriter != null) {
      s.setRequestWriter(requestWriter);
    }
    if (queryParams != null) {
      s.setQueryParams(queryParams);
    }
    return s;
  }
 {code}

 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):

 {code}
  public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
    this.baseUrl = baseURL;
    if (baseUrl.endsWith("/")) {
      baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
    }
    if (baseUrl.indexOf('?') >= 0) {
      throw new RuntimeException(
          "Invalid base url for solrj.  The base URL must not contain parameters: "
              + baseUrl);
    }

    if (client != null) {
      httpClient = client;
      internalClient = false;
    } else {
      internalClient = true;
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
      params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
      httpClient = HttpClientUtil.createClient(params);
    }

    this.parser = parser;
  }
 {code}

 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.

 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, so that callers of it do not need to declare it either.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-5555:


Attachment: SOLR-5555.patch

Updated patch with a better CHANGES entry and a couple of compilation fixes.  
Will commit in a bit.

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-5555
 URL: https://issues.apache.org/jira/browse/SOLR-5555
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward
 Attachments: SOLR-5555.patch, SOLR-5555.patch


 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
 {code}

 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:

 {code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
      throws MalformedURLException {
    this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
 {code}

 which calls ..

 {code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser,
      String... solrServerUrl) throws MalformedURLException {
    clientIsInternal = (httpClient == null);
    this.parser = parser;
    if (httpClient == null) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_USE_RETRY, false);
      this.httpClient = HttpClientUtil.createClient(params);
    } else {
      this.httpClient = httpClient;
    }
    for (String s : solrServerUrl) {
      ServerWrapper wrapper = new ServerWrapper(makeServer(s));
      aliveServers.put(wrapper.getKey(), wrapper);
    }
    updateAliveList();
  }
 {code}

 which calls ..

 {code}
  protected HttpSolrServer makeServer(String server) throws MalformedURLException {
    HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
    if (requestWriter != null) {
      s.setRequestWriter(requestWriter);
    }
    if (queryParams != null) {
      s.setQueryParams(queryParams);
    }
    return s;
  }
 {code}

 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):

 {code}
  public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
    this.baseUrl = baseURL;
    if (baseUrl.endsWith("/")) {
      baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
    }
    if (baseUrl.indexOf('?') >= 0) {
      throw new RuntimeException(
          "Invalid base url for solrj.  The base URL must not contain parameters: "
              + baseUrl);
    }

    if (client != null) {
      httpClient = client;
      internalClient = false;
    } else {
      internalClient = true;
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
      params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
      httpClient = HttpClientUtil.createClient(params);
    }

    this.parser = parser;
  }
 {code}

 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.

 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, so that callers of it do not need to declare it either.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4809) OpenOffice document body is not indexed by SolrCell

2013-12-13 Thread Doug Wegscheid (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Wegscheid updated SOLR-4809:
-

Attachment: SOLR-4809.patch

This is Augusto Camarotti's fix for SOLR-4809.
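
For anyone verifying the patch: the extractOnly behavior described below can be
reproduced against the example server with (endpoint and file taken from the
issue description):

{code}
curl "http://localhost:8983/solr/update/extract?extractOnly=true" -F "myfile=@HelloWorld.odt"
{code}

A correct fix should make the indexed content match the body text this returns.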

 OpenOffice document body is not indexed by SolrCell
 ---

 Key: SOLR-4809
 URL: https://issues.apache.org/jira/browse/SOLR-4809
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 3.6.1, 4.3
Reporter: Jack Krupansky
 Attachments: HelloWorld.docx, HelloWorld.odp, HelloWorld.odt, 
 HelloWorld.txt, SOLR-4809.patch


 As reported on the solr user mailing list, SolrCell is not indexing document 
 body content for OpenOffice documents.
 I tested with Apache Open Office 3.4.1 on Solr 4.3 and 3.6.1, for both 
 OpenWriter (.ODT) and Impress (.ODP).
 The extractOnly option does return the document body text, but Solr does not 
 index it. In my test cases (.ODP and .ODT), all I see for 
 the content attribute in Solr are a few spaces.
 Using the example schema, I indexed HelloWorld.odt using:
 {code}
  curl "http://localhost:8983/solr/update/extract?literal.id=doc-1&uprefix=attr_&commit=true" -F "myfile=@HelloWorld.odt"
 {code}
 It queries as:
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
   <lst name="responseHeader">
     <int name="status">0</int>
     <int name="QTime">2</int>
     <lst name="params">
       <str name="indent">true</str>
       <str name="q">id:doc-1</str>
     </lst>
   </lst>
   <result name="response" numFound="1" start="0">
     <doc>
       <str name="id">doc-1</str>
       <arr name="attr_image_count"><str>0</str></arr>
       <arr name="attr_editing_cycles"><str>1</str></arr>
       <arr name="attr_stream_source_info"><str>myfile</str></arr>
       <arr name="attr_meta_save_date"><str>2013-05-10T17:15:40.99</str></arr>
       <arr name="attr_dc_subject"><str>Hello, World</str></arr>
       <str name="subject">Hello World - subject</str>
       <arr name="attr_dcterms_created"><str>2013-05-10T17:11:58.88</str></arr>
       <arr name="attr_date"><str>2013-05-10T17:15:40.99</str></arr>
       <arr name="attr_dc_description"><str>This is a test of SolrCell using OpenOffice 3.4.1 - OpenWriter.</str></arr>
       <arr name="attr_nbobject"><str>0</str></arr>
       <arr name="attr_word_count"><str>10</str></arr>
       <arr name="attr_edit_time"><str>PT3M44S</str></arr>
       <arr name="attr_meta_paragraph_count"><str>4</str></arr>
       <arr name="attr_creation_date"><str>2013-05-10T17:11:58.88</str></arr>
       <arr name="title"><str>Hello World SolrCell Test - title</str></arr>
       <arr name="attr_object_count"><str>0</str></arr>
       <arr name="attr_stream_content_type"><str>application/octet-stream</str></arr>
       <arr name="attr_nbimg"><str>0</str></arr>
       <str name="description">This is a test of SolrCell using OpenOffice 3.4.1 - OpenWriter.</str>
       <arr name="attr_stream_size"><str>8960</str></arr>
       <arr name="attr_meta_object_count"><str>0</str></arr>
       <arr name="attr_cp_subject"><str>Hello World - subject</str></arr>
       <arr name="attr_stream_name"><str>HelloWorld.odt</str></arr>
       <arr name="attr_generator"><str>OpenOffice.org/3.4.1$Win32 OpenOffice.org_project/341m1$Build-9593</str></arr>
       <str name="keywords">Hello, World</str>
       <arr name="attr_last_save_date"><str>2013-05-10T17:15:40.99</str></arr>
       <arr name="attr_paragraph_count"><str>4</str></arr>
       <arr name="attr_dc_title"><str>Hello World SolrCell Test - title</str></arr>
       <arr name="attr_dcterms_modified"><str>2013-05-10T17:15:40.99</str></arr>
       <arr name="attr_meta_creation_date"><str>2013-05-10T17:11:58.88</str></arr>
       <arr name="attr_page_count"><str>1</str></arr>
       <arr name="attr_meta_character_count"><str>60</str></arr>
       <date name="last_modified">2013-05-10T17:15:40Z</date>
       <arr name="attr_nbtab"><str>0</str></arr>
       <arr name="attr_meta_word_count"><str>10</str></arr>
       <arr name="attr_meta_table_count"><str>0</str></arr>
       <arr name="attr_modified"><str>2013-05-10T17:15:40.99</str></arr>
       <arr name="attr_meta_image_count"><str>0</str></arr>
       <arr name="attr_xmptpg_npages"><str>1</str></arr>
       <arr name="attr_table_count"><str>0</str></arr>
       <arr name="attr_nbpara"><str>4</str></arr>
       <arr name="attr_character_count"><str>60</str></arr>
       <arr name="attr_meta_page_count"><str>1</str></arr>
       <arr name="attr_nbword"><str>10</str></arr>
       <arr name="attr_nbpage"><str>1</str></arr>
       <arr name="content_type"><str>application/vnd.oasis.opendocument.text</str></arr>
       <arr name="attr_nbcharacter"><str>60</str></arr>
       <arr name="content"><str>  </str></arr>
 long 

Re: mlockall?

2013-12-13 Thread Chris Hostetter

: Right, right, I meant to say that I know about that blog post but my Q
: is:
: If mlockall is such a good thing, why not have it in Lucene or Solr?  Or
: maybe mlockall is not such a good or simple thing?

Beyond writing that agent jar, I never put much effort into thinking 
about integrating it directly into Solr, for a variety of reasons alluded 
to in the blog...
http://searchhub.org/2013/05/21/mlockall-for-all/

...it seemed like an unnecessary complication and poor substitute for 
disabling swap on your production servers.

There are a few important caveats to using mlockall-agent.jar, mostly 
because there are some important caveats to using mlockall in general (pay 
attention to your ulimits) and some specific caveats to using it in java 
(make sure your min heap = your max heap)

...and in the FAQ at the bottom of the README...
https://github.com/LucidWorks/mlockall-agent/blob/master/README.txt#L111

In particular, note the FAQ about MCL_CURRENT and the associated comments 
(straight from CASSANDRA-1214)...
https://github.com/LucidWorks/mlockall-agent/blob/master/src/MLockAgent.java#L29
...my understanding is that mlockall is really only a good idea *before* 
any data is mmapped so you don't try to lock the stuff the OS is already 
mmapping -- doing that from within Solr's source (after the servlet 
container has already started) would be risky.
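
(For the curious, the agent boils down to one early libc call via JNA -- a
rough sketch, with the Linux flag values assumed from <sys/mman.h>; see
MLockAgent.java above for the real thing:

  import com.sun.jna.Library;
  import com.sun.jna.Native;

  public class MLockSketch {
    static final int MCL_CURRENT = 1;  // lock pages mapped now
    static final int MCL_FUTURE  = 2;  // lock pages mapped later

    public interface CLib extends Library {
      int mlockall(int flags);
    }

    // runs as a -javaagent, i.e. before the app mmaps any index data
    public static void premain(String agentArgs) {
      CLib libc = (CLib) Native.loadLibrary("c", CLib.class);
      int rc = libc.mlockall(MCL_CURRENT | MCL_FUTURE);
      if (rc != 0) {
        System.err.println("mlockall failed, check 'ulimit -l': rc=" + rc);
      }
    }
  }

...which is exactly why the premain timing matters.)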

Assuming it was worth the technical effort, it would convolute the build 
system a bit, and we'd have to make choices (similar to what Cassandra 
did) about how to deal with it on systems where it's not supported, or in 
instances where the call fails (because the ulimit isn't set high enough) 
... treating it as an optional performance optimization isn't 
necessarily the best approach if people with platforms that do work are 
counting on it working.

Barring any evidence from someone who's looked into it more than me, my
current suggestion would be...

 * don't overload your machines and just disable swap when 
   using solr -- don't worry about mlockall
 * if you can't disable swap, and you want to run solr with 
   mlockall because your machines are overloaded, use mlockall-agent

Once Solr evolves to the point where we don't run in a servlet container, 
and have our own public static void main, and ship platform native startup 
scripts where we can handle things like forcing heap min=max, and have 
overridable startup config options for things like force_mlockall=true 
then it might be worth revisiting.



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3702) String concatenation function

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-3702:
-

Attachment: SOLR-3702.patch

Fix for the failed test added. Sorry for the inconvenience.

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch, SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5416) CollapsingQParserPlugin bug with Tagging

2013-12-13 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847788#comment-13847788
 ] 

David commented on SOLR-5416:
-

Thank you for your quick response. I'm actually trying to use your plugin in 
production, as the standard grouping had serious performance issues around facet 
grouping. I'll try to patch in your latest fix. Thank you for all of your help!

 CollapsingQParserPlugin bug with Tagging
 

 Key: SOLR-5416
 URL: https://issues.apache.org/jira/browse/SOLR-5416
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.6
Reporter: David
Assignee: Joel Bernstein
  Labels: group, grouping
 Fix For: 5.0, 4.7

 Attachments: CollapsingQParserPlugin.java, SOLR-5416.patch, 
 SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, 
 SOLR-5416.patch, SolrIndexSearcher.java, TestCollapseQParserPlugin.java

   Original Estimate: 48h
  Remaining Estimate: 48h

 Trying to use CollapsingQParserPlugin with facet tagging throws an exception. 
 {code}
 ModifiableSolrParams params = new ModifiableSolrParams();
 params.add("q", "*:*");
 params.add("fq", "{!collapse field=group_s}");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("fq", "{!tag=test_ti}test_ti:5");
 params.add("facet", "true");
 params.add("facet.field", "{!ex=test_ti}test_ti");
 assertQ(req(params), "*[count(//doc)=1]",
     "//doc[./int[@name='test_ti']='5']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5416) CollapsingQParserPlugin bug with Tagging

2013-12-13 Thread David (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David updated SOLR-5416:


Attachment: CollapseQParserPluginPatch-solr-4.5.1.patch

I am attaching a patch with all of the patches for CollapseQParserPlugin up to 
today's date. This patch is for those of us who want to use this plugin in Solr 
4.5.1.

 CollapsingQParserPlugin bug with Tagging
 

 Key: SOLR-5416
 URL: https://issues.apache.org/jira/browse/SOLR-5416
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.6
Reporter: David
Assignee: Joel Bernstein
  Labels: group, grouping
 Fix For: 5.0, 4.7

 Attachments: CollapseQParserPluginPatch-solr-4.5.1.patch, 
 CollapsingQParserPlugin.java, SOLR-5416.patch, SOLR-5416.patch, 
 SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, SOLR-5416.patch, 
 SolrIndexSearcher.java, TestCollapseQParserPlugin.java

   Original Estimate: 48h
  Remaining Estimate: 48h

 Trying to use CollapsingQParserPlugin with facet tagging throws an exception. 
 {code}
 ModifiableSolrParams params = new ModifiableSolrParams();
 params.add("q", "*:*");
 params.add("fq", "{!collapse field=group_s}");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("fq", "{!tag=test_ti}test_ti:5");
 params.add("facet", "true");
 params.add("facet.field", "{!ex=test_ti}test_ti");
 assertQ(req(params), "*[count(//doc)=1]",
     "//doc[./int[@name='test_ti']='5']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3702) String concatenation function

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-3702:
-

Attachment: SOLR-3702.patch

I found out that while this patch was under review, we already got a 
ConcatStringFunction via the SOLR-5302 Analytics component (checked into trunk, 
to be ported back to 4x later). So now I reuse that code and just add 
support for using this function as a function query in ValueSourceParser.
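
For illustration, wiring it up in ValueSourceParser would look roughly like
this (a sketch under the assumption that ConcatStringFunction takes the value
sources in its constructor; the attached patch is the real change):

{code}
addParser("concat", new ValueSourceParser() {
  @Override
  public ValueSource parse(FunctionQParser fp) throws SyntaxError {
    List<ValueSource> sources = fp.parseValueSourceList();
    return new ConcatStringFunction(sources.toArray(new ValueSource[sources.size()]));
  }
});
{code}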

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch, SOLR-3702.patch, 
 SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4809) OpenOffice document body is not indexed by SolrCell

2013-12-13 Thread Augusto Camarotti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847833#comment-13847833
 ] 

Augusto Camarotti commented on SOLR-4809:
-

Thanks for making a patch for it, Doug!



 OpenOffice document body is not indexed by SolrCell
 ---

 Key: SOLR-4809
 URL: https://issues.apache.org/jira/browse/SOLR-4809
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 3.6.1, 4.3
Reporter: Jack Krupansky
 Attachments: HelloWorld.docx, HelloWorld.odp, HelloWorld.odt, 
 HelloWorld.txt, SOLR-4809.patch


 As reported on the solr user mailing list, SolrCell is not indexing document 
 body content for OpenOffice documents.
 I tested with Apache Open Office 3.4.1 on Solr 4.3 and 3.6.1, for both 
 OpenWriter (.ODT) and Impress (.ODP).
 The extractOnly option does return the document body text, but Solr does not 
 index it. In my test cases (.ODP and .ODT), all I see for 
 the content attribute in Solr are a few spaces.
 Using the example schema, I indexed HelloWorld.odt using:
 {code}
  curl "http://localhost:8983/solr/update/extract?literal.id=doc-1&uprefix=attr_&commit=true" -F "myfile=@HelloWorld.odt"
 {code}
 It queries as:
 {code}
 ?xml version=1.0 encoding=UTF-8?
 response
 lst name=responseHeader
   int name=status0/int
   int name=QTime2/int
   lst name=params
 str name=indenttrue/str
 str name=qid:doc-1/str
   /lst
 /lst
 result name=response numFound=1 start=0
   doc
 str name=iddoc-1/str
 arr name=attr_image_count
   str0/str
 /arr
 arr name=attr_editing_cycles
   str1/str
 /arr
 arr name=attr_stream_source_info
   strmyfile/str
 /arr
 arr name=attr_meta_save_date
   str2013-05-10T17:15:40.99/str
 /arr
 arr name=attr_dc_subject
   strHello, World/str
 /arr
 str name=subjectHello World - subject/str
 arr name=attr_dcterms_created
   str2013-05-10T17:11:58.88/str
 /arr
 arr name=attr_date
   str2013-05-10T17:15:40.99/str
 /arr
 arr name=attr_dc_description
   strThis is a test of SolrCell using OpenOffice 3.4.1 - 
 OpenWriter./str
 /arr
 arr name=attr_nbobject
   str0/str
 /arr
 arr name=attr_word_count
   str10/str
 /arr
 arr name=attr_edit_time
   strPT3M44S/str
 /arr
 arr name=attr_meta_paragraph_count
   str4/str
 /arr
 arr name=attr_creation_date
   str2013-05-10T17:11:58.88/str
 /arr
 arr name=title
   strHello World SolrCell Test - title/str
 /arr
 arr name=attr_object_count
   str0/str
 /arr
 arr name=attr_stream_content_type
   strapplication/octet-stream/str
 /arr
 arr name=attr_nbimg
   str0/str
 /arr
 str name=descriptionThis is a test of SolrCell using OpenOffice 3.4.1 
 - OpenWriter./str
 arr name=attr_stream_size
   str8960/str
 /arr
 arr name=attr_meta_object_count
   str0/str
 /arr
 arr name=attr_cp_subject
   strHello World - subject/str
 /arr
 arr name=attr_stream_name
   strHelloWorld.odt/str
 /arr
 arr name=attr_generator
   strOpenOffice.org/3.4.1$Win32 
 OpenOffice.org_project/341m1$Build-9593/str
 /arr
 str name=keywordsHello, World/str
 arr name=attr_last_save_date
   str2013-05-10T17:15:40.99/str
 /arr
 arr name=attr_paragraph_count
   str4/str
 /arr
 arr name=attr_dc_title
   strHello World SolrCell Test - title/str
 /arr
 arr name=attr_dcterms_modified
   str2013-05-10T17:15:40.99/str
 /arr
 arr name=attr_meta_creation_date
   str2013-05-10T17:11:58.88/str
 /arr
 arr name=attr_page_count
   str1/str
 /arr
 arr name=attr_meta_character_count
   str60/str
 /arr
 date name=last_modified2013-05-10T17:15:40Z/date
 arr name=attr_nbtab
   str0/str
 /arr
 arr name=attr_meta_word_count
   str10/str
 /arr
 arr name=attr_meta_table_count
   str0/str
 /arr
 arr name=attr_modified
   str2013-05-10T17:15:40.99/str
 /arr
 arr name=attr_meta_image_count
   str0/str
 /arr
 arr name=attr_xmptpg_npages
   str1/str
 /arr
 arr name=attr_table_count
   str0/str
 /arr
 arr name=attr_nbpara
   str4/str
 /arr
 arr name=attr_character_count
   str60/str
 /arr
 arr name=attr_meta_page_count
   str1/str
 /arr
 arr name=attr_nbword
   str10/str
 /arr
 arr name=attr_nbpage
   str1/str
 /arr
 arr name=content_type
   strapplication/vnd.oasis.opendocument.text/str
 /arr
 arr name=attr_nbcharacter
   str60/str
 /arr
 arr name=content
   str  /str
 /arr
   

[jira] [Comment Edited] (SOLR-3702) String concatenation function

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847829#comment-13847829
 ] 

Andrey Kudryavtsev edited comment on SOLR-3702 at 12/13/13 7:44 PM:


I found out that while this patch was under review, we already got a 
ConcatStringFunction via the SOLR-5302 Analytics component (checked into trunk, 
to be ported back to 4x later). So now I reuse that code and just add 
support for using this function as a function query in ValueSourceParser.


was (Author: werder):
I found out that while this patch was on review, we've already has 
ConcatStringFunction via SOLR-5302 Analytics component. Checking in to trunk, 
we'll let it back then port to 4x. So now I reuse that code and just add 
support to use this function as function query in ValueSourceParser.

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch, SOLR-3702.patch, 
 SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847846#comment-13847846
 ] 

ASF subversion and git services commented on SOLR-5555:
---

Commit 1550824 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1550824 ]

SOLR-5555: CloudSolrServer and LBHttpSolrServer shouldn't throw MUE from 
constructors

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-5555
 URL: https://issues.apache.org/jira/browse/SOLR-5555
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward
 Attachments: SOLR-5555.patch, SOLR-5555.patch


 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
 {code}

 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:

 {code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
      throws MalformedURLException {
    this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
 {code}

 which calls ..

 {code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser,
      String... solrServerUrl) throws MalformedURLException {
    clientIsInternal = (httpClient == null);
    this.parser = parser;
    if (httpClient == null) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_USE_RETRY, false);
      this.httpClient = HttpClientUtil.createClient(params);
    } else {
      this.httpClient = httpClient;
    }
    for (String s : solrServerUrl) {
      ServerWrapper wrapper = new ServerWrapper(makeServer(s));
      aliveServers.put(wrapper.getKey(), wrapper);
    }
    updateAliveList();
  }
 {code}

 which calls ..

 {code}
  protected HttpSolrServer makeServer(String server) throws MalformedURLException {
    HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
    if (requestWriter != null) {
      s.setRequestWriter(requestWriter);
    }
    if (queryParams != null) {
      s.setQueryParams(queryParams);
    }
    return s;
  }
 {code}

 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):

 {code}
  public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
    this.baseUrl = baseURL;
    if (baseUrl.endsWith("/")) {
      baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
    }
    if (baseUrl.indexOf('?') >= 0) {
      throw new RuntimeException(
          "Invalid base url for solrj.  The base URL must not contain parameters: "
              + baseUrl);
    }

    if (client != null) {
      httpClient = client;
      internalClient = false;
    } else {
      internalClient = true;
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
      params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
      httpClient = HttpClientUtil.createClient(params);
    }

    this.parser = parser;
  }
 {code}

 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.

 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, so that callers of it do not need to declare it either.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847849#comment-13847849
 ] 

ASF subversion and git services commented on SOLR-5555:
---

Commit 1550826 from [~romseygeek] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1550826 ]

SOLR-5555: CloudSolrServer and LBHttpSolrServer shouldn't throw MUE from 
constructors

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-5555
 URL: https://issues.apache.org/jira/browse/SOLR-5555
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward
 Attachments: SOLR-5555.patch, SOLR-5555.patch


 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
 {code}

 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:

 {code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
      throws MalformedURLException {
    this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
 {code}

 which calls ..

 {code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser,
      String... solrServerUrl) throws MalformedURLException {
    clientIsInternal = (httpClient == null);
    this.parser = parser;
    if (httpClient == null) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_USE_RETRY, false);
      this.httpClient = HttpClientUtil.createClient(params);
    } else {
      this.httpClient = httpClient;
    }
    for (String s : solrServerUrl) {
      ServerWrapper wrapper = new ServerWrapper(makeServer(s));
      aliveServers.put(wrapper.getKey(), wrapper);
    }
    updateAliveList();
  }
 {code}

 which calls ..

 {code}
  protected HttpSolrServer makeServer(String server) throws MalformedURLException {
    HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
    if (requestWriter != null) {
      s.setRequestWriter(requestWriter);
    }
    if (queryParams != null) {
      s.setQueryParams(queryParams);
    }
    return s;
  }
 {code}

 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):

 {code}
  public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
    this.baseUrl = baseURL;
    if (baseUrl.endsWith("/")) {
      baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
    }
    if (baseUrl.indexOf('?') >= 0) {
      throw new RuntimeException(
          "Invalid base url for solrj.  The base URL must not contain parameters: "
              + baseUrl);
    }

    if (client != null) {
      httpClient = client;
      internalClient = false;
    } else {
      internalClient = true;
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
      params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
      httpClient = HttpClientUtil.createClient(params);
    }

    this.parser = parser;
  }
 {code}

 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.

 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, so that callers of it do not need to declare it either.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-5555) CloudSolrServer need not declare to throw MalformedURLException

2013-12-13 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-5555.
-

Resolution: Fixed

Thanks Sushil!
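
(Net effect for client code, assuming only the throws clauses changed: the
previously-mandatory catch block for an exception that could never fire goes
away. Host names below are illustrative only:

{code}
// No more try/catch around construction:
CloudSolrServer server = new CloudSolrServer("zkhost1:2181,zkhost2:2181");
{code}
)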

 CloudSolrServer need not declare to throw MalformedURLException
 ---

 Key: SOLR-5555
 URL: https://issues.apache.org/jira/browse/SOLR-5555
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Sushil Bajracharya
Assignee: Alan Woodward
 Attachments: SOLR-5555.patch, SOLR-5555.patch


 Currently CloudSolrServer declares to throw MalformedURLException for some of 
 its constructors. This does not seem necessary.
  
 Details based on looking through Solr 4.6 release code:
  
 CloudSolrServer has the following constructor that declares a checked 
 exception MalformedURLException..
 {code}
  public CloudSolrServer(String zkHost) throws MalformedURLException {
    this.zkHost = zkHost;
    this.myClient = HttpClientUtil.createClient(null);
    this.lbServer = new LBHttpSolrServer(myClient);
    this.lbServer.setRequestWriter(new BinaryRequestWriter());
    this.lbServer.setParser(new BinaryResponseParser());
    this.updatesToLeaders = true;
    shutdownLBHttpSolrServer = true;
  }
 {code}

 The only thing that seems capable of throwing MalformedURLException is 
 LBHttpSolrServer’s constructor:

 {code}
  public LBHttpSolrServer(HttpClient httpClient, String... solrServerUrl)
      throws MalformedURLException {
    this(httpClient, new BinaryResponseParser(), solrServerUrl);
  }
 {code}

 which calls ..

 {code}
  public LBHttpSolrServer(HttpClient httpClient, ResponseParser parser,
      String... solrServerUrl) throws MalformedURLException {
    clientIsInternal = (httpClient == null);
    this.parser = parser;
    if (httpClient == null) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_USE_RETRY, false);
      this.httpClient = HttpClientUtil.createClient(params);
    } else {
      this.httpClient = httpClient;
    }
    for (String s : solrServerUrl) {
      ServerWrapper wrapper = new ServerWrapper(makeServer(s));
      aliveServers.put(wrapper.getKey(), wrapper);
    }
    updateAliveList();
  }
 {code}

 which calls ..

 {code}
  protected HttpSolrServer makeServer(String server) throws MalformedURLException {
    HttpSolrServer s = new HttpSolrServer(server, httpClient, parser);
    if (requestWriter != null) {
      s.setRequestWriter(requestWriter);
    }
    if (queryParams != null) {
      s.setQueryParams(queryParams);
    }
    return s;
  }
 {code}

 Note that makeServer(String server) above does not need to throw 
 MalformedURLException, since the only thing that seems capable of throwing 
 MalformedURLException is HttpSolrServer’s constructor (which does not):

 {code}
  public HttpSolrServer(String baseURL, HttpClient client, ResponseParser parser) {
    this.baseUrl = baseURL;
    if (baseUrl.endsWith("/")) {
      baseUrl = baseUrl.substring(0, baseUrl.length() - 1);
    }
    if (baseUrl.indexOf('?') >= 0) {
      throw new RuntimeException(
          "Invalid base url for solrj.  The base URL must not contain parameters: "
              + baseUrl);
    }

    if (client != null) {
      httpClient = client;
      internalClient = false;
    } else {
      internalClient = true;
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
      params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
      params.set(HttpClientUtil.PROP_FOLLOW_REDIRECTS, followRedirects);
      httpClient = HttpClientUtil.createClient(params);
    }

    this.parser = parser;
  }
 {code}

 I see nothing above that’d throw MalformedURLException. It throws a 
 RuntimeException when the baseUrl does not match a certain pattern; maybe that 
 was intended to be a MalformedURLException.

 It seems like an error or oversight that CloudSolrServer declares to throw 
 MalformedURLException for some of its constructors. 
 This could be fixed by making LBHttpSolrServer not declare the 
 MalformedURLException, so that callers of it do not need to declare it either.
  
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5388) Combine def function with multi functions (max, min, sum, product)

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-5388:
-

Description: 
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 
Example: 

Doc1: Field1: 1, Field2: 2
Doc2: Field3: 3, Field4: 4

Now if we call the user function def(min(Field1,Field2),5) we get: Doc1 = 1, 
Doc2 = +Infinity
 

  was:
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 


 Combine def function with multi functions (max, min, sum, product)
 

 Key: SOLR-5388
 URL: https://issues.apache.org/jira/browse/SOLR-5388
 Project: Solr
  Issue Type: New Feature
Reporter: Andrey Kudryavtsev
Priority: Minor
  Labels: patch
 Fix For: 5.0

 Attachments: SOLR-5388.patch, SOLR-5388.patch


 Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
 _defValue_ if a document doesn't have a value for any of these fields. 
 Implement the _exists(int doc)_ method for the FunctionValues created in 
 MultiFloatFunction, and extract a method so that other MultiFloatFunction 
 implementations can override it. 
 Example: 
 Doc1: Field1: 1, Field2: 2
 Doc2: Field3: 3, Field4: 4
 Now if we call the user function def(min(Field1,Field2),5) we get: Doc1 = 
 1, Doc2 = +Infinity
  



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5388) Combine def function with multi functions (max, min, sum, product)

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-5388:
-

Description: 
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 
Example: 

Doc1: Field1: 10, Field2: 20
Doc2: Field3: 30, Field4: 40

We want to call the user function def(min(Field1,Field2),5) for these documents:
Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
fields, so it gets the defValue)

  was:
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 
Example: 

Doc1: Field1: 1, Field2: 2
Doc2: Field3: 3, Field4: 4

Now if we call the user function def(min(Field1,Field2),5) we get: Doc1 = 1, 
Doc2 = +Infinity
 


 Combine def function with multi functions (max, min, sum, product)
 

 Key: SOLR-5388
 URL: https://issues.apache.org/jira/browse/SOLR-5388
 Project: Solr
  Issue Type: New Feature
Reporter: Andrey Kudryavtsev
Priority: Minor
  Labels: patch
 Fix For: 5.0

 Attachments: SOLR-5388.patch, SOLR-5388.patch


 Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
 _defValue_ if a document doesn't have a value for any of these fields. 
 Implement the _exists(int doc)_ method for the FunctionValues created in 
 MultiFloatFunction, and extract a method so that other MultiFloatFunction 
 implementations can override it. 
 Example: 
 Doc1: Field1: 10, Field2: 20
 Doc2: Field3: 30, Field4: 40
 We want to call the user function def(min(Field1,Field2),5) for these documents:
 Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
 With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
 fields, so it gets the defValue)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5388) Combine def function with multi functions (max, min, sum, product)

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-5388:
-

Description: 
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 

Example: 
Doc1: Field1: 10, Field2: 20
Doc2: Field3: 30, Field4: 40
We want to call the user function def(min(Field1,Field2),5) for these documents:
Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
fields, so it gets the defValue)

  was:
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 
Example: 

Doc1: Field1: 10, Field2: 20
Doc2: Field3: 30, Field4: 40

We want to call the user function def(min(Field1,Field2),5) for these documents:
Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
fields, so it gets the defValue)


 Combine def function with multi functions (max, min, sum, product)
 

 Key: SOLR-5388
 URL: https://issues.apache.org/jira/browse/SOLR-5388
 Project: Solr
  Issue Type: New Feature
Reporter: Andrey Kudryavtsev
Priority: Minor
  Labels: patch
 Fix For: 5.0

 Attachments: SOLR-5388.patch, SOLR-5388.patch


 Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
 _defValue_ if a document doesn't have a value for any of these fields. 
 Implement the _exists(int doc)_ method for the FunctionValues created in 
 MultiFloatFunction, and extract a method so that other MultiFloatFunction 
 implementations can override it. 
 Example: 
 Doc1: Field1: 10, Field2: 20
 Doc2: Field3: 30, Field4: 40
 We want to call the user function def(min(Field1,Field2),5) for these documents:
 Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
 With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
 fields, so it gets the defValue)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5477) Async execution of OverseerCollectionProcessor tasks

2013-12-13 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847857#comment-13847857
 ] 

Jessica Cheng commented on SOLR-5477:
-

I agree that auto-retry is not the right thing to do.

However, a timeout can possibly happen on the Overseer-to-node admin requests 
(if these requests have no timeouts, it might be dangerous because a connection 
can sometimes be sunk and the client will never find out--we've actually 
seen this happen on the apache httpclient through solrj). What I'm getting at 
is that for the same reason that we're changing this client-to-Overseer request 
to being async, maybe the Overseer-to-node admin request should be async too.
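
(To make the proposal concrete, the flow sketched in the description is:

{noformat}
# submit, returns a task id immediately (names from the issue text; final API may differ)
/admin/collections?action=CREATE&name=mycollection&async=true
# poll for completion
/admin/collections?command=status&id=7657668909
{noformat}

An async Overseer-to-node leg would extend the same pattern one hop further.)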

 Async execution of OverseerCollectionProcessor tasks
 

 Key: SOLR-5477
 URL: https://issues.apache.org/jira/browse/SOLR-5477
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Anshum Gupta

 Typical collection admin commands are long-running and it is very common to 
 have the requests time out.  It is more of a problem if the cluster is 
 very large. Add an option to run these commands asynchronously:
 add an extra param async=true for all collection commands;
 the task is written to ZK and the caller is returned a task id. 
 A separate collection admin command will be added to poll the status of the 
 task:
 command=status&id=7657668909
 If an id is not passed, all running async tasks should be listed.
 A separate queue is created to store in-process tasks. After the tasks are 
 completed the queue entry is removed. OverseerCollectionProcessor will perform 
 these tasks in multiple threads.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5351) DirectoryReader#close can throw AlreadyClosedException if it's an NRT reader

2013-12-13 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5351:


Attachment: LUCENE-5351.patch

I think we should catch exceptions that come from the IW/Directory. Here is a 
patch.
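
The shape of the change, for those reading along (a sketch only; the attached
patch is authoritative):

{code}
if (writer != null) {
  try {
    // Since we just closed, writer may now be able to
    // delete unused files:
    writer.deletePendingFiles();
  } catch (AlreadyClosedException ok) {
    // The writer's Directory may already have been closed -- ignore.
  }
}
{code}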

 DirectoryReader#close can throw AlreadyClosedException if it's an NRT reader
 -

 Key: LUCENE-5351
 URL: https://issues.apache.org/jira/browse/LUCENE-5351
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5351.patch


 In StandardDirectoryReader#doClose we do this:
 {noformat}
 if (writer != null) {
   // Since we just closed, writer may now be able to
   // delete unused files:
   writer.deletePendingFiles();
 }
 {noformat}
 which can throw AlreadyClosedException from the directory if the Directory 
 has already been closed. To me this looks like a bug, and we should catch this 
 exception from the directory.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5388) Combine def function with multi functions (max, min, sum, product)

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-5388:
-

Description: 
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 

Example: 
Doc1: Field1: 10, Field2: 20
Doc2: Field3: 30, Field4: 40
We want to call the user function def(min(Field1,Field2),5) for these documents:
Now we get:  Doc1 = 10, Doc2 = Float.POSITIVE_INFINITY
With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
fields, so it gets the defValue)

  was:
Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
_defValue_ if a document doesn't have a value for any of these fields. 
Implement the _exists(int doc)_ method for the FunctionValues created in 
MultiFloatFunction, and extract a method so that other MultiFloatFunction 
implementations can override it. 

Example: 
Doc1: Field1: 10, Field2: 20
Doc2: Field3: 30, Field4: 40
We want to call the user function def(min(Field1,Field2),5) for these documents:
Now we get:  Doc1 = 1, Doc2 = Float.POSITIVE_INFINITY
With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
fields, so it gets the defValue)
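
(The kind of change being described, as a sketch -- field and method names are
assumptions, see the attached SOLR-5388.patch for the actual implementation:

{code}
// Inside the FunctionValues returned by MultiFloatFunction:
@Override
public boolean exists(int doc) {
  // Report a value only when every wrapped source has one, so that
  // def(min(Field1,Field2), 5) can fall back to 5 for documents
  // missing those fields.
  for (FunctionValues vals : valsArr) {
    if (!vals.exists(doc)) return false;
  }
  return true;
}
{code}
)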


 Combine def function with multi functions (max, min, sum, product)
 

 Key: SOLR-5388
 URL: https://issues.apache.org/jira/browse/SOLR-5388
 Project: Solr
  Issue Type: New Feature
Reporter: Andrey Kudryavtsev
Priority: Minor
  Labels: patch
 Fix For: 5.0

 Attachments: SOLR-5388.patch, SOLR-5388.patch


 Ability for expressions like _def(min(field1,..,fieldN), defValue)_ to return 
 _defValue_ if a document doesn't have a value for any of these fields. 
 Implement the _exists(int doc)_ method for the FunctionValues created in 
 MultiFloatFunction, and extract a method so that other MultiFloatFunction 
 implementations can override it. 
 Example: 
 Doc1: Field1: 10, Field2: 20
 Doc2: Field3: 30, Field4: 40
 We want to call the user function def(min(Field1,Field2),5) for these documents:
 Now we get:  Doc1 = 10, Doc2 = Float.POSITIVE_INFINITY
 With this patch:  Doc1 = 10, Doc2 = 5 (Doc2 doesn't have values for these 
 fields, so it gets the defValue)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5368) Only close IndexOutput when MockDirectoryWrapper crashes...

2013-12-13 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5368:


Attachment: LUCENE-5368.patch

Here is a patch.

 Only close IndexOutput when MockDirectoryWrapper crashes...
 --

 Key: LUCENE-5368
 URL: https://issues.apache.org/jira/browse/LUCENE-5368
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5368.patch


 The directory contract allows reading from an opened IndexInput even after 
 Directory.close() is called, as long as the input is not closed. In 
 MockDirectoryWrapper we forcefully close the index inputs during close 
 (sometimes, if we run check-index) when we try to crash the dir. I think we 
 should just close the IndexOutputs in that case and leave the index inputs 
 open until they are closed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5368) Only close IndexOutput when MockDirectoryWrapper crashes...

2013-12-13 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-5368:
---

 Summary: Only close IndexOutput when MockDirectoryWrapper crashes...
 Key: LUCENE-5368
 URL: https://issues.apache.org/jira/browse/LUCENE-5368
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5368.patch

The directory contract allows reading from an opened IndexInput even after 
Directory.close() is called, as long as the input is not closed. In 
MockDirectoryWrapper we forcefully close the index inputs during close 
(sometimes, if we run check-index) when we try to crash the dir. I think we 
should just close the IndexOutputs in that case and leave the index inputs 
open until they are closed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3702) String concatenation function

2013-12-13 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-3702:
-

Attachment: SOLR-3702.patch

 String concatenation function
 -

 Key: SOLR-3702
 URL: https://issues.apache.org/jira/browse/SOLR-3702
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.0-ALPHA
Reporter: Ted Strauss
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-3702.patch, SOLR-3702.patch, SOLR-3702.patch, 
 SOLR-3702.patch, SOLR-3702.patch


 Related to https://issues.apache.org/jira/browse/SOLR-2526
 Add query function to support concatenation of Strings.
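
As a rough illustration of what such a function involves at the ValueSource level, a hedged sketch follows; the attached patch may well be structured differently, and the class name here is an assumption:

{noformat}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.StrDocValues;

// Sketch only: a two-argument string concatenation ValueSource.
class ConcatFunction extends ValueSource {
  private final ValueSource a, b;

  ConcatFunction(ValueSource a, ValueSource b) {
    this.a = a;
    this.b = b;
  }

  @Override
  public FunctionValues getValues(Map context, AtomicReaderContext readerContext)
      throws IOException {
    final FunctionValues av = a.getValues(context, readerContext);
    final FunctionValues bv = b.getValues(context, readerContext);
    return new StrDocValues(this) {
      @Override
      public String strVal(int doc) {
        // concatenate the per-document string values of both sources
        return av.strVal(doc) + bv.strVal(doc);
      }
    };
  }

  @Override
  public String description() {
    return "concat(" + a.description() + "," + b.description() + ")";
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof ConcatFunction)) return false;
    ConcatFunction other = (ConcatFunction) o;
    return a.equals(other.a) && b.equals(other.b);
  }

  @Override
  public int hashCode() {
    return 31 * a.hashCode() + b.hashCode();
  }
}
{noformat}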



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5556) Allow class of CollectionsHandler and InfoHandler to be specified in solr.xml

2013-12-13 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-5556:


 Summary: Allow class of CollectionsHandler and InfoHandler to be 
specified in solr.xml
 Key: SOLR-5556
 URL: https://issues.apache.org/jira/browse/SOLR-5556
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, 4.7
Reporter: Gregory Chanan
Priority: Minor
 Fix For: 5.0, 4.7


Currently, you can specify the CoreAdminHandler class name in solr.xml, but not 
the CollectionsHandler nor the InfoHandler.

I want to run some (access control) checks around the administrative commands.  
I can do this with the CoreAdminHandler, but not the other two.
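
If it helps to picture the end state: the solr.xml syntax would presumably parallel the existing adminHandler option, roughly as below. This is a hedged sketch; the option names and example classes are assumptions, not committed syntax.

{noformat}
<solr>
  <!-- existing: pluggable core admin handler -->
  <str name="adminHandler">com.example.MyCoreAdminHandler</str>
  <!-- proposed: pluggable collections and info handlers -->
  <str name="collectionsHandler">com.example.MyCollectionsHandler</str>
  <str name="infoHandler">com.example.MyInfoHandler</str>
</solr>
{noformat}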



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5556) Allow class of CollectionsHandler and InfoHandler to be specified in solr.xml

2013-12-13 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-5556:
-

Attachment: SOLR-5556.patch

Here's a patch against trunk that implements this.

 Allow class of CollectionsHandler and InfoHandler to be specified in solr.xml
 -

 Key: SOLR-5556
 URL: https://issues.apache.org/jira/browse/SOLR-5556
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, 4.7
Reporter: Gregory Chanan
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: SOLR-5556.patch


 Currently, you can specify the CoreAdminHandler class name in solr.xml, but 
 not the CollectionsHandler nor the InfoHandler.
 I want to run some (access control) checks around the administrative 
 commands.  I can do this with the CoreAdminHandler, but not the other two.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5368) Only close IndexOutput when MockDirectoryWrapper crashes...

2013-12-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847912#comment-13847912
 ] 

Michael McCandless commented on LUCENE-5368:


bq. The directory contract allows reading from an opened IndexInput even after 
Directory.close() is called, as long as the input is not closed.

I don't think that's the contract.

I mean, this happens to be true for Lucene's Directory impls today ... but I 
think that's more of a happy accident than a hard contract.  I can see a DB 
based Directory impl closing the database connection in Directory.close, which 
would render all open IndexInputs unusable.  It should be an error to close a 
Directory when there are still open IndexInputs/Outputs (MDW enforces this 
today).

I don't think apps should close a Directory until all open IndexReaders on that 
Directory have been closed.

Either that or, I think we should just remove Directory.close, if its semantics 
are so ambiguous.

Also, if you remove closing the IIs then won't this cause failures on Windows?  
(The comment says we must close so we can corrupt even currently open files).
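
To make the contract question concrete, the usage pattern under debate looks roughly like the hedged sketch below (the index path and file name are illustrative). It happens to work on today's FSDirectory, but MockDirectoryWrapper rejects it:

{noformat}
import java.io.File;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;

public class DirCloseContract {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("/tmp/test-index"));
    IndexInput in = dir.openInput("_0.cfs", IOContext.READ);
    dir.close();            // MockDirectoryWrapper fails here: input still open
    byte b = in.readByte(); // happens to work on FSDirectory today
    in.close();
  }
}
{noformat}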

 Only close IndexOutput when MockDirectoryWrapper crashes...
 --

 Key: LUCENE-5368
 URL: https://issues.apache.org/jira/browse/LUCENE-5368
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5368.patch


 The directory contract allows reading from an opened IndexInput even after 
 Directory.close() is called, as long as the input is not closed. In 
 MockDirWrapper we close the index input forcefully during close (sometimes if 
 we run check index) when we try to crash the dir. I think we should just 
 close the IndexOutput in that case and leave the index inputs open until they 
 are closed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5351) DirectoryReader#close can throw AlreadyClosedException if it's an NRT reader

2013-12-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847914#comment-13847914
 ] 

Michael McCandless commented on LUCENE-5351:


I don't think we should commit this; I think the application should not close 
the Directory until it has closed the readers that are using that Directory.

 DirectoryReader#close can throw AlreadyClosedException if it's an NRT reader
 -

 Key: LUCENE-5351
 URL: https://issues.apache.org/jira/browse/LUCENE-5351
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5351.patch


 In StandardDirectoryReader#doClose we do this:
 {noformat}
if (writer != null) {
   // Since we just closed, writer may now be able to
   // delete unused files:
   writer.deletePendingFiles();
 }
 {noformat}
 which can throw AlreadyClosedException from the directory if the Directory 
 has already been closed. To me this looks like a bug and we should catch this 
 exception from the directory.
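
A minimal sketch of the catch-and-handle change the description suggests (keeping in mind the objection above that the application should close its readers before the Directory):

{noformat}
// Sketch only, inside StandardDirectoryReader#doClose
// (AlreadyClosedException is org.apache.lucene.store.AlreadyClosedException):
if (writer != null) {
  try {
    // Since we just closed, writer may now be able to
    // delete unused files:
    writer.deletePendingFiles();
  } catch (AlreadyClosedException ace) {
    // The application already closed the Directory; there is
    // nothing left to delete, so swallow the exception.
  }
}
{noformat}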



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5369) Add an UpperCaseFilter

2013-12-13 Thread Ryan McKinley (JIRA)
Ryan McKinley created LUCENE-5369:
-

 Summary: Add an UpperCaseFilter
 Key: LUCENE-5369
 URL: https://issues.apache.org/jira/browse/LUCENE-5369
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Ryan McKinley
Assignee: Ryan McKinley
Priority: Minor


We should offer a standard way to force upper-case tokens.  I understand that 
lowercase is safer for general search quality because some uppercase characters 
can represent multiple lowercase ones.

However, having upper-case tokens is often nice for faceting (consider 
normalizing to standard acronyms).





--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5369) Add an UpperCaseFilter

2013-12-13 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated LUCENE-5369:
--

Attachment: LUCENE-5369-uppercase-filter.patch

Here is a patch that adds UpperCaseFilter.

There are a few others out there:
http://svn.apache.org/repos/asf/uima/addons/trunk/Lucas/src/main/java/org/apache/uima/lucas/indexer/analysis/UpperCaseFilter.java

https://github.ugent.be/Universiteitsbibliotheek/lludss-solr-java/blob/master/src/main/java/lludss/solr/analysis/UpperCaseFilter.java



Given that we would want to steer people to LowerCase, perhaps this should be 
in a different package.

I'll wait for +1 from someone who knows more about this than me :)
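
For readers following along, a filter of this shape is small. Here is a hedged sketch, not the attached patch; note that it naively ignores one-to-many case mappings such as ß to SS, which is exactly the caveat about lowercase being safer:

{noformat}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Sketch only: uppercases each token in place, one char at a time.
public final class SimpleUpperCaseFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public SimpleUpperCaseFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    final char[] buffer = termAtt.buffer();
    final int length = termAtt.length();
    for (int i = 0; i < length; i++) {
      // NOTE: per-char casing; does not handle supplementary code
      // points or one-to-many case mappings.
      buffer[i] = Character.toUpperCase(buffer[i]);
    }
    return true;
  }
}
{noformat}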




 Add an UpperCaseFilter
 --

 Key: LUCENE-5369
 URL: https://issues.apache.org/jira/browse/LUCENE-5369
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Ryan McKinley
Assignee: Ryan McKinley
Priority: Minor
 Attachments: LUCENE-5369-uppercase-filter.patch


 We should offer a standard way to force upper-case tokens.  I understand that 
 lowercase is safer for general search quality because some uppercase 
 characters can represent multiple lowercase ones.
 However, having upper-case tokens is often nice for faceting (consider 
 normalizing to standard acronyms).



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2013-12-13 Thread Gary Schulte (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848029#comment-13848029
 ] 

Gary Schulte commented on SOLR-1301:


Some additional feedback: it would be convenient if we could ignore the 
underscore (_) hidden files in HDFS, as well as the dot (.) hidden files, when 
reading input files from HDFS.

When trying to index an AvroStorage directory created by Pig, we have to 
send each part name individually, because the job will fail if we pass the 
directory. Passing the directory, we end up picking up _logs/*, _SUCCESS, 
etc., and the corresponding Avro morphlines map jobs fail.
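
In Hadoop terms, the filtering being asked for is the usual hidden-file rule. A hedged sketch of a PathFilter expressing it (FileInputFormat applies a similar filter to top-level files internally):

{noformat}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Sketch only: accept a file unless its name marks it as hidden
// ("_logs", "_SUCCESS", ".something", ...).
public class VisibleFilesOnlyFilter implements PathFilter {
  @Override
  public boolean accept(Path path) {
    String name = path.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  }
}
{noformat}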

 Add a Solr contrib that allows for building Solr indexes via Hadoop's 
 Map-Reduce.
 -

 Key: SOLR-1301
 URL: https://issues.apache.org/jira/browse/SOLR-1301
 Project: Solr
  Issue Type: New Feature
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
 SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
 commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
 hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
 log4j-1.2.15.jar


 This patch contains a contrib module that provides distributed indexing 
 (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
 twofold:
 * provide an API that is familiar to Hadoop developers, i.e. that of 
 OutputFormat
 * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
 SolrOutputFormat consumes data produced by reduce tasks directly, without 
 storing it in intermediate files. Furthermore, by using an 
 EmbeddedSolrServer, the indexing task is split into as many parts as there 
 are reducers, and the data to be indexed is not sent over the network.
 Design
 --
 Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
 which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
 instantiates an EmbeddedSolrServer, and it also instantiates an 
 implementation of SolrDocumentConverter, which is responsible for turning 
 Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
 batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
 task completes and the OutputFormat is closed, SolrRecordWriter calls 
 commit() and optimize() on the EmbeddedSolrServer.
 The API provides facilities to specify an arbitrary existing solr.home 
 directory, from which the conf/ and lib/ files will be taken.
 This process results in the creation of as many partial Solr home directories 
 as there were reduce tasks. The output shards are placed in the output 
 directory on the default filesystem (e.g. HDFS). Such part-N directories 
 can be used to run N shard servers. Additionally, users can specify the 
 number of reduce tasks, in particular 1 reduce task, in which case the output 
 will consist of a single shard.
 An example application is provided that processes large CSV files and uses 
 this API. It uses custom CSV processing to avoid (de)serialization overhead.
 This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
 issue, you should put it in contrib/hadoop/lib.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5463) Provide cursor/token based searchAfter support that works with arbitrary sorting (ie: deep paging)

2013-12-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5463:
---

Attachment: SOLR-5463.patch

Baby steps towards a real solution.  This still has a DeepPagingComponent for 
doing setup & managing the sort values, but...

* Gutted the vestigial remnants of SOLR-1726
* replaced CursorMark.getPagingFilter with CursorMark.getSearchAfterFieldDoc
* made ResponseBuilder and QueryCommand keep track of a CursorMark
* refactored SolrIndexSearcher to add a buildTopDocsCollector helper
* made buildTopDocsCollector aware of CursorMark.getSearchAfterFieldDoc
* added a test to ensure that the non-cacheability of the cursor query wouldn't 
affect the independent caching of the filter queries.
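
For context, the Lucene-level primitive this builds on is IndexSearcher.searchAfter with a field sort. A hedged usage sketch, with an illustrative sort field and page size:

{noformat}
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.FieldDoc;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;

public class DeepPager {
  // Sketch only: walk every page of results without a growing offset,
  // by handing the last document's sort values back to the searcher.
  public static void pageThrough(IndexReader reader, Query query) throws Exception {
    IndexSearcher searcher = new IndexSearcher(reader);
    // arbitrary multi-level sort, with a docid tie-breaker
    Sort sort = new Sort(new SortField("price", SortField.Type.FLOAT),
                         SortField.FIELD_DOC);
    TopDocs page = searcher.search(query, 10, sort);
    while (page.scoreDocs.length > 0) {
      // ... process page.scoreDocs ...
      FieldDoc last = (FieldDoc) page.scoreDocs[page.scoreDocs.length - 1];
      page = searcher.searchAfter(last, query, 10, sort);
    }
  }
}
{noformat}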



 Provide cursor/token based searchAfter support that works with arbitrary 
 sorting (ie: deep paging)
 --

 Key: SOLR-5463
 URL: https://issues.apache.org/jira/browse/SOLR-5463
 Project: Solr
  Issue Type: New Feature
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-5463.patch, SOLR-5463__straw_man.patch, 
 SOLR-5463__straw_man.patch, SOLR-5463__straw_man.patch, 
 SOLR-5463__straw_man.patch, SOLR-5463__straw_man.patch, 
 SOLR-5463__straw_man.patch, SOLR-5463__straw_man.patch, 
 SOLR-5463__straw_man.patch, SOLR-5463__straw_man.patch, 
 SOLR-5463__straw_man.patch


 I'd like to revisit a solution to the problem of deep paging in Solr, 
 leveraging an HTTP based API similar to how IndexSearcher.searchAfter works 
 at the lucene level: require the clients to provide back a token indicating 
 the sort values of the last document seen on the previous page.  This is 
 similar to the cursor model I've seen in several other REST APIs that 
 support pagination over large sets of results (notably the twitter API and 
 its since_id param) except that we'll want something that works with 
 arbitrary multi-level sort criteria that can be either ascending or descending.
 SOLR-1726 laid some initial ground work here and was committed quite a while 
 ago, but the key bit of argument parsing to leverage it was commented out due 
 to some problems (see comments in that issue).  It's also somewhat out of 
 date at this point: at the time it was committed, IndexSearcher only supported 
 searchAfter for simple scores, not arbitrary field sorts; and the params 
 added in SOLR-1726 suffer from this limitation as well.
 ---
 I think it would make sense to start fresh with a new issue with a focus on 
 ensuring that we have deep paging which:
 * supports arbitrary field sorts in addition to sorting by score
 * works in distributed mode



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2013-12-13 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848097#comment-13848097
 ] 

wolfgang hoschek commented on SOLR-1301:


Might be best to write a program that generates the list of files and then 
explicitly provide that file list to the MR job, e.g. via the --input-list 
option. For example you could use the HDFS version of the Linux file system 
'find' command for that (HdfsFindTool doc and code here: 
https://github.com/cloudera/search/tree/master_1.1.0/search-mr)



 Add a Solr contrib that allows for building Solr indexes via Hadoop's 
 Map-Reduce.
 -

 Key: SOLR-1301
 URL: https://issues.apache.org/jira/browse/SOLR-1301
 Project: Solr
  Issue Type: New Feature
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
 SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
 commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
 hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
 log4j-1.2.15.jar


 This patch contains a contrib module that provides distributed indexing 
 (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
 twofold:
 * provide an API that is familiar to Hadoop developers, i.e. that of 
 OutputFormat
 * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
 SolrOutputFormat consumes data produced by reduce tasks directly, without 
 storing it in intermediate files. Furthermore, by using an 
 EmbeddedSolrServer, the indexing task is split into as many parts as there 
 are reducers, and the data to be indexed is not sent over the network.
 Design
 --
 Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
 which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
 instantiates an EmbeddedSolrServer, and it also instantiates an 
 implementation of SolrDocumentConverter, which is responsible for turning 
 Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
 batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
 task completes and the OutputFormat is closed, SolrRecordWriter calls 
 commit() and optimize() on the EmbeddedSolrServer.
 The API provides facilities to specify an arbitrary existing solr.home 
 directory, from which the conf/ and lib/ files will be taken.
 This process results in the creation of as many partial Solr home directories 
 as there were reduce tasks. The output shards are placed in the output 
 directory on the default filesystem (e.g. HDFS). Such part-N directories 
 can be used to run N shard servers. Additionally, users can specify the 
 number of reduce tasks, in particular 1 reduce task, in which case the output 
 will consist of a single shard.
 An example application is provided that processes large CSV files and uses 
 this API. It uses custom CSV processing to avoid (de)serialization overhead.
 This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
 issue, you should put it in contrib/hadoop/lib.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: mlockall?

2013-12-13 Thread Uwe Schindler
Hi Hoss,

full ack! sysctl -w vm.swappiness=0 is your friend (if you really want to do 
it; I don't recommend it for several reasons, nor do I recommend mlockall).

Mlockall is too risky if we don't have a single main() method that calls 
mlockall directly after starting.
It would also make the build system crazier, because we would need to ship 
(like ES) precompiled dll/so files for various OSes, which is not easy to 
handle. Anyone who wants to mlockall can always use an agent jar that does this. 
Or much easier: write your own main() method in a single class that locks all 
the stuff and then delegates to the main() method of your servlet container. But 
this is all out of scope for Solr; this is how to set up your runtime env. I 
don't want to have that in Lucene or Solr.
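
The wrapper-main idea, sketched concretely. This is a hedged example that uses JNA for the libc call; the delegation target (Jetty's start class) and the class name are illustrative:

{noformat}
import com.sun.jna.Native;

// Sketch only: lock all current and future pages, then hand off to the
// servlet container's real main(). Requires a suitable "ulimit -l".
public class MLockAllMain {
  private static final int MCL_CURRENT = 1; // lock pages currently mapped
  private static final int MCL_FUTURE  = 2; // and pages mapped later

  static {
    Native.register("c"); // bind the native method below to libc
  }

  private static native int mlockall(int flags);

  public static void main(String[] args) throws Exception {
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
      System.err.println("mlockall failed - check ulimit -l");
    }
    // example delegation target; substitute your container's entry point
    org.eclipse.jetty.start.Main.main(args);
  }
}
{noformat}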

Finally, I had several customers with ES that did exactly the wrong thing:
- They allocated too much heap space (initially like 70% of available RAM), 
just because they did not know better
- They also used mmap. Because 70% of the RAM was locked, the OS had no chance 
to swap in mmapped pages, and FileChannel.map() threw an OOM exception (the same 
happened for them with NIOFSDir, because NIOFS also needs direct buffers 
outside the heap!)
- Because of the OOM (which was a special OOM, not the default heap space 
or permgen one), they raised -Xmx further
- You can repeat this several times until you cannot reach your machine anymore 
because all mem is locked and also fragmented... AMEN :-) Hopefully the Linux OOM 
killer kills your processes!

By lowering swappiness, this cannot happen (because with swappiness=0, the 
system will still swap if all goes bad), so you can reach your system and 
don't make it die. Also: as swapping and mmapping are essentially the same 
thing, you should leave the decision of when to swap something out (and when to 
swap some mmapped buffers back in) to the OS kernel!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
 Sent: Friday, December 13, 2013 6:19 PM
 To: dev@lucene.apache.org
 Subject: Re: mlockall?
 
 
 : Right, right, I meant to say that I know about that blog post but my Q
 : is:
 : If mlockall is such a good thing, why not have it in Lucene or Solr?  Or
 : maybe mlockall is not such a good or simple thing?
 
 Beyond writing that agent jar, I never put much effort into thinking about
 integrating it directly into Solr for a variety of reasons alluded to in the 
 blog...
 http://searchhub.org/2013/05/21/mlockall-for-all/
 
 ...it seemed like an unnecessary complication and poor substitute for
 disabling swap on your production servers.
 
 There are a few important caveats to using mlockall-agent.jar, mostly
 because there are some important caveats to using mlockall in general (pay
 attention to your ulimits) and some specific caveats to using it in java (make
 sure your min heap = your max heap)
 
 ...and in the FAQ at the bottom of the README...
 https://github.com/LucidWorks/mlockall-
 agent/blob/master/README.txt#L111
 
 In particular, note the FAQ about MCL_CURRENT and the associated
 comments (straight from CASSANDRA-1214)...
 https://github.com/LucidWorks/mlockall-
 agent/blob/master/src/MLockAgent.java#L29
 ...my understanding is that mlockall is really only a good idea *before* any
 data is mmapped so you don't try to lock the stuff the OS is already
 mmapping -- doing that from within Solr's source (after the servlet container
 has already started) would be risky.
 
 Assuming it was worth the technical effort, it would convolute the build
 system a bit, and we'd have to make choices (similar to what Cassandra
 did) of how to deal with it on systems where it's not supported, or in
 instances where the call fails (because the ulimit isn't set high enough) ...
 treating it as an optional performance optimization isn't necessarily the
 best approach if people with platforms that do work are counting on it
 working.
 
 Barring any evidence from someone who's looked into it more than me, my
 current suggestion would be...
 
  * don't overload your machines and just disable swap when
using solr -- don't worry about mlockall
  * if you can't disable swap, and you want to run solr with
mlockall because your machines are overloaded, use mlockall-agent
 
 Once Solr evolves to the point where we don't run in a servlet container, and
 have our own public static void main, and ship platform native startup scripts
 where we can handle things like forcing heap min=max, and have overridable
 startup config options for things like force_mlockall=true,
 then it might be worth revisiting.
 
 
 
 -Hoss
 http://www.lucidworks.com/
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5299) Refactor Collector API for parallelism

2013-12-13 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848177#comment-13848177
 ] 

Otis Gospodnetic commented on LUCENE-5299:
--

That's one phat patch.  Should Fix Version be set to 5.0?

 Refactor Collector API for parallelism
 --

 Key: LUCENE-5299
 URL: https://issues.apache.org/jira/browse/LUCENE-5299
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shikhar Bhushan
 Attachments: LUCENE-5299.patch, LUCENE-5299.patch, LUCENE-5299.patch, 
 LUCENE-5299.patch, LUCENE-5299.patch, benchmarks.txt


 h2. Motivation
 We should be able to scale-up better with Solr/Lucene by utilizing multiple 
 CPU cores, and not have to resort to scaling-out by sharding (with all the 
 associated distributed system pitfalls) when the index size does not warrant 
 it.
 Presently, IndexSearcher has an optional constructor arg for an 
 ExecutorService, which gets used for searching in parallel for call paths 
 where one of the TopDocCollectors is created internally. The 
 per-atomic-reader search happens in parallel and then the 
 TopDocs/TopFieldDocs results are merged with locking around the merge bit.
 However there are some problems with this approach:
 * If arbitrary Collector args come into play, we can't parallelize. Note that 
 even if ultimately results are going to a TopDocCollector, it may be wrapped 
 inside e.g. an EarlyTerminatingCollector or TimeLimitingCollector or both.
 * The special-casing with parallelism baked on top does not scale; there are 
 many Collectors that could potentially lend themselves to parallelism, and 
 special-casing means the parallelization has to be re-implemented if a 
 different permutation of collectors is to be used.
 h2. Proposal
 A refactoring of collectors that allows for parallelization at the level of 
 the collection protocol. 
 Some requirements that should guide the implementation:
 * easy migration path for collectors that need to remain serial
 * the parallelization should be composable (when collectors wrap other 
 collectors)
 * allow collectors to pick the optimal solution (e.g. there might be memory 
 tradeoffs to be made) by advising the collector about whether a search will 
 be parallelized, so that the serial use-case is not penalized.
 * encourage use of non-blocking constructs and lock-free parallelism, 
 blocking is not advisable for the hot-spot of a search, besides wasting 
 pooled threads.
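
For readers less familiar with the status quo described above: the existing segment-parallel path is only engaged on the built-in top-docs calls, roughly as in this hedged sketch:

{noformat}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class ParallelSearchStatusQuo {
  public static TopDocs search(IndexReader reader, Query query) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      // per-segment searches run on the pool, results merged under a lock;
      // custom Collectors passed to search(...) do NOT get this treatment
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      return searcher.search(query, 10);
    } finally {
      pool.shutdown();
    }
  }
}
{noformat}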



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1117 - Still Failing!

2013-12-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1117/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
REGRESSION:  org.apache.solr.core.TestShardHandlerFactory.testXML

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([6A5CF76EA9906ABA]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestShardHandlerFactory

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([6A5CF76EA9906ABA]:0)




Build Log:
[...truncated 10148 lines...]
   [junit4] Suite: org.apache.solr.core.TestShardHandlerFactory
   [junit4]   2> 421678 T1170 oas.SolrTestCaseJ4.setUp ###Starting testOldXML
   [junit4]   2> 421680 T1170 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/'
   [junit4]   2> 421715 T1170 oasc.ConfigSolr.fromFile Loading container 
configuration from 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/solr-shardhandler-old.xml
   [junit4]   2> 421834 T1170 oasc.CoreContainer.<init> New CoreContainer 
1834436890
   [junit4]   2> 421835 T1170 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/]
   [junit4]   2> 421862 T1170 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 421863 T1170 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 421864 T1170 oasc.CoreContainer.load Host Name: null
   [junit4]   2> 422215 T1170 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=1834436890
   [junit4]   2> 422217 T1170 oas.SolrTestCaseJ4.tearDown ###Ending testOldXML
   [junit4]   2> 422234 T1170 oas.SolrTestCaseJ4.setUp ###Starting testXML
   [junit4]   2> 422235 T1170 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/'
   [junit4]   2> 422256 T1170 oasc.ConfigSolr.fromFile Loading container 
configuration from 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/solr-shardhandler.xml
   [junit4]   2> 422360 T1170 oasc.ConfigSolrXml.<init> Config-defined core 
root directory: 
   [junit4]   2> 422361 T1170 oasc.CoreContainer.<init> New CoreContainer 
21250115
   [junit4]   2> 422362 T1170 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/]
   [junit4]   2> 422380 T1170 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 422380 T1170 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 422381 T1170 oasc.CoreContainer.load Host Name: null
   [junit4]   2> 422387 T1170 oasc.CorePropertiesLocator.discover Looking for 
core definitions underneath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr
   [junit4]   2> 422507 T1170 oasc.CorePropertiesLocator.discoverUnder Found 
core conf in 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files/solr/conf/
   [junit4]   2> 422512 T1170 oasc.CorePropertiesLocator.discover Found 1 core 
definitions
   [junit4]   2> 422513 T1170 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=21250115
   [junit4]   2> 7200039 T1169 ccr.ThreadLeakControl$2.evaluate WARNING Suite 
execution timed out: org.apache.solr.core.TestShardHandlerFactory
   [junit4]   2> jstack at approximately timeout time
   [junit4]   2> Thread-500 ID=1172 WAITING on 
java.lang.Object@71e441c3
   [junit4]   2>     at java.lang.Object.wait(Native Method)
   [junit4]   2>     - waiting on java.lang.Object@71e441c3
   [junit4]   2>     at java.lang.Object.wait(Object.java:503)
   [junit4]   2>     at 
org.apache.solr.core.CloserThread.run(CoreContainer.java:1006)
   [junit4]   2>
   [junit4]   2> 
TEST-TestShardHandlerFactory.testXML-seed#[6A5CF76EA9906ABA] ID=1170 WAITING 
on org.apache.solr.core.CloserThread@4099a39f
   [junit4]   2>     at java.lang.Object.wait(Native Method)
   [junit4]   2>     - waiting on 
org.apache.solr.core.CloserThread@4099a39f
   [junit4]   2>     at java.lang.Thread.join(Thread.java:1280)
   [junit4]   2>     at java.lang.Thread.join(Thread.java:1354)
   [junit4]   2>     at 
org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:376)