[jira] [Updated] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-01 Thread bidorbuy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bidorbuy updated SOLR-11078:

Affects Version/s: 7.1

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and use their own indexes.
> We round-robin load-balance through our Tomcats and have noticed that
> performance has dropped since Solr 6.4.2. We have two indices per server,
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in
> performance since Solr 6.4.2.
> I am not sure whether this is related to metrics collection or other
> underlying changes, or whether other high-transaction users have noticed
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-01 Thread bidorbuy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bidorbuy updated SOLR-11078:

Component/s: Server
 search

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and use their own indexes.
> We round-robin load-balance through our Tomcats and have noticed that
> performance has dropped since Solr 6.4.2. We have two indices per server,
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in
> performance since Solr 6.4.2.
> I am not sure whether this is related to metrics collection or other
> underlying changes, or whether other high-transaction users have noticed
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-01 Thread bidorbuy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bidorbuy updated SOLR-11078:

Attachment: solr-6-4-2-schema.xml
solr-6-4-2-solrconfig.xml
solr-7-1-0-managed-schema
solr-7-1-0-solrconfig.xml
solr.in.sh

Config files attached to show 7.1.0 vs 6.4.2 configuration

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, solr-7-1-0-managed-schema, 
> solr-7-1-0-solrconfig.xml, solr-sample-warning-log.txt, solr.in.sh, 
> solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and use their own indexes.
> We round-robin load-balance through our Tomcats and have noticed that
> performance has dropped since Solr 6.4.2. We have two indices per server,
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in
> performance since Solr 6.4.2.
> I am not sure whether this is related to metrics collection or other
> underlying changes, or whether other high-transaction users have noticed
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-11-01 Thread bidorbuy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235224#comment-16235224
 ] 

bidorbuy commented on SOLR-11078:
-

We have 3 identical production servers. Two are running Solr 6.4.2, where the
schema still uses Trie* fields. The 3rd server, which shows the performance
degradation on any Solr version after 6.4.2, has now been upgraded to Solr
7.1.0 with the Trie* fields changed to *Point fields. The index has been
rebuilt, and Solr performance is still worse compared to Solr 6.4.2.

All servers are running:
- 10 GB RAM, 4 CPUs
- CentOS 7.3.1611
- Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
- Solr heap (-Xms/-Xmx) set to 4GB; GC tuning as per the attached solr.in.sh
(the settings are the same between 6.4.2 and 7.1.0)
- Two Solr indices: tradesearch (5.5m documents, 3.4GB) and searchsuggestions
(2.7m documents, 700MB)

After switching Solr 7.1.0 into production load:
* Load average (via top) shot to 19 and after a few minutes "settled" at 12
(on 6.4.2 it is about 2.2 - 4)
* Average query time is about 230ms - roughly 4 times slower than on 6.4.2
(where it is about 50ms)
* A specific query (which we know is generally slow) runs between 8-10 seconds
(on 6.4.2 it would take about 1.3 seconds)

I can switch 7.1.0 into parallel production tests to collect more data. Any
help is appreciated.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> solr-sample-warning-log.txt, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and use their own indexes.
> We round-robin load-balance through our Tomcats and have noticed that
> performance has dropped since Solr 6.4.2. We have two indices per server,
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in
> performance since Solr 6.4.2.
> I am not sure whether this is related to metrics collection or other
> underlying changes, or whether other high-transaction users have noticed
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 273 - Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/273/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 26,929,560 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):   - 26,928,944 bytes, private static 
javax.management.MBeanServer 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest.mBeanServer   - 192 
bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules   - 128 bytes, private static 
java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp   - 112 bytes, 
private static java.lang.String 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest.COLLECTION   - 72 
bytes, private static java.util.Map 
org.apache.solr.SolrTestCaseJ4.savedClassLogLevels   - 64 bytes, private static 
java.lang.String org.apache.solr.SolrTestCaseJ4.coreName   - 48 bytes, private 
static java.lang.String org.apache.solr.SolrTestCaseJ4.initialRootLogLevel

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 26,929,560 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 26,928,944 bytes, private static javax.management.MBeanServer 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest.mBeanServer
  - 192 bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 128 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp
  - 112 bytes, private static java.lang.String 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest.COLLECTION
  - 72 bytes, private static java.util.Map 
org.apache.solr.SolrTestCaseJ4.savedClassLogLevels
  - 64 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.coreName
  - 48 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.initialRootLogLevel
at __randomizedtesting.SeedInfo.seed([6ED8EDAC21555339]:0)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:170)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12456 lines...]
   [junit4] Suite: org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest
   [junit4]   2> 1764264 INFO  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.metrics.reporters.SolrJmxReporterCloudTest_6ED8EDAC21555339-001/init-core-data-001
   [junit4]   2> 1764264 WARN  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=59 numCloses=59
   [junit4]   2> 1764264 INFO  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1764266 INFO  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1764266 INFO  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.metrics.reporters.SolrJmxReporterCloudTest_6ED8EDAC21555339-001/tempDir-001
   [junit4]   2> 1764266 INFO  
(SUITE-SolrJmxReporterCloudTest-seed#[6ED8EDAC21555339]-worker) [] 
o.a.s.c.ZkTestServer 

[jira] [Updated] (LUCENE-8007) Require that codecs always store totalTermFreq, sumDocFreq and sumTotalTermFreq

2017-11-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8007:

Attachment: LUCENE-8007.patch

Adrien, I brought the patch up to speed, fixed codecs to return
{{totalTermFreq=docFreq}} and {{sumTotalTermFreq=sumDocFreq}} in DOCS_ONLY 
cases, and tried to remove all the damage caused by -1 values in 
code/javadocs/checkindex/tests/etc.
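
For readers less familiar with the DOCS_ONLY convention above, here is a small
stand-alone Java illustration of the stated rule (hypothetical helper, not code
from the patch):

{code}
// Hypothetical helper illustrating the DOCS_ONLY rule described above; in a
// docs-only field each term counts once per document, so the totals collapse
// onto the doc-frequency statistics. Not code from the LUCENE-8007 patch.
final class DocsOnlyStats {
  static long totalTermFreq(boolean docsOnly, long docFreq, long actualTotalTermFreq) {
    return docsOnly ? docFreq : actualTotalTermFreq;
  }

  static long sumTotalTermFreq(boolean docsOnly, long sumDocFreq, long actualSumTotalTermFreq) {
    return docsOnly ? sumDocFreq : actualSumTotalTermFreq;
  }
}
{code}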

> Require that codecs always store totalTermFreq, sumDocFreq and 
> sumTotalTermFreq
> ---
>
> Key: LUCENE-8007
> URL: https://issues.apache.org/jira/browse/LUCENE-8007
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-8007.patch, LUCENE-8007.patch, LUCENE-8007.patch, 
> LUCENE-8007.patch
>
>
> Javadocs allow codecs to not store some index statistics. Given discussion 
> that occurred on LUCENE-4100, this was mostly implemented this way to support 
> pre-flex codecs. We should now require that all codecs store these statistics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8032) Test failed: RandomGeoShapeRelationshipTest.testRandomContains

2017-11-01 Thread David Smiley (JIRA)
David Smiley created LUCENE-8032:


 Summary: Test failed: 
RandomGeoShapeRelationshipTest.testRandomContains
 Key: LUCENE-8032
 URL: https://issues.apache.org/jira/browse/LUCENE-8032
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Reporter: David Smiley
Priority: Minor


https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20800/
{noformat}
Error Message:
geoAreaShape: GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=-0.00871130560892533, 
lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
accuracy=2.01444186927E-4} shape: GeoRectangle: 
{planetmodel=PlanetModel.WGS84, toplat=0.18851664435052304(10.801208089253723), 
bottomlat=-1.4896034997154073(-85.34799368160976), 
leftlon=-1.4970589804391838(-85.7751612613233), 
rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: geoAreaShape: GeoExactCircle: 
{planetmodel=PlanetModel.WGS84, center=[lat=-0.00871130560892533, 
lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
accuracy=2.01444186927E-4}
shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
toplat=0.18851664435052304(10.801208089253723), 
bottomlat=-1.4896034997154073(-85.34799368160976), 
leftlon=-1.4970589804391838(-85.7751612613233), 
rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([87612C9805977C6F:B087E212A0C8DB25]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10469) setParallelUpdates should be deprecated in favor of SolrClientBuilder methods

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10469:

Attachment: 
SOLR_10469_CloudSolrClient_setParallelUpdates_move_to_Builder.patch

This patch only makes the change without redirecting any of the existing
callers (which coincidentally are all in tests).  I'll handle the callers in
SOLR-11507 along with some other simplifications.

> setParallelUpdates should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10469
> URL: https://issues.apache.org/jira/browse/SOLR-10469
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_10469_CloudSolrClient_setParallelUpdates_move_to_Builder.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setParallelUpdates}} 
> setter on all {{SolrClient}} implementations.
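
To make the before/after concrete, here is a hedged SolrJ sketch of what this
subtask describes for {{setParallelUpdates}}; {{withParallelUpdates(...)}} is
only an assumed name for the Builder counterpart implied by the issue title,
not a confirmed API:

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Hedged sketch only; withParallelUpdates(...) is an assumed Builder method name.
public class ParallelUpdatesSketch {
  // Today: configure via a (to-be-deprecated) setter after construction.
  public static CloudSolrClient viaSetter(String zkHost) {
    CloudSolrClient client = new CloudSolrClient.Builder().withZkHost(zkHost).build();
    client.setParallelUpdates(true); // mutates a client that may already be shared across threads
    return client;
  }

  // Goal: configure everything up front on the Builder, keeping the client immutable.
  public static CloudSolrClient viaBuilder(String zkHost) {
    return new CloudSolrClient.Builder()
        .withZkHost(zkHost)
        .withParallelUpdates(true)   // assumed name, per the issue title
        .build();
  }
}
{code}

The second form keeps all the configuration in one place and avoids mutating a
client after it may already be shared across threads.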



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10469) setParallelUpdates should be deprecated in favor of SolrClientBuilder methods

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-10469:
---

 Assignee: David Smiley
Fix Version/s: (was: 7.0)
   7.2

> setParallelUpdates should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10469
> URL: https://issues.apache.org/jira/browse/SOLR-10469
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setParallelUpdates}} 
> setter on all {{SolrClient}} implementations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1504 - Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1504/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=15708, name=jetty-launcher-2497-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=15696, name=jetty-launcher-2497-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=15708, name=jetty-launcher-2497-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235132#comment-16235132
 ] 

David Smiley commented on SOLR-11507:
-

Thanks for your input [~gerlowskija].  I didn't know about SOLR-10469.  I 
should split this patch in two -- the SOLR-10469 part and the randomization 
part here. 

bq. Most of the other setters have "move to setter" issues filed.

IMO we don't need a JIRA issue for each setter; it's okay to lump them all
into a common theme!  Creating many JIRA issues creates an administrative
burden with dubious, if any, benefits.


> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.
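
As a rough, illustrative-only sketch (not the attached patch) of the
simplification being suggested -- randomizing directly in the test builder's
constructor rather than tracking a configured flag, and randomizing
shardLeadersOnly as well -- assuming the CloudSolrClient.Builder methods named
below:

{code}
import java.util.Random;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Illustrative only, not the attached patch; method names assume the SolrJ
// CloudSolrClient.Builder API.
class RandomizedCloudClientBuilder extends CloudSolrClient.Builder {
  RandomizedCloudClientBuilder(Random random) {
    // randomize shardLeadersOnly, as suggested
    if (random.nextBoolean()) {
      sendUpdatesOnlyToShardLeaders();
    } else {
      sendUpdatesToAllReplicasInShard();
    }
    // randomize direct-updates-to-leaders in the same way
    if (random.nextBoolean()) {
      sendDirectUpdatesToShardLeadersOnly();
    } else {
      sendDirectUpdatesToAnyShardReplica();
    }
  }
}
{code}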



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11511) Use existing private field in DistributedUpdateProcessor

2017-11-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235128#comment-16235128
 ] 

David Smiley commented on SOLR-11511:
-

Thanks for the precommit fix [~rcmuir]!

> Use existing private field in DistributedUpdateProcessor
> 
>
> Key: SOLR-11511
> URL: https://issues.apache.org/jira/browse/SOLR-11511
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11511.patch
>
>
> The DistributedUpdateProcessor has a private instance field called cloudDesc.
> It is used in a few places, but most code navigates to the CloudDescriptor
> from the request object instead.
> The fundamental question of this ticket is: is there any reason to distrust
> this field and keep doing the navigation directly (in which case maybe we
> should get rid of the field instead), or can we trust it and therefore use it
> where we can? Since it is a private field only ever set in the constructor,
> it is not likely to change out from under us. The request from which it is
> derived is also held in a private final field, so it very much looks to me
> like this field should have been final and should be used.
> This might or might not be a performance gain (depending on whether or not
> the compiler can already optimize away something like this), but it will be a
> readability and consistency gain for sure.
> Attaching a patch to tidy this up shortly...
> 
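
To make the two styles concrete, a small hypothetical illustration (not the
attached patch):

{code}
import org.apache.solr.cloud.CloudDescriptor;
import org.apache.solr.request.SolrQueryRequest;

// Hypothetical illustration of the two styles discussed in the ticket; not the patch.
class CloudDescStyles {
  private final SolrQueryRequest req;
  private final CloudDescriptor cloudDesc; // set once in the constructor, effectively final

  CloudDescStyles(SolrQueryRequest req) {
    this.req = req;
    this.cloudDesc = req.getCore().getCoreDescriptor().getCloudDescriptor();
  }

  boolean isLeaderViaNavigation() {
    // what many call sites do today: navigate from the request each time
    return req.getCore().getCoreDescriptor().getCloudDescriptor().isLeader();
  }

  boolean isLeaderViaField() {
    // what the ticket proposes: trust the private field populated in the constructor
    return cloudDesc.isLeader();
  }
}
{code}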



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Shawn Heisey

On 11/1/2017 2:43 PM, Steve Rowe wrote:

Not fixed in 2.3, according to comments on 
https://issues.apache.org/jira/browse/IVY-1489 .  However a comment there mentions 
Ivy 2.4’s "artifact-lock-nio" strategy as a more reliable alternative to the 
standard locking.  I’ll make an issue to upgrade our Ivy dependency and switch lock 
strategies.


I opened that ivy issue.  Thanks for figuring that out!  I look forward 
to having our build system fixed on LUCENE-6144.


Does the ivy-bootstrap target deal with upgrading ivy, or would it just 
add the jar with our current preferred ivy version?  Would it be 
possible to have it detect and remove/rename a previous version?


Shawn

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20800 - Still Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20800/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains

Error Message:
geoAreaShape: GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=-0.00871130560892533, 
lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
accuracy=2.01444186927E-4} shape: GeoRectangle: 
{planetmodel=PlanetModel.WGS84, toplat=0.18851664435052304(10.801208089253723), 
bottomlat=-1.4896034997154073(-85.34799368160976), 
leftlon=-1.4970589804391838(-85.7751612613233), 
rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: geoAreaShape: GeoExactCircle: 
{planetmodel=PlanetModel.WGS84, center=[lat=-0.00871130560892533, 
lon=2.3029626482941588([X=-0.6692047265792528, Y=0.7445316825911176, 
Z=-0.008720939756154669])], radius=3.038428918538668(174.0891533827647), 
accuracy=2.01444186927E-4}
shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
toplat=0.18851664435052304(10.801208089253723), 
bottomlat=-1.4896034997154073(-85.34799368160976), 
leftlon=-1.4970589804391838(-85.7751612613233), 
rightlon=1.346321571653886(77.13854392318753)} expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([87612C9805977C6F:B087E212A0C8DB25]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Created] (SOLR-11596) SolrJ clients -- create internal HttpClient objects with increased thread capability

2017-11-01 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-11596:
---

 Summary: SolrJ clients -- create internal HttpClient objects with 
increased thread capability
 Key: SOLR-11596
 URL: https://issues.apache.org/jira/browse/SOLR-11596
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 7.1
Reporter: Shawn Heisey
Priority: Minor


The HttpClient object that various SolrClient implementations create has
HttpClient's default per-destination thread limit of two.  I'm not sure why
they went with such a low default, but that's out of our hands.  The low
default means that default SolrClient objects are thread-safe, but basically
unable to handle more than two threads at the same time.

Increasing this limit in user programs is very doable by creating a custom 
HttpClient object, but the amount of code required is fairly extensive.

I think that when our client implementations create an HttpClient object, they 
should explicitly increase the thread limits to larger default values, and 
expose configuration knobs for those values in the fluent interface.
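
For reference, here is a hedged sketch of the user-side workaround mentioned
above, using standard Apache HttpClient and SolrJ builder calls; the limit
values are purely illustrative:

{code}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// Hedged sketch of the user-side workaround: build an HttpClient with larger
// per-route/total connection limits and hand it to the SolrClient builder.
// The limit values are illustrative, not recommendations.
public class BiggerHttpPool {
  public static HttpSolrClient create(String baseSolrUrl) {
    CloseableHttpClient httpClient = HttpClients.custom()
        .setMaxConnPerRoute(100)   // the default per-destination limit is 2
        .setMaxConnTotal(500)
        .build();
    return new HttpSolrClient.Builder(baseSolrUrl)
        .withHttpClient(httpClient)
        .build();
  }
}
{code}

Having the SolrJ builders do something equivalent by default, with knobs to
override it, is what this issue proposes.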




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11578) Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a more accurate representation of the cluster

2017-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235081#comment-16235081
 ] 

Tomás Fernández Löbbe commented on SOLR-11578:
--

bq. I think it's the same thing I suggested here SOLR-11558?
And BTW, I don't think it's one or the other, I'm +1 to the patch

> Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a 
> more accurate representation of the cluster
> -
>
> Key: SOLR-11578
> URL: https://issues.apache.org/jira/browse/SOLR-11578
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.0, 7.1
>Reporter: Rohit
>Priority: Minor
> Attachments: SOLR-11578.patch, Updated Graph.png, Updated Legend.png, 
> Updated Radial Graph.png
>
>
> New replica types were introduced in Solr 7.
> 1. The Solr Admin UI --> Cloud --> Graph mode should be updated to reflect 
> the new replica types (NRT, TLOG, PULL)
> 2. It will give a better overview of the cluster as well as help in 
> troubleshooting and diagnosing issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11578) Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a more accurate representation of the cluster

2017-11-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235078#comment-16235078
 ] 

Tomás Fernández Löbbe commented on SOLR-11578:
--

bq. Hmmm, random thought that just popped into my head if it wouldn't be too 
much work; a tooltip/popup with all the node information would be great, i.e. 
all the information from state.json for that replica. 
I think it's the same thing I suggested here SOLR-11558?

> Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a 
> more accurate representation of the cluster
> -
>
> Key: SOLR-11578
> URL: https://issues.apache.org/jira/browse/SOLR-11578
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.0, 7.1
>Reporter: Rohit
>Priority: Minor
> Attachments: SOLR-11578.patch, Updated Graph.png, Updated Legend.png, 
> Updated Radial Graph.png
>
>
> New replica types were introduced in Solr 7.
> 1. The Solr Admin UI --> Cloud --> Graph mode should be updated to reflect 
> the new replica types (NRT, TLOG, PULL)
> 2. It will give a better overview of the cluster as well as help in 
> troubleshooting and diagnosing issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 277 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/277/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 57316 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj75436032
 [ecj-lint] Compiling 1154 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj75436032
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/core/CoreContainer.java
 (at line 1036)
 [ecj-lint] core = new SolrCore(this, dcore, coreConfig);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'core' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 234)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 121)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 145)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1283)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/handler/sql/SolrTable.java
 (at line 517)
 [ecj-lint] ParallelStream parallelStream = new ParallelStream(zk, 
collection, tupleStream, numWorkers, comp);
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'parallelStream' is never closed
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/handler/sql/SolrTable.java
 (at line 743)
 [ecj-lint] ParallelStream parallelStream = new ParallelStream(zkHost, 
collection, tupleStream, numWorkers, comp);
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'parallelStream' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/highlight/DefaultSolrHighlighter.java
 (at line 578)
 [ecj-lint] tvWindowStream = new OffsetWindowTokenFilter(tvStream);
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'tvWindowStream' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/request/SimpleFacets.java
 (at line 943)
 [ecj-lint] fastForRandomSet = new HashDocSet(sset.getDocs(), 0, 
sset.size());
 [ecj-lint] 
^
 [ecj-lint] Resource leak: 'fastForRandomSet' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/response/SmileResponseWriter.java
 (at line 33)
 [ecj-lint] new SmileWriter(out, request, response).writeResponse();
 [ecj-lint] ^^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/java/org/apache/solr/schema/OpenExchangeRatesOrgProvider.java
 (at line 146)
 [ecj-lint] ratesJsonStream = 
resourceLoader.openResource(ratesFileLocation);
 

[jira] [Resolved] (SOLR-11557) SolrZkClient.checkInterrupted is not interrupting the thread like intends

2017-11-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-11557.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> SolrZkClient.checkInterrupted is not interrupting the thread like intends
> -
>
> Key: SOLR-11557
> URL: https://issues.apache.org/jira/browse/SOLR-11557
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11557.patch
>
>
> It’s calling {{interrupted()}} instead of {{interrupt()}}. The method is
> intended to re-set the interrupted flag on the thread in case of an
> InterruptedException.
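
A minimal stand-alone sketch of the fix being described (not the committed
patch):

{code}
// Minimal sketch of the fix described above, not the committed patch.
final class InterruptHandling {
  static void checkInterrupted(Throwable e) {
    if (e instanceof InterruptedException) {
      // Bug: Thread.interrupted() returns and *clears* the interrupted status.
      // Fix: interrupt() re-sets the flag so callers further up can still see it.
      Thread.currentThread().interrupt();
    }
  }
}
{code}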



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 210 - Failure

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/210/

No tests ran.

Build Log:
[...truncated 11695 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/temp/junit4-J0-20171101_225113_066886789534836535979.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 44040192 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/hs_err_pid29289.log
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/temp/junit4-J0-20171101_225113_066829234350085015639.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0xea98, 44040192, 0) failed; error='Cannot 
allocate memory' (errno=12)
   [junit4] <<< JVM J0: EOF 

[...truncated 153 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
   [junit4]   2> 1041213 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.AutoScalingHandlerTest_1C436BE69425E52E-001/init-core-data-001
   [junit4]   2> 1041213 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 1041214 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1041214 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.AutoScalingHandlerTest_1C436BE69425E52E-001/tempDir-001
   [junit4]   2> 1041214 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1041215 INFO  (Thread-422) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1041215 INFO  (Thread-422) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1041331 ERROR (Thread-422) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1041331 INFO  
(SUITE-AutoScalingHandlerTest-seed#[1C436BE69425E52E]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:57542
   [junit4]   2> 1041437 INFO  (jetty-launcher-479-thread-1) [] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 1041437 INFO  (jetty-launcher-479-thread-2) [] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 1041438 INFO  (jetty-launcher-479-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5c00d5b5{/solr,null,AVAILABLE}
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@517afa5c{HTTP/1.1,[http/1.1]}{127.0.0.1:47504}
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.e.j.s.Server Started @1064698ms
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=47504}
   [junit4]   2> 1041451 ERROR (jetty-launcher-479-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr? version 
7.2.0
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1041451 INFO  (jetty-launcher-479-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-11-01T23:08:57.977Z
   [junit4]   2> 1042715 INFO  (jetty-launcher-479-thread-2) [] 
o.e.j.s.h.ContextHandler Started 

[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Attachment: SOLR-10680.patch

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-10680.patch
>
>
> The minMaxNormalize Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}
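
To make the scaling concrete, a small worked example (hypothetical input values; this assumes the array() evaluator from the streaming math expressions is available and that min=0, max=1 is used when no range is passed):

{code}
a = minMaxScale(array(10, 20, 30))
b = minMaxScale(array(10, 20, 30), -1, 1)
{code}

With the defaults, a evaluates to [0, 0.5, 1]; with the explicit -1/1 range, b evaluates to [-1, 0, 1]. Each value x is mapped to (x - min(col)) / (max(col) - min(col)) and then stretched into the target range.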



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Description: 
The minMaxNormalize Stream Evaluator scales an array of numbers within the 
specified min/max range. Defaults to min=0, max=1.

Syntax:

{code}
a = minMaxScale(colA, 0, 1)
{code}

  was:
The minMaxNormalize Stream Evaluator scales an array of numbers within the 
specified min/max range.

Syntax:

{code}
a = minMaxScale(colA, -1, 1)
{code}


> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
>
> The minMaxNormalize Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10680:
-

Assignee: Joel Bernstein

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
>
> The minMaxNormalize Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Fix Version/s: 7.2

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
>
> The minMaxNormalize Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Summary: Add minMaxScale Stream Evaluator  (was: Add minMaxNormalize Stream 
Evaluator)

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> The minMaxNormalize Stream Evaluator normalizes an array of numbers within 
> the specified min/max range.
> Syntax:
> {code}
> a = minMaxNormalize(colA, -1, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10680:
--
Description: 
The minMaxNormalize Stream Evaluator scales an array of numbers within the 
specified min/max range.

Syntax:

{code}
a = minMaxScale(colA, -1, 1)
{code}

  was:
The minMaxNormalize Stream Evaluator normalizes an array of numbers within the 
specified min/max range.

Syntax:

{code}
a = minMaxNormalize(colA, -1, 1)
{code}


> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> The minMaxNormalize Stream Evaluator scales an array of numbers within the 
> specified min/max range.
> Syntax:
> {code}
> a = minMaxScale(colA, -1, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 721 - Still unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/721/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny

7 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:38931

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:38931
at 
__randomizedtesting.SeedInfo.seed([7BC0127BE86281A3:F3942DA1469EEC5B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:315)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-11584) Ref Guide: support Bootstrap components like tabs and pills

2017-11-01 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235050#comment-16235050
 ] 

Jason Gerlowski commented on SOLR-11584:


As far as feedback, I'd second any effort to make it easier for developers 
to use (a macro, etc.) -- that would be awesome.

I also noticed that when the content in the tabs is of very different sizes, 
switching between tabs can be a little jarring, as the rest of the page shifts 
up/down to accommodate the size of the newly-chosen tab.  Not sure what an 
improved experience would look like.  You could lock the size and have tabs 
with more content use a scroll-bar, but that would probably be equally jarring 
for some content.  I don't have a strong opinion either way.  Just wanted to 
mention it in case it strikes a chord with someone with more UI or CSS 
experience than me.

> Ref Guide: support Bootstrap components like tabs and pills
> ---
>
> Key: SOLR-11584
> URL: https://issues.apache.org/jira/browse/SOLR-11584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2
>
> Attachments: SOLR-11584.patch, refguide-tabs.png, 
> tabbed_api_output_example.png
>
>
> The theme I initially copied as the basis for the new Ref Guide included a 
> Bootstrap integration, which has the potential to provide us with a number of 
> options, such as organizing some content on a page into tabs (to present the 
> same information in multiple ways - such as Windows vs Unix commands, or 
> hand-editing schema.xml/managed-schema vs Schema API examples). 
> However, the way AsciiDoctor content is inserted into a Jekyll template made 
> it difficult to know how to use some of Bootstrap's features. Particularly 
> since we have to make sure whatever we put into the content comes out right 
> in the PDF.
> I had a bit of a breakthrough on this, and feel confident we can make 
> straightforward instructions for anyone who might want to add this feature to 
> their content. A patch will follow shortly with more details but the summary 
> is:
> * Add an AsciiDoctor passthrough block that includes the Bootstrap HTML code 
> to create the tabs.
> ** This has an {{ifdef::backend-html5[]}} rule on it, so it will only be used 
> if the output format is HTML. The PDF will ignore this section entirely.
> * Use AsciiDoctor's "role" support to name the proper class names, which 
> AsciiDoctor will convert into the right {{}} elements in the HTML.
> ** These will take multiple class names and a section ID, which is perfect 
> for our needs.
> ** One caveat is the divs need to be properly nested, and must be defined on 
> blocks so all the content is inserted into the tab boxes appropriately. This 
> gets a little complicated because you can't nest blocks of the same type 
> (yet), but I found two block types we aren't using otherwise.
> ** The PDF similarly ignores these classes and IDs because it doesn't know 
> what to do with custom classes (but in the future these may be supported and 
> we could define these in a special way if we want).
> * Modify some of the CSS to display the way we want since AsciiDoctor inserts 
> some of its own classes between the defined classes and the inheritance needs 
> to be set up right. Also the default styling for the blocks needs to be 
> changed so it doesn't look strange.
> I'll include a patch with a sample file that has this working, plus detailed 
> instructions in the metadocs. In the meantime, I've attached a screenshot 
> that shows a small snippet from my testing. 
> While the focus here is using tabs & pills, we will be able to use the same 
> principles to support collapsing sections if that's preferred for 
> presentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11584) Ref Guide: support Bootstrap components like tabs and pills

2017-11-01 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11584:
---
Attachment: tabbed_api_output_example.png

Hey this is pretty cool.

I think the tabbed stuff especially has a lot of potential to fit a lot more 
content into the ref-guide without making it more onerous to navigate/scroll 
through.  For example, Solr's API documentation could include output snippets 
for both XML and JSON in separate tabs. (screenshot example attached)

(I poked around a bit to see if anything like this existed when working on 
SOLR-11530...excited to see it pop up as a possibility in the near future!)

> Ref Guide: support Bootstrap components like tabs and pills
> ---
>
> Key: SOLR-11584
> URL: https://issues.apache.org/jira/browse/SOLR-11584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2
>
> Attachments: SOLR-11584.patch, refguide-tabs.png, 
> tabbed_api_output_example.png
>
>
> The theme I initially copied as the basis for the new Ref Guide included a 
> Bootstrap integration, which has the potential to provide us with a number of 
> options, such as organizing some content on a page into tabs (to present the 
> same information in multiple ways - such as Windows vs Unix commands, or 
> hand-editing schema.xml/managed-schema vs Schema API examples). 
> However, the way AsciiDoctor content is inserted into a Jekyll template made 
> it difficult to know how to use some of Bootstrap's features. Particularly 
> since we have to make sure whatever we put into the content comes out right 
> in the PDF.
> I had a bit of a breakthrough on this, and feel confident we can make 
> straightforward instructions for anyone who might want to add this feature to 
> their content. A patch will follow shortly with more details but the summary 
> is:
> * Add an AsciiDoctor passthrough block that includes the Bootstrap HTML code 
> to create the tabs.
> ** This has an {{ifdef::backend-html5[]}} rule on it, so it will only be used 
> if the output format is HTML. The PDF will ignore this section entirely.
> * Use AsciiDoctor's "role" support to name the proper class names, which 
> AsciiDoctor will convert into the right {{}} elements in the HTML.
> ** These will take multiple class names and a section ID, which is perfect 
> for our needs.
> ** One caveat is the divs need to be properly nested, and must be defined on 
> blocks so all the content is inserted into the tab boxes appropriately. This 
> gets a little complicated because you can't nest blocks of the same type 
> (yet), but I found two block types we aren't using otherwise.
> ** The PDF similarly ignores these classes and IDs because it doesn't know 
> what to do with custom classes (but in the future these may be supported and 
> we could define these in a special way if we want).
> * Modify some of the CSS to display the way we want since AsciiDoctor inserts 
> some of its own classes between the defined classes and the inheritance needs 
> to be set up right. Also the default styling for the blocks needs to be 
> changed so it doesn't look strange.
> I'll include a patch with a sample file that has this working, plus detailed 
> instructions in the metadocs. In the meantime, I've attached a screenshot 
> that shows a small snippet from my testing. 
> While the focus here is using tabs & pills, we will be able to use the same 
> principles to support collapsing sections if that's preferred for 
> presentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2145 - Failure

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2145/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSegmentSorting.testSegmentTerminateEarly

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /clusterstate.json
at 
__randomizedtesting.SeedInfo.seed([BB4B15E709F42CB:DB1276F3ACCDD6A1]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$exists$3(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:428)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:442)
at 
org.apache.solr.cloud.TestSegmentSorting.ensureClusterEmpty(TestSegmentSorting.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Comment Edited] (SOLR-11592) add another language detector using OpenNLP

2017-11-01 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16233857#comment-16233857
 ] 

Koji Sekiguchi edited comment on SOLR-11592 at 11/2/17 12:55 AM:
-

OpenNLP's model covers 103 languages. 
https://svn.apache.org/repos/bigdata/opennlp/tags/langdetect-183_RC3/leipzig/resources/README.txt


was (Author: koji):
OpenNLP's model covers 103 languages.

> add another language detector using OpenNLP
> ---
>
> Key: SOLR-11592
> URL: https://issues.apache.org/jira/browse/SOLR-11592
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 7.1
>Reporter: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11592.patch
>
>
> We already have two language detectors, lang-detect and Tika's lang detect. 
> This is a ticket that gives users a third option, using OpenNLP. :)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11511) Use existing private field in DistributedUpdateProcessor

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235011#comment-16235011
 ] 

ASF subversion and git services commented on SOLR-11511:


Commit 60061e6e823ef11459f850c55a19f8ed04674b5c in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=60061e6 ]

SOLR-11511: qualify inner type name for javadocs links to fix precommit


> Use existing private field in DistributedUpdateProcessor
> 
>
> Key: SOLR-11511
> URL: https://issues.apache.org/jira/browse/SOLR-11511
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11511.patch
>
>
> The DistributedUpdateProcessor has a private instance field called cloudDesc. 
> It is used in a few places, but most code navigates to CloudDescriptor from 
> the request object instead. 
> The fundamental question of this ticket, is this: is there any reason to 
> distrust this field and do the navigation directly (in which case maybe we 
> get rid of the field instead?) or can we trust it and thus should use it 
> where we can. Since it is a private field only ever updated in the 
> constructor, it's not likely to be changing out from under us. The request 
> from which it is derived is also held in a private final field, so it very 
> much looks to me like this field should have been final and should be used.
> This might or might not be a performance gain (depending on whether or not 
> the compiler can optimize away something like this already), but it will be a 
> readability and consistency gain for sure.
> Attaching patch to tidy this up shortly...
>  
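
As a concrete (hypothetical) illustration of the two styles being compared: the navigation route is 
something like req.getCore().getCoreDescriptor().getCloudDescriptor() -- assuming the usual 
SolrCore/CoreDescriptor accessors -- repeated at each call site, whereas the existing private field 
is just cloudDesc, resolved once in the constructor.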



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11511) Use existing private field in DistributedUpdateProcessor

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235010#comment-16235010
 ] 

ASF subversion and git services commented on SOLR-11511:


Commit aa0286540f3648e39e1fb5a9e367fd41c175dccc in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa02865 ]

SOLR-11511: qualify inner type name for javadocs links to fix precommit


> Use existing private field in DistributedUpdateProcessor
> 
>
> Key: SOLR-11511
> URL: https://issues.apache.org/jira/browse/SOLR-11511
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11511.patch
>
>
> The DistributedUpdateProcessor has a private instance field called cloudDesc. 
> It is used in a few places, but most code navigates to CloudDescriptor from 
> the request object instead. 
> The fundamental question of this ticket, is this: is there any reason to 
> distrust this field and do the navigation directly (in which case maybe we 
> get rid of the field instead?) or can we trust it and thus should use it 
> where we can. Since it is a private field only ever updated in the 
> constructor, it's not likely to be changing out from under us. The request 
> from which it is derived is also held in a private final field, so it very 
> much looks to me like this field should have been final and should be used.
> This might or might not be a performance gain (depending on whether or not 
> the compiler can optimize away something like this already), but it will be a 
> readability and consistency gain for sure.
> Attaching patch to tidy this up shortly...
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4261 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4261/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([66CE629D2352EB57:339E8A0F8FAB24A7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:842)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  

[jira] [Commented] (LUCENE-8031) DOCS_ONLY fields set incorrect length norms

2017-11-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235003#comment-16235003
 ] 

Michael McCandless commented on LUCENE-8031:


bq. But this seems tricky, today you can downgrade to DOCS_ONLY on the fly,

Maybe we should stop allowing this?  I.e. throw an exception if the index 
options try to downgrade for a field.

> DOCS_ONLY fields set incorrect length norms
> ---
>
> Key: LUCENE-8031
> URL: https://issues.apache.org/jira/browse/LUCENE-8031
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
>
> Term frequencies are discarded in the DOCS_ONLY case from the postings list 
> but they still count against the length normalization, which looks like it 
> may screw stuff up.
> I ran some quick experiments on LUCENE-8025, by encoding 
> fieldInvertState.getUniqueTermCount() and it seemed worth fixing (e.g. 20% or 
> 30% improvement potentially). Happy to do testing for real, if we want to fix.
> But this seems tricky, today you can downgrade to DOCS_ONLY on the fly, and 
> it's hard for me to think about that case (I think it's generally screwed up 
> besides this, but still).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8031) DOCS_ONLY fields set incorrect length norms

2017-11-01 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-8031:
---

 Summary: DOCS_ONLY fields set incorrect length norms
 Key: LUCENE-8031
 URL: https://issues.apache.org/jira/browse/LUCENE-8031
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Priority: Major


Term frequencies are discarded in the DOCS_ONLY case from the postings list but 
they still count against the length normalization, which looks like it may 
screw stuff up.

I ran some quick experiments on LUCENE-8025, by encoding 
fieldInvertState.getUniqueTermCount() and it seemed worth fixing (e.g. 20% or 
30% improvement potentially). Happy to do testing for real, if we want to fix.

But this seems tricky: today you can downgrade to DOCS_ONLY on the fly, and it's 
hard for me to think about that case (I think it's generally screwed up besides 
this, but still).
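
A small hypothetical example of the mismatch: index the text "foo foo foo bar" into a DOCS_ONLY 
field and scoring sees an effective tf of 1 for both terms, yet the length norm is still derived 
from the 4-token field length. Encoding fieldInvertState.getUniqueTermCount() instead, as in the 
quick experiments mentioned above, would record a length of 2, consistent with the all-ones 
frequencies the postings actually expose.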



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8025) compute avgdl correctly for DOCS_ONLY

2017-11-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-8025.
-
   Resolution: Fixed
Fix Version/s: 7.2
   master (8.0)

> compute avgdl correctly for DOCS_ONLY
> -
>
> Key: LUCENE-8025
> URL: https://issues.apache.org/jira/browse/LUCENE-8025
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8025.patch
>
>
> Spinoff of LUCENE-8007:
> If you omit term frequencies, we should score as if all tf values were 1. 
> This is the way it worked for e.g. ClassicSimilarity and you can understand 
> how it degrades. 
> However for sims such as BM25, we bail out on computing avg doclength (and 
> just return a bogus value of 1) today, screwing up stuff related to length 
> normalization too, which is separate.
> Instead of a bogus value, we should substitute sumDocFreq for 
> sumTotalTermFreq (all postings have freq of 1, since you omitted them).
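
To put hypothetical numbers on the substitution: if a DOCS_ONLY field occurs in docCount=3 
documents and its document frequencies sum to sumDocFreq=12, then 
avgdl = sumDocFreq / docCount = 12 / 3 = 4 -- exactly what sumTotalTermFreq / docCount would have 
produced had every omitted tf really been 1, and far more sensible than the bogus constant 1 we 
return today.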



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8025) compute avgdl correctly for DOCS_ONLY

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234978#comment-16234978
 ] 

ASF subversion and git services commented on LUCENE-8025:
-

Commit 7b7bdf39927ffd9a2654f002bf066cdd817315da in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7b7bdf3 ]

LUCENE-8025: fix changes entry, its sumTotalTermFreq


> compute avgdl correctly for DOCS_ONLY
> -
>
> Key: LUCENE-8025
> URL: https://issues.apache.org/jira/browse/LUCENE-8025
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8025.patch
>
>
> Spinoff of LUCENE-8007:
> If you omit term frequencies, we should score as if all tf values were 1. 
> This is the way it worked for e.g. ClassicSimilarity and you can understand 
> how it degrades. 
> However for sims such as BM25, we bail out on computing avg doclength (and 
> just return a bogus value of 1) today, screwing up stuff related to length 
> normalization too, which is separate.
> Instead of a bogus value, we should substitute sumDocFreq for 
> sumTotalTermFreq (all postings have freq of 1, since you omitted them).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20799 - Still unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20799/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.TestConfigSetsAPI.testUpload

Error Message:
Error from server at https://127.0.0.1:41325/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41325/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([335621ECA7A88633:30EC71ED44CACEB9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.TestConfigSetsAPI.createCollection(TestConfigSetsAPI.java:512)
at 
org.apache.solr.cloud.TestConfigSetsAPI.testUpload(TestConfigSetsAPI.java:327)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8025) compute avgdl correctly for DOCS_ONLY

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234976#comment-16234976
 ] 

ASF subversion and git services commented on LUCENE-8025:
-

Commit 2658ff62c84e2cc8405a6b6ef988060be430f61a in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2658ff6 ]

LUCENE-8025: fix changes entry, its sumTotalTermFreq


> compute avgdl correctly for DOCS_ONLY
> -
>
> Key: LUCENE-8025
> URL: https://issues.apache.org/jira/browse/LUCENE-8025
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8025.patch
>
>
> Spinoff of LUCENE-8007:
> If you omit term frequencies, we should score as if all tf values were 1. 
> This is the way it worked for e.g. ClassicSimilarity and you can understand 
> how it degrades. 
> However for sims such as BM25, we bail out on computing avg doclength (and 
> just return a bogus value of 1) today, screwing up stuff related to length 
> normalization too, which is separate.
> Instead of a bogus value, we should substitute sumDocFreq for 
> sumTotalTermFreq (all postings have freq of 1, since you omitted them).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8025) compute avgdl correctly for DOCS_ONLY

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234972#comment-16234972
 ] 

ASF subversion and git services commented on LUCENE-8025:
-

Commit 4e1ef13a1274a3beb17b2696d08318a241e4d86e in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e1ef13 ]

LUCENE-8025: Use totalTermFreq=sumDocFreq when scoring DOCS_ONLY fields


> compute avgdl correctly for DOCS_ONLY
> -
>
> Key: LUCENE-8025
> URL: https://issues.apache.org/jira/browse/LUCENE-8025
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8025.patch
>
>
> Spinoff of LUCENE-8007:
> If you omit term frequencies, we should score as if all tf values were 1. 
> This is the way it worked for e.g. ClassicSimilarity and you can understand 
> how it degrades. 
> However for sims such as BM25, we bail out on computing avg doclength (and 
> just return a bogus value of 1) today, screwing up stuff related to length 
> normalization too, which is separate.
> Instead of a bogus value, we should substitute sumDocFreq for 
> sumTotalTermFreq (all postings have freq of 1, since you omitted them).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-01 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234969#comment-16234969
 ] 

Jason Gerlowski commented on SOLR-11507:


+1 on the attached patch.

Note that with the setter-move, this is very similar to SOLR-10469.  (Not saying 
that as a positive or negative, just as bookkeeping).

Most of the other setters have "move to setter" issues filed.  I implemented 
some of these, but stopped when there didn't seem to be much interest in 
unifying the SolrClient setters.  (I'd be happy to follow up on those related 
issues though if that's a change you agree with, but I'm also happy to let it 
go).

> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8025) compute avgdl correctly for DOCS_ONLY

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234956#comment-16234956
 ] 

ASF subversion and git services commented on LUCENE-8025:
-

Commit 7495a9d75bb2efde2f76d68b376560ab86693cd9 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7495a9d ]

LUCENE-8025: Use totalTermFreq=sumDocFreq when scoring DOCS_ONLY fields


> compute avgdl correctly for DOCS_ONLY
> -
>
> Key: LUCENE-8025
> URL: https://issues.apache.org/jira/browse/LUCENE-8025
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8025.patch
>
>
> Spinoff of LUCENE-8007:
> If you omit term frequencies, we should score as if all tf values were 1. 
> This is the way it worked for e.g. ClassicSimilarity and you can understand 
> how it degrades. 
> However for sims such as BM25, we bail out on computing avg doclength (and 
> just return a bogus value of 1) today, screwing up stuff related to length 
> normalization too, which is separate.
> Instead of a bogus value, we should substitute sumDocFreq for 
> sumTotalTermFreq (all postings have freq of 1, since you omitted them).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 278 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/278/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

10 tests failed.
FAILED:  
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testExhaustedLooped

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.TestPerfTasksLogic_9C0FCF13A466D933-001\benchmark-001\test-mapping-ISOLatin1Accent-partial.txt

Stack Trace:
java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.TestPerfTasksLogic_9C0FCF13A466D933-001\benchmark-001\test-mapping-ISOLatin1Accent-partial.txt
at 
__randomizedtesting.SeedInfo.seed([9C0FCF13A466D933:16E37C378D5D0E35]:0)
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at 
java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at java.nio.file.Files.newOutputStream(Files.java:216)
at java.nio.file.Files.copy(Files.java:3016)
at 
org.apache.lucene.benchmark.BenchmarkTestCase.copyToWorkDir(BenchmarkTestCase.java:56)
at 
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.setUp(TestPerfTasksLogic.java:67)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-10934) create a link+anchor checker for the ref-guide PDF using PDFBox

2017-11-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10934:

Attachment: SOLR-10934.patch

bq. What we might want to consider, is refactoring our build.xml, so that the 
same  task options used to generate the PDF could also be 
used to generate a bare bones version of the html-site – ie: not using jekyll, 
just using raw asciidoctor with the "html5" output option. Then we could (in 
theory) run the same HTML link checking code we currently have against that 
output dir – just for the purpose of checking the links, not with any plan to 
ever publish it.

I'm attaching a patch that takes this approach -- I think it works pretty well.

Unfortunately refactoring just the build.xml file proved to be insufficient to 
be able to re-use the existing {{}} in a macro because of 
how the underlying Task class works -- it has some hard assumptions about XML 
element attributes like "sourceDocumentName" not being used at all, even though 
ant property expansion can leave them set to the empty string -- but I was able 
to deal with that by adding our own little Ant Task subclass into the tools jar.
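To illustrate the workaround (the class and setter names below are assumptions 
for the sketch, not necessarily what the patch uses -- Ant maps an XML attribute 
like {{sourceDocumentName}} to a {{setSourceDocumentName(String)}} setter by 
convention):

{code:java}
// Hypothetical sketch: a thin Task subclass that treats an empty-string attribute
// value (the result of expanding an unset ant property) the same as "not set".
package org.apache.solr.refguide.tools;          // placeholder package

import org.asciidoctor.ant.AsciidoctorAntTask;   // assumed underlying Task class

public class EmptyAttrTolerantAsciidoctorTask extends AsciidoctorAntTask {
  @Override
  public void setSourceDocumentName(String name) {
    // ignore "" produced by ant property expansion so the Task behaves as if
    // the attribute had never been specified
    if (name != null && !name.isEmpty()) {
      super.setSourceDocumentName(name);
    }
  }
}
{code}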

I also did a little more refactoring of the build.xml file so that building 
both the PDF & jekyll site via {{ant}} wouldn't waste time redundantly also 
building & validating the bare-bones HTML version. (Unfortunately, if you 
explicitly run {{ant build-pdf build-site}} this still happens, but hey: baby 
steps.)

Like the previous patch, this includes some "nocommit" annotated intentional 
anchor/link errors in the {{*.adoc}} files.  If you apply the patch as is, and 
run {{ant}} or {{ant build-pdf}} or {{ant build-site}}, you'll get all the same 
validation errors that we want to see happen with this kind of bad content.  If 
you revert the {{solr/solr-ref-guide/src}} changes then everything will start 
building happily.

What do folks think of this approach?



> create a link+anchor checker for the ref-guide PDF using PDFBox
> ---
>
> Key: SOLR-10934
> URL: https://issues.apache.org/jira/browse/SOLR-10934
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-10934.patch, SOLR-10934.patch
>
>
> We currently have CheckLinksAndAnchors.java which is automatically run 
> against the ref-guide HTML as part of the build to use JSoup to find bad 
> links/anchors that asciidoctor doesn't complain about -- but not everyone 
> does/can build the HTML version of the ref-guide since it requires 
> manually installing jekyll.
> The PDF build only requires things installed by ivy (via JRuby) and we 
> already have some PDFBox based code in ReducePDFSize.java that operates on 
> this PDF every time it's run -- so if we can find a way to do similar checks 
> using the PDFBox API we could catch these broken links faster.
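
(Illustrative only, not part of the issue text: a bare-bones PDFBox 2.x pass 
over a PDF's link annotations, roughly the kind of check described above. The 
class name and input path are placeholders, and a real checker would also have 
to resolve internal GoTo links against the document's named destinations/anchors.)

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.interactive.action.PDActionURI;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotation;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationLink;

public class ListPdfLinks {
  public static void main(String[] args) throws IOException {
    // args[0] is a placeholder for the path to the built ref-guide PDF
    try (PDDocument doc = PDDocument.load(new File(args[0]))) {
      for (PDPage page : doc.getPages()) {
        for (PDAnnotation ann : page.getAnnotations()) {
          if (!(ann instanceof PDAnnotationLink)) {
            continue;
          }
          PDAnnotationLink link = (PDAnnotationLink) ann;
          if (link.getAction() instanceof PDActionURI) {
            System.out.println("external: " + ((PDActionURI) link.getAction()).getURI());
          } else {
            // internal links carry a GoTo action or a destination instead
            System.out.println("internal: " + link.getAction());
          }
        }
      }
    }
  }
}
{code}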



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 720 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/720/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.schema.TestBulkSchemaConcurrent.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:37817/ul_m/di

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:37817/ul_m/di
at 
__randomizedtesting.SeedInfo.seed([E08292E0B4D2F99C:68D6AD3A1A2E9464]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:315)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 6990 - Still Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6990/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC 
--illegal-access=deny

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_44183A536AA95D74-001\3.0.2-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_44183A536AA95D74-001\3.0.2-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_44183A536AA95D74-001\3.0.2-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_44183A536AA95D74-001\3.0.2-nocfs-001

at __randomizedtesting.SeedInfo.seed([44183A536AA95D74]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 11 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:825)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1092)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1123)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: 
git://git.apache.org/lucene-solr.git: Connection refused: connect
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:634)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:205)
at hudson.remoting.UserRequest.perform(UserRequest.java:52)
at hudson.remoting.Request$2.run(Request.java:356)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
Windows VBOX
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1655)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:308)
at hudson.remoting.Channel.call(Channel.java:904)

[jira] [Updated] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-01 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6144:
---
Fix Version/s: (was: 6.0)
   (was: 5.0)

> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Steve Rowe
I attached a modernized patch to already existing LUCENE-6144.

--
Steve
www.lucidworks.com

> On Nov 1, 2017, at 4:43 PM, Steve Rowe  wrote:
> 
> Not fixed in 2.3, according to comments on 
> https://issues.apache.org/jira/browse/IVY-1489 .  However a comment there 
> mentions Ivy 2.4’s "artifact-lock-nio" strategy as a more reliable 
> alternative to the standard locking.  I’ll make an issue to upgrade our Ivy 
> dependency and switch lock strategies.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Nov 1, 2017, at 4:02 PM, Michael McCandless  
>> wrote:
>> 
>> Hmm but didn't we upgrade to Ivy 2.3.0 to fix that lingering .lck bug?
>> 
>> Is the bug not actually fixed (in Ivy)?
>> 
>> Mike McCandless
>> 
>> http://blog.mikemccandless.com
>> 
>> On Wed, Nov 1, 2017 at 3:59 PM, Steve Rowe  wrote:
>> Looks like lingering .lck files in the Ivy cache from an interrupted 
>> build.  I’ll work on cleaning it up.
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Nov 1, 2017, at 3:43 PM, Apache Jenkins Server 
>>>  wrote:
>>> 
>>> Build: https://builds.apache.org/job/Solr-Artifacts-7.x/75/
>>> 
>>> No tests ran.
>>> 
>>> Build Log:
>>> [...truncated 3209 lines...]
>>> BUILD FAILED
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549: 
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451: 
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/test-framework/build.xml:97:
>>>  The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:556:
>>>  The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:551:
>>>  The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build.xml:484:
>>>  The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2262:
>>>  The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:409:
>>>  impossible to resolve dependencies:
>>>  resolve failed - see output for details
>>> 
>>> Total time: 25 minutes 47 seconds
>>> Build step 'Invoke Ant' marked build as failure
>>> Archiving artifacts
>>> Publishing Javadoc
>>> Email was triggered for: Failure - Any
>>> Sending email for trigger: Failure - Any
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
>> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6144) Upgrade ivy to 2.4.0

2017-11-01 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6144:
---
Attachment: LUCENE-6144.patch

Modernized patch.  Also switches lock strategy to "artifact-lock-nio", as 
recommended in a comment on IVY-1489.

bq. I'm not sure whether we have a minimum version check for ivy, or whether we 
are using any features that *require* a minimum version check.

We do have a check for disallowed ivy versions, in ivy-availability-check; I 
updated the regex to also disallow 2.3.X.  "artifact-lock-nio" is new in 2.4.0, 
so 2.3.X will have to be disallowed.

With this patch I interrupted {{ant resolve}} with ctrl-c a couple times, and 
each following invocation succeeded, so I think it's an improvement over 2.3.0.

If there are no objections, I'll commit tomorrow.

> Upgrade ivy to 2.4.0
> 
>
> Key: LUCENE-6144
> URL: https://issues.apache.org/jira/browse/LUCENE-6144
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Major
> Fix For: 5.0, 6.0
>
> Attachments: LUCENE-6144.patch, LUCENE-6144.patch
>
>
> Ivy 2.4.0 is released.  IVY-1489 is likely to still be a problem.
> I'm not sure whether we have a minimum version check for ivy, or whether we 
> are using any features that *require* a minimum version check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-01 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234768#comment-16234768
 ] 

Houston Putman commented on SOLR-11144:
---

I'm fine keeping the "Reference" out of the titles.

> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20798 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20798/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard

Error Message:
Error from server at http://127.0.0.1:37157/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:37157/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([C6169F184BF90CF9:660CD443EF8947F5]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard(DeleteShardTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: [JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Steve Rowe
Not fixed in 2.3, according to comments on 
https://issues.apache.org/jira/browse/IVY-1489 .  However a comment there 
mentions Ivy 2.4’s "artifact-lock-nio" strategy as a more reliable alternative 
to the standard locking.  I’ll make an issue to upgrade our Ivy dependency and 
switch lock strategies.

--
Steve
www.lucidworks.com

> On Nov 1, 2017, at 4:02 PM, Michael McCandless  
> wrote:
> 
> Hmm but didn't we upgrade to Ivy 2.3.0 to fix that lingering .lck bug?
> 
> Is the bug not actually fixed (in Ivy)?
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> On Wed, Nov 1, 2017 at 3:59 PM, Steve Rowe  wrote:
> Looks like lingering .lck files in the Ivy cache from an interrupted 
> build.  I’ll work on cleaning it up.
> 
> --
> Steve
> www.lucidworks.com
> 
> > On Nov 1, 2017, at 3:43 PM, Apache Jenkins Server 
> >  wrote:
> >
> > Build: https://builds.apache.org/job/Solr-Artifacts-7.x/75/
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 3209 lines...]
> > BUILD FAILED
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549: 
> > The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451: 
> > The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/test-framework/build.xml:97:
> >  The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:556:
> >  The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:551:
> >  The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build.xml:484:
> >  The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2262:
> >  The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:409:
> >  impossible to resolve dependencies:
> >   resolve failed - see output for details
> >
> > Total time: 25 minutes 47 seconds
> > Build step 'Invoke Ant' marked build as failure
> > Archiving artifacts
> > Publishing Javadoc
> > Email was triggered for: Failure - Any
> > Sending email for trigger: Failure - Any
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11144) Analytics Component Documentation

2017-11-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234706#comment-16234706
 ] 

Cassandra Targett commented on SOLR-11144:
--

bq. Changed the Titles of the mismatched pages to reflect their shortnames. (So 
I removed the 'Reference' at the end of each)

Since my previous comment, Hoss figured out a way to remove the need for that 
title/shortname/filename match (SOLR-11540), so we don't need to declare 
page-shortname and page-permalink as params on each page anymore. I'll take 
those out of your latest patch for you. If you want to add "Reference" back 
into the titles, let me know and I'll add those back in - we don't need to 
match titles to filenames anymore at all.

The other changes look pretty straightforward, I'll do one more review for 
typos and commit.

> Analytics Component Documentation
> -
>
> Key: SOLR-11144
> URL: https://issues.apache.org/jira/browse/SOLR-11144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
>Reporter: Houston Putman
>Assignee: Cassandra Targett
>Priority: Critical
>
> Adding a Solr Reference Guide page for the Analytics Component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11595:

Attachment: SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many 
> (~150) fields in the query shows that building the MultiTerms here is 
> expensive.  Fortunately it turns out that Solr already has a cached instance 
> via {{SlowCompositeReaderWrapper}} (using MultiFields), which has a 
> ConcurrentHashMap of the MultiTerms keyed by field String.
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straight-forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-01 Thread David Smiley (JIRA)
David Smiley created SOLR-11595:
---

 Summary: optimize SolrIndexSearcher.localCollectionStatistics to 
use cached MultiFields
 Key: SOLR-11595
 URL: https://issues.apache.org/jira/browse/SOLR-11595
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 7.2


{{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
{{IndexSearcher.collectionStatistics(field)}} which in turn calls 
{{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many (~150) 
fields in the query shows that building the MultiTerms here is expensive.  
Fortunately it turns out that Solr already has a cached instance via 
{{SlowCompositeReaderWrapper}} (using MultiFields), which has a ConcurrentHashMap 
of the MultiTerms keyed by field String.

Perhaps this should be improved on the Lucene side... not sure.  But here on 
the Solr side, the solution is straight-forward.
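
For illustration, a minimal sketch of the shortcut described above -- resolving 
the field's {{Terms}} through the searcher's slow-wrapped reader (which caches 
them per field) instead of rebuilding MultiTerms on every call. This is an 
assumption of the approach, not the attached patch; it is written against the 
Lucene/Solr 7.x APIs as a method inside {{SolrIndexSearcher}}:

{code:java}
import java.io.IOException;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.CollectionStatistics;

// Hypothetical sketch (not the attached patch), as a method inside SolrIndexSearcher:
public CollectionStatistics localCollectionStatistics(String field) throws IOException {
  // getSlowAtomicReader() returns the SlowCompositeReaderWrapper, which caches
  // per-field Terms instead of rebuilding MultiTerms on every request
  Terms terms = getSlowAtomicReader().terms(field);
  if (terms == null) {
    return new CollectionStatistics(field, maxDoc(), 0, 0, 0);
  }
  return new CollectionStatistics(field, maxDoc(), terms.getDocCount(),
      terms.getSumTotalTermFreq(), terms.getSumDocFreq());
}
{code}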



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Michael McCandless
Hmm but didn't we upgrade to Ivy 2.3.0 to fix that lingering .lck bug?

Is the bug not actually fixed (in Ivy)?

Mike McCandless

http://blog.mikemccandless.com

On Wed, Nov 1, 2017 at 3:59 PM, Steve Rowe  wrote:

> Looks like lingering .lck files in the Ivy cache from an interrupted
> build.  I’ll work on cleaning it up.
>
> --
> Steve
> www.lucidworks.com
>
> > On Nov 1, 2017, at 3:43 PM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
> >
> > Build: https://builds.apache.org/job/Solr-Artifacts-7.x/75/
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 3209 lines...]
> > BUILD FAILED
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/
> solr/test-framework/build.xml:97: The following error occurred while
> executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:556:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:551:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build.xml:484:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2262:
> The following error occurred while executing this line:
> > /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:409:
> impossible to resolve dependencies:
> >   resolve failed - see output for details
> >
> > Total time: 25 minutes 47 seconds
> > Build step 'Invoke Ant' marked build as failure
> > Archiving artifacts
> > Publishing Javadoc
> > Email was triggered for: Failure - Any
> > Sending email for trigger: Failure - Any
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: [JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Steve Rowe
Looks like lingering .lck files in the Ivy cache from an interrupted build. 
 I’ll work on cleaning it up.

--
Steve
www.lucidworks.com

> On Nov 1, 2017, at 3:43 PM, Apache Jenkins Server  
> wrote:
> 
> Build: https://builds.apache.org/job/Solr-Artifacts-7.x/75/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 3209 lines...]
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549: 
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451: 
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/test-framework/build.xml:97:
>  The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:556:
>  The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:551:
>  The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build.xml:484: 
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2262:
>  The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:409:
>  impossible to resolve dependencies:
>   resolve failed - see output for details
> 
> Total time: 25 minutes 47 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Publishing Javadoc
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11584) Ref Guide: support Bootstrap components like tabs and pills

2017-11-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11584:
-
Attachment: SOLR-11584.patch

> Ref Guide: support Bootstrap components like tabs and pills
> ---
>
> Key: SOLR-11584
> URL: https://issues.apache.org/jira/browse/SOLR-11584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2
>
> Attachments: SOLR-11584.patch, refguide-tabs.png
>
>
> The theme I initially copied as the basis for the new Ref Guide included a 
> Bootstrap integration, which has the potential to provide us with a number of 
> options, such as organizing some content on a page into tabs (to present the 
> same information in multiple ways - such as Windows vs Unix commands, or 
> hand-editing schema.xml/managed-schema vs Schema API examples). 
> However, the way AsciiDoctor content is inserted into a Jekyll template made 
> it difficult to know how to use some of Bootstrap's features. Particularly 
> since we have to make sure whatever we put into the content comes out right 
> in the PDF.
> I had a bit of a breakthrough on this, and feel confident we can make 
> straightforward instructions for anyone who might want to add this feature to 
> their content. A patch will follow shortly with more details but the summary 
> is:
> * Add an AsciiDoctor passthrough block that includes the Bootstrap HTML code 
> to create the tabs.
> ** This has an {{ifdef::backend-html5[]}} rule on it, so it will only be used 
> if the output format is HTML. The PDF will ignore this section entirely.
> * Use AsciiDoctor's "role" support to name the proper class names, which 
> AsciiDoctor will convert into the right {{}} elements in the HTML.
> ** These will take multiple class names and a section ID, which is perfect 
> for our needs.
> ** One caveat is the divs need to be properly nested, and must be defined on 
> blocks so all the content is inserted into the tab boxes appropriately. This 
> gets a little complicated because you can't nest blocks of the same type 
> (yet), but I found two block types we aren't using otherwise.
> ** The PDF similarly ignores these classes and IDs because it doesn't know 
> what to do with custom classes (but in the future these may be supported and 
> we could define these in a special way if we want).
> * Modify some of the CSS to display the way we want since AsciiDoctor inserts 
> some of its own classes between the defined classes and the inheritance needs 
> to be set up right. Also the default styling for the blocks needs to be 
> changed so it doesn't look strange.
> I'll include a patch with a sample file that has this working, plus detailed 
> instructions in the metadocs. In the meantime, I've attached a screenshot 
> that shows a small snippet from my testing. 
> While the focus here is using tabs & pills, we will be able to use the same 
> principles to support collapsing sections if that's preferred for 
> presentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11584) Ref Guide: support Bootstrap components like tabs and pills

2017-11-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234635#comment-16234635
 ] 

Cassandra Targett commented on SOLR-11584:
--

The attached patch demonstrates how to do two types of content layout options: 
a) tabbed sections and b) content that's hidden behind a button. There are 
several things in this patch:

* An updated {{stream-decorator-reference.adoc}} file that uses tabs for every 
"Parameters" and "Syntax" section. There are 25 decorators on that page, so 
this shows what it might look like if a full page used this.
** This page also includes one example (in the section on the daemon stream 
decorator) of hiding content behind a button users need to click on to see the 
example - that specific example may not be the best use of that functionality, 
but I wanted to show another Bootstrap component that we could use relatively 
easily.
**  (As a side note, I started this to try to find a new way to organize & 
display all the streaming expressions in an easier-to-consume way. I suspect 
this isn't it, but since I started there, I finished there to demonstrate all 
its complexity on a large page).
* Updates to several CSS files to style tabs and buttons according to our style 
guidelines.
* Addition of documentation in {{meta-docs/jekyll.adoc}} to explain how to 
insert these types of sections in any page.
* Also, since I was playing with stuff, I added in a way to make a column-based 
TOC at the top of pages, as another option besides a long single list at the 
top or on the right side (use {{:page-tocclass: column}} to use it). This could 
be committed completely separately.

If you take a look at the docs I wrote on how to implement this, it requires a 
bit of knowledge of how Jekyll consumes Asciidoctor-converted content, how 
Asciidoctor deals with what it calls roles (which become CSS classes in the 
HTML), and how to nest different content block types into one another. 

IOW, it's fiddly, as they say - if someone doesn't get all the parts exactly 
right, we could end up with a mess. Since most people don't run the HTML 
conversion locally before committing, they may not know if they got it wrong 
until after it's published. I chatted with [~hossman] offline about it, and he 
promised to take a look to see if there is a way to do this via a macro, or 
some other way that's less easy to mess up.

> Ref Guide: support Bootstrap components like tabs and pills
> ---
>
> Key: SOLR-11584
> URL: https://issues.apache.org/jira/browse/SOLR-11584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2
>
> Attachments: refguide-tabs.png
>
>
> The theme I initially copied as the basis for the new Ref Guide included a 
> Bootstrap integration, which has the potential to provide us with a number of 
> options, such as organizing some content on a page into tabs (to present the 
> same information in multiple ways - such as Windows vs Unix commands, or 
> hand-editing schema.xml/managed-schema vs Schema API examples). 
> However, the way AsciiDoctor content is inserted into a Jekyll template made 
> it difficult to know how to use some of Bootstrap's features. Particularly 
> since we have to make sure whatever we put into the content comes out right 
> in the PDF.
> I had a bit of a breakthrough on this, and feel confident we can make 
> straightforward instructions for anyone who might want to add this feature to 
> their content. A patch will follow shortly with more details but the summary 
> is:
> * Add an AsciiDoctor passthrough block that includes the Bootstrap HTML code 
> to create the tabs.
> ** This has an {{ifdef::backend-html5[]}} rule on it, so it will only be used 
> if the output format is HTML. The PDF will ignore this section entirely.
> * Use AsciiDoctor's "role" support to name the proper class names, which 
> AsciiDoctor will convert into the right {{}} elements in the HTML.
> ** These will take multiple class names and a section ID, which is perfect 
> for our needs.
> ** One caveat is the divs need to be properly nested, and must be defined on 
> blocks so all the content is inserted into the tab boxes appropriately. This 
> gets a little complicated because you can't nest blocks of the same type 
> (yet), but I found two block types we aren't using otherwise.
> ** The PDF similarly ignores these classes and IDs because it doesn't know 
> what to do with custom classes (but in the future these may be supported and 
> we could define these in a special way if we want).
> * Modify some of the CSS to display the way we want 

[JENKINS] Solr-Artifacts-7.x - Build # 75 - Failure

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-7.x/75/

No tests ran.

Build Log:
[...truncated 3209 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/test-framework/build.xml:97:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:556:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:551:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build.xml:484: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2262:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:409:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 25 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 719 - Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/719/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations

Error Message:
Expected 2x1 for collection: collection1 null Live Nodes: 
[127.0.0.1:37115_solr, 127.0.0.1:37531_solr, 127.0.0.1:44275_solr] Last 
available state: 
DocCollection(collection1//collections/collection1/state.json/3)={   
"pullReplicas":"0",   "replicationFactor":"1",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   
"replicas":{"core_node3":{   "core":"collection1_shard1_replica_n1",
   "base_url":"https://127.0.0.1:37531/solr;,   
"node_name":"127.0.0.1:37531_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{"core_node4":{  
 "core":"collection1_shard2_replica_n2",   
"base_url":"https://127.0.0.1:37115/solr;,   
"node_name":"127.0.0.1:37115_solr",   "state":"down",   
"type":"NRT",   "leader":"true",   "router":{"name":"compositeId"}, 
  "maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"1",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected 2x1 for collection: collection1
null
Live Nodes: [127.0.0.1:37115_solr, 127.0.0.1:37531_solr, 127.0.0.1:44275_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/3)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{"core_node3":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"https://127.0.0.1:37531/solr;,
  "node_name":"127.0.0.1:37531_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{"core_node4":{
  "core":"collection1_shard2_replica_n2",
  "base_url":"https://127.0.0.1:37115/solr;,
  "node_name":"127.0.0.1:37115_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([8321120DDDE4DC6D:73CBC1F102C7AC07]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations(TestSkipOverseerOperations.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 272 - Failure!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/272/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<105> but was:<104>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<105> but was:<104>
at 
__randomizedtesting.SeedInfo.seed([5AC48A92168D0AF3:D290B548B871670B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:881)
at 
org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:664)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[JENKINS] Lucene-Solr-Tests-7.x - Build # 208 - Still Failing

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/208/

All tests passed

Build Log:
[...truncated 4725 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:826: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:770: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:59: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:495: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2262:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:409:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 115 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11504) Provide a config to restrict number of indexing threads

2017-11-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234595#comment-16234595
 ] 

David Smiley commented on SOLR-11504:
-

Doh!  Of course Nawab.

> Provide a config to restrict number of indexing threads 
> 
>
> Key: SOLR-11504
> URL: https://issues.apache.org/jira/browse/SOLR-11504
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3, 6.0, 7.0
>Reporter: Nawab Zada Asad iqbal
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> For heavy indexing load (through the REST API), Solr does not have any way to 
> restrict the number of indexing threads. There used to be a config in Lucene to 
> restrict the number of threads, but that was removed in 
> https://issues.apache.org/jira/browse/LUCENE-6659 . 
> For example, in my bulk indexing scenario, within a few minutes my Solr server 
> had created 300 parallel threads, each writing its own segment. The result was 
> tons of small segments getting flushed to disk (as the total RAM limit was 
> quickly reached by the sum of all segments), and Solr then has to spend time 
> later merging them into reasonable sizes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-11504) Provide a config to restrict number of indexing threads

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reopened SOLR-11504:
-

> Provide a config to restrict number of indexing threads 
> 
>
> Key: SOLR-11504
> URL: https://issues.apache.org/jira/browse/SOLR-11504
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3, 6.0, 7.0
>Reporter: Nawab Zada Asad iqbal
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> For heavy indexing load (through the REST API), Solr does not have any way to 
> restrict the number of indexing threads. There used to be a config in Lucene to 
> restrict the number of threads, but that was removed in 
> https://issues.apache.org/jira/browse/LUCENE-6659 . 
> For example, in my bulk indexing scenario, within a few minutes my Solr server 
> had created 300 parallel threads, each writing its own segment. The result was 
> tons of small segments getting flushed to disk (as the total RAM limit was 
> quickly reached by the sum of all segments), and Solr then has to spend time 
> later merging them into reasonable sizes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11507:

Attachment: SOLR-11507.patch

The attached patch does what we've said it should.

In addition, I've moved the parallelUpdates flag from being a mutable state 
member of CloudSolrClient to be a Builder setting like the other flags.  I 
refactored callers of setParallelUpdates (which all randomly set it in tests) 
to not do so anymore, relying on this happening implicitly now.  
setParallelUpdates is still there but now marked deprecated and doesn't do 
anything (is this ok?).  I should probably change the issue title accordingly 
as this change isn't just some little test change.  I wonder why anyone would 
disable this setting; it seems unlikely to be useful, so I suspect the impact is 
low.  Suggested new title: "move CloudSolrClient.setParallelUpdates to the 
Builder".

> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10307) Provide SSL/TLS keystore password a more secure way

2017-11-01 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234576#comment-16234576
 ] 

Varun Thacker commented on SOLR-10307:
--

Hi Mano,

We need to update the ref guide for this change as well, right?

> Provide SSL/TLS keystore password a more secure way
> ---
>
> Key: SOLR-10307
> URL: https://issues.apache.org/jira/browse/SOLR-10307
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
>Priority: Major
> Fix For: 6.7, 7.0
>
> Attachments: SOLR-10307.2.patch, SOLR-10307.patch, SOLR-10307.patch, 
> SOLR-10307.patch
>
>
> Currently the only way to pass server- and client-side SSL keystore and 
> truststore passwords is to set specific environment variables that are 
> passed as system properties through command-line parameters.
> The first option is to pass passwords through environment variables, which gives a 
> better level of protection. The second option would be to use the Hadoop credential 
> provider interface to access a credential store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-11-01 Thread Michael A. Alcorn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael A. Alcorn resolved SOLR-11386.
--
Resolution: Workaround

> Extracting learning to rank features fails when word ordering of EFI argument 
> changed.
> --
>
> Key: SOLR-11386
> URL: https://issues.apache.org/jira/browse/SOLR-11386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.5.1
>Reporter: Michael A. Alcorn
>Priority: Major
> Attachments: solr_efi_examples.zip
>
>
> I'm getting some extremely strange behavior when trying to extract features 
> for a learning to rank model. The following query incorrectly says all 
> features have zero values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=added 
> couple of fiber channel={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=added couple of fiber channel 
> efi.case_issue=the efi.case_environment=the}=id,score,[features]=10
> {code}
> But this query, which simply moves the word "added" from the front of the 
> provided text to the back, properly fills in the feature values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=couple of 
> fiber channel added={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=couple of fiber channel added 
> efi.case_issue=the efi.case_environment=the}=id,score,[features]=10
> {code}
> The explain output for the failing query can be found here:
> https://gist.github.com/manisnesan/18a8f1804f29b1b62ebfae1211f38cc4
> and the explain output for the properly functioning query can be found here:
> https://gist.github.com/manisnesan/47685a561605e2229434b38aed11cc65



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11386) Extracting learning to rank features fails when word ordering of EFI argument changed.

2017-11-01 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234526#comment-16234526
 ] 

Michael A. Alcorn commented on SOLR-11386:
--

To close the loop on this, the issue is that the FieldQParser [automatically 
converts multiple terms into 
phrases|https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-FieldQueryParser].
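
A schematic Lucene-level sketch of the difference (illustrative only, not the 
actual Solr LTR code path; the field name and terms are taken from the example 
queries above):

{code}
// Schematic illustration only: a field-style parser turns the multi-term efi value
// into a single PhraseQuery, which scores 0 unless the terms occur contiguously,
// whereas a boolean OR of TermQuerys matches documents containing any of the terms.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;

public class PhraseVsTerms {
  public static void main(String[] args) {
    Query asPhrase = new PhraseQuery("case_description",
        "added", "couple", "of", "fiber", "channel");

    BooleanQuery.Builder asTerms = new BooleanQuery.Builder();
    for (String t : new String[] {"added", "couple", "of", "fiber", "channel"}) {
      asTerms.add(new TermQuery(new Term("case_description", t)), BooleanClause.Occur.SHOULD);
    }

    System.out.println(asPhrase);        // case_description:"added couple of fiber channel"
    System.out.println(asTerms.build()); // case_description:added case_description:couple ...
  }
}
{code}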

> Extracting learning to rank features fails when word ordering of EFI argument 
> changed.
> --
>
> Key: SOLR-11386
> URL: https://issues.apache.org/jira/browse/SOLR-11386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.5.1
>Reporter: Michael A. Alcorn
>Priority: Major
> Attachments: solr_efi_examples.zip
>
>
> I'm getting some extremely strange behavior when trying to extract features 
> for a learning to rank model. The following query incorrectly says all 
> features have zero values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=added 
> couple of fiber channel={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=added couple of fiber channel 
> efi.case_issue=the efi.case_environment=the}=id,score,[features]=10
> {code}
> But this query, which simply moves the word "added" from the front of the 
> provided text to the back, properly fills in the feature values:
> {code}
> http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=couple of 
> fiber channel added={!ltr model=redhat_efi_model reRankDocs=1 
> efi.case_summary=the efi.case_description=couple of fiber channel added 
> efi.case_issue=the efi.case_environment=the}=id,score,[features]=10
> {code}
> The explain output for the failing query can be found here:
> https://gist.github.com/manisnesan/18a8f1804f29b1b62ebfae1211f38cc4
> and the explain output for the properly functioning query can be found here:
> https://gist.github.com/manisnesan/47685a561605e2229434b38aed11cc65



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20797 - Still Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20797/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny

2 tests failed.
FAILED:  org.apache.solr.cloud.TestCollectionAPI.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:35985

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:35985
at 
__randomizedtesting.SeedInfo.seed([3BD18F2CC432FDAD:B385B0F66ACE9055]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:315)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234477#comment-16234477
 ] 

ASF subversion and git services commented on SOLR-11594:


Commit 1691a04ec908e8b07229a917a09f27b2c7610c1b in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1691a04 ]

SOLR-11594: Add precision Stream Evaluator


> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11594.patch
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11581) NoMergeScheduler ctor should be public for allowing instantiation from SOLR

2017-11-01 Thread Nawab Zada Asad iqbal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234474#comment-16234474
 ] 

Nawab Zada Asad iqbal commented on SOLR-11581:
--

Thanks, Amrit and Michael. 

I am already doing most of what you are recommending. I want to write a blog 
post on it once I successfully upgrade to Solr 7, and I also look forward to 
Amrit's article. We don't use "useCompoundFile", though, for the sake of better 
query performance. 

Michael: In our current design, we bulk index all the accumulated documents, 
then merge explicitly to an optimal number of segments (10 or so). Only then do 
we start live indexing and query traffic to the servers (there are some 
intermediate steps to replace the solrconfig and to index the documents that 
accumulated during bulk indexing). In earlier experiments with older Solr 
versions, keeping merging on while bulk indexing slowed down the whole process. 
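
For illustration, a minimal SolrJ sketch of that "merge down, then go live" step 
(the URL and core name are placeholders; the segment count of 10 follows the 
description above):

{code}
// Minimal SolrJ sketch (illustrative only) of force-merging after bulk indexing,
// before switching the node over to live indexing and query traffic.
import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class PostBulkMerge {
  public static void main(String[] args) throws SolrServerException, IOException {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
      // After bulk indexing finishes, merge down to ~10 segments.
      client.optimize(true /* waitFlush */, true /* waitSearcher */, 10 /* maxSegments */);
    }
  }
}
{code}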



> NoMergeScheduler ctor should be public for allowing instantiation from SOLR
> ---
>
> Key: SOLR-11581
> URL: https://issues.apache.org/jira/browse/SOLR-11581
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Priority: Minor
>
> There are scenarios where a SOLR user may want to use NoMergeScheduler. 
> However, it is not possible to use it today, since its constructor is private 
> and solrconfig.xml requires a Scheduler with public constructor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234467#comment-16234467
 ] 

ASF subversion and git services commented on SOLR-11594:


Commit 6eea7f70a09fe5a8345f881fa0796c2620711466 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6eea7f7 ]

SOLR-11594: Add precision Stream Evaluator


> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11594.patch
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11291) Adding Solr Core Reporter

2017-11-01 Thread Omar Abdelnabi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omar Abdelnabi updated SOLR-11291:
--
Attachment: SOLR-11291.patch

Attaching a new patch that fixes a small printing issue in 
SolrConsoleReporter.java

> Adding Solr Core Reporter
> -
>
> Key: SOLR-11291
> URL: https://issues.apache.org/jira/browse/SOLR-11291
> Project: Solr
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Omar Abdelnabi
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11291.patch, SOLR-11291.patch, SOLR-11291.patch
>
>
> Adds a new reporter, SolrCoreReporter, which allows metrics to be reported on a 
> per-core basis.
> Also modifies the SolrMetricManager and SolrCoreMetricManager to take 
> advantage of this new reporter.
> Adds a test/example that uses the SolrCoreReporter. Also adds randomization 
> to SolrCloudReportersTest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11594:
--
Attachment: SOLR-11594.patch

> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11594.patch
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 207 - Still Failing

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/207/

All tests passed

Build Log:
[...truncated 4724 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:826: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:770: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:59: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:495: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2262:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:409:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 115 minutes 29 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11504) Provide a config to restrict number of indexing threads

2017-11-01 Thread Nawab Zada Asad iqbal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234401#comment-16234401
 ] 

Nawab Zada Asad iqbal commented on SOLR-11504:
--

[~dsmiley]
You have marked this as a duplicate of SOLR-3585. Isn't that JIRA very broadly 
scoped? The scope of the current ticket (11504) is to restrict the requests from 
Solr to Lucene's `IndexWriter`. My initial thoughts are: `IndexWriter.addDocument(s)` 
and `updateDocument(s)` are mostly used from `DirectUpdateHandler2` (they are also 
used in `FileBasedSpellChecker.java`, which seems to be a non-routine 
scenario). For the purpose of fixing SOLR-11504, it seems enough to use a 
counting semaphore (or any similar structure) to control the flow of indexing 
requests from `DirectUpdateHandler2` to `IndexWriter`. 

What do you think?
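
A minimal sketch of the kind of throttle being described (purely illustrative; 
the class and any config name are hypothetical, not actual Solr code):

{code}
// Purely illustrative sketch of a counting-semaphore throttle between the update
// handler and IndexWriter; class and config names are hypothetical.
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class IndexingThrottle {
  // e.g. initialized from a hypothetical <maxIndexingThreads> setting in solrconfig.xml
  private final Semaphore permits;

  public IndexingThrottle(int maxConcurrentIndexingCalls) {
    this.permits = new Semaphore(maxConcurrentIndexingCalls);
  }

  public <T> T withPermit(Callable<T> indexWriterCall) throws Exception {
    permits.acquire();               // blocks once the configured limit is reached
    try {
      return indexWriterCall.call(); // e.g. writer.addDocument(...) / updateDocument(...)
    } finally {
      permits.release();
    }
  }
}
{code}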

> Provide a config to restrict number of indexing threads 
> 
>
> Key: SOLR-11504
> URL: https://issues.apache.org/jira/browse/SOLR-11504
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3, 6.0, 7.0
>Reporter: Nawab Zada Asad iqbal
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> For heavy indexing load (through the REST API), Solr does not have any way to 
> restrict the number of indexing threads. There used to be a config in Lucene to 
> restrict the number of threads, but that was removed in 
> https://issues.apache.org/jira/browse/LUCENE-6659 . 
> For example, in my bulk indexing scenario, within a few minutes my Solr server 
> had created 300 parallel threads, each writing its own segment. The result was 
> tons of small segments getting flushed to disk (as the total RAM limit was 
> quickly reached by the sum of all segments), and Solr then has to spend time 
> later merging them into reasonable sizes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11594:
-

 Summary: Add precision Stream Evaluator
 Key: SOLR-11594
 URL: https://issues.apache.org/jira/browse/SOLR-11594
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds the precision Stream Evaluator which rounds decimals to a 
specific decimal place.
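
For context, the rounding semantics in plain Java (illustration only; this is 
not the evaluator's implementation):

{code}
// Illustration of rounding a value to a fixed number of decimal places, the
// operation the precision evaluator exposes to streaming expressions.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PrecisionDemo {
  static double precision(double value, int decimalPlaces) {
    return BigDecimal.valueOf(value).setScale(decimalPlaces, RoundingMode.HALF_UP).doubleValue();
  }

  public static void main(String[] args) {
    System.out.println(precision(3.14159, 2)); // 3.14
    System.out.println(precision(2.5, 0));     // 3.0
  }
}
{code}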



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11594:
--
Fix Version/s: 7.2

> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11594) Add precision Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11594:
-

Assignee: Joel Bernstein

> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-11-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7994.

Resolution: Fixed

> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch, LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored the native hash map from LUCENE-7927 and use a simple heuristic 
> (customizable by the user by subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} lazy, so that they are only instantiated if you really need them.
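
A rough sketch of the sparse-vs-dense decision described above (illustrative 
only; the threshold and data structures are assumptions, not the committed 
heuristic):

{code}
// Illustrative only: count sparsely (hash map) when the hit count is small relative
// to the number of unique facet ordinals, otherwise use a dense int[]. The threshold
// below is an assumption, not the committed heuristic.
import java.util.HashMap;
import java.util.Map;

public class FacetCountsSketch {
  final Map<Integer, Integer> sparseCounts; // ord -> count, only allocated if sparse
  final int[] denseCounts;                  // one slot per unique facet label

  FacetCountsSketch(long totalHits, int taxonomySize) {
    if (totalHits < taxonomySize / 10L) {   // hypothetical threshold
      sparseCounts = new HashMap<>();
      denseCounts = null;
    } else {
      sparseCounts = null;
      denseCounts = new int[taxonomySize];
    }
  }

  void increment(int ord) {
    if (sparseCounts != null) {
      sparseCounts.merge(ord, 1, Integer::sum);
    } else {
      denseCounts[ord]++;
    }
  }
}
{code}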



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234399#comment-16234399
 ] 

ASF subversion and git services commented on LUCENE-7994:
-

Commit ff35365b51b47900d73748fbe1eb05ca4c4de098 in lucene-solr's branch 
refs/heads/branch_7x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff35365 ]

LUCENE-7994: use int/int scatter map to count facets when number of hits is 
small relative to number of unique facet labels


> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch, LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored the native hash map from LUCENE-7927 and use a simple heuristic 
> (customizable by the user by subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} lazy, so that they are only instantiated if you really need them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7994) Use int/int hash map for int taxonomy facet counts

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234381#comment-16234381
 ] 

ASF subversion and git services commented on LUCENE-7994:
-

Commit 77e6e291bf34ffaa6f1afc2d9c64779f4b250b65 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=77e6e29 ]

LUCENE-7994: use int/int scatter map to count facets when number of hits is 
small relative to number of unique facet labels


> Use int/int hash map for int taxonomy facet counts
> --
>
> Key: LUCENE-7994
> URL: https://issues.apache.org/jira/browse/LUCENE-7994
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-7994.patch, LUCENE-7994.patch
>
>
> Int taxonomy facets today always count into a dense {{int[]}}, which is 
> wasteful in cases where the number of unique facet labels is high and the 
> size of the current result set is small.
> I factored the native hash map from LUCENE-7927 and use a simple heuristic 
> (customizable by the user by subclassing) to decide up front whether to count 
> sparse or dense.  I also made loading of the large children and siblings 
> {{int[]}} lazy, so that they are only instantiated if you really need them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8018) FieldInfos retains garbage if non-sparse

2017-11-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234379#comment-16234379
 ] 

Michael McCandless commented on LUCENE-8018:


Hi [~jvassev], hmm it should not prevent merging, but rather prevent deleting 
of index files that are still in use by old searchers, even if they have been 
merged away in the latest index.  I.e. if you print the latest searcher you 
should see a "contained" number of segments in it.

Also, if you refresh every 10 seconds, and every such searcher is used (i.e. a 
new search always happens within the 10 seconds), then shouldn't you at worst 
ever have 30 * 6 = 180 live searchers (a 30-minute session timeout at 6 
refreshes per minute)?

Do you use {{SearcherLifetimeManager}} to track all these searchers?
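
For reference, a minimal sketch of tracking and pruning searchers with 
{{SearcherLifetimeManager}} (illustrative only; the 30-minute age simply mirrors 
the session timeout discussed here):

{code}
// Minimal illustrative sketch of SearcherLifetimeManager usage: record each
// refreshed searcher, let sessions re-acquire it by token, and prune searchers
// older than the session timeout so their index files can be deleted.
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherLifetimeManager;

public class SearcherTracking {
  private final SearcherLifetimeManager lifetimes = new SearcherLifetimeManager();

  public long onRefresh(IndexSearcher newSearcher) throws IOException {
    return lifetimes.record(newSearcher);    // token handed to the user session
  }

  public IndexSearcher forSession(long token) {
    return lifetimes.acquire(token);         // null if already pruned; release() after use
  }

  public void pruneOldSearchers() throws IOException {
    // Drop searchers older than 30 minutes (the session timeout discussed above).
    lifetimes.prune(new SearcherLifetimeManager.PruneByAge(30 * 60.0));
  }
}
{code}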

> FieldInfos retains garbage if non-sparse
> 
>
> Key: LUCENE-8018
> URL: https://issues.apache.org/jira/browse/LUCENE-8018
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 6.5
> Environment: Lucene 6.5.0, java 8
> openjdk version "1.8.0_45-internal"
> OpenJDK Runtime Environment (build 1.8.0_45-internal-b14)
> OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Julian Vassev
>Assignee: Adrien Grand
>Priority: Major
>  Labels: easyfix, performance
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8018.patch
>
>
> A heap dump revealed a lot of TreeMap.Entry instances (millions of them) for 
> a system with about ~1000 active searchers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8018) FieldInfos retains garbage if non-sparse

2017-11-01 Thread Julian Vassev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234330#comment-16234330
 ] 

Julian Vassev commented on LUCENE-8018:
---

Hi Michael,
Thank you for your interest in this matter.

Yes, the default session timeout is 30 minutes. As new documents are indexed 
almost every 10 seconds, every new session creates a searcher. This also 
prevents efficient merging, and during a synthetic test I can observe the 
segment file count grow to as much as 2.5x the number of documents.

I tried using NRTCachingDirectory, but it seems to make no difference.

> FieldInfos retains garbage if non-sparse
> 
>
> Key: LUCENE-8018
> URL: https://issues.apache.org/jira/browse/LUCENE-8018
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 6.5
> Environment: Lucene 6.5.0, java 8
> openjdk version "1.8.0_45-internal"
> OpenJDK Runtime Environment (build 1.8.0_45-internal-b14)
> OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Julian Vassev
>Assignee: Adrien Grand
>Priority: Major
>  Labels: easyfix, performance
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8018.patch
>
>
> A heap dump revealed a lot of TreeMap.Entry instances (millions of them) for 
> a system with about ~1000 active searchers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11484) CloudSolrClient's cache of collection clusterstate can cause RouteExceptions when attempting directUpdates after collection modifications

2017-11-01 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234311#comment-16234311
 ] 

Varun Thacker commented on SOLR-11484:
--

Hi Everyone,

 [~cpoerschke] what are your thoughts on this? I guess the word "Only" in the 
flag would mean that the update should fail if there are no leaders?

In that case our tests should not set this flag and should use the default 
behaviour, which is "if there is no leader, send the request to any live NRT 
node".

> CloudSolrClient's cache of collection clusterstate can cause RouteExceptions 
> when attempting directUpdates after collection modifications
> -
>
> Key: SOLR-11484
> URL: https://issues.apache.org/jira/browse/SOLR-11484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11484.patch, SOLR-11484.patch, 
> jenkins.thetaphi.20662.txt
>
>
> This was discovered while auditing jenkins failures from 
> {{TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete}} 
> (where a test explicitly deletes and then recreates a collection with the 
> same name), but as noted in a comment below, SOLR-11392 is another example of 
> non-obvious test failures that can pop up because of this bug.
> In practice, it can affect any CloudSolrClient user after changes have been 
> made to a collection (to add/move replicas, etc...)
> 
> Original jira notes...
> {{TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete}}
> seems to fail with non-trivial frequency, so I grabbed the logs from a recent 
> failure and starting trying to follow along with the actions to figure out 
> what exactly is happening
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20662/
> {noformat}
>[junit4] ERROR   20.3s J1 | 
> TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
> server at https://127.0.0.1:42959/solr/testcollection_shard1_replica_n3: 
> Expected mime type a
> pplication/octet-stream but got text/html. 
>[junit4]> 
>[junit4]>  content="text/html;charset=ISO-8859-1"/>
>[junit4]> Error 404 
> {noformat}
> The crux of this failure appears to be a genuine bug in how CloudSolrClient 
> uses its cached ClusterState info when doing (direct) updates.  The key bits 
> seem to be:
> * CloudSolrClient does _something_ (update,query,etc...) with a collection 
> causing the current cluster state for the collection to be cached
> * The actual collection changes such that a Solr node/core no longer exists 
> as part of the collection
> * CloudSolrClient is asked to process an UpdateRequest which triggers the 
> code paths for the {{directUpdate()}} method -- which attempts to route the 
> updates directly to a replica of the appropriate shard using the (cached) 
> collection state info
> * CloudSolrClient (may) attempt to send that UpdateRequest to a node/core 
> that doesn't exist, getting a 404 -- which does not (seem to) trigger a state 
> refresh, or retry to find a correct URL to resend the update to.
> Details to follow in comment



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11172) Add Mann-Whitney U test Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11172:
--
Fix Version/s: 7.2

> Add Mann-Whitney U test Stream Evaluator
> 
>
> Key: SOLR-11172
> URL: https://issues.apache.org/jira/browse/SOLR-11172
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11172
>
>
> This ticket will add a Stream Evaluator to perform the Mann-Whitney U Test on 
> two arrays of numbers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11172) Add Mann-Whitney U test Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11172:
--
Issue Type: New Feature  (was: Bug)

> Add Mann-Whitney U test Stream Evaluator
> 
>
> Key: SOLR-11172
> URL: https://issues.apache.org/jira/browse/SOLR-11172
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11172
>
>
> This ticket will add a Stream Evaluator to perform the Mann-Whitney U Test on 
> two arrays of numbers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-01 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234257#comment-16234257
 ] 

Christine Poerschke commented on SOLR-11507:


bq. Need any help with this one?  I'll take over if you want.

Please go ahead, I wouldn't get to this anytime soon. Thanks!

> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234251#comment-16234251
 ] 

David Smiley commented on SOLR-11507:
-

Need any help with this one?  I'll take over if you want.

> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11511) Use existing private field in DistributedUpdateProcessor

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234250#comment-16234250
 ] 

ASF subversion and git services commented on SOLR-11511:


Commit 1916ce058c2ef710ad0e9ddbf8526369da29e21c in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1916ce0 ]

SOLR-11511: minor: use existing fields in DURP: coreDesc, zkController, 
collection

(cherry picked from commit 1ff6084)


> Use existing private field in DistributedUpdateProcessor
> 
>
> Key: SOLR-11511
> URL: https://issues.apache.org/jira/browse/SOLR-11511
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11511.patch
>
>
> The DistributedUpdateProcessor has a private instance field called cloudDesc. 
> It is used in a few places, but most code navigates to CloudDescriptor from 
> the request object instead. 
> The fundamental question of this ticket, is this: is there any reason to 
> distrust this field and do the navigation directly (in which case maybe we 
> get rid of the field instead?) or can we trust it and thus should use it 
> where we can. Since it is a private field only ever updated in the 
> constructor, it's not likely to be changing out from under us. The request 
> from which it is derived is also held in a private final field, so it very 
> much looks to me like this field should have been final and should be used.
> This might or might not be a performance gain (depending on whether or not 
> the compiler can optimize away something like this already), but it will be a 
> readability and consistency gain for sure.
> Attaching patch to tidy this up shortly...
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11511) Use existing private field in DistributedUpdateProcessor

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234249#comment-16234249
 ] 

ASF subversion and git services commented on SOLR-11511:


Commit 1ff6084d8ee9fa26d3ca642d3379fc8fc7b31289 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ff6084 ]

SOLR-11511: minor: use existing fields in DURP: coreDesc, zkController, 
collection


> Use existing private field in DistributedUpdateProcessor
> 
>
> Key: SOLR-11511
> URL: https://issues.apache.org/jira/browse/SOLR-11511
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR-11511.patch
>
>
> The DistributedUpdateProcessor has a private instance field called cloudDesc. 
> It is used in a few places, but most code navigates to CloudDescriptor from 
> the request object instead. 
> The fundamental question of this ticket, is this: is there any reason to 
> distrust this field and do the navigation directly (in which case maybe we 
> get rid of the field instead?) or can we trust it and thus should use it 
> where we can. Since it is a private field only ever updated in the 
> constructor, it's not likely to be changing out from under us. The request 
> from which it is derived is also held in a private final field, so it very 
> much looks to me like this field should have been final and should be used.
> This might or might not be a performance gain (depending on whether or not 
> the compiler can optimize away something like this already), but it will be a 
> readability and consistency gain for sure.
> Attaching patch to tidy this up shortly...
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11593) Add support for covariance matrices to the cov Stream Evaluator

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234244#comment-16234244
 ] 

ASF subversion and git services commented on SOLR-11593:


Commit 6406a345f26f539d90634cf6d5b4539d615c83a3 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6406a34 ]

SOLR-11593: Add support for covariance matrices to the cov Stream Evaluator


> Add support for covariance matrices to the cov Stream Evaluator
> ---
>
> Key: SOLR-11593
> URL: https://issues.apache.org/jira/browse/SOLR-11593
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11593.patch
>
>
> This ticket adds support for covariance matrices to the *cov* Stream 
> Evaluator. 
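
For context, the underlying operation in plain commons-math (illustration only; 
this is not the evaluator's source):

{code}
// Plain commons-math3 illustration of a covariance matrix over column vectors.
// This only illustrates the operation the evaluator exposes.
import java.util.Arrays;
import org.apache.commons.math3.stat.correlation.Covariance;

public class CovMatrixDemo {
  public static void main(String[] args) {
    double[][] observations = {          // rows = observations, columns = variables
        {1.0, 2.0, 3.5},
        {2.0, 2.5, 3.0},
        {3.0, 4.0, 2.5},
        {4.0, 4.5, 2.0},
    };
    double[][] cov = new Covariance(observations).getCovarianceMatrix().getData();
    for (double[] row : cov) {
      System.out.println(Arrays.toString(row)); // 3x3 covariance matrix
    }
  }
}
{code}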



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11593) Add support for covariance matrices to the cov Stream Evaluator

2017-11-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234233#comment-16234233
 ] 

ASF subversion and git services commented on SOLR-11593:


Commit 6d5a7920ae17e4b209c58749f972ca6db38df600 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6d5a792 ]

SOLR-11593: Add support for covariance matrices to the cov Stream Evaluator


> Add support for covariance matrices to the cov Stream Evaluator
> ---
>
> Key: SOLR-11593
> URL: https://issues.apache.org/jira/browse/SOLR-11593
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11593.patch
>
>
> This ticket adds support for covariance matrices to the *cov* Stream 
> Evaluator. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 276 - Unstable!

2017-11-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/276/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test

Error Message:
Expected 4 active replicas null Live Nodes: [127.0.0.1:63590_solr, 
127.0.0.1:63591_solr, 127.0.0.1:63592_solr, 127.0.0.1:63593_solr] Last 
available state: null

Stack Trace:
java.lang.AssertionError: Expected 4 active replicas
null
Live Nodes: [127.0.0.1:63590_solr, 127.0.0.1:63591_solr, 127.0.0.1:63592_solr, 
127.0.0.1:63593_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([20A809573B775129:A8FC368D958B3CD1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.AssignBackwardCompatibilityTest.test(AssignBackwardCompatibilityTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Updated] (SOLR-11593) Add support for covariance matrices to the cov Stream Evaluator

2017-11-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11593:
--
Attachment: SOLR-11593.patch

> Add support for covariance matrices to the cov Stream Evaluator
> ---
>
> Key: SOLR-11593
> URL: https://issues.apache.org/jira/browse/SOLR-11593
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11593.patch
>
>
> This ticket adds support for covariance matrices to the *cov* Stream 
> Evaluator. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11542) Add URP to route time partitioned collections

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11542:

Description: 
Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
for the metadata facility), we'll then need to route documents to the right 
collection.  I propose a new URP.  _(edit: originally it was thought 
DistributedURP would be modified but thankfully we can avoid that)._

The scope of this issue is:
* decide on some alias metadata names & semantics
* decide the collection suffix pattern.  Read/write code (needed to route).
* the routing code

No new partition creation or deletion happens in this issue.

  was:
Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
for the metadata facility), we'll then need to route documents to the right 
collection.  I tentatively propose a helper class to DistributedURP to do this. 
 Perhaps a separate URP is plausible, though it will take some modifications to 
DistributedURP.

The scope of this issue is:
* decide on some alias metadata names & semantics
* decide the collection suffix pattern.  Read/write code (needed to route).
* the routing code

No new partition creation or deletion happens in this issue.


> Add URP to route time partitioned collections
> ----------------------------------------------
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I propose a new URP.  _(edit: originally it was thought 
> DistributedURP would be modified but thankfully we can avoid that)._
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.
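
The collection suffix pattern is deliberately left open above, but as a rough illustration of the kind of "read/write code needed to route", the following sketch maps a document timestamp to a target collection name, assuming a hypothetical alias name and a day-granularity suffix (neither is decided in this issue, and this is not code from the attached patch):

{code:java}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimePartitionRoutingSketch {
  // Hypothetical day-granularity suffix; the real pattern is one of the
  // open decisions in this issue.
  private static final DateTimeFormatter SUFFIX =
      DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

  /** Builds the target collection name for an alias from a document timestamp. */
  static String targetCollection(String alias, Instant docTimestamp) {
    return alias + "_" + SUFFIX.format(docTimestamp);
  }

  public static void main(String[] args) {
    // Prints "timeseries_2017-11-01" for a document stamped on that day.
    System.out.println(targetCollection("timeseries",
        Instant.parse("2017-11-01T10:15:30Z")));
  }
}
{code}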



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 878 - Still Failing

2017-11-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/878/

No tests ran.

Build Log:
[...truncated 1302 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:606:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build.xml:484:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/common-build.xml:2262:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/common-build.xml:409:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 23 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-11542) Add URP to route time partitioned collections

2017-11-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11542:

Summary: Add URP to route time partitioned collections  (was: Add feature 
to DistributedURP to route time partitioned collections)

> Add URP to route time partitioned collections
> ----------------------------------------------
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I tentatively propose a helper class to DistributedURP to do 
> this.  Perhaps a separate URP is plausible, though it will take some 
> modifications to DistributedURP.
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.
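
The quoted description above is the older wording that still mentions modifying DistributedURP; the issue now proposes a standalone URP instead. Purely as a hypothetical sketch of where such routing would hook into the update chain (placeholder class name and field, not the code in SOLR_11542_time_series_URP.patch):

{code:java}
import java.io.IOException;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical sketch only; the routing decision itself is elided.
public class TimeRoutingUrpSketch extends UpdateRequestProcessor {

  public TimeRoutingUrpSketch(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    Object routeValue = doc.getFieldValue("timestamp"); // assumed routing field
    // Here the processor would consult the alias metadata, derive the target
    // collection from routeValue, and forward the document accordingly.
    super.processAdd(cmd);
  }
}
{code}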



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


