[JENKINS] Lucene-Solr-5.5-Windows (32bit/jdk1.7.0_80) - Build # 82 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/82/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseParallelGC

1 tests failed.
FAILED: org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:53021/solr/testschemaapi_shard1_replica2: ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:53021/solr/testschemaapi_shard1_replica2: ERROR: [doc=2] unknown field 'myNewField1'
	at __randomizedtesting.SeedInfo.seed([95A57FFE89DBDB16:1DF140242727B6EE]:0)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
	at org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
[jira] [Comment Edited] (SOLR-7998) Solr start/stop script is currently incompatible with SUSE 11
[ https://issues.apache.org/jira/browse/SOLR-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333140#comment-15333140 ]

scott chu edited comment on SOLR-7998 at 6/16/16 5:38 AM:
----------------------------------------------------------

We have a server that has /etc/redhat-release but no /etc/centos-release. The redhat-release file contains "CentOS release 5.4 (Final)". Even if I use 'yum install lsof' to try upgrading, the latest available version is 4.78, whose '-s' option doesn't support the "{protocol-name}:{protocol-status}" format. So SUSE and Solaris are not the only platforms with this problem.

was (Author: scottchu):
We have a server that has /etc/redhat-release but no /etc/centos-release. In the redhat-release file, it contains "CentOS release 5.4 (Final)". Even I use 'yum install lsof', the latest version is 4.78 which it's -s option doesn't support "\{protocol-name\}:\{protocol-status\}" format. So not only suSE or Solaris have this problem.

> Solr start/stop script is currently incompatible with SUSE 11
> -------------------------------------------------------------
>
>                 Key: SOLR-7998
>                 URL: https://issues.apache.org/jira/browse/SOLR-7998
>             Project: Solr
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 5.3
>         Environment: SUSE (SLES 11 SP2)
>            Reporter: gilles lafargue
>
> result of the command 'lsof -PniTCP:$SOLR_PORT -sTCP:LISTEN' in script bin/solr
> lsof: unsupported TCP/TPI info selection: C
> lsof: unsupported TCP/TPI info selection: P
> lsof: unsupported TCP/TPI info selection: :
> lsof: unsupported TCP/TPI info selection: L
> lsof: unsupported TCP/TPI info selection: I
> lsof: unsupported TCP/TPI info selection: S
> lsof: unsupported TCP/TPI info selection: T
> lsof: unsupported TCP/TPI info selection: E
> lsof: unsupported TCP/TPI info selection: N
> lsof 4.80
> latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
> latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
> latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
> usage: [-?abhlnNoOPRstUvVX] [+|-c c] [+|-d s] [+D D] [+|-f[gG]]
> [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M] [-o [o]]
> [-p s] [+|-r [t]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]
> Use the ``-h'' option to get more help information.
> it seems that option "-sTCP:LISTEN" is not correct for lsof v4.80

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
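For scripts that must survive an lsof that rejects the TCP state filter, one option is to gate the `-sTCP:LISTEN` form on the lsof revision and fall back to filtering the state by hand. The sketch below is hypothetical, not the actual bin/solr code; in particular, the 4.82 version cutoff is an assumption inferred from this thread (4.78 and 4.80 both reject the filter), so verify it against your own lsof's man page before relying on it.

```shell
#!/bin/sh
# Hypothetical sketch: check whether anything is LISTENing on a TCP port,
# working around old lsof releases that reject "-s TCP:LISTEN".
# Assumption: the per-protocol -s form needs lsof >= 4.82 (guessed cutoff).

# Succeed (return 0) if a "major.minor" lsof revision is >= 4.82.
lsof_supports_state_filter() {
  major=${1%%.*}
  minor=${1#*.}
  [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 82 ]; }
}

find_tcp_listener() {
  port="$1"
  # "lsof -v" prints its revision (on stderr), e.g. "    revision: 4.80"
  ver=$(lsof -v 2>&1 | awk '/revision:/ {print $NF; exit}')
  if lsof_supports_state_filter "$ver"; then
    lsof -PniTCP:"$port" -sTCP:LISTEN
  else
    # Old lsof: list all TCP sockets on the port, filter the state ourselves.
    lsof -PniTCP:"$port" | grep '(LISTEN)'
  fi
}
```

Usage would be `find_tcp_listener "$SOLR_PORT"`; on the SLES 11 and CentOS 5.4 machines described above, the fallback branch avoids the "unsupported TCP/TPI info selection" errors at the cost of also matching non-LISTEN lines that happen to contain the literal string, which is acceptable for a liveness probe.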
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 251 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/251/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED: org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value 'CY val modified' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":1, "params":{ "x":{ "a":"A val", "b":"B val", "":{"v":0}}, "y":{ "c":"CY val", "b":"BY val", "i":20, "d":[ "val 1", "val 2"], "":{"v":0}, from server: http://127.0.0.1:58163/_/i/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value 'CY val modified' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":1, "params":{ "x":{ "a":"A val", "b":"B val", "":{"v":0}}, "y":{ "c":"CY val", "b":"BY val", "i":20, "d":[ "val 1", "val 2"], "":{"v":0}, from server: http://127.0.0.1:58163/_/i/collection1
	at __randomizedtesting.SeedInfo.seed([71A7BE8A84019F72:F9F381502AFDF28A]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
	at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:194)
	at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 30 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/30/

1 tests failed.
FAILED: org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:53958/_cpb/ct/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:53958/_cpb/ct/collection1]
	at __randomizedtesting.SeedInfo.seed([3C682CFF1E3A823:8B92BD155F1FC5DB]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1379)
	at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:99)
	at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:83)
	at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:50)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Congrats Tommaso!

On Wed, Jun 15, 2016 at 6:37 PM Michael McCandless <luc...@mikemccandless.com> wrote:
> Once a year the Lucene PMC rotates the PMC chair and Apache Vice
> President position.
>
> This year we have nominated and elected Tommaso Teofili as the chair, and
> today the board just approved it, so now it's official.
>
> Congratulations Tommaso!
>
> Mike McCandless
>
> http://blog.mikemccandless.com

--
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling
[ https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332988#comment-15332988 ] Shikha Somani commented on SOLR-8297:

Gentle reminder about the proposed solution above. Please let me know your thoughts so I can move ahead with it.

> Allow join query over 2 sharded collections: enhance functionality and exception handling
> -----------------------------------------------------------------------------------------
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
> Issue Type: Improvement
> Components: SolrCloud
> Affects Versions: 5.3
> Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New JIRA issue raised as suggested by Mikhail Khludnev.
>
> A) Exception handling:
> The exception "SolrCloud join: multiple shards not yet supported", thrown in the findLocalReplicaForFromIndex method of JoinQParserPlugin, is not triggered correctly. In my use case, I have a join on a facet.query; when the results are found in only one shard and the facet.query with the join queries the last replica of the last slice, the exception is not thrown. It would be better to verify the number of slices when checking the "multiple shards not yet supported" condition, i.e. throw the exception when zkController.getClusterState().getSlices(fromIndex).size() > 1.
>
> B) Functional enhancement:
> A cross-core join over sharded collections should be possible when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of both collections is set to the same "key-field" (the "fromIndex" collection has router.field set to the "from" field, and the collection joined to has router.field set to the "to" field)
> The router.field setup ensures that documents with the same "key-field" are routed to the same node, so the combination based on the "key-field" should always be available within the same node.
> From a user perspective, these assumptions seem to be a "normal" use case for the cross-core join in SolrCloud.
> Hope this helps

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
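[Editor's note] The slice-count check proposed in (A) can be illustrated with a minimal, dependency-free sketch. The Solr cluster state is mocked here as a plain Map, and the class and method names (JoinShardCheck, joinSupported) are hypothetical; only the size() > 1 condition mirrors the JIRA proposal.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinShardCheck {
    // Mocked cluster state: collection name -> its slices (shards).
    // In Solr this would come from zkController.getClusterState().getSlices(fromIndex).
    static boolean joinSupported(Map<String, List<String>> clusterState, String fromIndex) {
        List<String> slices = clusterState.get(fromIndex);
        if (slices == null) {
            throw new IllegalArgumentException("Could not find collection: " + fromIndex);
        }
        // Proposed fix: decide based on the number of slices of the "fromIndex"
        // collection, not on which replica happens to serve the request.
        return slices.size() <= 1;
    }

    public static void main(String[] args) {
        Map<String, List<String>> state = new HashMap<>();
        state.put("singleShard", Collections.singletonList("shard1"));
        state.put("twoShards", Arrays.asList("shard1", "shard2"));
        System.out.println(joinSupported(state, "singleShard")); // true
        System.out.println(joinSupported(state, "twoShards"));   // false
    }
}
```

The point of the check is that it depends only on the collection's layout in the cluster state, so it fires deterministically regardless of which replica handles the request.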
[JENKINS] Lucene-Solr-6.1-Linux (32bit/jdk1.8.0_92) - Build # 42 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/42/ Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: timed out waiting for collection1 startAt time to exceed: Wed Jun 15 23:40:03 ADT 2016 Stack Trace: java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Wed Jun 15 23:40:03 ADT 2016 at __randomizedtesting.SeedInfo.seed([1F46C0ACC561D22D:C4EDC06AC049BB9E]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10996 lines...] [junit4] Suite: org.apache.solr.handler.TestReplicationHandler [junit4]
[JENKINS] Lucene-Solr-6.1-Windows (32bit/jdk1.8.0_92) - Build # 14 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Windows/14/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: ObjectTracker found 2 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 2 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([D9B7FAB906BB7F1A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12344 lines...] [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI [junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_D9B7FAB906BB7F1A-001\init-core-data-001 [junit4] 2> 2855783 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4] 2> 2855785 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 2855786 INFO (Thread-6680) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 2855786 INFO (Thread-6680) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 2855887 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.c.ZkTestServer start zk server on port:60662 [junit4] 2> 2855887 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 2855887 INFO 
(SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 2855892 INFO (zkCallback-9321-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@bba904 name:ZooKeeperConnection Watcher:127.0.0.1:60662 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 2855892 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 2855892 INFO (SUITE-TestManagedSchemaAPI-seed#[D9B7FAB906BB7F1A]-worker) []
[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.8.0_92) - Build # 288 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/288/ Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh Error Message: Could not find collection : c1 Stack Trace: org.apache.solr.common.SolrException: Could not find collection : c1 at __randomizedtesting.SeedInfo.seed([B32069276DA55645:AC9A18D0BDC59080]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:136) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:42) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10737 lines...] [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest [junit4]
[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 15 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/15/ No tests ran. Build Log: [...truncated 8019 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/build.xml:529: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build.xml:479: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/common-build.xml:2606: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/docs/changes/jiraVersionList.json Total time: 2 minutes 54 seconds Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8 Setting LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
[JENKINS] Lucene-Solr-Tests-master - Build # 1215 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1215/ All tests passed Build Log: [...truncated 53803 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:740: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build.xml:138: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build.xml:480: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2496: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/docs/changes/jiraVersionList.json Total time: 74 minutes 1 second Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 28 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/28/ All tests passed Build Log: [...truncated 52858 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/build.xml:750: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/build.xml:101: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/lucene/build.xml:138: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/lucene/build.xml:479: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/lucene/common-build.xml:2606: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java7/lucene/build/docs/changes/jiraVersionList.json Total time: 75 minutes 22 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 200 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/200/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.TestCryptoKeys.test Error Message: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:38675/solr within 1 ms Stack Trace: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:38675/solr within 1 ms at __randomizedtesting.SeedInfo.seed([5B3110C015FA646C:D3652F1ABB060994]:0) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:180) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:114) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:104) at org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:227) at org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:502) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.initCloud(AbstractFullDistribZkTestBase.java:268) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:330) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:990) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) 
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:38675/solr within 1 ms at
Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Congratulations, Tommaso!! Karl

On Wed, Jun 15, 2016 at 6:45 PM, Martin Gainty wrote:
> Buona Fortuna Tommaso!
>
> Martini
> __
>
> --
> From: luc...@mikemccandless.com
> Date: Wed, 15 Jun 2016 18:36:54 -0400
> Subject: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
> To: dev@lucene.apache.org
>
> Once a year the Lucene PMC rotates the PMC chair and Apache Vice President position.
>
> This year we have nominated and elected Tommaso Teofili as the chair, and today the board just approved it, so now it's official.
>
> Congratulations Tommaso!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
Re: Lucene/Solr 6.1.0
Jan, there seems to be consensus about updating the description. Would you like to give it a try?

On Tue, Jun 14, 2016 at 5:18 PM, Erick Erickson wrote:
> +1
>
> On Tue, Jun 14, 2016 at 7:24 AM, David Smiley wrote:
> > +1
> >
> > On Tue, Jun 14, 2016 at 4:55 AM Jan Høydahl wrote:
> >>
> >> - https://wiki.apache.org/solr/ReleaseNote61
> >>
> >> The Solr lead text in the announcement says:
> >>
> >> Solr is the popular, blazing fast, open source NoSQL search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly scalable, providing fault tolerant distributed search and indexing, and powers the search and navigation features of many of the world's largest internet sites.
> >>
> >> It may be worth considering flagging some of the newer features such as ParallelSQL, JDBC, CDCR or Security -- perhaps in place of some more obvious features like clustering or highlighting?
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Congratulations Tommaso!

On Thu, Jun 16, 2016 at 12:37 AM, Michael McCandless wrote:
> Once a year the Lucene PMC rotates the PMC chair and Apache Vice President position.
>
> This year we have nominated and elected Tommaso Teofili as the chair, and today the board just approved it, so now it's official.
>
> Congratulations Tommaso!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
[JENKINS] Lucene-Solr-Tests-6.x - Build # 272 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/272/ 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud: 1) Thread[id=2728, name=OverseerHdfsCoreFailoverThread-96077765416058894-127.0.0.1:45530_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud: 1) Thread[id=2728, name=OverseerHdfsCoreFailoverThread-96077765416058894-127.0.0.1:45530_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([9CBE136414B5BF3A]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=2728, name=OverseerHdfsCoreFailoverThread-96077765416058894-127.0.0.1:45530_solr-n_02, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] 
at java.lang.Throwable.fillInStackTrace(Native Method) at java.lang.Throwable.fillInStackTrace(Throwable.java:783) at java.lang.Throwable.(Throwable.java:265) at java.lang.Exception.(Exception.java:66) at java.lang.InterruptedException.(InterruptedException.java:67) at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=2728, name=OverseerHdfsCoreFailoverThread-96077765416058894-127.0.0.1:45530_solr-n_02, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at java.lang.Throwable.fillInStackTrace(Native Method) at java.lang.Throwable.fillInStackTrace(Throwable.java:783) at java.lang.Throwable.(Throwable.java:265) at java.lang.Exception.(Exception.java:66) at java.lang.InterruptedException.(InterruptedException.java:67) at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([9CBE136414B5BF3A]:0) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestBulkSchemaConcurrent Error Message: 1 thread leaked from SUITE scope at org.apache.solr.schema.TestBulkSchemaConcurrent: 1) Thread[id=702, name=httpUriRequest-129-thread-1-processing-x:collection1 r:core_node3 n:127.0.0.1:35591_quo%2Fap https:127.0.0.1:49901//quo//ap//collection1 s:shard2 c:collection1, state=RUNNABLE, group=TGRP-TestBulkSchemaConcurrent] at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at java.net.SocketInputStream.read(SocketInputStream.java:170) at java.net.SocketInputStream.read(SocketInputStream.java:141) at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) at 
sun.security.ssl.InputRecord.read(InputRecord.java:503) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387) at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543) at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446) at
[JENKINS] Lucene-Solr-6.1-Linux (32bit/jdk1.8.0_92) - Build # 41 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/41/ Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler Error Message: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] at __randomizedtesting.SeedInfo.seed([7FC8E61B0B7E2ECC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: ObjectTracker found 4 object(s) that were not released!!! [TransactionLog, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! 
[TransactionLog, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([7FC8E61B0B7E2ECC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
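Both suite-level failures above come from the test framework's object tracker: teardown asserts that every tracked resource (an NRTCachingDirectory in one suite; a TransactionLog, an executor, and two mock directories in the other) was released before the suite ended. A minimal, self-contained sketch of that pattern — the names and types below are illustrative stand-ins, not Solr's actual ObjectTracker API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the object-tracking idea behind these failures:
// each resource registers itself on creation and unregisters on release;
// suite teardown fails if anything is still registered.
public class ObjectTrackerSketch {
    static final List<String> LIVE = new ArrayList<>();

    // Returns a release callback; forgetting to run it is exactly the
    // kind of leak the Jenkins report flags.
    static Runnable track(String resourceName) {
        LIVE.add(resourceName);
        return () -> LIVE.remove(resourceName);
    }

    // Teardown-style check: true only if every tracked resource was released.
    static boolean allReleased() {
        return LIVE.isEmpty();
    }
}
```

In this sketch, a leak is simply a `track(...)` call whose returned callback never runs before `allReleased()` is checked.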
RE: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Good luck ("Buona Fortuna"), Tommaso! Martini From: luc...@mikemccandless.com Date: Wed, 15 Jun 2016 18:36:54 -0400 Subject: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili To: dev@lucene.apache.org Once a year the Lucene PMC rotates the PMC chair and Apache Vice President position. This year we have nominated and elected Tommaso Teofili as the chair, and today the board just approved it, so now it's official. Congratulations Tommaso! Mike McCandless http://blog.mikemccandless.com
Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Congrats Tommaso! -- Steve www.lucidworks.com > On Jun 15, 2016, at 6:36 PM, Michael McCandless> wrote: > > Once a year the Lucene PMC rotates the PMC chair and Apache Vice President > position. > > This year we have nominated and elected Tommaso Teofili as the chair, and > today the board just approved it, so now it's official. > > Congratulations Tommaso! > > Mike McCandless > > http://blog.mikemccandless.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili
Once a year the Lucene PMC rotates the PMC chair and Apache Vice President position. This year we have nominated and elected Tommaso Teofili as the chair, and today the board just approved it, so now it's official. Congratulations Tommaso! Mike McCandless http://blog.mikemccandless.com
[jira] [Resolved] (SOLR-8857) HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[ https://issues.apache.org/jira/browse/SOLR-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved SOLR-8857. -- Resolution: Fixed Fix Version/s: 5.5.2 5.6 > HdfsUpdateLog does not use configured or new default number of version > buckets and is hard coded to 256. > > > Key: SOLR-8857 > URL: https://issues.apache.org/jira/browse/SOLR-8857 > Project: Solr > Issue Type: Bug >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 5.6, 6.1, 5.5.2, master (7.0), 6.0.1 > > Attachments: SOLR-8857.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
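The shape of the fix, as a hedged sketch: instead of returning a hard-coded 256, the update log should honor an explicitly configured bucket count and otherwise fall back to the shared default. The class, method, and constant names below are illustrative, not Solr's actual code, and 65536 merely stands in for "the new default":

```java
// Hypothetical sketch of the SOLR-8857 fix: prefer configuration, then the
// shared default, never a hard-coded 256. Names and the default value are
// illustrative assumptions, not Solr's real API.
public class VersionBucketsSketch {
    static final int DEFAULT_NUM_VERSION_BUCKETS = 65536; // assumed default

    static int resolveNumVersionBuckets(Integer configured) {
        // Pre-fix behavior was effectively `return 256;` regardless of config.
        return (configured != null) ? configured : DEFAULT_NUM_VERSION_BUCKETS;
    }
}
```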
[jira] [Commented] (SOLR-8857) HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[ https://issues.apache.org/jira/browse/SOLR-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332726#comment-15332726 ] ASF subversion and git services commented on SOLR-8857: --- Commit 96deb63fbe868f250a3d477fc439bc665cb0af28 in lucene-solr's branch refs/heads/branch_5_5 from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96deb63 ] SOLR-8857: HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[jira] [Commented] (SOLR-8857) HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[ https://issues.apache.org/jira/browse/SOLR-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332727#comment-15332727 ] ASF subversion and git services commented on SOLR-8857: --- Commit b5b55dfb35fdf544da8af48c3e5935b776194a4e in lucene-solr's branch refs/heads/branch_5x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b5b55df ] SOLR-8857: HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[jira] [Commented] (SOLR-8857) HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[ https://issues.apache.org/jira/browse/SOLR-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332728#comment-15332728 ] ASF subversion and git services commented on SOLR-8857: --- Commit 37f4d73105a26bac51b3b56fd9e4a62d1d82cdbe in lucene-solr's branch refs/heads/branch_5x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=37f4d73 ] SOLR-8857: Remove misplaced CHANGES entry
[jira] [Reopened] (SOLR-8857) HdfsUpdateLog does not use configured or new default number of version buckets and is hard coded to 256.
[ https://issues.apache.org/jira/browse/SOLR-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe reopened SOLR-8857: -- Reopening to backport to 5.6 and 5.5.2.
[jira] [Resolved] (LUCENE-7231) Problem with NGramAnalyzer, PhraseQuery and Highlighter
[ https://issues.apache.org/jira/browse/LUCENE-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved LUCENE-7231. Resolution: Fixed Fix Version/s: 5.6, 5.5.2
> Problem with NGramAnalyzer, PhraseQuery and Highlighter
> Key: LUCENE-7231
> Project: Lucene - Core
> Issue Type: Bug
> Components: modules/highlighter
> Affects Versions: 5.4.1
> Reporter: Eva Popenda
> Assignee: Alan Woodward
> Fix For: 6.1, 5.5.2, 5.6, 6.0.1
> Attachments: LUCENE-7231.patch
>
> Using the Highlighter with N-GramAnalyzer and PhraseQuery and searching for a substring with length = N yields the following exception:
> {noformat}
> java.lang.IllegalArgumentException: Less than 2 subSpans.size():1
> at org.apache.lucene.search.spans.ConjunctionSpans.<init>(ConjunctionSpans.java:40)
> at org.apache.lucene.search.spans.NearSpansOrdered.<init>(NearSpansOrdered.java:56)
> at org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:232)
> at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extractWeightedSpanTerms(WeightedSpanTermExtractor.java:292)
> at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:137)
> at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:506)
> at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:219)
> at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:187)
> at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:196)
> {noformat}
> Below is a JUnit test reproducing this behavior. In case of searching for a string with more than N characters or using NGramPhraseQuery this problem doesn't occur.
> Why is it that more than 1 subSpans are required?
> {code:java}
> public class HighlighterTest {
>     @Rule
>     public final ExpectedException exception = ExpectedException.none();
>     @Test
>     public void testHighlighterWithPhraseQueryThrowsException() throws IOException, InvalidTokenOffsetsException {
>         final Analyzer analyzer = new NGramAnalyzer(4);
>         final String fieldName = "substring";
>         final List<BytesRef> list = new ArrayList<>();
>         list.add(new BytesRef("uchu"));
>         final PhraseQuery query = new PhraseQuery(fieldName, list.toArray(new BytesRef[list.size()]));
>         final QueryScorer fragmentScorer = new QueryScorer(query, fieldName);
>         final SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
>         exception.expect(IllegalArgumentException.class);
>         exception.expectMessage("Less than 2 subSpans.size():1");
>         final Highlighter highlighter = new Highlighter(formatter, TextEncoder.NONE.getEncoder(), fragmentScorer);
>         highlighter.setTextFragmenter(new SimpleFragmenter(100));
>         final String fragment = highlighter.getBestFragment(analyzer, fieldName, "Buchung");
>         assertEquals("Buchung", fragment);
>     }
>     public final class NGramAnalyzer extends Analyzer {
>         private final int minNGram;
>         public NGramAnalyzer(final int minNGram) {
>             super();
>             this.minNGram = minNGram;
>         }
>         @Override
>         protected TokenStreamComponents createComponents(final String fieldName) {
>             final Tokenizer source = new NGramTokenizer(minNGram, minNGram);
>             return new TokenStreamComponents(source);
>         }
>     }
> }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
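The commit message for this issue ("WeightedSpanTermExtractor correctly deals with single-term PhraseQuery") suggests the missing guard: a one-term phrase must not be converted into a span-near query, which requires at least two sub-spans — hence the "Less than 2 subSpans.size():1" failure. A simplified, self-contained sketch of that branch, with strings standing in for Lucene's actual query types:

```java
// Simplified stand-in for the LUCENE-7231 guard: a single-term PhraseQuery
// has no "near" constraint to enforce, so extract it as a plain term span
// instead of building a SpanNearQuery (which requires >= 2 sub-spans).
// This mimics the shape of the fix; it is not Lucene's real code.
public class SingleTermPhraseSketch {
    static String toSpanQuery(String field, String[] phraseTerms) {
        if (phraseTerms.length == 1) {
            return "SpanTermQuery(" + field + ":" + phraseTerms[0] + ")";
        }
        return "SpanNearQuery(" + field + ": " + String.join(" ", phraseTerms) + ")";
    }
}
```

Under this guard, the reporter's single 4-gram "uchu" would take the term-span branch and never reach the two-sub-span precondition.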
[jira] [Commented] (LUCENE-7231) Problem with NGramAnalyzer, PhraseQuery and Highlighter
[ https://issues.apache.org/jira/browse/LUCENE-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332687#comment-15332687 ] ASF subversion and git services commented on LUCENE-7231: - Commit 90e823ed37edcce3984296ba6f16654d47f65d64 in lucene-solr's branch refs/heads/branch_5_5 from [~romseygeek] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=90e823e ] LUCENE-7231: WeightedSpanTermExtractor correctly deals with single-term PhraseQuery
[jira] [Commented] (LUCENE-7231) Problem with NGramAnalyzer, PhraseQuery and Highlighter
[ https://issues.apache.org/jira/browse/LUCENE-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332688#comment-15332688 ] ASF subversion and git services commented on LUCENE-7231: - Commit c92703d3875bf8a47ff828d5910f78772e3841af in lucene-solr's branch refs/heads/branch_5x from [~romseygeek] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c92703d ] LUCENE-7231: WeightedSpanTermExtractor correctly deals with single-term PhraseQuery
[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 29 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/29/ 1 tests failed. FAILED: org.apache.solr.cloud.TestConfigSetsAPIExclusivity.testAPIExclusivity Error Message: Unexpected exception: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46967/solr: Error copying nodes from zookeeper path /configs/baseConfigSet1 to /configs/configSet1 expected:<0> but was:<1> Stack Trace: java.lang.AssertionError: Unexpected exception: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46967/solr: Error copying nodes from zookeeper path /configs/baseConfigSet1 to /configs/configSet1 expected:<0> but was:<1> at __randomizedtesting.SeedInfo.seed([947AFF5CDCA7D4CB:E90FE6A785036D82]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.TestConfigSetsAPIExclusivity.testAPIExclusivity(TestConfigSetsAPIExclusivity.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[jira] [Reopened] (LUCENE-7231) Problem with NGramAnalyzer, PhraseQuery and Highlighter
[ https://issues.apache.org/jira/browse/LUCENE-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe reopened LUCENE-7231: Reopening to backport to 5.6 and 5.5.2.
[jira] [Resolved] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)
[ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved LUCENE-7284. Resolution: Fixed Fix Version/s: 5.6 5.5.2 > UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym > Query Expansion) > - > > Key: LUCENE-7284 > URL: https://issues.apache.org/jira/browse/LUCENE-7284 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Reporter: Daniel Bigham >Assignee: Alan Woodward >Priority: Minor > Fix For: 6.1, 5.5.2, 5.6, 6.0.1 > > Attachments: LUCENE-7284.patch > > > I am trying to support synonyms on the query side by doing > query expansion. > For example, the query "open webpage" can be expanded if the following > things are synonyms: > "open" | "go to" > This becomes the following: (I'm using both the stop word filter and the > stemming filter) > {code} > spanNear( > [ > spanOr([Title:open, Title:go]), > Title:webpag > ], > 0, > true > ) > {code} > Notice that "go to" became just "go", because apparently "to" is removed > by the stop word filter. > Interestingly, if you turn "go to webpage" into a phrase, you get "go ? > webpage", but if you turn "go to" into a phrase, you just get "go", > because apparently a trailing stop word in a PhraseQuery gets dropped. > (there would actually be no way to represent the gap currently because > it represents gaps implicitly via the position of the phrase tokens, and > if there is no second token, there's no way to implicitly indicate that > there is a gap there) > The above query then fails to match "go to webpage", because "go to > webpage" in the index tokenizes as "go _ webpage", and the query, > because it lost its gap, tried to only match "go webpage". 
> To try and work around that, I represent "go to" not as a phrase, but as > a SpanNearQuery, like this: > {code} > spanNear( > [ > spanOr( > [ > Title:open, > spanNear([Title:go, SpanGap(:1)], 0, true), > ] > ), > Title:webpag > ], > 0, > true > ) > {code} > However, when I run that query, I get the following: > {code} > A Java exception occurred: java.lang.UnsupportedOperationException > at > org.apache.lucene.search.spans.SpanNearQuery$GapSpans.positionsCost(SpanNearQuery.java:398) > at > org.apache.lucene.search.spans.ConjunctionSpans.asTwoPhaseIterator(ConjunctionSpans.java:96) > at > org.apache.lucene.search.spans.NearSpansOrdered.asTwoPhaseIterator(NearSpansOrdered.java:45) > at > org.apache.lucene.search.spans.ScoringWrapperSpans.asTwoPhaseIterator(ScoringWrapperSpans.java:88) > at > org.apache.lucene.search.ConjunctionDISI.addSpans(ConjunctionDISI.java:104) > at > org.apache.lucene.search.ConjunctionDISI.intersectSpans(ConjunctionDISI.java:82) > at > org.apache.lucene.search.spans.ConjunctionSpans.(ConjunctionSpans.java:41) > at > org.apache.lucene.search.spans.NearSpansOrdered.(NearSpansOrdered.java:54) > at > org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:232) > at > org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:134) > at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:38) > at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135) > {code} > ... and when I look up that GapSpans class in SpanNearQuery.java, I see: > {code} > @Override > public float positionsCost() { >throw new UnsupportedOperationException(); > } > {code} > I asked this question on the mailing list on May 14 and was directed to > submit a bug here. > This issue is of relatively high priority for us, since this represents the > most promising technique we have for supporting synonyms on top of Lucene. 
> (since the SynonymFilter suffers serious issues wrt multi-word synonyms) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
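For comparison, SpanNearQuery's builder exposes this gap support directly, so the SpanGap does not have to be constructed by hand. A minimal sketch of the intended query, assuming a Lucene 5.5+ classpath (the Title field and gap width of 1 mirror the example above; this is a fragment, not a complete class):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanOrQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

// "open" | "go to" (with "to" removed by the stop filter), followed by "webpag".
// The one-position gap stands in for the stopped-out "to".
SpanNearQuery goWithGap = new SpanNearQuery.Builder("Title", true) // ordered
    .addClause(new SpanTermQuery(new Term("Title", "go")))
    .addGap(1)
    .build();

SpanQuery openOrGoTo = new SpanOrQuery(
    new SpanTermQuery(new Term("Title", "open")), goWithGap);

SpanNearQuery query = new SpanNearQuery.Builder("Title", true)
    .addClause(openOrGoTo)
    .addClause(new SpanTermQuery(new Term("Title", "webpag")))
    .build();
{code}

With the positionsCost() fix in place, this query matches "go to webpage" as indexed ("go _ webpage") without the UnsupportedOperationException above.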
[jira] [Commented] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)
[ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332641#comment-15332641 ] ASF subversion and git services commented on LUCENE-7284: - Commit fa9940b3e3ab9955a26dfe30839d591b7703a8c4 in lucene-solr's branch refs/heads/branch_5x from [~romseygeek] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fa9940b ] LUCENE-7284: GapSpans needs to implement positionsCost()
[jira] [Commented] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)
[ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332640#comment-15332640 ] ASF subversion and git services commented on LUCENE-7284: - Commit 3e5832291b807a9b9b6271d8fd990678f27a83c4 in lucene-solr's branch refs/heads/branch_5_5 from [~romseygeek] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e58322 ] LUCENE-7284: GapSpans needs to implement positionsCost()
[jira] [Reopened] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)
[ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe reopened LUCENE-7284: Reopening to backport to 5.6 and 5.5.2.
[jira] [Resolved] (LUCENE-7219) (Point|LegacyNumeric)RangeQuery builders to match queries' (lower|upper)Term optionality logic
[ https://issues.apache.org/jira/browse/LUCENE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved LUCENE-7219. Resolution: Fixed Fix Version/s: (was: 5.x) 5.6 5.5.2 > (Point|LegacyNumeric)RangeQuery builders to match queries' (lower|upper)Term > optionality logic > -- > > Key: LUCENE-7219 > URL: https://issues.apache.org/jira/browse/LUCENE-7219 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 6.1, 5.5.2, master (7.0), 5.6, 6.0.1 > > Attachments: LUCENE-7219.patch, LUCENE-7219.patch > > > Currently the {{(Point|LegacyNumeric)RangeQuery}} queries themselves support > {{(lower|upper)Term}} optionality e.g. the lowerTerm could be omitted but the > {{(Point|LegacyNumeric)RangeQueryBuilder}} builders mandate > {{(lower|upper)Term}} attributes. This mismatch seems unintended. > Proposed patch for ...QueryBuilder logic to match ...Query logic to follow.
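The optionality being aligned here is the one the query classes themselves already provide: either endpoint of a numeric range query may be null to express a half-open range, which the XML builders previously rejected. A sketch against the 5.x NumericRangeQuery API (field name and bounds are illustrative):

{code}
import org.apache.lucene.search.NumericRangeQuery;

// Either endpoint may be null for a half-open range at the query level.
NumericRangeQuery<Integer> upTo100 =
    NumericRangeQuery.newIntRange("price", null, 100, true, true);  // price <= 100
NumericRangeQuery<Integer> from10 =
    NumericRangeQuery.newIntRange("price", 10, null, true, true);   // price >= 10
{code}

The patched XML query builders accept the corresponding omitted lowerTerm/upperTerm attributes instead of mandating both.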
[jira] [Commented] (LUCENE-7219) (Point|LegacyNumeric)RangeQuery builders to match queries' (lower|upper)Term optionality logic
[ https://issues.apache.org/jira/browse/LUCENE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332625#comment-15332625 ] ASF subversion and git services commented on LUCENE-7219: - Commit 8eeb5858d407347099dfe360d01682669a27b02f in lucene-solr's branch refs/heads/branch_5_5 from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8eeb585 ] LUCENE-7219: Make queryparser/xml NumericRange(Query|Filter) builders match the underlying (query|filter)'s (lower|upper)Term optionality logic. (Kaneshanathan Srivisagan, Christine Poerschke)
[jira] [Commented] (LUCENE-7219) (Point|LegacyNumeric)RangeQuery builders to match queries' (lower|upper)Term optionality logic
[ https://issues.apache.org/jira/browse/LUCENE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332626#comment-15332626 ] ASF subversion and git services commented on LUCENE-7219: - Commit f4362098f7290104205403b86e901c767c0c4d22 in lucene-solr's branch refs/heads/branch_5_5 from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f436209 ] LUCENE-7219: Add CHANGES entry
[jira] [Reopened] (LUCENE-7219) (Point|LegacyNumeric)RangeQuery builders to match queries' (lower|upper)Term optionality logic
[ https://issues.apache.org/jira/browse/LUCENE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe reopened LUCENE-7219: Reopening to backport to 5.5.2.
[jira] [Resolved] (LUCENE-7279) AIOOBE from JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved LUCENE-7279. Resolution: Fixed Fix Version/s: 5.6 5.5.2 > AIOOBE from JapaneseTokenizer > - > > Key: LUCENE-7279 > URL: https://issues.apache.org/jira/browse/LUCENE-7279 > Project: Lucene - Core > Issue Type: Bug > Components: modules/analysis >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: 5.5.2, master (7.0), 5.6, 6.0.1 > > Attachments: LUCENE-7279.patch > > > On certain Japanese input strings you can hit this: > {noformat} > java.lang.ArrayIndexOutOfBoundsException: -1 > at > __randomizedtesting.SeedInfo.seed([C6752A567B924B1:2B195610610ED60]:0) > at > org.apache.lucene.analysis.ja.JapaneseTokenizer.backtrace(JapaneseTokenizer.java:1607) > at > org.apache.lucene.analysis.ja.JapaneseTokenizer.parse(JapaneseTokenizer.java:902) > at > org.apache.lucene.analysis.ja.JapaneseTokenizer.incrementToken(JapaneseTokenizer.java:479) > at > org.apache.lucene.analysis.ja.TestJapaneseTokenizer.testBigDocument(TestJapaneseTokenizer.java:837) > {noformat} > I have a patch with a test case and fix.
[jira] [Commented] (LUCENE-7279) AIOOBE from JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332606#comment-15332606 ] ASF subversion and git services commented on LUCENE-7279: - Commit bcf1eb7d24810eae7123c89e079823ce56b9dd25 in lucene-solr's branch refs/heads/branch_5x from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bcf1eb7 ] LUCENE-7279: don't throw AIOOBE on some valid inputs
[jira] [Reopened] (LUCENE-7279) AIOOBE from JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe reopened LUCENE-7279: Reopening to backport to 5.6 and 5.5.2
[jira] [Commented] (LUCENE-7279) AIOOBE from JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332605#comment-15332605 ] ASF subversion and git services commented on LUCENE-7279: - Commit 2a3492574b470ca49666f53b66ffa6394a9a78d2 in lucene-solr's branch refs/heads/branch_5_5 from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a34925 ] LUCENE-7279: add CHANGES entry
[jira] [Commented] (LUCENE-7279) AIOOBE from JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332604#comment-15332604 ] ASF subversion and git services commented on LUCENE-7279: - Commit 4a824d62e280f10ad58b43b20d6fe593cabcfd00 in lucene-solr's branch refs/heads/branch_5_5 from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a824d6 ] LUCENE-7279: don't throw AIOOBE on some valid inputs
[jira] [Resolved] (LUCENE-7187) Block join queries' weight impl should implement extractTerms(...)
[ https://issues.apache.org/jira/browse/LUCENE-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved LUCENE-7187. Resolution: Fixed Fix Version/s: 5.5.2 5.6 > Block join queries' weight impl should implement extractTerms(...) > -- > > Key: LUCENE-7187 > URL: https://issues.apache.org/jira/browse/LUCENE-7187 > Project: Lucene - Core > Issue Type: Bug >Reporter: Martijn van Groningen >Priority: Minor > Fix For: 5.6, 6.1, 5.5.2, 6.0.1 > > Attachments: LUCENE_7187.patch > > > In the case distributed document frequencies need to be computed for block > join queries, the child query is ignored.
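The fix follows the usual delegation pattern for wrapping weights: the block join weight's extractTerms(...) forwards to the weight of the wrapped child query rather than silently ignoring it, so distributed term statistics see the child query's terms. A sketch under the 5.x Weight API (childWeight is an assumed field on the enclosing weight):

{code}
import java.util.Set;
import org.apache.lucene.index.Term;

// Inside the block join query's Weight implementation, which wraps the
// child query's Weight in a field named childWeight (illustrative name):
@Override
public void extractTerms(Set<Term> terms) {
  childWeight.extractTerms(terms); // delegate instead of leaving the set empty
}
{code}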
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 209 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/209/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 3 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:65374 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:65374 at __randomizedtesting.SeedInfo.seed([83754146B95FE8AA:B217E9C17A38552]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:601) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:399) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:897) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-7187) Block join queries' weight impl should implement extractTerms(...)
[ https://issues.apache.org/jira/browse/LUCENE-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332577#comment-15332577 ] ASF subversion and git services commented on LUCENE-7187: - Commit 93fcdec815e5f22572b34c798ad19c21872daad8 in lucene-solr's branch refs/heads/branch_5_5 from [~martijn] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=93fcdec ] LUCENE-7187: Block join queries' Weight#extractTerms(...) implementations should delegate to the wrapped weight.
[jira] [Commented] (LUCENE-7187) Block join queries' weight impl should implement extractTerms(...)
[ https://issues.apache.org/jira/browse/LUCENE-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332579#comment-15332579 ]

ASF subversion and git services commented on LUCENE-7187:

Commit 1c88077132cf04710b22aea150b5a002763ceb1c in lucene-solr's branch refs/heads/branch_5x from [~martijn]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1c88077 ]
LUCENE-7187: Block join queries' Weight#extractTerms(...) implementations should delegate to the wrapped weight.
[jira] [Reopened] (LUCENE-7187) Block join queries' weight impl should implement extractTerms(...)
[ https://issues.apache.org/jira/browse/LUCENE-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Rowe reopened LUCENE-7187:

Reopening to backport to 5.6 & 5.5.2.
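The fix referenced in the commits above boils down to a one-line delegation. A hedged, self-contained sketch of the idea, using plain-Java stand-ins rather than Lucene's actual `Weight` class:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal stand-in for Lucene's Weight abstraction; NOT the real API.
interface Weight {
    void extractTerms(Set<String> terms);
}

// Sketch of the LUCENE-7187 fix: a wrapping weight (as in the block-join
// queries) must delegate extractTerms(...) to the weight it wraps.
// Before the fix the child query's terms were ignored, so distributed
// document-frequency computation saw an empty term set.
class BlockJoinWeightSketch implements Weight {
    private final Weight childWeight;

    BlockJoinWeightSketch(Weight childWeight) {
        this.childWeight = childWeight;
    }

    @Override
    public void extractTerms(Set<String> terms) {
        childWeight.extractTerms(terms); // the essential one-line delegation
    }
}

public class ExtractTermsDemo {
    public static void main(String[] args) {
        Weight child = terms -> terms.add("body:foo");
        Set<String> terms = new HashSet<>();
        new BlockJoinWeightSketch(child).extractTerms(terms);
        System.out.println(terms); // prints [body:foo]
    }
}
```

Without the delegating override, `terms` would stay empty, which matches the symptom described in the issue.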
[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of its last published state if all replicas participate in the election process.
[ https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332488#comment-15332488 ]

Mike Drob commented on SOLR-7065:

Recently saw something that might be this. I started trying to bring your patch up to current master, but ran into issues; some of the changes that you had in this patch got committed as part of SOLR-7033. I also didn't understand the advantage of returning an int instead of a boolean for sync. It looks like you used it to provide a ternary indicator of error, no sync necessary, or sync completed? That code changed a bunch with the fingerprinting from SOLR-8586. A specific case that doesn't make sense was

{code:title=SyncStrategy.java}
 if (SKIP_AUTO_RECOVERY) {
-  return true;
+  return -1;
 }
{code}

Should this be {{return 0}}? I see a lot of design discussion in this JIRA prior, but not a lot of consensus. What do you think is the easiest way forward from here, [~markrmil...@gmail.com]?

> Let a replica become the leader regardless of its last published state if
> all replicas participate in the election process.
>
> Key: SOLR-7065
> URL: https://issues.apache.org/jira/browse/SOLR-7065
> Project: Solr
> Issue Type: Improvement
> Reporter: Mark Miller
> Assignee: Mark Miller
>
> Attachments: SOLR-7065.patch, SOLR-7065.patch
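The "ternary indicator" question raised above is easier to audit when the three outcomes are named rather than encoded as ints. A hypothetical sketch (these names and the -1/0/1 mapping are assumptions for illustration, not Solr's actual code):

```java
// Hypothetical illustration of the design question above: an int return
// encodes error / no-sync-necessary / sync-completed implicitly; an enum
// names the outcomes. One POSSIBLE mapping of the legacy codes is shown;
// the actual assignment in the patch is exactly what the comment above
// is asking about.
enum SyncOutcome { ERROR, SYNC_NOT_NEEDED, SYNC_COMPLETED }

public class SyncOutcomeDemo {
    // Maps assumed legacy int codes onto explicit outcomes.
    static SyncOutcome fromLegacyCode(int code) {
        switch (code) {
            case -1: return SyncOutcome.ERROR;
            case 0:  return SyncOutcome.SYNC_NOT_NEEDED;
            case 1:  return SyncOutcome.SYNC_COMPLETED;
            default: throw new IllegalArgumentException("unknown code: " + code);
        }
    }

    public static void main(String[] args) {
        // Under this mapping, "return -1" vs "return 0" for
        // SKIP_AUTO_RECOVERY is the difference between reporting an error
        // and reporting that no sync was necessary.
        System.out.println(fromLegacyCode(-1)); // ERROR
        System.out.println(fromLegacyCode(0));  // SYNC_NOT_NEEDED
    }
}
```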
[jira] [Updated] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Gadre updated SOLR-7374:
Attachment: SOLR-7374.patch

[~markrmil...@gmail.com] Here is an updated patch.
- Fixed trailing white-spaces
- Removed unused and deprecated constructor from RestoreCore
- Renamed constant referring to "location" property in BackupRepository interface.

All the other review comments were already incorporated in your earlier patch.

> Backup/Restore should provide a param for specifying the directory
> implementation it should use
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
> Issue Type: Bug
> Reporter: Varun Thacker
> Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
> Currently when we create a backup we use SimpleFSDirectory to write the backup indexes. Similarly during a restore we open the index using FSDirectory.open.
> We should provide a param called {{directoryImpl}} or {{type}} which will be used to specify the Directory implementation to backup the index.
> Likewise during a restore you would need to specify the directory impl which was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr on HDFS there is no way to use the backup/restore functionality as the directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on HDFS etc.
[JENKINS] Lucene-Solr-Tests-master - Build # 1214 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1214/

2 tests failed.

FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; [core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:51838/eu","node_name":"127.0.0.1:51838_eu","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
    "range":"8000-7fff",
    "state":"active",
    "replicas":{
      "core_node1":{
        "core":"c8n_1x3_lf_shard1_replica2",
        "base_url":"http://127.0.0.1:43082/eu",
        "node_name":"127.0.0.1:43082_eu",
        "state":"down"},
      "core_node2":{
        "state":"down",
        "base_url":"http://127.0.0.1:44208/eu",
        "core":"c8n_1x3_lf_shard1_replica1",
        "node_name":"127.0.0.1:44208_eu"},
      "core_node3":{
        "core":"c8n_1x3_lf_shard1_replica3",
        "base_url":"http://127.0.0.1:51838/eu",
        "node_name":"127.0.0.1:51838_eu",
        "state":"active",
        "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1; [core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:51838/eu","node_name":"127.0.0.1:51838_eu","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf)={"replicationFactor":"3", "shards":{"shard1":{"range":"8000-7fff", "state":"active", "replicas":{"core_node1":{"core":"c8n_1x3_lf_shard1_replica2", "base_url":"http://127.0.0.1:43082/eu", "node_name":"127.0.0.1:43082_eu", "state":"down"}, "core_node2":{"state":"down", "base_url":"http://127.0.0.1:44208/eu", "core":"c8n_1x3_lf_shard1_replica1", "node_name":"127.0.0.1:44208_eu"}, "core_node3":{"core":"c8n_1x3_lf_shard1_replica3", "base_url":"http://127.0.0.1:51838/eu", "node_name":"127.0.0.1:51838_eu", "state":"active", "leader":"true"}}}}, "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"}
at __randomizedtesting.SeedInfo.seed([9302E203687B66A4:1B56DDD9C6870B5C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at
[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332291#comment-15332291 ]

Hrishikesh Gadre commented on SOLR-7374:

[~varunthacker] [~markrmil...@gmail.com] Thanks for the comments!

bq. I think we need to deal with the "location" param better. Before this patch we used to read location as a query param. If the query param is empty then we read it from the cluster property. With this patch we are adding the ability to specify "location" in the solr.xml file but it will never be used? CollectionsHandler will bail out early today.

This is a partial patch handling ONLY core level changes. The collection level changes are being captured in the patch for SOLR-9055. I did this to keep the patch relatively short and easier to review. In the patch for SOLR-9055 I have changed the CollectionsHandler implementation to read the default location from solr.xml (instead of the cluster property). Since this core level operation is "internal", technically we don't have to handle the case of a missing "location" param in this patch (i.e. we can keep the original behavior). I think I made this change to simplify unit testing.

bq. One approach would be to deprecate the usage of cluster prop and look at query param followed by solr.xml? Looking at three places seems messy.

bq. [Mark] It is a bit odd to have some config in solr.xml and then default location as a cluster prop, but much nicer to be able to easily change the default location on the fly. solr.xml is a pain to change and requires a restart.

I agree that the cluster property approach is more convenient than solr.xml. But since we allow users to configure multiple repositories in solr.xml, we can not really use the current cluster property as is. This is because a user may want to specify a different location for different file-systems (or repositories). Hence at minimum we need one cluster property per repository configuration (e.g. the name could be -location). But based on my understanding, the CLUSTERPROP API implementation requires fixed (or well-known) property names:
https://github.com/apache/lucene-solr/blob/651499c82df482b493b0ed166c2ab7196af0a794/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterProperties.java#L90
We may have to relax this restriction for this work. On the other hand, specifying the default location in solr.xml is not so bad, since a user can always specify a location parameter to avoid restarting the Solr cluster. Thoughts?

bq. Can we reuse the "location" string with (BackupRepository.DEFAULT_LOCATION_PROPERTY) on line 871/925 of CoreAdminOperation? Let's fix it in CollectionsHandler and OverseerCollectionMessageHandler as well?

Let me fix the CoreAdminOperation in this patch. I will defer the collection level changes to SOLR-9055.

bq. In RestoreCore do we need to deprecate the old RestoreCore ctor? Any reason why we can't remove it directly?

The deprecated constructor is used by the ReplicationHandler. The new constructor expects the BackupRepository reference, which can be obtained only via CoreContainer. I couldn't find a way to get hold of CoreContainer in ReplicationHandler, hence I didn't remove this constructor.

bq. repo is the key used to specify the implementation. In solr.xml the tag is called repository. Should we just use repository throughout?

Sure, that makes sense. Let me fix this.
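The precedence being debated in this thread (explicit request param, then cluster property, then a per-repository default from solr.xml) can be sketched as plain lookup code. Key names such as `default.backup.location` are illustrative assumptions, not Solr's actual configuration keys:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the "location" resolution order discussed above:
// an explicit request parameter wins, then the cluster property, then
// the per-repository default configured in solr.xml. All key names
// here are hypothetical.
public class BackupLocationResolver {
    static String resolveLocation(Map<String, String> requestParams,
                                  Map<String, String> clusterProps,
                                  Map<String, String> repositoryConfig) {
        String location = requestParams.get("location");
        if (location == null) location = clusterProps.get("location");
        if (location == null) location = repositoryConfig.get("default.backup.location");
        if (location == null) {
            throw new IllegalArgumentException(
                "'location' was not given as a query parameter and no default is configured");
        }
        return location;
    }

    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        Map<String, String> cluster = new HashMap<>();
        Map<String, String> repo = new HashMap<>();
        repo.put("default.backup.location", "hdfs://nn:8020/backups");
        // No request param and no cluster property: fall through to the
        // solr.xml repository default.
        System.out.println(resolveLocation(request, cluster, repo)); // hdfs://nn:8020/backups
    }
}
```

The design tension in the thread is exactly which of the last two tiers should exist at all, and whether the cluster-property tier needs one key per repository.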
[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 40 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/40/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Jun 16 04:18:02 AEST 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Thu Jun 16 04:18:02 AEST 2016
at __randomizedtesting.SeedInfo.seed([4D182744B582250E:96B32782B0AA4CBD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 11565 lines...]
[junit4] Suite:
[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr
[ https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332212#comment-15332212 ]

Erick Erickson commented on SOLR-9200:

Greg: What's the status of this one? Are you actively working on it or is it on the back burner?

> Add Delegation Token Support to Solr
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
> Issue Type: New Feature
> Components: security
> Reporter: Gregory Chanan
> Assignee: Gregory Chanan
>
> SOLR-7468 added support for kerberos authentication via the hadoop authentication filter. Hadoop also has support for an authentication filter that supports delegation tokens, which allow authenticated users the ability to grab/renew/delete a token that can be used to bypass the normal authentication path for a time. This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access to the user's kerberos credentials. Instead, the job runner can grab a delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more privileged user can request a delegation token that can be passed to the less privileged user.
> Note to self:
> In https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636 I made the following comment which I need to investigate further, since I don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin moving forward (I understand this is more a generic auth question than kerberos specific). For example, in the latest version of the filter we are using at Cloudera, we play around with the ServletContext in order to pass information around (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106). Is there any way we can get the actual ServletContext in a plugin?{quote}
[jira] [Created] (SOLR-9212) Enable FastVectorHighlighter to Work on MultiPhraseQuery
Esther Quansah created SOLR-9212: Summary: Enable FastVectorHighlighter to Work on MultiPhraseQuery Key: SOLR-9212 URL: https://issues.apache.org/jira/browse/SOLR-9212 Project: Solr Issue Type: Bug Components: highlighter Affects Versions: 5.3 Environment: Linux, OS X, Windows Reporter: Esther Quansah FastVectorHighlighter will not highlight on MultiPhraseQuery - it silently skips the query and returns results without highlights. Example: I have a synonyms.txt file containing break,breaks,broke,brake. If I search for "brake vehicle", the query parses to a MultiPhraseQuery with brake vehicle, break vehicle, breaks vehicle, broke vehicle as possible matches. Highlighting should occur on all of those matches; currently there are no highlighting results at all. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
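[Editorial note] As background for the report above, a minimal pure-Java sketch (not Solr/Lucene code; the synonym map and names are illustrative) of how synonym expansion turns the phrase "brake vehicle" into the per-position term lists that a MultiPhraseQuery carries - any listed term may match at its position, which is what the highlighter would need to cover:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: models how a synonym filter expands the phrase
// "brake vehicle" into the position-wise term lists a MultiPhraseQuery holds.
public class SynonymExpansionSketch {
    // Hypothetical synonym map mirroring the synonyms.txt from the report.
    static final Map<String, List<String>> SYNONYMS =
        Map.of("brake", List.of("brake", "break", "breaks", "broke"));

    static List<List<String>> expand(String phrase) {
        List<List<String>> positions = new ArrayList<>();
        for (String token : phrase.toLowerCase().split("\\s+")) {
            // Each position holds every term that may match there.
            positions.add(SYNONYMS.getOrDefault(token, List.of(token)));
        }
        return positions;
    }

    public static void main(String[] args) {
        // Four phrase variants share position 0; position 1 is fixed.
        System.out.println(expand("brake vehicle"));
    }
}
```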
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 250 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/250/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'CY val' for path 'params/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "a":"A val", "b":"B val", "wt":"json", "useParams":""}, "context":{ "webapp":"", "path":"/dump1", "httpMethod":"GET"}}, from server: https://127.0.0.1:56150/collection1 Stack Trace: java.lang.AssertionError: Could not get expected value 'CY val' for path 'params/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "a":"A val", "b":"B val", "wt":"json", "useParams":""}, "context":{ "webapp":"", "path":"/dump1", "httpMethod":"GET"}}, from server: https://127.0.0.1:56150/collection1 at __randomizedtesting.SeedInfo.seed([55EAF178FFA0D4C:8D0A90CD210660B4]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:171) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332029#comment-15332029 ] Mark Miller commented on SOLR-7374: --- bq. deprecate the usage of cluster prop and look at query param followed by solr.xml It is a bit odd to have some config in solr.xml and then default location as a cluster prop, but much nicer to be able to easily change the default location on the fly. solr.xml is a pain to change and requires a restart. > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch, SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
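[Editorial note] The resolution order discussed above (explicit query param, then the cluster prop Mark prefers for on-the-fly changes, then a solr.xml default) can be sketched in plain Java. The method name and the exact precedence are illustrative assumptions; the thread had not settled on a final ordering:

```java
// Illustrative sketch of a possible "location" resolution order for
// backup/restore; NOT Solr's actual implementation. An explicit per-request
// query parameter wins, then the cluster property, then the solr.xml default.
public class BackupLocationSketch {
    static String resolveLocation(String queryParam, String clusterProp, String solrXmlDefault) {
        if (queryParam != null) return queryParam;   // per-request override
        if (clusterProp != null) return clusterProp; // changeable on the fly
        return solrXmlDefault;                       // static default, needs restart
    }

    public static void main(String[] args) {
        // Query param takes precedence over both defaults.
        System.out.println(resolveLocation("hdfs://nn/backups", "s3://bucket/backups", "/var/solr/backups"));
    }
}
```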
[jira] [Updated] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-7374: -- Attachment: (was: SOLR-7374.patch) > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch, SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-7374: -- Attachment: SOLR-7374.patch > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch, SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-7374: -- Attachment: SOLR-7374.patch New patch. > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch, SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331958#comment-15331958 ] Mark Miller commented on SOLR-7374: --- bq. "location" in the solr.xml The baseLocation in the test solr.xml files just looks like dev cruft to me. > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.1-Windows (32bit/jdk1.8.0_92) - Build # 13 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Windows/13/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud Error Message: 2 threads leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=8563, name=Thread-2896, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:912) at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2510) at org.apache.solr.core.SolrCore$$Lambda$62/5403071.run(Unknown Source) at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2408)2) Thread[id=7744, name=Thread-2747, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:912) at 
org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2510) at org.apache.solr.core.SolrCore$$Lambda$62/5403071.run(Unknown Source) at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2408) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=8563, name=Thread-2896, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:912) at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2510) at org.apache.solr.core.SolrCore$$Lambda$62/5403071.run(Unknown Source) at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2408) 2) Thread[id=7744, name=Thread-2747, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:912) at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2510) at org.apache.solr.core.SolrCore$$Lambda$62/5403071.run(Unknown Source) at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2408) at __randomizedtesting.SeedInfo.seed([8BA9A25AF74EB84D]:0) Build Log: [...truncated 11458 lines...] [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud [junit4] 2> Creating dataDir:
[jira] [Commented] (SOLR-9195) UpdateRequestProcessorChain.getReqProcessors tweaks
[ https://issues.apache.org/jira/browse/SOLR-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331933#comment-15331933 ] ASF subversion and git services commented on SOLR-9195: --- Commit f9521549e0dc45c9298dfbebb59b5aaef21e1670 in lucene-solr's branch refs/heads/branch_6x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f952154 ] SOLR-9195: remove unnecessary allocation and null check in UpdateRequestProcessorChain.getReqProcessors > UpdateRequestProcessorChain.getReqProcessors tweaks > --- > > Key: SOLR-9195 > URL: https://issues.apache.org/jira/browse/SOLR-9195 > Project: Solr > Issue Type: Task > Reporter: Christine Poerschke > Assignee: Christine Poerschke > Priority: Minor > Attachments: SOLR-9194.patch > > > Remove unused local ArrayList allocation and unnecessary if-null check. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331898#comment-15331898 ] Mark Miller commented on SOLR-7374: --- I have the following changes already in my local copy: * fixed formatting * use BackupRepository.DEFAULT_LOCATION_PROPERTY Let me take a look at the location param in solr.xml issue. > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331892#comment-15331892 ] Varun Thacker commented on SOLR-7374: - Hi Hrishikesh, the patch is looking good! Here are my two main concerns:

I think we need to deal with the "location" param better. Before this patch we read location as a query param; if the query param is empty we read it from the cluster property. With this patch we add the ability to specify "location" in the solr.xml file, but it will never be used, because CollectionsHandler bails out early today:
{code}
String location = req.getParams().get("location");
if (location == null) {
  location = h.coreContainer.getZkController().getZkStateReader().getClusterProperty("location", (String) null);
}
if (location == null) {
  throw new SolrException(ErrorCode.BAD_REQUEST, "'location' is not specified as a query parameter or set as a cluster property");
}
{code}
One approach would be to deprecate the usage of the cluster prop and look at the query param followed by solr.xml? Looking at three places seems messy.

{{repo}} is the key used to specify the implementation, but in solr.xml the tag is called {{repository}}. Should we just use repository throughout?

Small changes:
- Javadocs for BackupRepository
- s/index/indexes
- I think we should follow the {{if (condition)}} spacing convention? Some places have the space and some don't.
- In {{BackupRepositoryFactory}}: in these two log lines can we mention the name as well - {{LOG.info("Default configuration for backup repository is with configuration params {}", defaultBackupRepoPlugin);}} and {{LOG.info("Added backup repository with configuration params {}", backupRepoPlugins\[i]);}}
- Can we reuse the "location" string with (BackupRepository.DEFAULT_LOCATION_PROPERTY) on line 871/925 of CoreAdminOperation? Let's fix it in CollectionsHandler and OverseerCollectionMessageHandler as well? 
- In RestoreCore do we need to deprecate the old RestoreCore ctor ? Any reason why we can't remove it directly? > Backup/Restore should provide a param for specifying the directory > implementation it should use > --- > > Key: SOLR-7374 > URL: https://issues.apache.org/jira/browse/SOLR-7374 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker >Assignee: Mark Miller > Fix For: 5.2, 6.0 > > Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, > SOLR-7374.patch > > > Currently when we create a backup we use SimpleFSDirectory to write the > backup indexes. Similarly during a restore we open the index using > FSDirectory.open . > We should provide a param called {{directoryImpl}} or {{type}} which will be > used to specify the Directory implementation to backup the index. > Likewise during a restore you would need to specify the directory impl which > was used during backup so that the index can be opened correctly. > This param will address the problem that currently if a user is running Solr > on HDFS there is no way to use the backup/restore functionality as the > directory is hardcoded. > With this one could be running Solr on a local FS but backup the index on > HDFS etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9211) Nested negative clauses don't work as expected in filter queries for the edismax parser
Plamen M Todorov created SOLR-9211: -- Summary: Nested negative clauses don't work as expected in filter queries for the edismax parser Key: SOLR-9211 URL: https://issues.apache.org/jira/browse/SOLR-9211 Project: Solr Issue Type: Bug Affects Versions: 5.4 Reporter: Plamen M Todorov Using the edismax parser, the following query works as expected and returns all documents: CONTENT:(foo OR (-foo)) The same clause, however, returns no documents when used in a filter query. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
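[Editorial note] For context on the expected behavior: a pure-negative clause like (-foo) matches nothing on its own and is conventionally rewritten to (*:* -foo); whether the filter-query path applies that rewrite is exactly what the report questions (this is background, not a confirmed diagnosis). The expected set semantics once the rewrite is applied can be sketched in plain Java with doc-ID sets (IDs are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Set-level sketch of why "foo OR (-foo)" should match every document once
// the pure-negative inner clause is rewritten to "(*:* -foo)".
// Not Solr query-parser code; doc IDs are made up for illustration.
public class NegativeClauseSketch {
    static Set<Integer> fooOrNotFoo(Set<Integer> allDocs, Set<Integer> fooDocs) {
        Set<Integer> notFoo = new HashSet<>(allDocs);
        notFoo.removeAll(fooDocs);             // (*:* -foo)
        Set<Integer> result = new HashSet<>(fooDocs);
        result.addAll(notFoo);                 // foo OR (-foo)
        return result;                         // equals allDocs
    }

    public static void main(String[] args) {
        System.out.println(fooOrNotFoo(Set.of(1, 2, 3, 4), Set.of(2, 4)));
    }
}
```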
[jira] [Commented] (LUCENE-7206) nest child query explain into ToParentBlockJoinQuery.BlockJoinScorer.explain(int)
[ https://issues.apache.org/jira/browse/LUCENE-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331820#comment-15331820 ] Mikhail Khludnev commented on LUCENE-7206: -- Just a note: the current approach might be too slow, since it explains all children and then picks one of those explanations. For short blocks it's OK, but for longer ones a two-phase algorithm makes sense: find the appropriate child (min/max or so), and explain it only after that. I prefer to keep this as a reminder; let's raise an issue if someone else is bothered by it too. > nest child query explain into > ToParentBlockJoinQuery.BlockJoinScorer.explain(int) > - > > Key: LUCENE-7206 > URL: https://issues.apache.org/jira/browse/LUCENE-7206 > Project: Lucene - Core > Issue Type: Improvement > Components: core/query/scoring > Affects Versions: 6.0 > Reporter: Mikhail Khludnev > Labels: newbie, newdev > Fix For: 6.1, master (7.0) > > Attachments: LUCENE-7206-one-child-with-tests.patch, > LUCENE-7206-test.patch, LUCENE-7206.diff > > > Now to parent query match is explained with {{Score based on child doc range > from .. to .. }} that's quite useless. > It's proposed to nest child query match explanation from the first matching > child document into parent explain. > WDYT? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
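[Editorial note] The two-phase idea in the comment above (first locate the candidate child cheaply, then build an Explanation only for that child) can be sketched as follows. The aggregation choice (max score) and all names are illustrative assumptions, not the ToParentBlockJoinQuery implementation:

```java
// Illustrative two-phase sketch: phase 1 scans child scores cheaply to pick
// the best child; phase 2 (not shown) would build a full Explanation for
// that single child instead of explaining every child in the block.
public class TwoPhaseExplainSketch {
    static int bestChild(float[] childScores) {
        int best = 0;
        for (int i = 1; i < childScores.length; i++) {
            if (childScores[i] > childScores[best]) {
                best = i; // track the max-scoring child (ScoreMode.Max assumed)
            }
        }
        return best; // only this child needs a detailed explanation
    }

    public static void main(String[] args) {
        System.out.println(bestChild(new float[] {0.1f, 0.9f, 0.3f}));
    }
}
```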
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5912 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5912/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.codecs.lucene50.TestLucene50TermVectorsFormat Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001\justSoYouGetSomeChannelErrors-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001\justSoYouGetSomeChannelErrors-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001\justSoYouGetSomeChannelErrors-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001\justSoYouGetSomeChannelErrors-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.codecs.lucene50.TestLucene50TermVectorsFormat_6636754741E1ABCD-001 at 
__randomizedtesting.SeedInfo.seed([6636754741E1ABCD]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 1373 lines...] 
[junit4] Suite: org.apache.lucene.codecs.lucene50.TestLucene50TermVectorsFormat [junit4] 2> NOTE: test params are: codec=Asserting(Lucene62): {}, docValues:{}, maxPointsInLeafNode=1018, maxMBSortInHeap=6.557135230091669, sim=ClassicSimilarity, locale=de-DE, timezone=Asia/Baku [junit4] 2> NOTE: Windows 10 10.0 x86/Oracle Corporation 1.8.0_92 (32-bit)/cpus=3,threads=1,free=119749040,total=298242048 [junit4] 2> NOTE: All tests run in this JVM: [TestDirectPacked, TestLucene53NormsFormat, TestFieldType, TestSegmentTermDocs, TestFieldMaskingSpanQuery, TestDuelingCodecs, TestTryDelete, TestFixedBitSet, TestTimeLimitingCollector, TestTwoPhaseCommitTool, TestDocumentsWriterDeleteQueue, TestFilterDirectory, TestMutableValues, TestSortRescorer, TestCrash, TestFixedLengthBytesRefArray, TestConstantScoreQuery, TestAutomatonQueryUnicode, TestIntsRef, TestNRTCachingDirectory, TestPayloadsOnVectors, TestBufferedIndexInput, TestSloppyMath, TestOfflineSorter, TestSpanExplanations, TestLucene50FieldInfoFormat, TestIndexWriterMerging, TestIndexWriterWithThreads, TestMergeSchedulerExternal, TestDoc, TestRamUsageEstimator, TestTopDocsCollector, Test2BPagedBytes, TestSpanSearchEquivalence, Test2BBKDPoints, TestBytesRef, TestRegexpRandom2, TestFastDecompressionMode, TestNumericTokenStream, TestIndexSearcher, TestDocValuesIndexing, TestInPlaceMergeSorter, TestMinimize, TestLogMergePolicy, TestFieldCacheRewriteMethod, TestNamedSPILoader, TestDocValuesRewriteMethod, TestRegexpRandom, TestSmallFloat, TestTermdocPerf, TestPostingsOffsets, TestCharFilter, TestBytesRefAttImpl, TestMultiDocValues, TestTermScorer, TestPhrasePrefixQuery,
[jira] [Closed] (SOLR-7098) Solr Join: Return Parent and Child Documents
[ https://issues.apache.org/jira/browse/SOLR-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev closed SOLR-7098. -- Resolution: Won't Fix > Solr Join: Return Parent and Child Documents > > > Key: SOLR-7098 > URL: https://issues.apache.org/jira/browse/SOLR-7098 > Project: Solr > Issue Type: Improvement > Components: search >Reporter: jefferyyuan >Priority: Minor > Labels: join, search > Fix For: 6.0, 5.2 > > > Solr JoinQParserPlugin returns only right side(parent) documents, it would be > great if we can return all documents. > User case: > If JoinQParserPlugin can return all (parent and child) documents, client can > group parent and child docs together with same group.field - (optionally) > then use group.main=true to navigate them. > The implementation in single mode: > (as solr join doesn't support distributed search) > req parameter: {!join from=man_id to=id includeParent=true} > Add includeParent into org.apache.solr.search.JoinQuery > Update JoinQuery's hashCode and equals to include includeParent. > In org.apache.solr.search.JoinQuery.JoinQueryWeight.getDocSet() > DocSet fromSet = fromSearcher.getDocSet(q); > if (includeParent) { > rstDocset = rstDocset.union(fromSet); > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
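[Editor's note] The union step proposed in the issue can be sketched with plain Java sets standing in for Solr's optimized DocSet (class and method names below are illustrative only, not Solr's actual API):

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

// Illustrative stand-in for the proposed includeParent flag: the join
// normally returns only the "to"-side docset; with includeParent=true the
// "from"-side docset is unioned in, mirroring the issue's
// rstDocset = rstDocset.union(fromSet) snippet.
public class IncludeParentSketch {

    static Set<Integer> join(Set<Integer> toSide, Set<Integer> fromSide,
                             boolean includeParent) {
        Set<Integer> result = new TreeSet<>(toSide);
        if (includeParent) {
            result.addAll(fromSide); // the union step
        }
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> to = new TreeSet<>(Arrays.asList(1, 2));
        Set<Integer> from = new TreeSet<>(Arrays.asList(7, 8));
        System.out.println(join(to, from, false)); // [1, 2]
        System.out.println(join(to, from, true));  // [1, 2, 7, 8]
    }
}
```

The resolution above ("Won't Fix") reflects that the same result set is reachable by query rewriting alone, without touching JoinQuery.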
[jira] [Commented] (LUCENE-6968) LSH Filter
[ https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331794#comment-15331794 ] Tommaso Teofili commented on LUCENE-6968: - yes, I plan to merge it to 6.x, I wanted to have a few more runs on Jenkins before merging it back to make sure there're no token filtering level issues. > LSH Filter > -- > > Key: LUCENE-6968 > URL: https://issues.apache.org/jira/browse/LUCENE-6968 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Reporter: Cao Manh Dat >Assignee: Tommaso Teofili > Fix For: master (7.0) > > Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, > LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch > > > I'm planning to implement LSH. Which support query like this > {quote} > Find similar documents that have 0.8 or higher similar score with a given > document. Similarity measurement can be cosine, jaccard, euclid.. > {quote} > For example. Given following corpus > {quote} > 1. Solr is an open source search engine based on Lucene > 2. Solr is an open source enterprise search engine based on Lucene > 3. Solr is an popular open source enterprise search engine based on Lucene > 4. Apache Lucene is a high-performance, full-featured text search engine > library written entirely in Java > {quote} > We wanna find documents that have 0.6 score in jaccard measurement with this > doc > {quote} > Solr is an open source search engine > {quote} > It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
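[Editor's note] The Jaccard figures in the example corpus above can be checked with a toy exact scorer over whole lowercase tokens (a sketch only; the actual patch works at the token-filter level with MinHash-style hashing, which merely approximates exact Jaccard). With this naive tokenization, doc 1 scores 7/10 = 0.7, doc 2 scores 7/11, doc 3 scores 7/12 (just under 0.6, so whether it clears the cut-off depends on the analysis chain), and doc 4 scores only 3/18:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy exact-Jaccard scorer over whole lowercase tokens; real LSH/MinHash
// only approximates this, so treat the numbers as illustration.
public class JaccardSketch {

    static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(
                text.toLowerCase().replaceAll("[^a-z0-9\\- ]", "").split(" +")));
    }

    static double jaccard(String a, String b) {
        Set<String> inter = tokens(a);
        Set<String> union = tokens(a);
        inter.retainAll(tokens(b)); // |A ∩ B|
        union.addAll(tokens(b));    // |A ∪ B|
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        String query = "Solr is an open source search engine";
        String doc1 = "Solr is an open source search engine based on Lucene";
        String doc4 = "Apache Lucene is a high-performance, full-featured text "
                + "search engine library written entirely in Java";
        System.out.println(jaccard(query, doc1)); // 0.7
        // doc 4 shares only {is, search, engine} with the query,
        // which is why MoreLikeThis would return it but LSH would not.
        System.out.println(jaccard(query, doc4) < 0.6); // true
    }
}
```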
[jira] [Commented] (SOLR-7098) Solr Join: Return Parent and Child Documents
[ https://issues.apache.org/jira/browse/SOLR-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331793#comment-15331793 ] Mikhail Khludnev commented on SOLR-7098: This does not need to be done at the parser level; you can add the child-side query as a SHOULD clause next to the join that returns parents, i.e. instead of {code} q={!join from=man_id to=id}foo:bar {code} you can write {code} q=foo:bar {!join from=man_id to=id}foo:bar {code} You might also be interested in the \[subquery] transformer. > Solr Join: Return Parent and Child Documents > > > Key: SOLR-7098 > URL: https://issues.apache.org/jira/browse/SOLR-7098 > Project: Solr > Issue Type: Improvement > Components: search >Reporter: jefferyyuan >Priority: Minor > Labels: join, search > Fix For: 5.2, 6.0 > > > Solr JoinQParserPlugin returns only right side(parent) documents, it would be > great if we can return all documents. > User case: > If JoinQParserPlugin can return all (parent and child) documents, client can > group parent and child docs together with same group.field - (optionally) > then use group.main=true to navigate them. > The implementation in single mode: > (as solr join doesn't support distributed search) > req parameter: {!join from=man_id to=id includeParent=true} > Add includeParent into org.apache.solr.search.JoinQuery > Update JoinQuery's hashCode and equals to include includeParent. > In org.apache.solr.search.JoinQuery.JoinQueryWeight.getDocSet() > DocSet fromSet = fromSearcher.getDocSet(q); > if (includeParent) { > rstDocset = rstDocset.union(fromSet); > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 199 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/199/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.update.processor.TestNamedUpdateProcessors.test Error Message: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:41584 within 3 ms Stack Trace: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:41584 within 3 ms at __randomizedtesting.SeedInfo.seed([7911D50CF153D6DD:F145EAD65FAFBB25]:0) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:180) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:114) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:109) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:96) at org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:295) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1500) at org.apache.solr.update.processor.TestNamedUpdateProcessors.distribTearDown(TestNamedUpdateProcessors.java:59) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:969) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:41584 within 3 ms at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:228) at
[jira] [Commented] (SOLR-9195) UpdateRequestProcessorChain.getReqProcessors tweaks
[ https://issues.apache.org/jira/browse/SOLR-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331766#comment-15331766 ] ASF subversion and git services commented on SOLR-9195: --- Commit 651499c82df482b493b0ed166c2ab7196af0a794 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=651499c ] SOLR-9195: remove unnecessary allocation and null check in UpdateRequestProcessorChain.getReqProcessors > UpdateRequestProcessorChain.getReqProcessors tweaks > --- > > Key: SOLR-9195 > URL: https://issues.apache.org/jira/browse/SOLR-9195 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-9194.patch > > > Remove unused local ArrayList allocation and unnecessary if-null check. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-9210) Dataimport Handlers are only available in original User Interface
[ https://issues.apache.org/jira/browse/SOLR-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331608#comment-15331608 ] Stefan Matheis (steffkes) edited comment on SOLR-9210 at 6/15/16 12:13 PM: --- This is a duplicate of SOLR-8993, therefore closing this one. was (Author: steffkes): This is a duplicate of SOLR-8983, therefore closing this one. > Dataimport Handlers are only available in original User Interface > - > > Key: SOLR-9210 > URL: https://issues.apache.org/jira/browse/SOLR-9210 > Project: Solr > Issue Type: Bug > Components: UI >Affects Versions: 6.0.1 > Environment: Linux >Reporter: ClaudeHe >Assignee: Stefan Matheis (steffkes) >Priority: Minor > Original Estimate: 1h > Remaining Estimate: 1h > > 2 defined Dataimport Handlers, Migrated from 5.2. > UI Message: "Sorry no dataimport-handler defined" > No Problem with: > original UI > using the dataimport-handler URL directly > The Problem still occurs after removing one of the Dataimport Handlers > Here the config: > > > import_a_dih.xml > > > > > import_b_dih.xml > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9210) Dataimport Handlers are only available in original User Interface
[ https://issues.apache.org/jira/browse/SOLR-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Matheis (steffkes) resolved SOLR-9210. - Resolution: Duplicate Assignee: Stefan Matheis (steffkes) This is a duplicate of SOLR-8983, therefore closing this one. > Dataimport Handlers are only available in original User Interface > - > > Key: SOLR-9210 > URL: https://issues.apache.org/jira/browse/SOLR-9210 > Project: Solr > Issue Type: Bug > Components: UI >Affects Versions: 6.0.1 > Environment: Linux >Reporter: ClaudeHe >Assignee: Stefan Matheis (steffkes) >Priority: Minor > Original Estimate: 1h > Remaining Estimate: 1h > > 2 defined Dataimport Handlers, Migrated from 5.2. > UI Message: "Sorry no dataimport-handler defined" > No Problem with: > original UI > using the dataimport-handler URL directly > The Problem still occurs after removing one of the Dataimport Handlers > Here the config: > > > import_a_dih.xml > > > > > import_b_dih.xml > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9210) Dataimport Handlers are only available in original User Interface
Claudio Hebein created SOLR-9210: Summary: Dataimport Handlers are only available in original User Interface Key: SOLR-9210 URL: https://issues.apache.org/jira/browse/SOLR-9210 Project: Solr Issue Type: Bug Components: UI Affects Versions: 6.0.1 Environment: Linux Reporter: Claudio Hebein Priority: Minor 2 defined Dataimport Handlers, Migrated from 5.2. UI Message: "Sorry no dataimport-handler defined" No Problem with: original UI using the dataimport-handler URL directly The Problem still occurs after removing one of the Dataimport Handlers Here the config: import_a_dih.xml import_b_dih.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8668) Remove support for
[ https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331529#comment-15331529 ] Christine Poerschke commented on SOLR-8668: --- Tentatively retagged this issue as 6.2 or 7.0 item. > Remove support for > > > Key: SOLR-8668 > URL: https://issues.apache.org/jira/browse/SOLR-8668 > Project: Solr > Issue Type: Improvement >Reporter: Shai Erera >Priority: Blocker > Fix For: master (7.0), 6.2 > > > Following SOLR-8621, we should remove support for {{}} (and > related {{}} and {{}}) in trunk/6x. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8668) Remove support for
[ https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-8668: -- Fix Version/s: (was: 6.0) 6.2 master (7.0) > Remove support for > > > Key: SOLR-8668 > URL: https://issues.apache.org/jira/browse/SOLR-8668 > Project: Solr > Issue Type: Improvement >Reporter: Shai Erera >Priority: Blocker > Fix For: master (7.0), 6.2 > > > Following SOLR-8621, we should remove support for {{}} (and > related {{}} and {{}}) in trunk/6x. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9161) SolrPluginUtils.invokeSetters should accommodate setter variants
[ https://issues.apache.org/jira/browse/SOLR-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-9161. --- Resolution: Fixed Fix Version/s: 6.2 master (7.0) > SolrPluginUtils.invokeSetters should accommodate setter variants > > > Key: SOLR-9161 > URL: https://issues.apache.org/jira/browse/SOLR-9161 > Project: Solr > Issue Type: Bug >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (7.0), 6.2 > > Attachments: SOLR-9161.patch, SOLR-9161.patch > > > The code currently assumes that there is only one setter (or if there are > several setters then the first one found is used and it could mismatch on the > arg type). > Context and motivation is that a class with a > {code} > void setAFloat(float val) { > this.val = val; > } > {code} > setter may wish to also provide a > {code} > void setAFloat(String val) { > this.val = Float.parseFloat(val); > } > {code} > convenience setter. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
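[Editor's note] The overload-matching idea behind this fix can be sketched with plain reflection (the bean and helper below are hypothetical, not the actual SolrPluginUtils code): pick the setter whose parameter type matches the runtime type of the value, instead of invoking the first Method found:

```java
import java.lang.reflect.Method;

// Hypothetical bean with the two setter variants from the issue description.
public class SetterVariants {
    private float val;

    public void setAFloat(float val) { this.val = val; }
    public void setAFloat(String val) { this.val = Float.parseFloat(val); }
    public float get() { return val; }

    // Sketch of an invokeSetters-style helper that picks the overload whose
    // single parameter type matches the value, rather than the first Method
    // returned by reflection (only float is unboxed here, for brevity).
    static void invokeSetter(Object bean, String name, Object value) {
        try {
            for (Method m : bean.getClass().getMethods()) {
                if (!m.getName().equals(name) || m.getParameterCount() != 1) continue;
                Class<?> p = m.getParameterTypes()[0];
                boolean matches = p.isInstance(value)
                        || (p == float.class && value instanceof Float);
                if (matches) {
                    m.invoke(bean, value);
                    return;
                }
            }
            throw new IllegalArgumentException("no matching setter: " + name);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SetterVariants b = new SetterVariants();
        invokeSetter(b, "setAFloat", "2.5");  // routes to the String variant
        System.out.println(b.get());          // 2.5
        invokeSetter(b, "setAFloat", 4.0f);   // routes to the float variant
        System.out.println(b.get());          // 4.0
    }
}
```

Without the type check, a first-match lookup could hand a String to the float overload (or vice versa) and fail with an IllegalArgumentException at invoke time, which is the mismatch the issue describes.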
[jira] [Commented] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query
[ https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331466#comment-15331466 ] Michael McCandless commented on LUCENE-7337: Really, the max score that {{MatchNoDocsQuery}} can return is undefined right, since it returns nothing. (i.e. max value over an empty set of elements is not defined). Maybe, instead of adding a new query that also matches no documents, we could just enhance the existing one so you could pass it the norm factor you'd like it to "use"? I do really like your idea of having an empty clause BQ rewrite to {{MatchNoDocsQuery}}: I think we should have one, unambiguous query class that's used for this "matches nothing" rewrite case, if we can get the scoring to work out correctly! > MultiTermQuery are sometimes rewritten into an empty boolean query > -- > > Key: LUCENE-7337 > URL: https://issues.apache.org/jira/browse/LUCENE-7337 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Reporter: Ferenczi Jim >Priority: Minor > > MultiTermQuery are sometimes rewritten to an empty boolean query (depending > on the rewrite method), it can happen when no expansions are found on a fuzzy > query for instance. > It can be problematic when the multi term query is boosted. > For instance consider the following query: > `((title:bar~1)^100 text:bar)` > This is a boolean query with two optional clauses. The first one is a fuzzy > query on the field title with a boost of 100. > If there is no expansion for "title:bar~1" the query is rewritten into: > `(()^100 text:bar)` > ... 
and when expansions are found: > `((title:bars | title:bar)^100 text:bar)` > The scoring of those two queries will differ because the normalization factor > and the norm for the first query will be equal to 1 (the boost is ignored > because the empty boolean query is not taken into account for the computation > of the normalization factor) whereas the second query will have a > normalization factor of 10,000 (100*100) and a norm equal to 0.01. > This kind of discrepancy can happen in a single index because the expansions > for the fuzzy query are done at the segment level. It can also happen when > multiple indices are requested (Solr/ElasticSearch case). > A simple fix would be to replace the empty boolean query produced by the > multi term query with a MatchNoDocsQuery but I am not sure that it's the best > way to fix. WDYT ? > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 38 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/38/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([3A33EBC4B9611095]:0) FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasics Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([3A33EBC4B9611095]:0) Build Log: [...truncated 12628 lines...] [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest [junit4] 2> 1006362 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 1006363 INFO (Thread-2950) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 1006363 INFO (Thread-2950) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 1006463 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.ZkTestServer start zk server on port:41339 [junit4] 2> 1006463 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 1006463 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 1006488 INFO (zkCallback-23110-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@692d60c6 name:ZooKeeperConnection Watcher:127.0.0.1:41339 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 1006488 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.c.ConnectionManager Client is 
connected to ZooKeeper [junit4] 2> 1006488 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2> 1006488 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[3A33EBC4B9611095]) [] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml [junit4] 2> 1006496 INFO (jetty-launcher-23109-thread-2) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1006496 INFO (jetty-launcher-23109-thread-3) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1006496 INFO (jetty-launcher-23109-thread-1) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1006496 INFO (jetty-launcher-23109-thread-4) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1006496 INFO (jetty-launcher-23109-thread-5) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@385ede05{/solr,null,AVAILABLE} [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-5) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@2622047f{/solr,null,AVAILABLE} [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@633d97b{/solr,null,AVAILABLE} [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@59f1fca{/solr,null,AVAILABLE} [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-4) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@7da72220{/solr,null,AVAILABLE} [junit4] 2> 1006498 INFO (jetty-launcher-23109-thread-1) [] o.e.j.s.ServerConnector Started ServerConnector@1a5157f2{HTTP/1.1,[http/1.1]}{127.0.0.1:42635} [junit4] 2> 1006499 INFO (jetty-launcher-23109-thread-1) [] o.e.j.s.Server Started @1008528ms [junit4] 2> 1006499 INFO (jetty-launcher-23109-thread-2) [] o.e.j.s.ServerConnector Started ServerConnector@5df08126{HTTP/1.1,[http/1.1]}{127.0.0.1:35480} [junit4] 2> 1006499 INFO 
(jetty-launcher-23109-thread-1) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=42635} [junit4] 2> 1006500 INFO (jetty-launcher-23109-thread-2) [] o.e.j.s.Server Started @1008529ms [junit4] 2> 1006500 INFO (jetty-launcher-23109-thread-5) [] o.e.j.s.ServerConnector Started ServerConnector@5aabc29f{HTTP/1.1,[http/1.1]}{127.0.0.1:39256} [junit4] 2> 1006500 INFO (jetty-launcher-23109-thread-4) [] o.e.j.s.ServerConnector Started ServerConnector@4c2ec8{HTTP/1.1,[http/1.1]}{127.0.0.1:41466} [junit4] 2> 1006500 INFO (jetty-launcher-23109-thread-5) [] o.e.j.s.Server Started @1008529ms [junit4] 2> 1006499
[jira] [Commented] (SOLR-8096) Major faceting performance regressions
[ https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331393#comment-15331393 ] Alessandro Benedetti commented on SOLR-8096: Yes David, you know it's not the cleanest solution, but it's the same approach used for a lot of other legacy facet-method "bugs" and incompatibilities. The debug output for the facet method applied is already in trunk as part of SOLR-9176; it logs both the method requested by the user and the method actually selected by Solr. I can contribute a small patch this afternoon to force UIF when docValues are not available. Cheers > Major faceting performance regressions > -- > > Key: SOLR-8096 > URL: https://issues.apache.org/jira/browse/SOLR-8096 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.1, 5.2, 5.3, 6.0 >Reporter: Yonik Seeley >Priority: Critical > Attachments: simple_facets.diff > > > Use of the highly optimized faceting that Solr had for multi-valued fields > over relatively static indexes was removed as part of LUCENE-5666, causing > severe performance regressions. > Here are some quick benchmarks to gauge the damage, on a 5M document index, > with each field having between 0 and 5 values per document. *Higher numbers > represent worse 5x performance*. > Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time > ||...|| Percent of index being faceted
> ||num_unique_values|| 10% || 50% || 90% ||
> |10 | 351.17% | 1587.08% | 3057.28% |
> |100 | 158.10% | 203.61% | 1421.93% |
> |1000 | 143.78% | 168.01% | 1325.87% |
> |1| 137.98% | 175.31% | 1233.97% |
> |10 | 142.98% | 159.42% | 1252.45% |
> |100 | 255.15% | 165.17% | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting > with 5x took 143% of the 4x time, when ~10% of the docs in the index were > faceted.
> One user who brought the performance problem to our attention: > http://markmail.org/message/ekmqh4ocbkwxv3we > "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3) > The disabling of the UnInvertedField algorithm was previously discovered in > SOLR-7190, but we didn't know just how bad the problem was at that time. > edit: removed "secret" adverb by request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8048) bin/solr script should accept user name and password for basicauth
[ https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-8048. -- Resolution: Fixed > bin/solr script should accept user name and password for basicauth > -- > > Key: SOLR-8048 > URL: https://issues.apache.org/jira/browse/SOLR-8048 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Noble Paul >Assignee: Noble Paul > Labels: authentication, security > Fix For: 6.2 > > > Should be able to add the line in{{solr.in.sh}} to support basic auth in the > {{bin/solr}} script > {code} > SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8048) bin/solr script should accept user name and password for basicauth
[ https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331219#comment-15331219 ] ASF subversion and git services commented on SOLR-8048: --- Commit 62452f033a3945d2812fa17ab07cfbe7248bb439 in lucene-solr's branch refs/heads/branch_6x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62452f0 ] SOLR-8048: bin/solr script should support basic auth credentials provided in solr.in.sh > bin/solr script should accept user name and password for basicauth > -- > > Key: SOLR-8048 > URL: https://issues.apache.org/jira/browse/SOLR-8048 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Noble Paul >Assignee: Noble Paul > Labels: authentication, security > Fix For: 6.2 > > > Should be able to add the line in{{solr.in.sh}} to support basic auth in the > {{bin/solr}} script > {code} > SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org