[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455796#comment-16455796 ]

Karl Wright commented on LUCENE-8281:
-------------------------------------

[~ivera], that's a very good observation. I wonder why test point 1 is not on the ellipsoid. Your change to the construction of test point 2 essentially puts test point 2 on the ellipsoid even if test point 1 is not. Let me look at why this might be.

> Random polygon test failures
> ----------------------------
>
>                 Key: LUCENE-8281
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8281
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/spatial3d
>            Reporter: Karl Wright
>            Assignee: Karl Wright
>            Priority: Major
>         Attachments: LUCENE-8281-debugging.patch, LUCENE-8281.jpg
>
> Reproduce here:
> {code}
> ant test -Dtestcase=RandomGeoPolygonTest -Dtests.method=testCompareSmallPolygons -Dtests.seed=42573983280EE568 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=jmc-TZ -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #364: Branch 6x . clzz.cast(object),Lost part of th...
GitHub user dingziyang opened a pull request:

    https://github.com/apache/lucene-solr/pull/364

    Branch 6x. clzz.cast(object) loses part of the data in a field of object.

### Env:
- solr-6.5.1
- jdk 1.8

### Question:
- I got the query result from the Solr server and tried to assign it to my Java bean. After debugging, I found the problem code: ```clzz.cast(object)```. The object is an instance of a class named "article" that has a member variable named "content"; it should hold about 2000 bytes but only gets about 500 bytes.
- Is this a bug in Solr, or am I doing something wrong? Please help me, thanks.

### Code:
``` java
System.out.println(object); // debug here: everything is fine.
list.add(clzz.cast(object));
System.out.println(object); // debug here: after clzz.cast(object), part of the data in a field of object is lost.
```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/lucene-solr branch_6x

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/364.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #364

commit d7fdbc48a4f31fde34752409e541823ba2ff8f38 Author: Steve Rowe Date: 2017-06-06T18:23:51Z SOLR-10815: avoid long->float precision loss in 'product(-1,ms(date_field)' by subtracting a base date: 'product(-1,ms(date_field,base_date_field))' commit 77ca26beb39d25448b671cacb209a42fe49d193f Author: Joel Bernstein Date: 2017-06-07T01:51:17Z Ref Guide: Remove place holders commit 3bb7cfcef5f22c6b61c505f8d9ead47f1a8fe6fe Author: Ishan Chattopadhyaya Date: 2017-06-07T02:34:29Z Adding 6.6.0 release version number commit 28d80c7b599bf9363adfefc1b32a463ff6250d1a Author: jpgilaberte Date: 2017-05-29T11:06:05Z LUCENE-7855: The advanced parameters of the Wikipedia tokenizer are added to the factory Closes #209 commit 792a8799168a58477b3165c11cbf3ab241c1d9f8 Author: Adrien Grand Date: 
2017-06-07T15:49:33Z LUCENE-7828: Speed up range queries on range fields by improving how we compute the relation between the query and inner nodes of the BKD tree. commit 972cdef6d9d75fdcb3f66f60901530edbf4ebad0 Author: Christine Poerschke Date: 2017-06-07T18:38:17Z SOLR-10174: fix @Ignore in TestMultipleAdditiveTreesModel commit ae1fb106f38c75257a69c2aad1c751f14fead105 Author: Steve Rowe Date: 2017-06-07T23:21:10Z SOLR-10501: Test sortMissing{First,Last} with points fields. commit 60f9c7b916d04774411e7474b5fbae1052f45293 Author: Dawid Weiss Date: 2017-06-08T06:35:18Z LUCENE-7864: IndexMergeTool is not using intermediate hard links (even if possible) commit 9586b919563281d8e4c23452c90f096be5af0e63 Author: Martijn van Groningen Date: 2017-06-07T17:55:32Z LUCENE-7869: Changed MemoryIndex to sort 1d points. In case of 1d points, the PointInSetQuery.MergePointVisitor expects that these points are visited in ascending order. Prior to this change the memory index doesn't do this and this can result in document with multiple points that should match to not match. 
commit c09d6b0f73ba703b57efab864e927a504e65b433 Author: Christine Poerschke Date: 2017-06-08T09:15:54Z SOLR-10174: reduce makeFeatureWeights code duplication in tests commit bff7deef17719ed20e9a274e68998cfbb3a76d50 Author: Cassandra Targett Date: 2017-06-07T19:58:47Z Ref Guide: Change PDF description list font to bold instead of italic for better readability commit 76786f9689b57955acca8654224ee0e613d62f92 Author: Cassandra Targett Date: 2017-06-07T19:59:20Z Ref Guide: remove errant + in TIP formatting commit e5175163c787fecd6633b00d304d589a9fe49fe7 Author: Cassandra Targett Date: 2017-06-08T14:18:03Z Ref Guide: fix incorrect admonition type style definition in theme commit 894fcefd60b779fd64fd3a259544339866d8d2f6 Author: Cassandra Targett Date: 2017-06-08T14:55:41Z Ref Guide: reduce size of quote blocks commit efe4d727e174f843eed7d7cecbb842a840cfb873 Author: Cassandra Targett Date: 2017-06-08T15:53:22Z Ref Guide: several minor typos and format fixes for 6.6 Guide commit fd95901708ec481c6e4ce01dd751f1a23460da00 Author: Joel Bernstein Date: 2017-06-08T17:13:05Z SOLR-10853:Allow the analyze Stream Evaluator to operate outside of a stream commit afe14bf751ee76bfd19972775fb0d484571fbb3e Author: Joel Bernstein Date: 2017-06-08T19:18:59Z SOLR-10351: Add documention commit 76e2c700216e4809ff4dc0b237c8f4a12b63d1e8 Author: Joel Bernstein Date: 2017-06-08T19:28:42Z
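A note on the cast question in the pull request above: `Class.cast` cannot itself drop field data. It performs only a checked reference cast and returns the very same object, so any truncation of "content" must happen before the cast (for example in how the field value was fetched from Solr, or in how the debugger renders long strings). A minimal JDK-only sketch, with `Article` standing in for the reporter's bean (the real class is not shown in the report):

```java
// Sketch: Class.cast returns the identical reference, so no field data can be lost by it.
public class CastDemo {
    public static class Article {
        public String content;
        public Article(String content) { this.content = content; }
    }

    /** True iff casting object through clzz yields the very same object. */
    public static boolean sameAfterCast(Object object, Class<?> clzz) {
        return clzz.cast(object) == object;
    }

    public static void main(String[] args) {
        char[] big = new char[2000];            // ~2000 chars, like the reported field
        java.util.Arrays.fill(big, 'x');
        Object object = new Article(new String(big));

        Article cast = Article.class.cast(object);
        System.out.println(cast == object);          // same reference, no copy made
        System.out.println(cast.content.length());   // still 2000: nothing truncated
    }
}
```

If the length printed before and after the cast really differs, the mutation comes from other code touching the same object, not from the cast.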
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455786#comment-16455786 ]

Ignacio Vera commented on LUCENE-8281:
--------------------------------------

This might be useful; it is something I noticed when developing the random test. If I construct testPoint2 in the following way:

{code:java}
// Pick the antipodes for testPoint2
GeoPoint p = new GeoPoint(-testPoint.x, -testPoint.y, -testPoint.z);
this.testPoint2 = new GeoPoint(planetModel, p.getLatitude(), p.getLongitude());
{code}

then we get:

{code:java}
Finding whether [-1.0011188539790565,-5.132245021424039E-6,-7.291706183250981E-7] is in-set, based on travel from [X=1.0011188539790565, Y=5.132245021274452E-6, Z=-7.291706183250981E-7] along [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7] (value=-7.291706183250981E-7)
Constructing sector linear crossing edge iterator
Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7]
There were intersection points!
Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] intersects travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7]
Edge [[lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])] --> [lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7]
There are no intersection points within bounds.
Endpoint(s) of edge are not on travel plane
Check point in set? false
{code}

which is the right answer.
Looking at the differences:

{code:java}
Case 1) Failing test point2 ([X=-1.0011188539790565, Y=-5.132245021274452E-6, Z=-7.291706183250981E-7]):

Constructing sector linear crossing edge iterator with bounds:
bound1:[A=5.126509205983227E-6, B=-0.99868595, C=0.0, D=0.0, side=-1.0]
bound2:[A=-5.126509205983227E-6, B=0.99868595, C=0.0, D=0.0, side=-1.0]

Case 2) Success test point2 ([lat=-7.283556946482617E-7, lon=-3.141587527080587([X=-1.0011188539790565, Y=-5.132245021424039E-6, Z=-7.291706183250981E-7])]):

Constructing sector linear crossing edge iterator with bounds:
bound1:[A=7.283556946481974E-7, B=0.0, C=-0.7348, D=0.0, side=-1.0]
bound2:[A=4.983019447078322E-6, B=0.0, C=-0.99875847, D=0.0, side=1.0]
{code}

So clearly the bounds for the iterator are totally messed up.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
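A side note on the antipode construction discussed above: algebraically, negating all three coordinates of a point cannot take it off the ellipsoid, because the surface equation is even in each coordinate. So the lat/lon round trip through the planet model only changes testPoint2 when testPoint itself was not exactly on the surface. A JDK-only sketch (the a, b, c values below are illustrative, not PlanetModel's actual constants):

```java
// Sketch: the ellipsoid equation x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 is even in
// x, y and z, so a point and its exact antipode have identical surface error.
public class EllipsoidSymmetry {
    /** Signed deviation from the ellipsoid surface; 0.0 means exactly on it. */
    public static double surfaceError(double a, double b, double c,
                                      double x, double y, double z) {
        return (x * x) / (a * a) + (y * y) / (b * b) + (z * z) / (c * c) - 1.0;
    }

    public static void main(String[] args) {
        double a = 1.0011188, b = 1.0011188, c = 0.9977622; // illustrative values only
        // A point parameterized to lie on the surface (up to rounding):
        double x = a * Math.cos(0.3) * Math.cos(0.7);
        double y = a * Math.cos(0.3) * Math.sin(0.7);
        double z = c * Math.sin(0.3);
        double e1 = surfaceError(a, b, c, x, y, z);
        double e2 = surfaceError(a, b, c, -x, -y, -z);
        // (-x)*(-x) == x*x bit for bit, so the two errors are identical:
        System.out.println(e1 == e2);
    }
}
```

In other words, the reprojection in the workaround is what repairs testPoint2; the negation alone preserves whatever off-surface error testPoint already carried.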
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1822 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1822/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 13 tests failed. FAILED: org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate Error Message: Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 AKST 2008) Stack Trace: java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 AKST 2008) at __randomizedtesting.SeedInfo.seed([79065E512B0EB1FA:331F266450A7C64F]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59) at org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) 
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate Error Message: Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 AKST 2008) Stack Trace: java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 AKST 2008) at
[jira] [Commented] (SOLR-5129) Timeout property for waiting ZK get started
[ https://issues.apache.org/jira/browse/SOLR-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455761#comment-16455761 ]

David Smiley commented on SOLR-5129:
------------------------------------

ZkFailoverTest.setupCluster adds a config "conf1", but it is not actually used in the test; instead, the "_default" config is used.

> Timeout property for waiting ZK get started
> -------------------------------------------
>
>                 Key: SOLR-5129
>                 URL: https://issues.apache.org/jira/browse/SOLR-5129
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>    Affects Versions: 4.4
>            Reporter: Shawn Heisey
>            Assignee: Cao Manh Dat
>            Priority: Minor
>             Fix For: 7.1
>
>         Attachments: SOLR-5129.patch, SOLR-5129.patch
>
> Summary of report from user on mailing list:
> If zookeeper is down or doesn't have quorum when you start Solr nodes, they will not function correctly, even if you later start zookeeper. While zookeeper is down, the log shows connection failures as expected. When zookeeper comes back, the log shows:
> INFO - 2013-08-09 15:48:41.528; org.apache.solr.common.cloud.ConnectionManager; Client->ZooKeeper status change trigger but we are already closed
> At that point, Solr (admin UI and all other functions) does not work, and won't work until it is restarted.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
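The improvement this issue asks for is essentially a bounded wait: keep retrying the initial ZooKeeper connection until a configurable timeout elapses instead of giving up on the first failure. A minimal JDK-only sketch of that retry loop; the property name and the `isZkReachable` check mentioned afterwards are hypothetical, not Solr's actual API:

```java
import java.util.function.BooleanSupplier;

// Sketch of a bounded startup wait: poll a readiness check until it succeeds
// or a configurable timeout elapses.
public class StartupWait {
    /** Polls check every pollMs until it returns true or timeoutMs elapses. */
    public static boolean waitFor(BooleanSupplier check, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (check.getAsBoolean()) {
                return true;                // whatever we wait on is up
            }
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false;               // timed out; the caller decides what to do
            }
            try {
                Thread.sleep(Math.min(pollMs, remaining));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;               // treat interruption as failure
            }
        }
    }
}
```

Solr startup could then call something like `waitFor(() -> isZkReachable(zkHost), Long.getLong("waitForZk", 30_000L), 500L)` and fail fast with a clear error when it returns false; both `waitForZk` and `isZkReachable` are invented names for illustration.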
[jira] [Updated] (LUCENE-8274) I get an AndroidRuntime error about Didn't find class "java.lang.ClassValue" while I use Lucene 7.2.1 in Android
[ https://issues.apache.org/jira/browse/LUCENE-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangzhenan updated LUCENE-8274:
--------------------------------

Description:

I found that Java 8 can be used from android-26 on, so I tried to replace Lucene 4.7.2 with Lucene 7.2.1 in my Android project, but I get an AndroidRuntime error.

This is my config:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}

dependencies {
    compile 'com.android.support:multidex:1.0.1'
    compile 'org.apache.lucene:lucene-core:7.2.1'
    compile 'org.apache.lucene:lucene-analyzers-common:7.2.1'
    compile 'org.apache.lucene:lucene-analyzers-smartcn:7.2.1'
    compile 'org.apache.lucene:lucene-queries:7.2.1'
    compile 'org.apache.lucene:lucene-queryparser:7.2.1'
}

Building the APK goes well, but at runtime I get an AndroidRuntime error like the one below:

04-25 10:20:15.129 13251 13273 E AndroidRuntime: java.lang.NoClassDefFoundError: Failed resolution of: Ljava/lang/ClassValue;
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.analysis.standard.StandardAnalyzer.createComponents(StandardAnalyzer.java:103)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.analysis.AnalyzerWrapper.createComponents(AnalyzerWrapper.java:134)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:198)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:240)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParserBase.newFieldQuery(QueryParserBase.java:475)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParserBase.getFieldQuery(QueryParserBase.java:467)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.MultiFieldQueryParser.getFieldQuery(MultiFieldQueryParser.java:154)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:830)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:469)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:355)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:244)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:215)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:109)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at com.android.globalsearch.model.index.ContactsIndexHelper.getQuery(ContactsIndexHelper.java:713)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at com.android.globalsearch.model.index.IndexHelper.initQuery(IndexHelper.java:496)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at com.android.globalsearch.model.task.search.SearchLocalTask$4.run(SearchLocalTask.java:342)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at java.lang.Thread.run(Thread.java:764)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: Caused by: java.lang.ClassNotFoundException: Didn't find class "java.lang.ClassValue" on path: DexPathList[[zip file "/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk"],nativeLibraryDirectories=[/data/app/com.android.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/lib/arm, /data/app/com.andriod.globalsearch-1nMSgWTRPQ5vt_9Co6iFaw==/base.apk!/lib/armeabi, /system/lib, /vendor/lib]]
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
04-25 10:20:15.129 13251 13273 E AndroidRuntime: ... 19 more

This is the same problem as this question on Stack Overflow: [https://stackoverflow.com/questions/47657615/lucene-android-noclassdeffounderror]

was:
Since android-26 supports Java 8, I set about replacing Lucene 4.7.2 in my Android project with 7.2.1, and I specified Java 8 for compilation:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}

Referencing 7.2.1:

dependencies {
    compile 'com.android.support:multidex:1.0.1'
    compile 'org.apache.lucene:lucene-core:7.2.1'
    compile 'org.apache.lucene:lucene-analyzers-common:7.2.1'
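The "Caused by" line in the report above shows the root cause: the Android runtime in use does not ship `java.lang.ClassValue`, which the Lucene 7.x analysis internals reference on the failing code path. An app can probe for such a class at startup and fail gracefully instead of crashing mid-query; a JDK-only sketch:

```java
// Sketch: probe whether a class is loadable in the current runtime, so an
// app can detect a missing platform class before it triggers a crash.
public class ClassAvailability {
    /** Returns true if the named class can be loaded in this runtime. */
    public static boolean isPresent(String className) {
        try {
            // initialize=false: just resolve the class, don't run static init
            Class.forName(className, false, ClassAvailability.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On a standard JVM this prints true; on the reporter's Android
        // runtime it would print false, predicting the NoClassDefFoundError.
        System.out.println(isPresent("java.lang.ClassValue"));
    }
}
```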
[JENKINS] Lucene-Solr-Tests-7.3 - Build # 49 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/49/ 1 tests failed. FAILED: org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck Error Message: Error from server at https://127.0.0.1:40047/solr: KeeperErrorCode = Session expired for /configs/solrCloudCollectionConfig Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:40047/solr: KeeperErrorCode = Session expired for /configs/solrCloudCollectionConfig at __randomizedtesting.SeedInfo.seed([B5C5E7D1136C9683:5B1659019680AC67]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:195) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Comment Edited] (SOLR-12278) Ignore very large document on indexing
[ https://issues.apache.org/jira/browse/SOLR-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455579#comment-16455579 ]

Cao Manh Dat edited comment on SOLR-12278 at 4/27/18 12:22 AM:
---------------------------------------------------------------

Thanks [~dsmiley] for reviewing the patch. This is a quick patch to help me demonstrate the idea of the solution, so many things will be tuned before it can get committed.

[~dsmiley] [~wunder] I think that the estimated size of an Object may be time-consuming to find out (so it will affect indexing), because of the use of reflection (in RamUsageEstimator) or because the SolrInputDocument has millions of fields. My plan for this new processor is to make it fast enough to be in the default processor chain.

was (Author: caomanhdat):

Thanks [~dsmiley] for reviewing the patch. This is a quick patch to help me demonstrate the idea of the solution, so many things will be tuned before it can get committed.

[~dsmiley] [~wunder] I think that the estimated size of an Object may be time-consuming to find out, because of the use of reflection (in RamUsageEstimator) or because the SolrInputDocument has millions of fields. My plan for this new processor is to make it fast enough to be in the default processor chain.

> Ignore very large document on indexing
> --------------------------------------
>
>                 Key: SOLR-12278
>                 URL: https://issues.apache.org/jira/browse/SOLR-12278
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Cao Manh Dat
>            Assignee: Cao Manh Dat
>            Priority: Major
>         Attachments: SOLR-12278.patch
>
> Solr should be able to ignore very large documents, so they won't affect the index as well as the tlog.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (SOLR-12278) Ignore very large document on indexing
[ https://issues.apache.org/jira/browse/SOLR-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455579#comment-16455579 ]

Cao Manh Dat commented on SOLR-12278:
-------------------------------------

Thanks [~dsmiley] for reviewing the patch. This is a quick patch to help me demonstrate the idea of the solution, so many things will be tuned before it can get committed.

[~dsmiley] [~wunder] I think that the estimated size of an Object may be time-consuming to find out, because of the use of reflection (in RamUsageEstimator) or because the SolrInputDocument has millions of fields. My plan for this new processor is to make it fast enough to be in the default processor chain.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
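The "fast enough for the default chain" goal above suggests a cheap proxy instead of full RamUsageEstimator-style reflection: for example, summing field-name and string-value lengths with a flat cost for everything else. A JDK-only sketch; the method names and the flat 16-char cost are invented for illustration, and a real processor would operate on SolrInputDocument rather than a plain Map:

```java
import java.util.Map;

// Sketch: a cheap, approximate size check for deciding whether to ignore a
// very large document, avoiding reflection-based accounting entirely.
public class DocSizeGuard {
    /** Approximate size in chars: field names plus String values; non-string
     *  values get a small flat cost. A proxy, not exact memory accounting. */
    public static long roughSize(Map<String, Object> doc) {
        long size = 0;
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            size += e.getKey().length();
            Object v = e.getValue();
            size += (v instanceof String) ? ((String) v).length() : 16;
        }
        return size;
    }

    /** True when the document's rough size exceeds the configured limit. */
    public static boolean shouldIgnore(Map<String, Object> doc, long maxChars) {
        return roughSize(doc) > maxChars;
    }
}
```

The walk is a single pass over the top-level fields, so its cost is linear in the field count and string length, with no per-object reflection.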
[JENKINS] Lucene-Solr-7.3-Windows (64bit/jdk-9.0.4) - Build # 32 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Windows/32/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC 6 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.SampleTest Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001\init-core-data-001 C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001\init-core-data-001 C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.3-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_A54A92EA4EF57907-001 at __randomizedtesting.SeedInfo.seed([A54A92EA4EF57907]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testScheduledTrigger Error Message: null Live Nodes: [127.0.0.1:57204_solr, 127.0.0.1:57319_solr] Last available state: null Stack Trace: java.lang.AssertionError: null Live Nodes: [127.0.0.1:57204_solr, 127.0.0.1:57319_solr] Last available state: null at __randomizedtesting.SeedInfo.seed([A54A92EA4EF57907:3651DA9810082233]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testScheduledTrigger(TriggerIntegrationTest.java:1660) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Commented] (SOLR-11229) Add a configuration upgrade tool for Solr
[ https://issues.apache.org/jira/browse/SOLR-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455565#comment-16455565 ] Mark Miller commented on SOLR-11229: bq. I'm curious about the choice of XSLT here. I have never been a fan of XSLT, but in this case it actually ends up being pretty nice for this use case. That is a first for me. Referencing existing rules to add new ones makes it fairly easy even for someone that is doing their best to never understand more than 5% of XSLT. > Add a configuration upgrade tool for Solr > - > > Key: SOLR-11229 > URL: https://issues.apache.org/jira/browse/SOLR-11229 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Hrishikesh Gadre >Priority: Major > > Despite widespread enterprise adoption, Solr lacks automated upgrade tooling. > It has long been a challenge for users to understand the implications of a > Solr upgrade. Users must manually review the Solr release notes to identify > configuration changes either to fix backwards incompatibilities or to utilize > latest features in the new version. Additionally, users must identify a way > to migrate existing index data to the new version (either via an index > upgrade or re-indexing the raw data). > Solr config upgrade tool aims to simplify the upgrade process by providing > upgrade instructions tailored to your configuration. These instructions can > help you to answer following questions > - Does my Solr configuration have any backwards incompatible sections? If yes > which ones? > - For each of the incompatibility - what do I need to do to fix this > incompatibility? Where can I get more information about why this > incompatibility was introduced (e.g. references to Lucene/Solr jiras)? > - Are there any changes in Lucene/Solr which would require me to do a full > reindexing OR can I get away with an index upgrade? 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
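The XSLT approach Mark describes above is easy to sketch with nothing but the JDK: an identity template copies everything through, and each upgrade rule is one extra template that overrides it. Everything in this sketch (the class name, and the mergePolicy -> mergePolicyFactory rename used as the example rule) is illustrative, not taken from the actual upgrade tool:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class ConfigUpgradeSketch {
    // Identity transform plus one illustrative rename rule. New rules are added
    // alongside the identity template, which is the property that makes the
    // XSLT style workable even for XSLT non-fans.
    static final String RULE =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
        + "<xsl:template match='@*|node()'>"
        + "<xsl:copy><xsl:apply-templates select='@*|node()'/></xsl:copy>"
        + "</xsl:template>"
        + "<xsl:template match='mergePolicy'>"
        + "<mergePolicyFactory><xsl:apply-templates select='@*|node()'/></mergePolicyFactory>"
        + "</xsl:template>"
        + "</xsl:stylesheet>";

    static String upgrade(String solrconfigXml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(RULE)));
            t.setOutputProperty("omit-xml-declaration", "yes");
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(solrconfigXml)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

An element-name template (priority 0) wins over the `@*|node()` wildcard (priority -0.5), so a rule only has to name what it changes; everything else falls through the identity copy untouched.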
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455564#comment-16455564 ] Karl Wright commented on LUCENE-8281: - I've confirmed that the proper edges are traversed for each travel plane -- if the bounds are correct for each edge. > Random polygon test failures > > > Key: LUCENE-8281 > URL: https://issues.apache.org/jira/browse/LUCENE-8281 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: LUCENE-8281-debugging.patch, LUCENE-8281.jpg > > > Reproduce here: > {code} > ant test -Dtestcase=RandomGeoPolygonTest > -Dtests.method=testCompareSmallPolygons -Dtests.seed=42573983280EE568 > -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=jmc-TZ > -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=US-ASCII > {code}
[jira] [Commented] (SOLR-11229) Add a configuration upgrade tool for Solr
[ https://issues.apache.org/jira/browse/SOLR-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455561#comment-16455561 ] Hrishikesh Gadre commented on SOLR-11229: - [~janhoy] Yes, although the new development for this tool is happening in the internal cloudera repo. It includes the upgrade rules for Solr 6 -> 7 upgrade as well. I will sync the public github repository ASAP. Eventually we would like to incorporate this tool as part of the SOLR distribution if there is enough interest from the community.
[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter
[ https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455543#comment-16455543 ] Robert Muir commented on LUCENE-8273: - just imagining scenarios more, it's probably useful if the thing can avoid corrupting graphs. By that I mean: conceptually the user has to understand that the filtering applies the condition based on the first token and that the filter gets whatever it pulls (based on its wanted context), and those are provided "graph-aligned" or something. I think it's just inherent in what you are trying to do and not specific to the implementation: it needs to have some restrictions to avoid trouble? So maybe this filter should also consider positionLength... > Add a ConditionalTokenFilter > > > Key: LUCENE-8273 > URL: https://issues.apache.org/jira/browse/LUCENE-8273 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Alan Woodward >Priority: Major > Attachments: LUCENE-8273.patch, LUCENE-8273.patch > > > Spinoff of LUCENE-8265. It would be useful to be able to wrap a TokenFilter > in such a way that it could optionally be bypassed based on the current state > of the TokenStream. This could be used to, for example, only apply > WordDelimiterFilter to terms that contain hyphens.
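The idea in the issue above (and Robert's caveat) can be illustrated outside Lucene's TokenStream API with a plain predicate-gated filter over a token list. The names here are invented for illustration; a real ConditionalTokenFilter would additionally have to keep position-increment and positionLength graph attributes aligned, which is exactly the hazard Robert raises and which this list-based sketch deliberately omits:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class ConditionalFilterSketch {
    // Apply `wrapped` only to tokens that satisfy `condition`; all other
    // tokens pass through unchanged.
    static List<String> applyConditionally(List<String> tokens,
                                           Predicate<String> condition,
                                           Function<String, List<String>> wrapped) {
        List<String> out = new ArrayList<>();
        for (String token : tokens) {
            if (condition.test(token)) {
                out.addAll(wrapped.apply(token)); // e.g. WordDelimiter-style splitting
            } else {
                out.add(token);
            }
        }
        return out;
    }
}
```

For the motivating example — apply a delimiter only to hyphenated terms — `applyConditionally(tokens, t -> t.contains("-"), t -> java.util.Arrays.asList(t.split("-")))` splits "wi-fi" but leaves "router" alone.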
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455515#comment-16455515 ] Karl Wright commented on LUCENE-8281: - Edges (including their bounds, which figure into what they should match): {code} [junit4] 1> Recording edge [[lat=-1.135982425157425E-6, lon=6.109011074784121E-7([X=0.1683, Y=6.1090110747798E-7, Z=-1.1359824251571808E-6])] --> [lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])]]; bounds = XYZBounds: [xmin=0.89982765 xmax=1.09991798 ymin=6.0990110747798E-7 ymax=1.8367011217762314E-6 zmin=-1.136982425157181E-6 zmax=-2.769790671226972E-7] [junit4] 1> Recording edge [[lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])] --> [lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])]]; bounds = XYZBounds: [xmin=0.89982759 xmax=1.09983425 ymin=1.8009115568523404E-6 ymax=1.8367011217762314E-6 zmin=-2.7897906712269725E-7 zmax=4.5027495572410565E-7] [junit4] 1> Recording edge [[lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])] --> [lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])]]; bounds = XYZBounds: [xmin=0.89982665 xmax=1.09983752 ymin=1.3741481926489823E-6 ymax=1.8029115568523406E-6 zmin=4.482749557241056E-7 zmax=1.2564034678999155E-6] [junit4] 1> Recording edge [[lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])] --> [lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])]]; bounds = XYZBounds: [xmin=0.89982665 xmax=1.1 ymin=-1.0E-9 ymax=1.3761481926489824E-6 zmin=-1.0E-9 zmax=1.2564034678999155E-6] [junit4] 1> Recording edge [[lat=1.640689686301266E-25, 
lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])] --> [lat=-1.135982425157425E-6, lon=6.109011074784121E-7([X=0.1683, Y=6.1090110747798E-7, Z=-1.1359824251571808E-6])]]; bounds = XYZBounds: [xmin=0.89991684 xmax=1.1 ymin=-1.0E-9 ymax=6.1190110747798E-7 zmin=-1.136982425157181E-6 zmax=1.0003E-9] {code}
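The XYZBounds in the log above are plain axis-aligned boxes padded outward by a small margin on every side — the -1.0E-9 floors visible in the output suggest a fuzz of roughly 1e-9, though that constant (and everything else in this sketch) is an assumption, not read from the spatial3d code:

```java
public class BoundsSketch {
    static final double FUZZ = 1.0e-9; // assumed margin, matching the padding visible in the logged bounds

    // Returns {xmin, xmax, ymin, ymax, zmin, zmax} for a set of (x, y, z)
    // points, each bound pushed outward by FUZZ so that borderline
    // intersection tests err on the side of inclusion.
    static double[] bounds(double[][] points) {
        double[] b = {Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY,
                      Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY,
                      Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY};
        for (double[] p : points) {
            for (int axis = 0; axis < 3; axis++) {
                b[2 * axis]     = Math.min(b[2 * axis], p[axis]);
                b[2 * axis + 1] = Math.max(b[2 * axis + 1], p[axis]);
            }
        }
        for (int axis = 0; axis < 3; axis++) {
            b[2 * axis]     -= FUZZ;
            b[2 * axis + 1] += FUZZ;
        }
        return b;
    }
}
```

The padding is what makes an edge "potentially cross" a travel plane even when the exact intersection lies marginally outside the unpadded box — and why a bound that is wrong by more than the fuzz can make a genuine crossing disappear, as suspected in this issue.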
[jira] [Comment Edited] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455502#comment-16455502 ] Karl Wright edited comment on LUCENE-8281 at 4/26/18 10:52 PM: --- I'm actually more or less stumped by this. Here's my debug output. There are two operations here; first, figure out whether the second test point is in-set or not: {code} [junit4] 1> Determining in-set-ness of test point2 ([X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8]): [junit4] 1> [junit4] 1> IsInSet called for [-0.3657,-1.124732395751746E-6,-5.8143386268861615E-8], testPoint=[lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])]; is in set? true [junit4] 1> Using two planes [junit4] 1> Picking XZ then XY [junit4] 1> Finding whether [X=0.3657, Y=-1.124732395751746E-6, Z=5.8143386268861615E-8] is in-set, based on travel from [lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])] along [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] (value=5.8143386268861615E-8) [junit4] 1> Constructing sector linear crossing edge iterator [junit4] 1> Edge [[lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])] --> [lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>Endpoint(s) of edge are not on travel plane [junit4] 1> Edge [[lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])] --> [lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> There were intersection points! 
[junit4] 1> Edge [[lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])] --> [lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])]] intersects travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> Intersection point in-set? false [junit4] 1> Finding whether [-0.3657,-1.124732395751746E-6,-5.8143386268861615E-8] is in-set, based on travel from [X=0.3657, Y=-1.124732395751746E-6, Z=5.8143386268861615E-8] along [A=0.0, B=1.0; C=0.0; D=1.124732395751746E-6] (value=-1.124732395751746E-6) [junit4] 1> Constructing full linear crossing edge iterator [junit4] 1> Check point in set? false [junit4] 1> {code} When that is done, the code fires off an assertion, that basically says if you go back to the first test point, you should get the same in-set-ness for it that you started with: {code} [junit4] 1> ... done. Checking against test point1 ([lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])]): [junit4] 1> [junit4] 1> IsInSet called for [0.3657,1.124732395751746E-6,5.8143386268861615E-8], testPoint=[X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8]; is in set? 
false [junit4] 1> Using two planes [junit4] 1> Picking XZ then XY [junit4] 1> Finding whether [X=0.3657, Y=1.124732395751746E-6, Z=-5.8143386268861615E-8] is in-set, based on travel from [X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8] along [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] (value=-5.8143386268861615E-8) [junit4] 1> Constructing sector linear crossing edge iterator [junit4] 1> Edge [[lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])] --> [lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>Endpoint(s) of edge are not on travel plane [junit4] 1> Edge [[lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])] --> [lat=-1.135982425157425E-6, lon=6.109011074784121E-7([X=0.1683, Y=6.1090110747798E-7, Z=-1.1359824251571808E-6])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>
[jira] [Updated] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karl Wright updated LUCENE-8281: Attachment: LUCENE-8281-debugging.patch
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455502#comment-16455502 ] Karl Wright commented on LUCENE-8281: - I'm actually more or less stumped by this. Here's my debug output. There are two operations here; first, figure out whether the second test point is in-set or not: {code} [junit4] 1> Determining in-set-ness of test point2 ([X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8]): [junit4] 1> [junit4] 1> IsInSet called for [-0.3657,-1.124732395751746E-6,-5.8143386268861615E-8], testPoint=[lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])]; is in set? true [junit4] 1> Using two planes [junit4] 1> Picking XZ then XY [junit4] 1> Finding whether [X=0.3657, Y=-1.124732395751746E-6, Z=5.8143386268861615E-8] is in-set, based on travel from [lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])] along [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] (value=5.8143386268861615E-8) [junit4] 1> Constructing sector linear crossing edge iterator [junit4] 1> Edge [[lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])] --> [lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>Endpoint(s) of edge are not on travel plane [junit4] 1> Edge [[lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])] --> [lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> There were intersection points! 
[junit4] 1> Edge [[lat=1.255403467900245E-6, lon=1.3751481926504994E-6([X=0.99982665, Y=1.3751481926489823E-6, Z=1.2554034678999154E-6])] --> [lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])]] intersects travel plane [A=0.0, B=0.0; C=1.0; D=-5.8143386268861615E-8] [junit4] 1> Intersection point in-set? false [junit4] 1> Finding whether [-0.3657,-1.124732395751746E-6,-5.8143386268861615E-8] is in-set, based on travel from [X=0.3657, Y=-1.124732395751746E-6, Z=5.8143386268861615E-8] along [A=0.0, B=1.0; C=0.0; D=1.124732395751746E-6] (value=-1.124732395751746E-6) [junit4] 1> Constructing full linear crossing edge iterator [junit4] 1> Check point in set? false [junit4] 1> {code} When that is done, the code fires off an assertion, that basically says if you go back to the first test point, you should get the same in-set-ness for it that you started with: {code} [junit4] 1> ... done. Checking against test point1 ([lat=5.8143386268861655E-8, lon=1.1247323957519851E-6([X=0.3657, Y=1.124732395751746E-6, Z=5.8143386268861615E-8])]): [junit4] 1> [junit4] 1> IsInSet called for [0.3657,1.124732395751746E-6,5.8143386268861615E-8], testPoint=[X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8]; is in set? 
false [junit4] 1> Using two planes [junit4] 1> Picking XZ then XY [junit4] 1> Finding whether [X=0.3657, Y=1.124732395751746E-6, Z=-5.8143386268861615E-8] is in-set, based on travel from [X=-0.3657, Y=-1.124732395751746E-6, Z=-5.8143386268861615E-8] along [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] (value=-5.8143386268861615E-8) [junit4] 1> Constructing sector linear crossing edge iterator [junit4] 1> Edge [[lat=-2.7797906712270083E-7, lon=1.8357011217773334E-6([X=0.99982765, Y=1.8357011217762314E-6, Z=-2.7797906712269723E-7])] --> [lat=4.492749557241206E-7, lon=1.8019115568534971E-6([X=0.99982758, Y=1.8019115568523405E-6, Z=4.492749557241056E-7])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>Endpoint(s) of edge are not on travel plane [junit4] 1> Edge [[lat=1.640689686301266E-25, lon=0.0([X=1.0, Y=0.0, Z=1.640689686301266E-25])] --> [lat=-1.135982425157425E-6, lon=6.109011074784121E-7([X=0.1683, Y=6.1090110747798E-7, Z=-1.1359824251571808E-6])]] potentially crosses travel plane [A=0.0, B=0.0; C=1.0; D=5.8143386268861615E-8] [junit4] 1> There are no intersection points within bounds. [junit4] 1>Endpoint(s) of edge are not on travel plane [junit4] 1>
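The membership test being debugged above works by traveling along a plane from a point of known in-set-ness and flipping state at each genuine edge crossing. The same even–odd idea in its simplest 2D form (the spatial3d code does this on the ellipsoid with travel planes and fuzzed bounds checks, which is where the inconsistency hides) looks like this:

```java
public class CrossingParitySketch {
    // Even–odd rule: cast a ray from (px, py) toward +x (a region known to be
    // outside the polygon) and flip the in-set state at every edge crossed.
    static boolean inSet(double[][] polygon, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
            double xi = polygon[i][0], yi = polygon[i][1];
            double xj = polygon[j][0], yj = polygon[j][1];
            // Edge straddles the ray's y, and the crossing lies to the right
            // of the query point.
            if ((yi > py) != (yj > py)
                    && px < (xj - xi) * (py - yi) / (yj - yi) + xi) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```

The failure mode in this issue is then easy to state: if one direction of travel counts a crossing that the reverse direction misses (because an intersection point falls just outside an edge's recorded bounds), the parity — and hence the in-set answer — comes out different for the same point.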
[jira] [Created] (SOLR-12281) Document the new index back-compat guarantees
Jan Høydahl created SOLR-12281: -- Summary: Document the new index back-compat guarantees Key: SOLR-12281 URL: https://issues.apache.org/jira/browse/SOLR-12281 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: documentation Affects Versions: master (8.0) Reporter: Jan Høydahl Fix For: master (8.0) Refer to discussions in LUCENE-8264. Since 7.x, a Lucene segment carries information about the version it was *first created* with, and in 8.0 Lucene will refuse to read an index created pre-7.0, even if the index is "upgraded" and rewritten in 7.x format. This new policy breaks with our historic N-1 guarantee, in that you will now be *forced* to reindex from source every 2 major versions, starting from 8.0. There is a chance that the requirement will be relaxed in 9.0 so that 9.x will be able to read 7.x indices, but that I guess depends on what breaking index changes are made before 9.0. All of this must be documented clearly, both in CHANGES and in the RefGuide. Our current wording says you can upgrade 6->7 and then 7->8. Related to this, we should also touch the parts of our documentation that deal with stored/docvalues, and add a recommendation to keep things stored, because you *will* need to reindex. Probably this also affects backup/restore, in that if you plan to upgrade to 8.x you will need to make sure that none of your segments were created pre-7.x. And perhaps it affects CDCR as well, if one cluster is upgraded before the other etc. Then finally -- and this is beyond just documentation -- we should look into better product support for various re-index scenarios. We already have the streaming {{update()}} feature, backup/restore, manual /export to json etc. 
But what about: * A nice Admin UI on top of the streaming capabilities, where you enter the URL of the remote (old) system and then you get a list of collections that you can select from, and click "GO" and go for lunch, then when you return everything is reindexed into the new cluster. * Have the install script warn you when doing an in-place upgrade from 7.x to 8.0? * If Solr 8.x is started and detects a pre-7.0 index, then log big ERROR, and show a big red "UPGRADE" button in the Admin UI. Clicking this button would trigger some magic code that re-builds all collections from source (if stored/DV fields can be found in the old indices). I've put this as a blocker issue for now, in that we at least need some documentation updates before 8.0, and ideally some tools as well.
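The core of the policy described in this issue — an index is readable only if every segment was first created on major N or N-1, regardless of later rewrites — reduces to a check along these lines (class and method names are invented for illustration; this is not Lucene's actual implementation):

```java
public class IndexCompatSketch {
    // Under the policy described above, upgrading or rewriting an index does
    // NOT reset its *first created* major version, so that version — not the
    // version of the current segment format — is what gets checked.
    static boolean canOpen(int currentMajor, int segmentCreatedMajor) {
        return segmentCreatedMajor >= currentMajor - 1;
    }
}
```

So `canOpen(8, 7)` holds, while `canOpen(8, 6)` fails even for an index rewritten in 7.x format — which is exactly why the documentation needs to warn that rewriting is not a substitute for reindexing.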
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455467#comment-16455467 ] Jan Høydahl commented on SOLR-11490: Can this be resolved, with new, more targeted JIRAs created for further groups of classes? > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identify whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons.
[jira] [Commented] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest
[ https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455462#comment-16455462 ] Varun Thacker commented on SOLR-10036: -- Kevin pointed me to [this|https://nvd.nist.gov/vuln/search/results?adv_search=true=cpe%3a%2fa%3afasterxml%3ajackson-databind%3a2.5.4] offline, which is a list of CVEs for Jackson, so we should upgrade Jackson. On the other hand, if we can move its usage to noggit or something, then we remove another dependency. I'll start investigating tomorrow. > Revise jackson-core version from 2.5.4 to latest > > > Key: SOLR-10036 > URL: https://issues.apache.org/jira/browse/SOLR-10036 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shashank Pedamallu >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-10036.patch > > > The current jackson-core dependency in Solr is not compatible with Amazon AWS > S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses > jackson-core-dependency-2.5.4. This is blocking the usage of latest updates > from S3. > It would be greatly helpful if someone could revise the jackson-core jar in > Solr to the latest version. This is a ShowStopper for our Public company. > Details of my Setup: > Solr Version: 6.3 > AWS SDK version: 1.11.76
[jira] [Commented] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest
[ https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455459#comment-16455459 ] Shawn Heisey commented on SOLR-10036: - I can agree with the notion that Solr should stay current on its dependencies whenever that is possible. But something occurred to me while thinking about this: Solr is intended to be a completely self-contained service. The version of any dependency that Solr includes (such as Jackson) should not matter to any other software on the system. Normally other software will be entirely separate from Solr.
[jira] [Commented] (SOLR-11229) Add a configuration upgrade tool for Solr
[ https://issues.apache.org/jira/browse/SOLR-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455457#comment-16455457 ] Jan Høydahl commented on SOLR-11229: Is this tool useful as is?
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7292 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7292/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC 15 tests failed. FAILED: org.apache.solr.handler.component.SpellCheckComponentTest.testRebuildOnCommit Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([811453233263AE30:CB182E4E616F0D53]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:916) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:876) at org.apache.solr.handler.component.SpellCheckComponentTest.testRebuildOnCommit(SpellCheckComponentTest.java:303) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//arr[@name='suggestion'][.='lucenejava'] xml response was: 0202 request was: q=lowerfilt:lucenejava&qt=/spellCheckCompRH&indent=true&wt=xml at
[jira] [Assigned] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest
[ https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-10036: Assignee: Varun Thacker > Revise jackson-core version from 2.5.4 to latest > > > Key: SOLR-10036 > URL: https://issues.apache.org/jira/browse/SOLR-10036 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shashank Pedamallu >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-10036.patch > > > The current jackson-core dependency in Solr is not compatible with Amazon AWS > S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses > jackson-core-dependency-2.5.4. This is blocking the usage of latest updates > from S3. > It would be greatly helpful if someone could revise the jackson-core jar in > Solr to the latest version. This is a ShowStopper for our Public company. > Details of my Setup: > Solr Version: 6.3 > AWS SDK version: 1.11.76
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454866#comment-16454866 ] Karl Wright commented on LUCENE-8281: - I added a reverse assertion to the code which cross-checks the second test point against the first after its in-set-ness has been determined. This assertion fails pretty regularly. With the assertion in place, you can blow it up with the following already-existing test: {code} ant test -Dtestcase=GeoPolygonTest -Dtests.method=testLUCENE8276_case1 {code} The assertion code is pretty straightforward: {code} assert isInSet(testPoint1.x, testPoint1.y, testPoint1.z, testPoint2, testPoint2InSet, testPoint2FixedXPlane, testPoint2FixedXAbovePlane, testPoint2FixedXBelowPlane, testPoint2FixedYPlane, testPoint2FixedYAbovePlane, testPoint2FixedYBelowPlane, testPoint2FixedZPlane, testPoint2FixedZAbovePlane, testPoint2FixedZBelowPlane) == testPoint1InSet : "Test point1 not correctly in/out of set according to test point2"; {code} Debugging now to figure out why this doesn't reliably work.
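The idea behind that reverse assertion can be illustrated with a simplified 2D analogue. This is a hypothetical sketch, not the spatial3d API: `inSet`, `crossings`, and the square polygon are invented for illustration. Membership of one test point can be derived from another point whose membership is known by counting boundary crossings between them; each crossing flips in/out, so the two parities must agree, which is what the cross-check verifies.

```java
public class CrossCheckSketch {
    // Even-odd ray-casting membership test (PNPOLY): the 2D analogue of
    // counting travel-plane crossings in spatial3d.
    static boolean inSet(double[][] poly, double x, double y) {
        boolean inside = false;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            double xi = poly[i][0], yi = poly[i][1];
            double xj = poly[j][0], yj = poly[j][1];
            if (((yi > y) != (yj > y))
                    && (x < (xj - xi) * (y - yi) / (yj - yi) + xi)) {
                inside = !inside;
            }
        }
        return inside;
    }

    static double cross(double[] o, double[] a, double[] b) {
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
    }

    // True when segments p1-p2 and q1-q2 properly intersect
    // (strict sign test; touching/collinear cases are ignored in this sketch).
    static boolean intersects(double[] p1, double[] p2, double[] q1, double[] q2) {
        double d1 = cross(q1, q2, p1), d2 = cross(q1, q2, p2);
        double d3 = cross(p1, p2, q1), d4 = cross(p1, p2, q2);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    // Number of polygon edges crossed by the segment a-b.
    static int crossings(double[][] poly, double[] a, double[] b) {
        int n = 0;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            if (intersects(a, b, poly[j], poly[i])) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        double[][] square = {{0, 0}, {4, 0}, {4, 4}, {0, 4}};
        double[] testPoint1 = {1, 1};  // inside the square
        double[] testPoint2 = {5, 3};  // outside the square
        boolean testPoint2InSet = inSet(square, testPoint2[0], testPoint2[1]);
        // Cross-check: derive point 1's membership from point 2's known
        // membership; an odd crossing count flips inside/outside.
        boolean derived = testPoint2InSet
                ^ (crossings(square, testPoint2, testPoint1) % 2 == 1);
        if (derived != inSet(square, testPoint1[0], testPoint1[1])) {
            throw new AssertionError(
                "Test point1 not correctly in/out of set according to test point2");
        }
        System.out.println("point1 in set: " + derived);
    }
}
```

When the crossing detection misses an edge (as the debugging below suggests), the parity flips the wrong way and exactly this kind of assertion fires.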
[jira] [Updated] (SOLR-9510) child level facet exclusions
[ https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-9510: --- Description: h1. Caveat SOLR-12275 bug in Solr 7.3 {{\{!filters}}} as well as the {{filters}} local param in {{\{!parent filters=...}...}} and {{\{!child filters=..}...}} is broken in 7.3; the workaround is adding {{cache=false}} as a local parameter. SOLR-12275 is fixed in Solr 7.4. h2. Challenge * Since SOLR-5743 achieved block join child level facets with counts rolled up to parents, there is a demand for filter exclusions. h2. Context * It's worth considering JSON Facets as an engine for this functionality rather than supporting a separate component. * During a discussion in SOLR-8998 a solution for block join with child level exclusion was found. h2. Proposal It's proposed to provide a bit of syntax sugar to make it user friendly, believe it or not. h2. List of improvements * introducing a local parameter {{filters}} for the {{{!parent}} query parser referring to _multiple_ filter queries via parameter name: {{{!parent filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}}, where these _filters_ are intersected with the child query supplied as a subordinate clause. * introducing \{{{!filters param=$child.fq excludeTags=color v=$subq}&subq=text:word&child.fq=\{!tag=color}color:Red&child.fq=size:XL}}, which intersects a subordinate clause (here the {{subq}} param; the trick is to refer to the same query from {{{!parent}}}) with multiple filters supplied via parameter name {{param=$child.fq}}; it also supports {{excludeTags}}. h2. Notes Regarding the latter parser, an alternative approach might be to move this into the {{domain:{..}}} instruction of a JSON facet. From the implementation perspective, it's desirable to optimize with bitset processing; however, that might be deferred until some initial level of maturity. h2.
Example {code:java} q={!parent which=type_s:book filters=$child.fq v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5 3)&wt=json&indent=on&json.facet={ comments_for_author:{ type:query, q:"{!filters param=$child.fq excludeTags=author v=$childquery}", "//note":"author filter is excluded", domain:{ blockChildren:"type_s:book", "//":"applying filters here might be more promising" }, facet:{ authors:{ type:terms, field:author_s, facet: { in_books: "unique(_root_)" } } } } , comments_for_stars:{ type:query, q:"{!filters param=$child.fq excludeTags=stars v=$childquery}", "//note":"stars_i filter is excluded", domain:{ blockChildren:"type_s:book" }, facet:{ stars:{ type:terms, field:stars_i, facet: { in_books: "unique(_root_)" } } } } } {code} Votes? Opinions?
[jira] [Commented] (SOLR-12268) Opening a new NRT Searcher can take a while and block calls to /admin/cores.
[ https://issues.apache.org/jira/browse/SOLR-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454860#comment-16454860 ] Mark Miller commented on SOLR-12268: The problem being: an external tool may be polling /admin/cores, you may be opening new searchers a lot with NRT, and applyAllDeletesAndUpdates may take non-trivial time, so each call takes much longer than a request like this should, and they can start stacking up ... > Opening a new NRT Searcher can take a while and block calls to /admin/cores. > > > Key: SOLR-12268 > URL: https://issues.apache.org/jira/browse/SOLR-12268 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Priority: Major > > When we open a new reader from a writer, we get an IndexWriter lock and may > call applyAllDeletesAndUpdates. That call can take a while holding the lock. > Meanwhile calls coming to /admin/cores get isCurrent for the reader, which > checks if the IndexWriter is closed, which requires the IndexWriter lock. > This leads to /admin/cores calls taking as long as applyAllDeletesAndUpdates.
[jira] [Commented] (SOLR-12268) Opening a new NRT Searcher can take a while and block calls to /admin/cores.
[ https://issues.apache.org/jira/browse/SOLR-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454848#comment-16454848 ] Mark Miller commented on SOLR-12268: It seems to me we have to consider not adding anything that would get an indexwriter lock in the default output. Any opinions here?
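The contention described in this issue can be sketched with hypothetical names: `openNewSearcher`, `isCurrent`, `applyAllDeletesAndUpdates`, and `writerLock` below are stand-ins for the real Solr/Lucene code, not its actual API. The point is structural: a long operation runs under the writer lock, so even a trivially cheap status check has to wait out the whole refresh.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockContentionSketch {
    // Stand-in for the IndexWriter monitor described in the issue.
    private final ReentrantLock writerLock = new ReentrantLock();
    private volatile boolean closed = false;

    // Stand-in for the expensive call that runs while the lock is held.
    private void applyAllDeletesAndUpdates() {
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
    }

    // NRT refresh path: holds the lock for the whole expensive call.
    public void openNewSearcher() {
        writerLock.lock();
        try {
            applyAllDeletesAndUpdates();
        } finally {
            writerLock.unlock();
        }
    }

    // /admin/cores status path: needs the lock only for a cheap check,
    // but still waits for the refresh above to finish.
    public boolean isCurrent() {
        writerLock.lock();
        try {
            return !closed;
        } finally {
            writerLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockContentionSketch core = new LockContentionSketch();
        Thread refresher = new Thread(core::openNewSearcher);
        refresher.start();
        Thread.sleep(5); // let the refresh grab the lock first
        long start = System.nanoTime();
        core.isCurrent(); // stalls until openNewSearcher releases the lock
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        refresher.join();
        System.out.println("isCurrent waited ~" + waitedMs + " ms");
    }
}
```

Keeping lock-acquiring checks out of the default /admin/cores output, as suggested above, sidesteps exactly this wait.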
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21913 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21913/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger Error Message: expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([2ADF855CBA2773F2:4914B3DE23E800DF]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 14538 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest [junit4] 2>
[JENKINS] Lucene-Solr-Tests-7.x - Build # 586 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/586/ 2 tests failed. FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger Error Message: expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([C1E3748BC7E06EA9:A22842095E2F1D84]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace:
[jira] [Comment Edited] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454662#comment-16454662 ] Karl Wright edited comment on LUCENE-8281 at 4/26/18 6:43 PM: -- The debugging output shows no crossings are detected, but I can't make sense of the organization of the output. There are THREE travel planes being traversed; two for finding the in/out of set for the intersection point, and two for finding the in/out of set for the final check point: {code} [junit4] 1> Edge [[lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])] --> [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=-5.132245021274452E-6] [junit4] 1> Edge [[lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])] --> [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=-5.132245021274452E-6] {code} then: {code} [junit4] 1> Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7] [junit4] 1> Edge [[lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])] --> [lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7] {code} At this point the testPoint2 in-set determination has been made. 
Then, we have a run against testPoint1: {code} [junit4] 1> Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=5.132245021274452E-6] [junit4] 1> Edge [[lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])] --> [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=5.132245021274452E-6] {code} ... but that fails, so we do a run against testPoint2: {code} [junit4] 1> Edge [[lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])] --> [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=-4.98859471828087E-6] [junit4] 1> Edge [[lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])] --> [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=-4.98859471828087E-6] {code} When I enable use of testPoint2 as the *primary* means of determining in-set-ness, we see quite a lot of failures, so clearly the logic that finds crossings going from testPoint1 to testPoint2 is not working as expected. More research later. was (Author: kwri...@metacarta.com): The debugging output shows no crossings are detected, but I can't make sense of the organization of the output. 
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454662#comment-16454662 ] Karl Wright commented on LUCENE-8281: - The debugging output shows no crossings are detected, but I can't make sense of the organization of the output. There are THREE travel planes being traversed, and they are interleaved: {code} [junit4] 1> Edge [[lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])] --> [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=-5.132245021274452E-6] [junit4] 1> Edge [[lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])] --> [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=-5.132245021274452E-6] {code} then: {code} [junit4] 1> Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7] [junit4] 1> Edge [[lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])] --> [lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=7.291706183250981E-7] {code} then: {code} [junit4] 1> Edge [[lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])] --> [lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])]] crossed travel plane [A=0.0, 
B=1.0; C=0.0; D=5.132245021274452E-6] [junit4] 1> Edge [[lat=-1.2617196632339242E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823315E-5])] --> [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])]] crossed travel plane [A=0.0, B=1.0; C=0.0; D=5.132245021274452E-6] {code} then: {code} [junit4] 1> Edge [[lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])] --> [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=-4.98859471828087E-6] [junit4] 1> Edge [[lat=3.8977187534179774E-6, lon=1.9713406091526057E-5([X=1.0011188537902969, Y=1.9735462513207743E-5, Z=3.902079731596721E-6])] --> [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])]] crossed travel plane [A=0.0, B=0.0; C=1.0; D=-4.98859471828087E-6] {code}
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1821 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1821/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 20 tests failed. FAILED: org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader Error Message: Error from server at http://127.0.0.1:43072/solr: ADDREPLICA failed to create replica Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:43072/solr: ADDREPLICA failed to create replica at __randomizedtesting.SeedInfo.seed([3C1E36ABF461BD6:BE9749B45F986508]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader(LIRRollingUpdatesTest.java:112) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454631#comment-16454631 ] Ignacio Vera commented on LUCENE-8281: -- This is what is happening: a) The polygon is created with testpoint1 inside the polygon. b) testpoint2 is computed as being inside the polygon (wrong!). c) The check point lies on one of the planes under test but cannot be traveled, so an exception is thrown (that is the path in the graph). d) Membership is evaluated with testpoint2. It does the right thing, but because membership was wrongly computed, it fails. > Random polygon test failures > > > Key: LUCENE-8281 > URL: https://issues.apache.org/jira/browse/LUCENE-8281 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: LUCENE-8281.jpg > > > Reproduce here: > {code} > ant test -Dtestcase=RandomGeoPolygonTest > -Dtests.method=testCompareSmallPolygons -Dtests.seed=42573983280EE568 > -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=jmc-TZ > -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=US-ASCII > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454627#comment-16454627 ] Karl Wright commented on LUCENE-8281: - [~ivera] Looks pretty simple, but where is the second leg?
[jira] [Updated] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera updated LUCENE-8281: - Attachment: LUCENE-8281.jpg
[jira] [Commented] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454581#comment-16454581 ] ASF subversion and git services commented on SOLR-12275: Commit 3b6a6f13ad8bb654a479c3a179f87ab09f8940cc in lucene-solr's branch refs/heads/branch_7x from [~mkhludnev] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3b6a6f1 ] SOLR-12275: test cleanup > {!filters} are cached wrong > --- > > Key: SOLR-12275 > URL: https://issues.apache.org/jira/browse/SOLR-12275 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.3 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: SOLR-12275.patch, SOLR-12275.patch, SOLR-12275.patch > > > As it [was > spotted|https://issues.apache.org/jira/browse/SOLR-9510?focusedCommentId=16452683&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16452683] > by [~dsmi...@mac.com], {{\{!filters}}} behaves wrong when cached. As a > workaround it's proposed to pass {{cache=false}} as a local parameter
[jira] [Commented] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454574#comment-16454574 ] ASF subversion and git services commented on SOLR-12275: Commit e3a98171744e25f446c3c6d5df41a3f65a54737c in lucene-solr's branch refs/heads/master from [~mkhludnev] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3a9817 ] SOLR-12275: test cleanup
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454575#comment-16454575 ] Karl Wright commented on LUCENE-8281: - [~ivera], would it be possible to generate a graphic from this? The test case is committed as part of GeoComplexPolygon.
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454572#comment-16454572 ] ASF subversion and git services commented on LUCENE-8281: - Commit 3e34429611009c7e1edde49924daf5614524bc81 in lucene-solr's branch refs/heads/branch_6x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e34429 ] LUCENE-8281: Add a test for the new kind of failure, but disabled
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454571#comment-16454571 ] ASF subversion and git services commented on LUCENE-8281: - Commit f602fc19f78986cb6ba27c58cbc4bca07ba389d1 in lucene-solr's branch refs/heads/branch_7x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f602fc1 ] LUCENE-8281: Add a test for the new kind of failure, but disabled
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454570#comment-16454570 ] ASF subversion and git services commented on LUCENE-8281: - Commit e912ed2a2c7a04290f84f4ddfc539b877394d006 in lucene-solr's branch refs/heads/master from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e912ed2 ] LUCENE-8281: Add a test for the new kind of failure, but disabled
[jira] [Resolved] (SOLR-12277) Zookeeper version on the documentation
[ https://issues.apache.org/jira/browse/SOLR-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey resolved SOLR-12277. - Resolution: Invalid Resolving/closing the issue as "invalid". This isn't a problem. If we later determine that there is a real issue, we can re-open. > Zookeeper version on the documentation > -- > > Key: SOLR-12277 > URL: https://issues.apache.org/jira/browse/SOLR-12277 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.3 >Reporter: Maxence SAUNIER >Priority: Minor > > Hello, > In this page: > [https://lucene.apache.org/solr/guide/7_3/setting-up-an-external-zookeeper-ensemble.html#download-apache-zookeeper] > Says: _Solr currently uses Apache ZooKeeper v3.4.11._ > But since today, this version is no longer available. So, can you change the > documentation to reference an available version? > [http://apache.mirrors.ovh.net/ftp.apache.org/dist/zookeeper/] > Thanks.
[jira] [Closed] (SOLR-12277) Zookeeper version on the documentation
[ https://issues.apache.org/jira/browse/SOLR-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey closed SOLR-12277.
[jira] [Commented] (SOLR-12277) Zookeeper version on the documentation
[ https://issues.apache.org/jira/browse/SOLR-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454551#comment-16454551 ] Shawn Heisey commented on SOLR-12277: - It looks like the reason 3.4.11 has been pulled from download is probably this: ZOOKEEPER-2960 I don't think this is a concern for Solr itself, unless somebody is using the embedded ZK and is customizing that ZK config. The embedded ZK is not recommended except for tests or proof of concept. Our advice when running a config that's not recommended would be to not configure ZK in a way that encounters that issue, or to manually upgrade ZK to 3.4.12 in the Solr install. As already said, we won't be updating the 7.3 documentation to refer to a version other than 3.4.11, because that would be incorrect information.
[jira] [Updated] (SOLR-12280) Ref-Guide: Add Digital Signal Processing documentation
[ https://issues.apache.org/jira/browse/SOLR-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12280: -- Summary: Ref-Guide: Add Digital Signal Processing documentation (was: Add Digital Signal Processing documentation)
[jira] [Updated] (SOLR-12280) Add Digital Signal Processing documentation
[ https://issues.apache.org/jira/browse/SOLR-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12280: -- Description: This ticket will add a new section of documentation in the Math Expressions docs covering the Digital Signal Processing functions. The main areas of documentation coverage will include: * Convolution * Cross-correlation * Find Delay * Auto-correlation * Fast Fourier Transform was: This ticket will add a new section of documentation underneath of Math Expressions to cover the Digital Signal Processing functions. The main areas of coverage will be: * Convolution * Cross-correlation * Find Delay * Auto-correlation * Fast Fourier Transform
[jira] [Updated] (SOLR-12280) Add Digital Signal Processing documentation
[ https://issues.apache.org/jira/browse/SOLR-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12280: -- Fix Version/s: 7.4
[jira] [Assigned] (SOLR-12280) Add Digital Signal Processing documentation
[ https://issues.apache.org/jira/browse/SOLR-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-12280: - Assignee: Joel Bernstein
[jira] [Created] (SOLR-12280) Add Digital Signal Processing documentation
Joel Bernstein created SOLR-12280: - Summary: Add Digital Signal Processing documentation Key: SOLR-12280 URL: https://issues.apache.org/jira/browse/SOLR-12280 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket will add a new section of documentation underneath of Math Expressions to cover the Digital Signal Processing functions. The main areas of coverage will be: * Convolution * Cross-correlation * Find Delay * Auto-correlation * Fast Fourier Transform
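Of the functions listed in the ticket, convolution is the easiest to illustrate. The sketch below is a hypothetical plain-Java helper (class and method names are illustrative, not Solr's actual streaming-expression implementation); it shows the textbook discrete "full" convolution the documentation would describe:

```java
import java.util.Arrays;

// Discrete "full" convolution: output length is a.length + b.length - 1.
// y[n] = sum over k of a[k] * b[n - k]
public class Conv {
    public static double[] convolve(double[] a, double[] b) {
        double[] y = new double[a.length + b.length - 1];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b.length; j++) {
                // Each pair of samples contributes to output index i + j.
                y[i + j] += a[i] * b[j];
            }
        }
        return y;
    }

    public static void main(String[] args) {
        // Convolving [1, 2, 3] with [0, 1, 0.5] -> [0.0, 1.0, 2.5, 4.0, 1.5]
        System.out.println(Arrays.toString(
            convolve(new double[]{1, 2, 3}, new double[]{0, 1, 0.5})));
    }
}
```

Cross-correlation is the same loop with one input reversed, and autocorrelation is the cross-correlation of a signal with itself, which is why the four time-domain functions in the ticket are usually documented together.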
[jira] [Updated] (SOLR-12279) Validate Boolean "bin/solr auth" Inputs
[ https://issues.apache.org/jira/browse/SOLR-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-12279: --- Attachment: repro.sh
[jira] [Created] (SOLR-12279) Validate Boolean "bin/solr auth" Inputs
Jason Gerlowski created SOLR-12279: -- Summary: Validate Boolean "bin/solr auth" Inputs Key: SOLR-12279 URL: https://issues.apache.org/jira/browse/SOLR-12279 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: scripts and tools Affects Versions: master (8.0) Reporter: Jason Gerlowski Assignee: Jason Gerlowski Attachments: repro.sh The "auth" command in the {{bin/solr}} scripts has a handful of different parameters which take in boolean arguments. However, {{bin/solr}} blithely accepts invalid values without warning administrators in any way of the mistake. In most cases, the results are innocuous. But in some cases, silently handling invalid input causes real issues. Consider: {code} $ bin/solr auth enable -type basicAuth -credentials anyUser:anyPass -blockUnknown ture Successfully enabled basic auth with username [anyUser] and password [anyPass]. $ bin/solr auth enable -type basicAuth -credentials anyUser:anyPass -blockUnknown ture Security is already enabled. You can disable it with 'bin/solr auth disable'. Existing security.json: { "authentication":{ "blockUnknown": false, "class":"solr.BasicAuthPlugin", "credentials":{"mount":"3FLVxpOGLt4dlqlyqxgsiFDbGX+i+dc81L6qEhuBdcI= lrH1W1pFGyGoAdTJ/Isuclh042fvz66ggG7YZ4e7YwA="} }, ... } {code} If an administrator accidentally mistypes or fatfingers "true" when enabling authentication, their Solr instance will remain unprotected without any warning! The {{bin/solr auth}} tool should refuse to process invalid boolean arguments, or at the least spit out a warning in such cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
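The fix the ticket asks for amounts to rejecting anything that is not literally "true" or "false". A minimal sketch of such strict parsing, assuming a hypothetical `BoolArg` helper (this is not the actual SolrCLI code): unlike `Boolean.parseBoolean`, which silently maps "ture" or any other typo to `false`, it fails loudly:

```java
// Hypothetical strict boolean parsing for CLI options: accept only
// "true" or "false" (case-insensitive) and reject everything else,
// so a typo like "ture" cannot silently disable a security setting.
public class BoolArg {
    public static boolean parseStrict(String option, String value) {
        if ("true".equalsIgnoreCase(value)) {
            return true;
        }
        if ("false".equalsIgnoreCase(value)) {
            return false;
        }
        throw new IllegalArgumentException(
            "Invalid boolean value '" + value + "' for option " + option
            + "; expected 'true' or 'false'");
    }

    public static void main(String[] args) {
        System.out.println(parseStrict("-blockUnknown", "true"));
        // parseStrict("-blockUnknown", "ture") would throw an
        // IllegalArgumentException instead of returning false.
    }
}
```

With this approach the second invocation in the issue's transcript would abort with an error message instead of writing `"blockUnknown": false` into security.json.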
[jira] [Commented] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454476#comment-16454476 ] David Smiley commented on SOLR-12275: - +1
[jira] [Commented] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454447#comment-16454447 ] Mikhail Khludnev commented on SOLR-12275: - Thanks, [~dsmiley], attaching test cleanup [^SOLR-12275.patch]
[jira] [Updated] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12275: Attachment: SOLR-12275.patch
[jira] [Commented] (SOLR-12277) Zookeeper version on the documentation
[ https://issues.apache.org/jira/browse/SOLR-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454433#comment-16454433 ] Shawn Heisey commented on SOLR-12277: - The ZK version that is included in a specific version of Solr is not going to change even if what's available for download on the ZK page changes. If we updated the documentation, then it wouldn't match what Solr actually includes. Solr version 7.3 is always going to include 3.4.11.
[JENKINS] Lucene-Solr-Tests-master - Build # 2508 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2508/ 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration Error Message: last state: DocCollection(testSplitIntegration_collection//clusterstate.json/81)={ "replicationFactor":"2", "pullReplicas":"0", "router":{"name":"compositeId"}, "maxShardsPerNode":"2", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0", "autoCreated":"true", "shards":{ "shard2":{ "replicas":{ "core_node3":{ "core":"testSplitIntegration_collection_shard2_replica_n3", "leader":"true", "SEARCHER.searcher.maxDoc":11, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10007_solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":11}, "core_node4":{ "core":"testSplitIntegration_collection_shard2_replica_n4", "SEARCHER.searcher.maxDoc":11, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10008_solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":11}}, "range":"0-7fff", "state":"active"}, "shard1":{ "stateTimestamp":"1524853906145055950", "replicas":{ "core_node1":{ "core":"testSplitIntegration_collection_shard1_replica_n1", "leader":"true", "SEARCHER.searcher.maxDoc":14, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10007_solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":14}, "core_node2":{ "core":"testSplitIntegration_collection_shard1_replica_n2", "SEARCHER.searcher.maxDoc":14, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10008_solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":14}}, "range":"8000-", "state":"inactive"}, "shard1_1":{ "parent":"shard1", "stateTimestamp":"1524853906166325650", "range":"c000-", "state":"active", "replicas":{ "core_node10":{ "leader":"true", "core":"testSplitIntegration_collection_shard1_1_replica1", "SEARCHER.searcher.maxDoc":7, 
"SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10007_solr", "base_url":"http://127.0.0.1:10007/solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":7}, "core_node9":{ "core":"testSplitIntegration_collection_shard1_1_replica0", "SEARCHER.searcher.maxDoc":7, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10008_solr", "base_url":"http://127.0.0.1:10008/solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{ "parent":"shard1", "stateTimestamp":"1524853906166135500", "range":"8000-bfff", "state":"active", "replicas":{ "core_node7":{ "leader":"true", "core":"testSplitIntegration_collection_shard1_0_replica0", "SEARCHER.searcher.maxDoc":7, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10008_solr", "base_url":"http://127.0.0.1:10008/solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":7}, "core_node8":{ "core":"testSplitIntegration_collection_shard1_0_replica1", "SEARCHER.searcher.maxDoc":7, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10007_solr", "base_url":"http://127.0.0.1:10007/solr", "state":"active", "type":"NRT", "SEARCHER.searcher.numDocs":7} Stack Trace: java.util.concurrent.TimeoutException: last state: DocCollection(testSplitIntegration_collection//clusterstate.json/81)={ "replicationFactor":"2", "pullReplicas":"0", "router":{"name":"compositeId"}, "maxShardsPerNode":"2", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0", "autoCreated":"true", "shards":{ "shard2":{ "replicas":{ "core_node3":{ "core":"testSplitIntegration_collection_shard2_replica_n3", "leader":"true", "SEARCHER.searcher.maxDoc":11, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":1, "node_name":"127.0.0.1:10007_solr", "state":"active", "type":"NRT",
[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms
[ https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454413#comment-16454413 ] Shawn Heisey commented on SOLR-12243: - The convention is to name all patches the same for any issue. So any patch you would upload for this issue would be named SOLR-12243.patch. Jira will keep all of them available, and it will be easy to determine which one is the latest. There isn't a release manager yet for the versions you mentioned. Until somebody volunteers for the work, nobody has that role. > Edismax missing phrase queries when phrases contain multiterm synonyms > -- > > Key: SOLR-12243 > URL: https://issues.apache.org/jira/browse/SOLR-12243 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.1 > Environment: RHEL, MacOS X > Do not believe this is environment-specific. >Reporter: Elizabeth Haubert >Priority: Major > Attachments: SOLR-12243.patch > > > synonyms.txt: > allergic, hypersensitive > aspirin, acetylsalicylic acid > dog, canine, canis familiris, k 9 > rat, rattus > request handler: > > > > edismax > 0.4 > title^100 > title~20^5000 > title~11 > title~22^1000 > text > > 3-1 6-3 930% > *:* > 25 > > > Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin" against the > above list will not be generated. > "allergic reaction dog" will generate pf2: "allergic reaction", but not > pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction > dog" > "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin > dose" or pf3:"aspirin dose ?" > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12270) Improve "Your Max Processes.." WARN messages while starting Solr's examples
[ https://issues.apache.org/jira/browse/SOLR-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454409#comment-16454409 ] Erick Erickson commented on SOLR-12270: --- bq. In my opinion it would be sufficient to warn in the logs I have to disagree here. I've seen too many situations where users spend lots and lots and lots of time trying to track obscure errors resulting from having these limits too low. I want this message front-and-center. By the time they get annoyed enough to set the environment variable to turn the warning off it will stand a higher chance of being remembered. IMO of course... > Improve "Your Max Processes.." WARN messages while starting Solr's examples > --- > > Key: SOLR-12270 > URL: https://issues.apache.org/jira/browse/SOLR-12270 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major > > If I start Solr 7.3 I am greeted with this very VERBOSE message > > {code:java} > ~/solr-7.3.0$ ./bin/solr start -e cloud -noprompt -z localhost:2181 -m 2g > *** [WARN] *** Your open file limit is currently 256. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > *** [WARN] *** Your Max Processes Limit is currently 1418. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > Welcome to the SolrCloud example! > Starting up 2 Solr nodes for your example SolrCloud cluster. 
> Creating Solr home directory > /Users/varunthacker/solr-7.3.0/example/cloud/node1/solr > Cloning /Users/varunthacker/solr-7.3.0/example/cloud/node1 into > /Users/varunthacker/solr-7.3.0/example/cloud/node2 > Starting up Solr on port 8983 using command: > "bin/solr" start -cloud -p 8983 -s "example/cloud/node1/solr" -z > localhost:2181 -m 2g > *** [WARN] *** Your open file limit is currently 10240. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > *** [WARN] *** Your Max Processes Limit is currently 1418. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > Waiting up to 180 seconds to see Solr running on port 8983 [-] > Started Solr server on port 8983 (pid=82037). Happy searching! > > Starting up Solr on port 7574 using command: > "bin/solr" start -cloud -p 7574 -s "example/cloud/node2/solr" -z > localhost:2181 -m 2g > *** [WARN] *** Your open file limit is currently 10240. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > *** [WARN] *** Your Max Processes Limit is currently 1418. > It should be set to 65000 to avoid operational disruption. > If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in > your profile or solr.in.sh > Waiting up to 180 seconds to see Solr running on port 7574 [\] > Started Solr server on port 7574 (pid=82143). Happy searching! 
> INFO - 2018-04-24 16:07:10.566; > org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at > localhost:2181 ready > Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with > config-set 'gettingstarted' > Enabling auto soft-commits with maxTime 3 secs using the Config API > POSTing request to Config API: > http://localhost:8983/solr/gettingstarted/config > {"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}} > Successfully set-property updateHandler.autoSoftCommit.maxTime to 3000 > SolrCloud example running, please visit: http://localhost:8983/solr > {code} > Do we really need so many duplicate warnings for the same message? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8281) Random polygon test failures
[ https://issues.apache.org/jira/browse/LUCENE-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454402#comment-16454402 ] Karl Wright commented on LUCENE-8281: - Getting ready to commit AwaitsFix annotations to all the random tests that started failing with the work done within the last 12 hours for GeoComplexPolygons. Given that this was a relatively minor change I wonder what's going on. > Random polygon test failures > > > Key: LUCENE-8281 > URL: https://issues.apache.org/jira/browse/LUCENE-8281 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce here: > {code} > ant test -Dtestcase=RandomGeoPolygonTest > -Dtests.method=testCompareSmallPolygons -Dtests.seed=42573983280EE568 > -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=jmc-TZ > -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=US-ASCII > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8277) Better validate CodecReaders in addIndexes
[ https://issues.apache.org/jira/browse/LUCENE-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454401#comment-16454401 ] Adrien Grand commented on LUCENE-8277: -- Here is an example of what I had in mind. SegmentMerger runs CheckIndex with slow checks disabled on readers that do not implement SegmentReader. > Better validate CodecReaders in addIndexes > -- > > Key: LUCENE-8277 > URL: https://issues.apache.org/jira/browse/LUCENE-8277 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8277.patch > > > The discussion at LUCENE-8264 made me wonder whether we should apply the same > checks to addIndexes(CodecReader) that we apply at index time if the input > reader is not a SegmentReader, such as: > - positions are less than the maximum position > - offsets are going forward > And maybe also check that the API is implemented correctly, e.g. terms, doc > ids and positions are returned in order? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8277) Better validate CodecReaders in addIndexes
[ https://issues.apache.org/jira/browse/LUCENE-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-8277: - Attachment: LUCENE-8277.patch > Better validate CodecReaders in addIndexes > -- > > Key: LUCENE-8277 > URL: https://issues.apache.org/jira/browse/LUCENE-8277 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8277.patch > > > The discussion at LUCENE-8264 made me wonder whether we should apply the same > checks to addIndexes(CodecReader) that we apply at index time if the input > reader is not a SegmentReader, such as: > - positions are less than the maximum position > - offsets are going forward > And maybe also check that the API is implemented correctly, e.g. terms, doc > ids and positions are returned in order? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8281) Random polygon test failures
Karl Wright created LUCENE-8281: --- Summary: Random polygon test failures Key: LUCENE-8281 URL: https://issues.apache.org/jira/browse/LUCENE-8281 Project: Lucene - Core Issue Type: Bug Components: modules/spatial3d Reporter: Karl Wright Assignee: Karl Wright Reproduce here: {code} ant test -Dtestcase=RandomGeoPolygonTest -Dtests.method=testCompareSmallPolygons -Dtests.seed=42573983280EE568 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=jmc-TZ -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=US-ASCII {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454384#comment-16454384 ] Ignacio Vera commented on LUCENE-8280: -- I think the problem is that the intersection point is on an edge of the polygon. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454378#comment-16454378 ] Karl Wright commented on LUCENE-8280: - I was able to rule out possibility (2), so it really just has to be the bounds computation. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454368#comment-16454368 ] Karl Wright commented on LUCENE-8280: - I committed the test with an @AwaitsFix annotation. The original failure finds a number of documents where the polygon says they should be inside but the bounds disagrees. Because it's not just one, I'm inclined to think that we have one of two things happening: (1) The bounds is wrong. OR (2) The secondary (backup) test point's "in-set" status is being computed incorrectly, so all "isWithin" operations that require it fail. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454362#comment-16454362 ] ASF subversion and git services commented on LUCENE-8280: - Commit 03251de94654a87a1808ec21688a2cf403c53383 in lucene-solr's branch refs/heads/branch_6x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=03251de ] LUCENE-8280: Add a test for the case the automation found. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454361#comment-16454361 ] ASF subversion and git services commented on LUCENE-8280: - Commit 99605431613c3a260ce7f8e9957f5966e56594a5 in lucene-solr's branch refs/heads/branch_7x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9960543 ] LUCENE-8280: Add a test for the case the automation found. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
[ https://issues.apache.org/jira/browse/LUCENE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454359#comment-16454359 ] ASF subversion and git services commented on LUCENE-8280: - Commit 08cf263132977fdc620880865a07bfcf43d7530c in lucene-solr's branch refs/heads/master from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=08cf263 ] LUCENE-8280: Add a test for the case the automation found. > Repeatable failure for GeoConvexPolygons > > > Key: LUCENE-8280 > URL: https://issues.apache.org/jira/browse/LUCENE-8280 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > > Reproduce with: > {code} > ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations > -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true > -Dtests.file.encoding=Cp1252 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1803 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1803/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareSmallPolygons {seed=[42573983280EE568:AA3247B1AA691F5C]} Error Message: Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, points=[[lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])], [lat=3.8977187534179774E-6, lon=1.9713406091526053E-5([X=1.0011188537902969, Y=1.973546251320774E-5, Z=3.902079731596721E-6])], [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])], [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])]], internalEdges={4}}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, points=[[lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])], [lat=-1.2617196632339239E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823312E-5])]], internalEdges={0}}]} Large polygon: GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, number of shapes=1, address=d873a65, testPoint=[lat=7.283556946482624E-7, lon=5.126509206005681E-6([X=1.0011188539790565, Y=5.13224502127445E-6, Z=7.291706183250987E-7])], testPointInSet=true, shapes={ {[lat=-1.2617196632339239E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823312E-5])], [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, 
Y=1.60980789753873E-5, Z=-2.8245509442632E-6])], [lat=3.8977187534179774E-6, lon=1.9713406091526053E-5([X=1.0011188537902969, Y=1.973546251320774E-5, Z=3.902079731596721E-6])], [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])], [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])]}} Point: [lat=4.983019447098944E-6, lon=-3.0E-323([X=1.0011188539799663, Y=-3.0E-323, Z=4.98859471828087E-6])] WKT: POLYGON((3.7802835214482185E-4 -2.2317525568506174E-4,9.213211597434869E-4 -1.6165398092423463E-4,0.0011294949688719308 2.233228342998425E-4,2.3315178103634778E-4 0.0011348087623821073,0.0 4.244E-321,-8.996322151054578E-4 -7.229121163197139E-4,3.7802835214482185E-4 -2.2317525568506174E-4)) WKT: POINT(-1.7E-321 2.855059835503825E-4) normal polygon: false large polygon: true Stack Trace: java.lang.AssertionError: Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, points=[[lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])], [lat=3.8977187534179774E-6, lon=1.9713406091526053E-5([X=1.0011188537902969, Y=1.973546251320774E-5, Z=3.902079731596721E-6])], [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])], [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])]], internalEdges={4}}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, points=[[lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=7.4E-323, lon=0.0([X=1.0011188539924791, Y=0.0, Z=7.4E-323])], [lat=-1.2617196632339239E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823312E-5])]], internalEdges={0}}]} 
Large polygon: GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, number of shapes=1, address=d873a65, testPoint=[lat=7.283556946482624E-7, lon=5.126509206005681E-6([X=1.0011188539790565, Y=5.13224502127445E-6, Z=7.291706183250987E-7])], testPointInSet=true, shapes={ {[lat=-1.2617196632339239E-5, lon=-1.5701544210600105E-5([X=1.001118853788849, Y=-1.5719111944122703E-5, Z=-1.2631313432823312E-5])], [lat=-3.89514302068452E-6, lon=6.597839410815709E-6([X=1.0011188539630433, Y=6.605221429683868E-6, Z=-3.89950111699443E-6])], [lat=-2.8213942160840002E-6, lon=1.608008770581648E-5([X=1.0011188538590383, Y=1.60980789753873E-5, Z=-2.8245509442632E-6])], [lat=3.8977187534179774E-6, lon=1.9713406091526053E-5([X=1.0011188537902969, Y=1.973546251320774E-5, Z=3.902079731596721E-6])], [lat=1.980614928404974E-5, lon=4.069266235973146E-6([X=1.0011188537865057, Y=4.07381914993205E-6, Z=1.982830947192924E-5])], [lat=7.4E-323,
[jira] [Commented] (SOLR-12278) Ignore very large document on indexing
[ https://issues.apache.org/jira/browse/SOLR-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454329#comment-16454329 ] Walter Underwood commented on SOLR-12278: - This can be done easily in an update request processor script. Get the field, check the size, if it is over the limit return false. > Ignore very large document on indexing > -- > > Key: SOLR-12278 > URL: https://issues.apache.org/jira/browse/SOLR-12278 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12278.patch > > > Solr should be able to ignore very large document, so it won't affect the > index as well as the tlog. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
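The check Walter describes can be sketched in plain Java. This is a hypothetical stand-alone version of the size test, kept free of Solr classes so it reads on its own; in a real update request processor it would run inside processAdd and simply not forward oversized documents down the chain. The 1 MiB cap and the idea of checking a single field value are illustrative assumptions, not from the issue.

```java
import java.nio.charset.StandardCharsets;

public class OversizeDocCheck {
    // Illustrative limit; a real processor would make this configurable.
    static final int MAX_FIELD_BYTES = 1 << 20; // 1 MiB

    /** Returns false when the field value exceeds the limit, i.e. the document should be ignored. */
    static boolean shouldIndex(String fieldValue) {
        if (fieldValue == null) {
            return true; // missing field: nothing to reject
        }
        int bytes = fieldValue.getBytes(StandardCharsets.UTF_8).length;
        return bytes <= MAX_FIELD_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(shouldIndex("a small document")); // prints "true"
    }
}
```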
[JENKINS] Lucene-Solr-repro - Build # 559 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/559/ [...truncated 32 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2507/consoleText [repro] Revision: d53de2a385c094109cad864b8c25439439c6e04b [repro] Repro line: ant test -Dtestcase=TestCloudJSONFacetJoinDomain -Dtests.seed=CB54D2683B0A2F60 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=de-AT -Dtests.timezone=America/Toronto -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] Repro line: ant test -Dtestcase=LeaderElectionIntegrationTest -Dtests.method=testSimpleSliceLeaderElection -Dtests.seed=CB54D2683B0A2F60 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=US/Pacific-New -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] Repro line: ant test -Dtestcase=SearchRateTriggerIntegrationTest -Dtests.method=testDeleteNode -Dtests.seed=CB54D2683B0A2F60 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pl -Dtests.timezone=Asia/Kuching -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 9f506635e007ee53498f00e49f67669b099f0506 [repro] git fetch [repro] git checkout d53de2a385c094109cad864b8c25439439c6e04b [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] LeaderElectionIntegrationTest [repro] SearchRateTriggerIntegrationTest [repro] TestCloudJSONFacetJoinDomain [repro] ant compile-test [...truncated 3298 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 -Dtests.class="*.LeaderElectionIntegrationTest|*.SearchRateTriggerIntegrationTest|*.TestCloudJSONFacetJoinDomain" -Dtests.showOutput=onerror -Dtests.seed=CB54D2683B0A2F60 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=US/Pacific-New -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 16671 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.LeaderElectionIntegrationTest [repro] 2/5 failed: org.apache.solr.cloud.TestCloudJSONFacetJoinDomain [repro] 4/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest [repro] git checkout 9f506635e007ee53498f00e49f67669b099f0506 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Friendly reminder: please run precommit
For these types of things I use a very simple script that creates a tmp dir and checks out the latest git commit (can be local) and then runs a build. That way you can keep working in your current work directory and don't need to maintain 2 checkouts, or move commits around. https://gist.github.com/selckin/8bcdb8d09cda3e5ee89e61bb8fa457d0 On 26 April 2018 at 16:14, Karl Wright wrote: > Simon, > > I ran "ant precommit" on my Dell 4-core laptop with SSD's, three years old, > just an hour ago. 23 minutes. So obviously your mileage varies. > > "ant documentation-lint" on just Lucene is much faster, and is within the > tolerable zone. > > Karl > > > On Thu, Apr 26, 2018 at 10:08 AM, David Smiley > wrote: >> >> FWIW I have a second machine that I rsync my changes to in order to run >> "ant test". I occasionally use it for precommit as well. I wrote a small >> script that helps me automate this. Perhaps others here will find this script >> useful: >> >> https://gist.github.com/dsmiley/daff3c978fe234b48a69a01b54ea9914 >> >> On Thu, Apr 26, 2018 at 9:33 AM Robert Muir wrote: >>> >>> everyone here is collaborating: it causes confusion and takes up other >>> people's time when you break the build. I would ask to just run >>> precommit before committing. you don't have to sit and watch it, you >>> can go work on something else while it runs. >>> >>> On Thu, Apr 26, 2018 at 9:23 AM, Karl Wright wrote: >>> > :-) >>> > >>> > 25 minutes is an eternity these days, Robert. This is especially true >>> > when >>> > others are collaborating with what you are doing, as was the case here. >>> > The >>> > other approach would be to create a branch, but I've been avoiding that >>> > on >>> > git. >>> > >>> > "ant documentation-lint" is what I'm looking for, thanks. >>> > >>> > Karl >>> > >>> > >>> > On Thu, Apr 26, 2018 at 8:21 AM, Robert Muir wrote: >>> >> >>> >> I don't understand the turnaround issue, why do the commits need to be >>> >> rushed in?
>>> >> There is patch validation recently hooked in to avoid keeping your >>> >> computer busy for 25 minutes. >>> >> If you are not changing third party dependencies or anything "heavy" >>> >> like that you should at least run "ant documentation-lint" from >>> >> lucene/ >>> >> >>> >> >>> >> On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright >>> >> wrote: >>> >> > How long does precommit take you to run? For me, it's a good 25 >>> >> > minutes. >>> >> > That really impacts turnaround, which is why I'd love a precommit >>> >> > that >>> >> > looked only at certain things in the local package I'm dealing with. >>> >> > >>> >> > Karl >>> >> > >>> >> > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer >>> >> > >>> >> > wrote: >>> >> >> >>> >> >> Hey folks, >>> >> >> >>> >> >> I had to fix several glitches lately that are caught by running >>> >> >> precommit. It's a simple step please take the time running `ant >>> >> >> clean >>> >> >> precommit` on top-level. >>> >> >> >>> >> >> Thanks, >>> >> >> >>> >> >> Simon >>> >> >> >>> >> >> >>> >> >> - >>> >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >>> >> >> For additional commands, e-mail: dev-h...@lucene.apache.org >>> >> >> >>> >> > >>> >> >>> >> - >>> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >>> >> For additional commands, e-mail: dev-h...@lucene.apache.org >>> >> >>> > >>> >>> - >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >>> For additional commands, e-mail: dev-h...@lucene.apache.org >>> >> -- >> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: >> http://www.solrenterprisesearchserver.com > > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10308) Solr fails to work with Guava 21.0
[ https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454311#comment-16454311 ] Mark Miller commented on SOLR-10308: bq. Mark Miller, I think you might qualify as a resident HDFS expert. Do you think it's safe to commit an upgrade to hadoop 3.0.0, at least on master? Yeah, it could be done, but only on master. Here is an old issue: SOLR-9515 > Solr fails to work with Guava 21.0 > -- > > Key: SOLR-10308 > URL: https://issues.apache.org/jira/browse/SOLR-10308 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Affects Versions: 6.4.2 >Reporter: Vincent Massol >Priority: Major > Attachments: SOLR-10308.patch > > > This is what we get: > {noformat} > Caused by: java.lang.NoSuchMethodError: > com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; > at > org.apache.solr.handler.component.HighlightComponent.prepare(HighlightComponent.java:118) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2299) > at > org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:178) > at > org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149) > at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) > at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957) > at > org.xwiki.search.solr.internal.AbstractSolrInstance.query(AbstractSolrInstance.java:117) > at > org.xwiki.query.solr.internal.SolrQueryExecutor.execute(SolrQueryExecutor.java:122) > at > org.xwiki.query.internal.DefaultQueryExecutorManager.execute(DefaultQueryExecutorManager.java:72) > at > 
org.xwiki.query.internal.SecureQueryExecutorManager.execute(SecureQueryExecutorManager.java:67) > at org.xwiki.query.internal.DefaultQuery.execute(DefaultQuery.java:287) > at org.xwiki.query.internal.ScriptQuery.execute(ScriptQuery.java:237) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395) > at > org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384) > at > org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:173) > ... 183 more > {noformat} > Guava 21 has removed some signature that solr is currently using. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
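The stack trace above boils down to one removed method: Guava 21.0 dropped {{com.google.common.base.Objects.firstNonNull}}, which the Solr 6.4.2 HighlightComponent still calls. The contract is simple enough to sketch in plain Java (illustrative code only, not Solr's; no Guava dependency needed):

```java
public class FirstNonNullSketch {
    // Equivalent of the removed Objects.firstNonNull(first, second):
    // returns first if it is non-null, otherwise second; throws if both are null.
    static <T> T firstNonNull(T first, T second) {
        if (first != null) {
            return first;
        }
        if (second != null) {
            return second;
        }
        throw new NullPointerException("both arguments were null");
    }

    public static void main(String[] args) {
        System.out.println(firstNonNull(null, "fallback")); // prints "fallback"
        System.out.println(firstNonNull("a", "b"));         // prints "a"
    }
}
```

On Guava 18 and later the same contract is available as {{MoreObjects.firstNonNull}}, which is why updating the call sites (or pinning Guava below 21) clears the NoSuchMethodError.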
[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms
[ https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454309#comment-16454309 ] Elizabeth Haubert commented on SOLR-12243: -- OK; making a patch against master then. The "how to contribute" doc says that patches should be named by the Jira ticket. Is there a convention to specify what point in time a patch was made? In this case, the code change is so small I doubt it matters, and I'll just replace the existing patch. But for anything substantial, it might. Who is the release manager for 7.4 or 8? > Edismax missing phrase queries when phrases contain multiterm synonyms > -- > > Key: SOLR-12243 > URL: https://issues.apache.org/jira/browse/SOLR-12243 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.1 > Environment: RHEL, MacOS X > Do not believe this is environment-specific. >Reporter: Elizabeth Haubert >Priority: Major > Attachments: SOLR-12243.patch > > > synonyms.txt: > allergic, hypersensitive > aspirin, acetylsalicylic acid > dog, canine, canis familiris, k 9 > rat, rattus > request handler: > > > > edismax > 0.4 > title^100 > title~20^5000 > title~11 > title~22^1000 > text > > 3-1 6-3 930% > *:* > 25 > > > Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin" against the > above list will not be generated. > "allergic reaction dog" will generate pf2: "allergic reaction", but not > pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction > dog" > "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin > dose" or pf3:"aspirin dose ?" > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12278) Ignore very large document on indexing
[ https://issues.apache.org/jira/browse/SOLR-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454282#comment-16454282 ] David Smiley commented on SOLR-12278: - * your initArgs could be simply: {{maxDocumentSize = args.toSolrParams().required().getLong(LIMIT_SIZE_PARAM)}} * I don't think the estimated size should be eagerly calculated; most apps don't need to bother with this? * maybe keep the existing SolrInputDocument constructor but mark deprecated? * Have you seen Lucene's RamUsageEstimator, particularly {{shallowSizeOf}}? > Ignore very large document on indexing > -- > > Key: SOLR-12278 > URL: https://issues.apache.org/jira/browse/SOLR-12278 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12278.patch > > > Solr should be able to ignore very large document, so it won't affect the > index as well as the tlog. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
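For context, the gating this review describes can be sketched without any Solr types. Everything below (the class name, the crude 2-bytes-per-char estimator) is an invented stand-in for illustration; real code would use Lucene's {{RamUsageEstimator}} against a {{SolrInputDocument}}:

```java
import java.util.Map;

public class SizeLimitSketch {
    final long maxDocumentSize;

    SizeLimitSketch(long maxDocumentSize) {
        this.maxDocumentSize = maxDocumentSize;
    }

    // Crude estimate: 2 bytes per char of each field name and value.
    // This is an assumption for the sketch, not Solr's actual estimator.
    static long estimateSize(Map<String, String> doc) {
        long bytes = 0;
        for (Map.Entry<String, String> e : doc.entrySet()) {
            bytes += 2L * (e.getKey().length() + e.getValue().length());
        }
        return bytes;
    }

    // Per the review comment, the estimate is computed lazily, only when a
    // limit is actually configured and a document must be accepted or dropped.
    boolean accept(Map<String, String> doc) {
        return estimateSize(doc) <= maxDocumentSize;
    }

    public static void main(String[] args) {
        SizeLimitSketch gate = new SizeLimitSketch(32);
        System.out.println(gate.accept(Map.of("id", "1")));               // small doc passes
        System.out.println(gate.accept(Map.of("body", "x".repeat(100)))); // oversized doc rejected
    }
}
```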
Re: Friendly reminder: please run precommit
Simon, I ran "ant precommit" on my Dell 4-core laptop with SSD's, three years old, just an hour ago. 23 minutes. So obviously your mileage varies. "ant documentation-lint" on just Lucene is much faster, and is within the tolerable zone. Karl On Thu, Apr 26, 2018 at 10:08 AM, David Smileywrote: > FWIW I have a second machine that I rsync my changes to in order to run > "ant test". I occasionally use it for precommit as well. I wrote a small > script to helps me automate this. Perhaps others here will find this > script useful: > > https://gist.github.com/dsmiley/daff3c978fe234b48a69a01b54ea9914 > > On Thu, Apr 26, 2018 at 9:33 AM Robert Muir wrote: > >> everyone here is collaborating: it causes confusion and takes up other >> people's time when you break the build. I would ask to just run >> precommit before committing. you don't have to sit and watch it, you >> can go work on something else while it runs. >> >> On Thu, Apr 26, 2018 at 9:23 AM, Karl Wright wrote: >> > :-) >> > >> > 25 minutes is an eternity these days, Robert. This is especially true >> when >> > others are collaborating with what you are doing, as was the case >> here. The >> > other approach would be to create a branch, but I've been avoiding that >> on >> > git. >> > >> > "ant documentation-lint" is what I'm looking for, thanks. >> > >> > Karl >> > >> > >> > On Thu, Apr 26, 2018 at 8:21 AM, Robert Muir wrote: >> >> >> >> I don't understand the turnaround issue, why do the commits need to be >> >> rushed in? >> >> There is patch validation recently hooked in to avoid keeping your >> >> computer busy for 25 minutes. >> >> If you are not changing third party dependencies or anything "heavy" >> >> like that you should at least run "ant documentation-lint" from >> >> lucene/ >> >> >> >> >> >> On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright >> wrote: >> >> > How long does precommit take you to run? For me, it's a good 25 >> >> > minutes. 
>> >> > That really impacts turnaround, which is why I'd love a precommit >> that >> >> > looked only at certain things in the local package I'm dealing with. >> >> > >> >> > Karl >> >> > >> >> > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer >> >> > >> >> > wrote: >> >> >> >> >> >> Hey folks, >> >> >> >> >> >> I had to fix several glitches lately that are caught by running >> >> >> precommit. It's a simple step please take the time running `ant >> clean >> >> >> precommit` on top-level. >> >> >> >> >> >> Thanks, >> >> >> >> >> >> Simon >> >> >> >> >> >> >> - >> >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> >> >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> >> >> >> > >> >> >> >> - >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> >> > >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> -- > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker > LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www. > solrenterprisesearchserver.com >
Re: Friendly reminder: please run precommit
FWIW I have a second machine that I rsync my changes to in order to run "ant test". I occasionally use it for precommit as well. I wrote a small script to helps me automate this. Perhaps others here will find this script useful: https://gist.github.com/dsmiley/daff3c978fe234b48a69a01b54ea9914 On Thu, Apr 26, 2018 at 9:33 AM Robert Muirwrote: > everyone here is collaborating: it causes confusion and takes up other > people's time when you break the build. I would ask to just run > precommit before committing. you don't have to sit and watch it, you > can go work on something else while it runs. > > On Thu, Apr 26, 2018 at 9:23 AM, Karl Wright wrote: > > :-) > > > > 25 minutes is an eternity these days, Robert. This is especially true > when > > others are collaborating with what you are doing, as was the case here. > The > > other approach would be to create a branch, but I've been avoiding that > on > > git. > > > > "ant documentation-lint" is what I'm looking for, thanks. > > > > Karl > > > > > > On Thu, Apr 26, 2018 at 8:21 AM, Robert Muir wrote: > >> > >> I don't understand the turnaround issue, why do the commits need to be > >> rushed in? > >> There is patch validation recently hooked in to avoid keeping your > >> computer busy for 25 minutes. > >> If you are not changing third party dependencies or anything "heavy" > >> like that you should at least run "ant documentation-lint" from > >> lucene/ > >> > >> > >> On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright > wrote: > >> > How long does precommit take you to run? For me, it's a good 25 > >> > minutes. > >> > That really impacts turnaround, which is why I'd love a precommit that > >> > looked only at certain things in the local package I'm dealing with. > >> > > >> > Karl > >> > > >> > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer > >> > > >> > wrote: > >> >> > >> >> Hey folks, > >> >> > >> >> I had to fix several glitches lately that are caught by running > >> >> precommit. 
It's a simple step please take the time running `ant clean > >> >> precommit` on top-level. > >> >> > >> >> Thanks, > >> >> > >> >> Simon > >> >> > >> >> - > >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > >> >> For additional commands, e-mail: dev-h...@lucene.apache.org > >> >> > >> > > >> > >> - > >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > >> For additional commands, e-mail: dev-h...@lucene.apache.org > >> > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > > -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[JENKINS] Lucene-Solr-Tests-7.x - Build # 585 - Unstable
Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Sequence/Text_Action1' at input position (line 77, pos 1): ^ java.lang.OutOfMemoryError: Java heap space - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8279) Improve CheckIndex on norms
[ https://issues.apache.org/jira/browse/LUCENE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454244#comment-16454244 ] Robert Muir commented on LUCENE-8279: - or even better maybe just move this check into the postings check so that it happens for each field without creating problematic memory usage. postings check already cross checks some stuff with fieldinfos... > Improve CheckIndex on norms > --- > > Key: LUCENE-8279 > URL: https://issues.apache.org/jira/browse/LUCENE-8279 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8279.patch > > > We should improve CheckIndex to make sure that terms and norms agree on which > documents have a value on an indexed field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8279) Improve CheckIndex on norms
[ https://issues.apache.org/jira/browse/LUCENE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454241#comment-16454241 ] Robert Muir commented on LUCENE-8279: - I am thinking of this one: https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java#L1328 Maybe it could be moved to the TermIndexStatus or whatever so that the norms check could be moved to after the postings check and re-use it. > Improve CheckIndex on norms > --- > > Key: LUCENE-8279 > URL: https://issues.apache.org/jira/browse/LUCENE-8279 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8279.patch > > > We should improve CheckIndex to make sure that terms and norms agree on which > documents have a value on an indexed field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-master - Build # 2507 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2507/ 6 tests failed. FAILED: org.apache.solr.cloud.LeaderElectionIntegrationTest.testSimpleSliceLeaderElection Error Message: Address already in use Stack Trace: java.net.BindException: Address already in use at __randomizedtesting.SeedInfo.seed([CB54D2683B0A2F60:95601F267875B387]:0) at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:334) at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:302) at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:238) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.eclipse.jetty.server.Server.doStart(Server.java:397) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:396) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:369) at org.apache.solr.cloud.LeaderElectionIntegrationTest.testSimpleSliceLeaderElection(LeaderElectionIntegrationTest.java:106) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-8279) Improve CheckIndex on norms
[ https://issues.apache.org/jira/browse/LUCENE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454222#comment-16454222 ] Robert Muir commented on LUCENE-8279: - the check is implemented as a "slow" check, but don't we already construct the same bitset to verify some postings list statistics such as docCount? > Improve CheckIndex on norms > --- > > Key: LUCENE-8279 > URL: https://issues.apache.org/jira/browse/LUCENE-8279 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-8279.patch > > > We should improve CheckIndex to make sure that terms and norms agree on which > documents have a value on an indexed field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
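The agreement check under discussion reduces to comparing two per-field document bitsets: the docs seen while walking the postings versus the docs that carry a norm. A standalone illustration with {{java.util.BitSet}} (a sketch of the idea only; CheckIndex's actual FixedBitSet-based code differs):

```java
import java.util.BitSet;

public class AgreementSketch {
    // true iff the set of docs seen in the postings equals the set of docs
    // that have a norm for the field; xor leaves a bit set per disagreement
    static boolean agree(BitSet docsWithTerms, BitSet docsWithNorms) {
        BitSet diff = (BitSet) docsWithTerms.clone();
        diff.xor(docsWithNorms);
        return diff.isEmpty();
    }

    public static void main(String[] args) {
        BitSet terms = new BitSet();
        terms.set(0);
        terms.set(3);
        BitSet norms = new BitSet();
        norms.set(0);
        norms.set(3);
        System.out.println(agree(terms, norms)); // prints "true"
        norms.set(5); // a doc with a norm but no indexed terms
        System.out.println(agree(terms, norms)); // prints "false"
    }
}
```

The reuse Robert is asking about amounts to feeding the bitset already built during the postings check in as {{docsWithTerms}}, rather than constructing a second one for the norms check.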
Re: Friendly reminder: please run precommit
everyone here is collaborating: it causes confusion and takes up other people's time when you break the build. I would ask to just run precommit before committing. you don't have to sit and watch it, you can go work on something else while it runs. On Thu, Apr 26, 2018 at 9:23 AM, Karl Wrightwrote: > :-) > > 25 minutes is an eternity these days, Robert. This is especially true when > others are collaborating with what you are doing, as was the case here. The > other approach would be to create a branch, but I've been avoiding that on > git. > > "ant documentation-lint" is what I'm looking for, thanks. > > Karl > > > On Thu, Apr 26, 2018 at 8:21 AM, Robert Muir wrote: >> >> I don't understand the turnaround issue, why do the commits need to be >> rushed in? >> There is patch validation recently hooked in to avoid keeping your >> computer busy for 25 minutes. >> If you are not changing third party dependencies or anything "heavy" >> like that you should at least run "ant documentation-lint" from >> lucene/ >> >> >> On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright wrote: >> > How long does precommit take you to run? For me, it's a good 25 >> > minutes. >> > That really impacts turnaround, which is why I'd love a precommit that >> > looked only at certain things in the local package I'm dealing with. >> > >> > Karl >> > >> > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer >> > >> > wrote: >> >> >> >> Hey folks, >> >> >> >> I had to fix several glitches lately that are caught by running >> >> precommit. It's a simple step please take the time running `ant clean >> >> precommit` on top-level. 
>> >> >> >> Thanks, >> >> >> >> Simon >> >> >> >> - >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> >> > >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Friendly reminder: please run precommit
precommit: BUILD SUCCESSFUL Total time: 10 minutes 48 seconds on a my like 2 year old macbook pro I think that is reasonable? On Thu, Apr 26, 2018 at 3:23 PM, Karl Wrightwrote: > :-) > > 25 minutes is an eternity these days, Robert. This is especially true when > others are collaborating with what you are doing, as was the case here. The > other approach would be to create a branch, but I've been avoiding that on > git. > > "ant documentation-lint" is what I'm looking for, thanks. > > Karl > > > On Thu, Apr 26, 2018 at 8:21 AM, Robert Muir wrote: >> >> I don't understand the turnaround issue, why do the commits need to be >> rushed in? >> There is patch validation recently hooked in to avoid keeping your >> computer busy for 25 minutes. >> If you are not changing third party dependencies or anything "heavy" >> like that you should at least run "ant documentation-lint" from >> lucene/ >> >> >> On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright wrote: >> > How long does precommit take you to run? For me, it's a good 25 >> > minutes. >> > That really impacts turnaround, which is why I'd love a precommit that >> > looked only at certain things in the local package I'm dealing with. >> > >> > Karl >> > >> > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer >> > >> > wrote: >> >> >> >> Hey folks, >> >> >> >> I had to fix several glitches lately that are caught by running >> >> precommit. It's a simple step please take the time running `ant clean >> >> precommit` on top-level. >> >> >> >> Thanks, >> >> >> >> Simon >> >> >> >> - >> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> >> > >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms
[ https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454186#comment-16454186 ] Shawn Heisey commented on SOLR-12243: - In general, we make changes to master. Then we backport to the current stable development branch. The current stable development branch is branch_7x. If there are any releases currently underway or due to be underway shortly, and the release manager for that version agrees with the change, the change MIGHT be backported to a more specific branch, such as branch_7_3. At some point in the future that has not yet been determined, we will create branch_8x from master and begin the process of releasing 8.0.0. At that point, work on branch_7x is likely to stop. > Edismax missing phrase queries when phrases contain multiterm synonyms > -- > > Key: SOLR-12243 > URL: https://issues.apache.org/jira/browse/SOLR-12243 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.1 > Environment: RHEL, MacOS X > Do not believe this is environment-specific. >Reporter: Elizabeth Haubert >Priority: Major > Attachments: SOLR-12243.patch > > > synonyms.txt: > allergic, hypersensitive > aspirin, acetylsalicylic acid > dog, canine, canis familiris, k 9 > rat, rattus > request handler: > > > > edismax > 0.4 > title^100 > title~20^5000 > title~11 > title~22^1000 > text > > 3-1 6-3 930% > *:* > 25 > > > Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin" against the > above list will not be generated. > "allergic reaction dog" will generate pf2: "allergic reaction", but not > pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction > dog" > "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin > dose" or pf3:"aspirin dose ?" 
> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Friendly reminder: please run precommit
:-) 25 minutes is an eternity these days, Robert. This is especially true when others are collaborating with what you are doing, as was the case here. The other approach would be to create a branch, but I've been avoiding that on git. "ant documentation-lint" is what I'm looking for, thanks. Karl On Thu, Apr 26, 2018 at 8:21 AM, Robert Muirwrote: > I don't understand the turnaround issue, why do the commits need to be > rushed in? > There is patch validation recently hooked in to avoid keeping your > computer busy for 25 minutes. > If you are not changing third party dependencies or anything "heavy" > like that you should at least run "ant documentation-lint" from > lucene/ > > > On Thu, Apr 26, 2018 at 8:02 AM, Karl Wright wrote: > > How long does precommit take you to run? For me, it's a good 25 minutes. > > That really impacts turnaround, which is why I'd love a precommit that > > looked only at certain things in the local package I'm dealing with. > > > > Karl > > > > On Thu, Apr 26, 2018 at 6:14 AM, Simon Willnauer < > simon.willna...@gmail.com> > > wrote: > >> > >> Hey folks, > >> > >> I had to fix several glitches lately that are caught by running > >> precommit. It's a simple step please take the time running `ant clean > >> precommit` on top-level. > >> > >> Thanks, > >> > >> Simon > >> > >> - > >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > >> For additional commands, e-mail: dev-h...@lucene.apache.org > >> > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
[jira] [Created] (LUCENE-8280) Repeatable failure for GeoConvexPolygons
Karl Wright created LUCENE-8280: --- Summary: Repeatable failure for GeoConvexPolygons Key: LUCENE-8280 URL: https://issues.apache.org/jira/browse/LUCENE-8280 Project: Lucene - Core Issue Type: Bug Components: modules/spatial3d Reporter: Karl Wright Assignee: Karl Wright Reproduce with: {code} ant test -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations -Dtests.seed=1F49469C1989BC0 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=Europe/Malta -Dtests.asserts=true -Dtests.file.encoding=Cp1252 {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright reassigned LUCENE-8276:
Assignee: Karl Wright (was: Ignacio Vera)

> GeoComplexPolygon failures
> --------------------------
>
> Key: LUCENE-8276
> URL: https://issues.apache.org/jira/browse/LUCENE-8276
> Project: Lucene - Core
> Issue Type: Bug
> Components: modules/spatial3d
> Reporter: Ignacio Vera
> Assignee: Karl Wright
> Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
> Attachments: LUCENE-8276-random.patch, LUCENE-8276-test.patch, LUCENE-8276.patch
>
> I have tightened the random test for polygons a bit more, and GeoComplexPolygons still shows issues when traveling planes that cut the world near the pole. I could identify three cases:
>
> Case 1) It happens when the check point is on one of the test point planes, but the plane is close to the pole and cannot be traversed. In that case we hit the following part of the code:
> {code:java}
> } else if (testPointFixedYPlane.evaluateIsZero(x, y, z) ||
>     testPointFixedXPlane.evaluateIsZero(x, y, z) ||
>     testPointFixedZPlane.evaluateIsZero(x, y, z)) {
>   throw new IllegalArgumentException("Can't compute isWithin for specified point");
> } else {
> {code}
> It seems this check is unnecessary. If it is removed, a traversal is chosen and everything works as expected.
>
> Case 2) In this case a {{DualCrossingEdgeIterator}} is used with one of the planes being close to the pole but inside the current restrictions (it is a valid traversal). I think the problem happens when computing the intersection points for the above and below planes in {{computeInsideOutside}}:
> {code:java}
> final GeoPoint[] outsideOutsidePoints = testPointOutsidePlane.findIntersections(planetModel, travelOutsidePlane); //these don't add anything: , checkPointCutoffPlane, testPointCutoffPlane);
> final GeoPoint outsideOutsidePoint = pickProximate(outsideOutsidePoints);
> {code}
> The intersection above results in two points that are close to each other and close to the intersection point, and therefore {{pickProximate}} fails to choose the right one.
>
> Case 3) In this case a {{LinearCrossingEdgeIterator}} is used with the plane being close to the pole. When evaluating the intersection between an edge and the plane, we get two intersections inside the bounds (because they are very close together) instead of one. The result is too many crossings.
>
> After evaluating these errors, I think we should really prevent using planes that are near a pole. I attached a new version of {{GeoComplexPolygon}} that seems to solve these issues. The approach is the following: {{NEAR_EDGE_CUTOFF}} is not expressed as a linear distance but as a percentage, currently 0.75. If ab is the value of a semiaxis, the logic disallows traveling a plane if the distance between the plane and the center of the world is bigger than {{NEAR_EDGE_CUTOFF}} * pole.
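The proposed cutoff rule in the last paragraph can be sketched as a tiny predicate. This is an illustrative sketch only, not the actual GeoComplexPolygon code; the class and method names (`NearPoleCutoff`, `isTravelAllowed`) are hypothetical.

```java
// Hypothetical sketch of the proposed near-pole cutoff described above.
// Not the real GeoComplexPolygon implementation; names are illustrative.
public class NearPoleCutoff {
    // Per the description: a fraction of the semiaxis, not a linear distance.
    static final double NEAR_EDGE_CUTOFF = 0.75;

    /**
     * Disallow traveling a plane whose distance from the center of the world
     * exceeds NEAR_EDGE_CUTOFF times the relevant semiaxis.
     *
     * @param planeOffset distance between the travel plane and the world center
     * @param semiAxis    the semiaxis perpendicular to the plane (e.g. ab)
     */
    static boolean isTravelAllowed(double planeOffset, double semiAxis) {
        return Math.abs(planeOffset) <= NEAR_EDGE_CUTOFF * semiAxis;
    }

    public static void main(String[] args) {
        // On a unit sphere: a plane at z = 0.5 is fine; z = 0.9 is too near the pole.
        System.out.println(isTravelAllowed(0.5, 1.0)); // true
        System.out.println(isTravelAllowed(0.9, 1.0)); // false
    }
}
```

With a 0.75 cutoff, any travel plane in the top or bottom quarter of the world is rejected and a different choice of travel plane is required.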
[jira] [Commented] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454013#comment-16454013 ]

Karl Wright commented on LUCENE-8276:

Ok, that's been committed. The unrelated failure still occurs, however, so I will create a separate ticket for that and close this one.
[jira] [Resolved] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright resolved LUCENE-8276.
Resolution: Fixed
Fix Version/s: master (8.0), 7.4, 6.7
Re: lucene-solr:master: LUCENE-8276: Add new tests which demonstrate the issue. Only one is now failing.
Karl, this commit seems to be introducing new failures such as the following:

[junit4] Suite: org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=RandomGeoPolygonTest -Dtests.method=testCompareSmallPolygons -Dtests.seed=5A8225EF62C5FD14 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ro-RO -Dtests.timezone=Australia/Broken_Hill -Dtests.asserts=true -Dtests.file.encoding=UTF8
[junit4] FAILURE 0.05s J1 | RandomGeoPolygonTest.testCompareSmallPolygons {seed=[5A8225EF62C5FD14:EE055490131C897F]} <<<
[junit4] > Throwable #1: java.lang.AssertionError:
[junit4] > Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: {planetmodel=PlanetModel.SPHERE, points=[[lat=-3.4997017751302744E-6, lon=-1.9104957367233055E-6([X=0.99920512, Y=-1.9104957367104438E-6, Z=-3.4997017751231314E-6])], [lat=-7.240371388770696E-6, lon=-1.1605326690755646E-6([X=0.99731153, Y=-1.160532669044885E-6, Z=-7.240371388707437E-6])], [lat=2.419122758401074E-204, lon=0.0([X=1.0, Y=0.0, Z=2.419122758401074E-204])], [lat=1.2509574330309563E-8, lon=-1.624561463131613E-8([X=1.0, Y=-1.624561463131613E-8, Z=1.2509574330309566E-8])]], internalEdges={3}}, GeoConvexPolygon: {planetmodel=PlanetModel.SPHERE, points=[[lat=3.8278074582174365E-6, lon=-6.939157373153648E-7([X=0.99924334, Y=-6.939157373102256E-7, Z=3.8278074582080895E-6])], [lat=1.119200289477475E-6, lon=-1.388843706701E-6([X=0.99984093, Y=-1.3888437067053498E-6, Z=1.1192002894772413E-6])], [lat=-3.4997017751302744E-6, lon=-1.9104957367233055E-6([X=0.99920512, Y=-1.9104957367104438E-6, Z=-3.4997017751231314E-6])], [lat=1.2509574330309563E-8, lon=-1.624561463131613E-8([X=1.0, Y=-1.624561463131613E-8, Z=1.2509574330309566E-8])]], internalEdges={2, 3}}, GeoConvexPolygon: {planetmodel=PlanetModel.SPHERE, points=[[lat=3.8278074582174365E-6, lon=-6.939157373153648E-7([X=0.99924334, Y=-6.939157373102256E-7, Z=3.8278074582080895E-6])], [lat=1.2509574330309563E-8, lon=-1.624561463131613E-8([X=1.0, Y=-1.624561463131613E-8, Z=1.2509574330309566E-8])], [lat=1.674281893166E-6, lon=8.875002706076884E-7([X=0.99982045, Y=8.87500270606328E-7, Z=1.6742818931692173E-6])]], internalEdges={0}}]}
[junit4] > Large polygon: GeoComplexPolygon: {planetmodel=PlanetModel.SPHERE, number of shapes=1, address=bf46e323, testPoint=[lat=-5.866105640959045E-7, lon=-6.117904562604815E-7([X=0.6407, Y=-6.11790456260338E-7, Z=-5.866105640958707E-7])], testPointInSet=true, shapes={ {[lat=-7.240371388770696E-6, lon=-1.1605326690755646E-6([X=0.99731153, Y=-1.160532669044885E-6, Z=-7.240371388707437E-6])], [lat=2.419122758401074E-204, lon=0.0([X=1.0, Y=0.0, Z=2.419122758401074E-204])], [lat=1.2509574330309563E-8, lon=-1.624561463131613E-8([X=1.0, Y=-1.624561463131613E-8, Z=1.2509574330309566E-8])], [lat=1.674281893166E-6, lon=8.875002706076884E-7([X=0.99982045, Y=8.87500270606328E-7, Z=1.6742818931692173E-6])], [lat=3.8278074582174365E-6, lon=-6.939157373153648E-7([X=0.99924334, Y=-6.939157373102256E-7, Z=3.8278074582080895E-6])], [lat=1.119200289477475E-6, lon=-1.388843706701E-6([X=0.99984093, Y=-1.3888437067053498E-6, Z=1.1192002894772413E-6])], [lat=-3.4997017751302744E-6, lon=-1.9104957367233055E-6([X=0.99920512, Y=-1.9104957367104438E-6, Z=-3.4997017751231314E-6])]}}
[junit4] > Point: [lat=3.2042050955302614E-6, lon=-0.0([X=0.99948666, Y=-0.0, Z=3.2042050955247785E-6])]
[junit4] > WKT: POLYGON((0.0 1.3860552418042745E-202,-9.308051539703931E-7 7.167458126319312E-7,5.085001982253901E-5 9.592928619381434E-5,-3.975844308587909E-5 2.193172121445583E-4,-7.957488279759712E-5 6.412545301687931E-5,-1.0946334249198229E-4 -2.0051814126940701E-4,-6.649362392508249E-5 -4.148427226838355E-4,0.0 1.3860552418042745E-202))
[junit4] > WKT: POINT(-0.0 1.8358742866819673E-4)
[junit4] > normal polygon: false
[junit4] > large polygon: true
[junit4] > at __randomizedtesting.SeedInfo.seed([5A8225EF62C5FD14:EE055490131C897F]:0)
[junit4] > at org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.checkPoint(RandomGeoPolygonTest.java:225)
[junit4] > at org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:200)
[junit4] > at org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareSmallPolygons(RandomGeoPolygonTest.java:107)
[junit4] > at java.lang.Thread.run(Thread.java:748)
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=1950, maxMBSortInHeap=7.442893192380558, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@3381e2f8), locale=ro-RO,
[jira] [Commented] (LUCENE-8279) Improve CheckIndex on norms
[ https://issues.apache.org/jira/browse/LUCENE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453981#comment-16453981 ]

Adrien Grand commented on LUCENE-8279:

Here is a patch. There is one case where terms and norms may go out of sync: when a document fails indexing, e.g. because consuming the token stream triggers an exception. In such a case you could end up with terms for this document but no norm. Since IndexWriter immediately marks the document as deleted in that case, the new check only verifies that terms and norms agree on live documents.

> Improve CheckIndex on norms
> ---------------------------
>
> Key: LUCENE-8279
> URL: https://issues.apache.org/jira/browse/LUCENE-8279
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Adrien Grand
> Priority: Minor
> Attachments: LUCENE-8279.patch
>
> We should improve CheckIndex to make sure that terms and norms agree on which documents have a value on an indexed field.
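The kind of consistency check described above can be sketched abstractly. This is not the actual CheckIndex code; the class, method, and parameter names are hypothetical, and it only illustrates the "agree on live documents" idea.

```java
// Illustrative sketch (not real CheckIndex code): for an indexed field, verify
// that terms and norms agree on which *live* documents have a value. Deleted
// documents are skipped, because a document that failed indexing may have
// terms but no norm, and IndexWriter marks it as deleted.
import java.util.BitSet;

public class NormsAgreementCheck {
    /**
     * @param docsWithTerms docs that have at least one term for the field
     * @param docsWithNorms docs that have a norm for the field
     * @param liveDocs      docs not marked as deleted
     * @param maxDoc        one past the highest document id
     */
    static void checkAgreement(BitSet docsWithTerms, BitSet docsWithNorms,
                               BitSet liveDocs, int maxDoc) {
        for (int doc = 0; doc < maxDoc; doc++) {
            if (!liveDocs.get(doc)) {
                continue; // only live documents must agree
            }
            if (docsWithTerms.get(doc) != docsWithNorms.get(doc)) {
                throw new RuntimeException("terms and norms disagree on doc " + doc);
            }
        }
    }
}
```

A deleted document with terms but no norm passes the check; the same document, if live, would fail it.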
[jira] [Updated] (LUCENE-8279) Improve CheckIndex on norms
[ https://issues.apache.org/jira/browse/LUCENE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand updated LUCENE-8279:
Attachment: LUCENE-8279.patch
[jira] [Created] (LUCENE-8279) Improve CheckIndex on norms
Adrien Grand created LUCENE-8279:

Summary: Improve CheckIndex on norms
Key: LUCENE-8279
URL: https://issues.apache.org/jira/browse/LUCENE-8279
Project: Lucene - Core
Issue Type: Improvement
Reporter: Adrien Grand

We should improve CheckIndex to make sure that terms and norms agree on which documents have a value on an indexed field.
[jira] [Commented] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453971#comment-16453971 ]

ASF subversion and git services commented on LUCENE-8276:

Commit f367f3f08e627bfa58d0d3dd80fc211262b89e1b in lucene-solr's branch refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f367f3f ]

LUCENE-8276: Don't allow travel near a pole; require a different choice of travel plane
[jira] [Commented] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453970#comment-16453970 ]

ASF subversion and git services commented on LUCENE-8276:

Commit d8aa4afd4a4e2770965f82e899d3310ded2fe6ff in lucene-solr's branch refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8aa4af ]

LUCENE-8276: Don't allow travel near a pole; require a different choice of travel plane
[jira] [Commented] (LUCENE-8276) GeoComplexPolygon failures
[ https://issues.apache.org/jira/browse/LUCENE-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453968#comment-16453968 ]

ASF subversion and git services commented on LUCENE-8276:

Commit 9f506635e007ee53498f00e49f67669b099f0506 in lucene-solr's branch refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f50663 ]

LUCENE-8276: Don't allow travel near a pole; require a different choice of travel plane
[jira] [Commented] (SOLR-12275) {!filters} are cached wrong
[ https://issues.apache.org/jira/browse/SOLR-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453958#comment-16453958 ]

David Smiley commented on SOLR-12275:

The primary change looks good to me. There apparently was no use in FiltersQuery :)

In the test, you can use Java 7 try-with-resources for less verbose code. FWIW IntelliJ will both point this out and do it for you. And:
{code:java}
try {
  assertU(delI("12275"));
  assertU(commit());
} catch (Throwable t) {
  log.error("ignoring exception occurring in compensation routine", t);
}
{code}
can be replaced with simply {{clearIndex();}}

> {!filters} are cached wrong
> ---------------------------
>
> Key: SOLR-12275
> URL: https://issues.apache.org/jira/browse/SOLR-12275
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: query parsers
> Affects Versions: 7.3
> Reporter: Mikhail Khludnev
> Assignee: Mikhail Khludnev
> Priority: Major
> Fix For: 7.4, master (8.0)
> Attachments: SOLR-12275.patch, SOLR-12275.patch
>
> As it [was spotted|https://issues.apache.org/jira/browse/SOLR-9510?focusedCommentId=16452683=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16452683] by [~dsmi...@mac.com], {{\{!filters}}} behaves wrong when cached. As a workaround it is proposed to pass {{cache=false}} as a local parameter.
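The try-with-resources suggestion above is a plain Java 7 language feature; here is a generic illustration of it, unrelated to the Solr test code (class and method names are made up for the example).

```java
// Generic Java 7 try-with-resources demo: the resource declared in the
// parentheses is closed automatically when the block exits, even on an
// exception, so no explicit finally/close boilerplate is needed.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class TryWithResourcesDemo {
    static String firstLine(String text) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine(); // reader.close() is called for us
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("hello\nworld")); // prints "hello"
    }
}
```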