[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 729 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/729/ 2 tests failed. REGRESSION: org.apache.solr.search.TestSearcherReuse.test Error Message: expected same:Searcher@5fa7335b[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C2) Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C1)))} was not:Searcher@3b897018[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C2) Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C1)))} Stack Trace: java.lang.AssertionError: expected same:Searcher@5fa7335b[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C2) Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C1)))} was not:Searcher@3b897018[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C2) Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C1)))} at __randomizedtesting.SeedInfo.seed([390C9E12426711EC:B158A1C8EC9B7C14]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotSame(Assert.java:641) at org.junit.Assert.assertSame(Assert.java:580) at org.junit.Assert.assertSame(Assert.java:593) at org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247) at org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13291 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13291/ Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.TestRebalanceLeaders.test Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:32800, https://127.0.0.1:56715, https://127.0.0.1:60069, https://127.0.0.1:56476, https://127.0.0.1:34373] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:32800, https://127.0.0.1:56715, https://127.0.0.1:60069, https://127.0.0.1:56476, https://127.0.0.1:34373] at __randomizedtesting.SeedInfo.seed([85BB2B11D6276227:DEF14CB78DB0FDF]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:280) at org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:107) at org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped
[ https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611649#comment-14611649 ] Adrien Grand commented on LUCENE-6639: -- One issue I have with putting the call in createWeight is that you might sometimes only pull a Weight in order to extract terms (eg. for highlighting or computing distributed term frequencies), so incrementing the counter here would not work. That said, you made good arguments against the current logic. In particular it's true that reusing weights for multiple collections should not be common so maybe we can just call policy.onUse on the first time that Weight.scorer is called? LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped Key: LUCENE-6639 URL: https://issues.apache.org/jira/browse/LUCENE-6639 Project: Lucene - Core Issue Type: Bug Affects Versions: 5.3 Reporter: Terry Smith Priority: Minor Attachments: LUCENE-6639.patch The method {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}} starts with {code} if (context.ord == 0) { policy.onUse(getQuery()); } {code} which can result in a missed call for queries that return a null scorer for the first segment. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
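The idea floated above — firing policy.onUse on the first scorer() call that actually happens, instead of only when segment ord 0 is visited — might be sketched like this. All names and the logging-list harness are illustrative stand-ins, not the real LRUQueryCache code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only: a caching wrapper that reports query usage to the
// cache policy on the first scorer() call actually made, so the count is not
// lost when the caller never asks for a scorer on the first segment (ord == 0).
class CachingWrapperWeightSketch {
    private final AtomicBoolean used = new AtomicBoolean(false);
    final List<String> policyLog = new ArrayList<>(); // stands in for policy.onUse()
    private final String query = "field:foo";         // hypothetical cached query

    // Simulated per-segment scorer; returns null when the segment has no matches.
    String scorer(int ord, boolean hasMatches) {
        if (used.compareAndSet(false, true)) {
            policyLog.add("onUse(" + query + ")"); // fires exactly once, on first call
        }
        return hasMatches ? "scorer@seg" + ord : null;
    }

    public static void main(String[] args) {
        CachingWrapperWeightSketch w = new CachingWrapperWeightSketch();
        // Even if the first call made is for a later segment (ord 2), onUse fires.
        w.scorer(2, true);
        w.scorer(3, false);
        System.out.println(w.policyLog); // prints [onUse(field:foo)]
    }
}
```

Tying the call to the first actual scorer() invocation would also sidestep the createWeight concern above, since weights pulled only to extract terms never call scorer().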
Fwd: G1 now the default in JDK 9
FYI for those not on the mailing list. Dawid -- Forwarded message -- From: Stefan Johansson stefan.johans...@oracle.com Date: Wed, Jul 1, 2015 at 9:14 AM Subject: G1 now the default in JDK 9 To: jdk9-...@openjdk.java.net jdk9-...@openjdk.java.net Hi all, A short heads up. The change to make G1 the default garbage collector has now made its way to jdk9/dev [1] and should soon be part of a JDK 9 early access build. Thanks, Stefan [1] http://hg.openjdk.java.net/jdk9/dev/hotspot/rev/d472d1331479 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)
[ https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611625#comment-14611625 ] Philip Willoughby commented on SOLR-247: [~erickerickson] Yes, we could do that. We don't use the schema browser on this core because it crashes or locks up the browser. The underlying /admin/luke endpoint takes over 12 seconds to respond (with 20280 known fields already this is not surprising) so we wouldn't be able to meet our 100ms SLA without re-architecting our application so that it's no longer stateless, which is a big step we aren't willing to take. We are working around this by using both indexing approaches I outlined above and mixing the facets together correctly in application logic. Allow facet.field=* to facet on all fields (without knowing what they are) -- Key: SOLR-247 URL: https://issues.apache.org/jira/browse/SOLR-247 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Priority: Minor Labels: beginners, newdev Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, SOLR-247.patch, SOLR-247.patch I don't know if this is a good idea to include -- it is potentially a bad idea to use it, but that can be ok. This came out of trying to use faceting for the LukeRequestHandler top term collecting. http://www.nabble.com/Luke-request-handler-issue-tf3762155.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4986 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4986/ Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: org.apache.solr.cloud.AliasIntegrationTest.test Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([66D5C4548D0D066E]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest Error Message: Suite timeout exceeded (= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (= 720 msec). at __randomizedtesting.SeedInfo.seed([66D5C4548D0D066E]:0) Build Log: [...truncated 11030 lines...] [junit4] Suite: org.apache.solr.cloud.AliasIntegrationTest [junit4] 2 Creating dataDir: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.AliasIntegrationTest_66D5C4548D0D066E-001\init-core-data-001 [junit4] 2 472696 INFO (SUITE-AliasIntegrationTest-seed#[66D5C4548D0D066E]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) [junit4] 2 472696 INFO (SUITE-AliasIntegrationTest-seed#[66D5C4548D0D066E]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /oa/g [junit4] 2 472699 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2 472701 INFO (Thread-1572) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2 472701 INFO (Thread-1572) [] o.a.s.c.ZkTestServer Starting server [junit4] 2 472787 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.ZkTestServer start zk server on port:55080 [junit4] 2 472787 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 472789 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to 
ZooKeeper [junit4] 2 472795 INFO (zkCallback-444-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@329509a4 name:ZooKeeperConnection Watcher:127.0.0.1:55080 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 472795 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 472795 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2 472795 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2 472804 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] o.a.z.s.NIOServerCnxn caught end of stream exception [junit4] 2 EndOfStreamException: Unable to read additional data from client sessionid 0x14e4d6eb6d8, likely client has closed socket [junit4] 2at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [junit4] 2at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [junit4] 2at java.lang.Thread.run(Thread.java:745) [junit4] 2 472812 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 472814 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 472819 INFO (zkCallback-445-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@7562ba7b name:ZooKeeperConnection Watcher:127.0.0.1:55080/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 472821 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 472822 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient 
Using default ZkACLProvider [junit4] 2 472822 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient makePath: /collections/collection1 [junit4] 2 472824 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards [junit4] 2 472921 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient makePath: /collections/control_collection [junit4] 2 473363 INFO (TEST-AliasIntegrationTest.test-seed#[66D5C4548D0D066E]) [] o.a.s.c.c.SolrZkClient makePath: /collections/control_collection/shards [junit4] 2 473366
[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611757#comment-14611757 ] Noble Paul commented on SOLR-5750: -- 2 design suggestions 1) Move the operations to {{CollectionsHandler}} . When the process starts add the backup name and the node that is processing the task to ZK 2) Do not restore all the replicas at once . Just create one replica of a shard first and then do ADDREPLICA till you have enough replicas Backup/Restore API for SolrCloud Key: SOLR-5750 URL: https://issues.apache.org/jira/browse/SOLR-5750 Project: Solr Issue Type: Sub-task Components: SolrCloud Reporter: Shalin Shekhar Mangar Assignee: Varun Thacker Fix For: 5.2, Trunk Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch We should have an easy way to do backups and restores in SolrCloud. The ReplicationHandler supports a backup command which can create snapshots of the index but that is too little. The command should be able to backup: # Snapshots of all indexes or indexes from the leader or the shards # Config set # Cluster state # Cluster properties # Aliases # Overseer work queue? A restore should be able to completely restore the cloud i.e. no manual steps required other than bringing nodes back up or setting up a new cloud cluster. SOLR-5340 will be a part of this issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
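Suggestion 2 above — restore a single replica per shard first, then grow each shard via ADDREPLICA until the desired replication factor is reached — can be sketched as a two-phase plan. The operation strings below are illustrative pseudo-operations, not the actual Collections API calls:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the staged-restore idea: the expensive copy from the
// backup location happens once per shard, and the remaining replicas are
// created by intra-cluster replication, which is cheaper.
class StagedRestoreSketch {
    static List<String> restore(List<String> shards, int replicationFactor) {
        List<String> ops = new ArrayList<>();
        // Phase 1: restore exactly one replica per shard from the backup.
        for (String shard : shards) {
            ops.add("RESTORE " + shard + " replica1");
        }
        // Phase 2: add replicas until each shard reaches the target factor.
        for (String shard : shards) {
            for (int r = 2; r <= replicationFactor; r++) {
                ops.add("ADDREPLICA " + shard + " replica" + r);
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        List<String> ops = restore(Arrays.asList("shard1", "shard2"), 3);
        System.out.println(ops.size() + " operations, starting with: " + ops.get(0));
    }
}
```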
[jira] [Updated] (LUCENE-6638) Factor graph flattening out of SynonymFilter
[ https://issues.apache.org/jira/browse/LUCENE-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6638: --- Attachment: LUCENE-6638.patch New patch, now working with holes e.g. from a StopFilter. I also added a test case to simulate the StopFilter after SynFilter case. I think it's close ... another couple nocommits though. Factor graph flattening out of SynonymFilter Key: LUCENE-6638 URL: https://issues.apache.org/jira/browse/LUCENE-6638 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Attachments: LUCENE-6638.patch, LUCENE-6638.patch Spinoff from LUCENE-6582. SynonymFilter is very hairy, and has known nearly-impossible-to-fix bugs: it produces the wrong graph, both accepting too many phrases and not enough phrases, because it never creates new positions. This makes improvements like LUCENE-6582, to fix some of its bugs, unnecessarily hard. I'd like to pull out the graph flattening into its own token filter, and I think I have a starting patch that works. I started with the sausagizer stage on the branch from LUCENE-5012, but changed the approach so that it should not have so many adversarial cases. I think this should make SynonymFilter quite a bit simpler, hopefully to the point where we can just fix its bugs already. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6334) Fast Vector Highlighter does not properly span neighboring term offsets
[ https://issues.apache.org/jira/browse/LUCENE-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611667#comment-14611667 ] Michael McCandless commented on LUCENE-6334: [~vijayk] yes please!

Fast Vector Highlighter does not properly span neighboring term offsets --- Key: LUCENE-6334 URL: https://issues.apache.org/jira/browse/LUCENE-6334 Project: Lucene - Core Issue Type: Bug Components: core/termvectors, modules/highlighter Reporter: Chris Earle Labels: easyfix

If you are using term vectors for fast vector highlighting along with a multivalue field while matching a phrase that crosses two elements, then it will not properly highlight even though it _properly_ finds the correct values to highlight. A good example of this is when matching source code, where you might have lines like:

{code}
one two three five two three four five six five six seven eight nine eight nine eight nine eight nine eight nine eight nine eight nine ten eleven twelve thirteen
{code}

Matching the phrase "four five" will return

{code}
two three four five six five six seven eight nine eight nine eight nine eight nine eight eight nine ten eleven
{code}

However, it does not properly highlight "four" (on the first line) and "five" (on the second line) _and_ it is returning too many lines, but not all of them. The problem lies in the [BaseFragmentsBuilder at line 269|https://github.com/apache/lucene-solr/blob/trunk/lucene/highlighter/src/java/org/apache/lucene/search/vectorhighlight/BaseFragmentsBuilder.java#L269] because it is not checking for cross-coverage. Here is a possible solution:

{code}
boolean started = toffs.getStartOffset() >= fieldStart;
boolean ended = toffs.getEndOffset() <= fieldEnd;
// existing behavior:
if (started && ended) {
  toffsList.add(toffs);
  toffsIterator.remove();
} else if (started) {
  toffsList.add(new Toffs(toffs.getStartOffset(), fieldEnd));
  // toffsIterator.remove(); // is this necessary?
} else if (ended) {
  toffsList.add(new Toffs(fieldStart, toffs.getEndOffset()));
  // toffsIterator.remove(); // is this necessary?
} else if (toffs.getEndOffset() > fieldEnd) {
  // ie the toffs spans the whole field
  toffsList.add(new Toffs(fieldStart, fieldEnd));
  // toffsIterator.remove(); // is this necessary?
}
{code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611671#comment-14611671 ] Michael McCandless commented on LUCENE-6653: +1, thanks for cleaning up all those dup'd binary token streams [~thetaphi]!

Cleanup TermToBytesRefAttribute --- Key: LUCENE-6653 URL: https://issues.apache.org/jira/browse/LUCENE-6653 Project: Lucene - Core Issue Type: Improvement Components: modules/analysis Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6653.patch

While working on LUCENE-6652, I found that many tests had wrongly implemented TermToBytesRefAttribute. In addition, the whole concept dating back to Lucene 4.0 was no longer correct: - We don't return the hash code anymore; it is calculated by BytesRefHash. - The interface is horrible to use. It tends to reuse the BytesRef instance, but the whole thing is not correct. Instead we should remove the fillBytesRef() method from the interface and let getBytesRef() populate and return the BytesRef. It does not matter whether the attribute reuses the BytesRef or returns a new one; it just gets consumed like a standard CharTermAttribute: you get a BytesRef and can use it until you call incrementToken(). As TermToBytesRefAttribute is marked experimental, I see no reason why we should not change the semantics to be easier to understand and behave like all other attributes. I will add a note to the backwards-incompatible changes in Lucene 5.3.
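The proposed consumer workflow — getBytesRef() populates and returns the (possibly reused) instance in one step, with no separate fillBytesRef() call — could look roughly like the toy sketch below. SimpleBytesRef and the class name are illustrative stand-ins for the real Lucene types:

```java
// Toy sketch of the proposed TermToBytesRefAttribute semantics: the attribute
// populates its (reused) bytes holder on demand inside getBytesRef(), so the
// consumer simply calls getBytesRef() and uses the result until the next token.
class ToyTermToBytesRefAttribute {
    // Stand-in for BytesRef: a mutable byte[] holder.
    static class SimpleBytesRef {
        byte[] bytes = new byte[0];
        @Override public String toString() {
            return new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
        }
    }

    final SimpleBytesRef ref = new SimpleBytesRef(); // reused across calls
    private String term = "";

    void setTerm(String term) { this.term = term; }

    // Populate and return in one step -- no separate fillBytesRef().
    SimpleBytesRef getBytesRef() {
        ref.bytes = term.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        return ref;
    }

    public static void main(String[] args) {
        ToyTermToBytesRefAttribute att = new ToyTermToBytesRefAttribute();
        att.setTerm("lucene");
        // Consumer just asks for the bytes; valid until the next increment.
        System.out.println(att.getBytesRef()); // prints lucene
    }
}
```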
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611735#comment-14611735 ] Uwe Schindler commented on LUCENE-6653: --- Mike, are you also fine with the changes to the TermToBytesRefAttribute? I would backport those and mention the changed workflow in the backwards-incompatible changes. People will get a compile error in any case if they define their own attributes using this interface, but it will certainly not affect many users (maybe only those who wanted to get binary terms, which is now easy) :-) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6582) SynonymFilter should generate a correct (or, at least, better) graph
[ https://issues.apache.org/jira/browse/LUCENE-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611746#comment-14611746 ] Michael McCandless commented on LUCENE-6582: Thanks [~ianribas]! I've opened LUCENE-6638 to create a graph flattener TokenFilter, and it seems to work well ... I'm going to now try to simplify SynFilter by removing the hairy graph flattening it must do today, and have it create correct graph outputs. I think with the combination of these two we can then have 100% accurate synonym graphs for accurate query-time searches, but also have the sausagized version that indexing needs to match the graph corruption we do today. SynonymFilter should generate a correct (or, at least, better) graph Key: LUCENE-6582 URL: https://issues.apache.org/jira/browse/LUCENE-6582 Project: Lucene - Core Issue Type: Bug Reporter: Ian Ribas Attachments: LUCENE-6582.patch, LUCENE-6582.patch, LUCENE-6582.patch, after.png, after2.png, after3.png, before.png Some time ago, I had a problem with synonyms and phrase type queries (actually, it was elasticsearch and I was using a match query with multiple terms and the and operator, as better explained here: https://github.com/elastic/elasticsearch/issues/10394). That issue led to some work on Lucene: LUCENE-6400 (where I helped a little with tests) and LUCENE-6401. This issue is also related to LUCENE-3843. Starting from the discussion on LUCENE-6400, I'm attempting to implement a solution. Here is a patch with a first step - the implementation to fix SynFilter to be able to 'make positions' (as was mentioned on the [issue|https://issues.apache.org/jira/browse/LUCENE-6400?focusedCommentId=14498554page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14498554]). In this way, the synonym filter generates a correct (or, at least, better) graph. 
As the synonym matching is greedy, I only had to worry about fixing the position length of the rules of the current match; no future or past synonyms would span over this match (please correct me if I'm wrong!). It did require more buffering, twice as much. The new behavior I added is not active by default; a new parameter has to be passed in a new constructor for {{SynonymFilter}}. The changes I made do change the token stream generated by the synonym filter, and I thought it would be better to let that be a voluntary decision for now. I did some refactoring on the code, but mostly on what I had to change for my implementation, so that the patch was not too hard to read. I created specific unit tests for the new implementation ({{TestMultiWordSynonymFilter}}) that should show how things will be with the new behavior. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611863#comment-14611863 ] Robert Muir commented on LUCENE-6653: - +1, this patch is great! -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7748) Fix bin/solr to work on IBM J9
[ https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shai Erera updated SOLR-7748: - Attachment: solr-7748.patch The patch fixes bin/solr.cmd, since that is what I tested on Windows. If there are no objections to the change, I will update bin/solr too. Fix bin/solr to work on IBM J9 -- Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk Attachments: solr-7748.patch bin/solr doesn't work on IBM J9 because it sets the -Xloggc flag, while J9 supports -Xverbosegclog instead. This prevents using bin/solr to start Solr on J9.
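A fix along these lines would pick the GC-log flag based on the JVM vendor. The sketch below shows the general shape such a check could take in bin/solr; the function name and the J9 detection string are illustrative, not the actual patch:

```shell
# Choose the GC log flag for the detected JVM: HotSpot uses -Xloggc:<file>,
# IBM J9 uses -Xverbosegclog:<file>. The version string is normally obtained
# from "$JAVA" -version 2>&1.
choose_gc_log_flag() {
  java_version_output="$1"
  gc_log_file="$2"
  case "$java_version_output" in
    *"IBM J9"*) echo "-Xverbosegclog:${gc_log_file}" ;;  # J9 syntax
    *)          echo "-Xloggc:${gc_log_file}" ;;         # HotSpot syntax
  esac
}

choose_gc_log_flag "IBM J9 VM (build 2.8)" "solr_gc.log"
choose_gc_log_flag "OpenJDK 64-Bit Server VM" "solr_gc.log"
```

Keeping the vendor probe in one function makes it easy to apply the same logic to both bin/solr and bin/solr.cmd.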
RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure!
Looks like the fix for this one is not in b70: https://bugs.openjdk.java.net/browse/JDK-8086046 If it happens again (and I am sure it will), I'll revert to Java 9 b60 Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Uwe Schindler [mailto:u...@thetaphi.de] Sent: Thursday, July 02, 2015 3:22 PM To: dev@lucene.apache.org Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure! Ok, Oh, looks like a new bug in Java 9 b70... Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] Sent: Thursday, July 02, 2015 3:21 PM To: dev@lucene.apache.org Subject: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure! Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13294/ Java: 64bit/jdk1.9.0-ea-b70 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed.
FAILED: org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom Error Message: some thread(s) failed Stack Trace: java.lang.RuntimeException: some thread(s) failed at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:531) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:433) at org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom(TestGraphTokenizers.java:366) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at
[jira] [Commented] (LUCENE-6652) Remove tons of BytesRefAttribute/BytesRefAttributeImpl duplicates in tests
[ https://issues.apache.org/jira/browse/LUCENE-6652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611998#comment-14611998 ] ASF subversion and git services commented on LUCENE-6652: - Commit 1688830 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1688830 ] LUCENE-6653, LUCENE-6652: Refactor TermToBytesRefAttribute; add oal.analysis.tokenattributes.BytesTermAttribute; remove code duplication in tests Remove tons of BytesRefAttribute/BytesRefAttributeImpl duplicates in tests -- Key: LUCENE-6652 URL: https://issues.apache.org/jira/browse/LUCENE-6652 Project: Lucene - Core Issue Type: Test Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk While implementing LUCENE-6651, I found a lot of duplicates of the same class (in different variants) which is used by tests to generate binary terms. As we now have support for binary terms in the Field class itself, we should remove all of those attributes across tests.
RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure!
Ok, Oh, looks like a new bug in Java 9 b70... Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] Sent: Thursday, July 02, 2015 3:21 PM To: dev@lucene.apache.org Subject: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure! Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13294/ Java: 64bit/jdk1.9.0-ea-b70 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom Error Message: some thread(s) failed Stack Trace: java.lang.RuntimeException: some thread(s) failed at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:531) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:433) at org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom(TestGraphTokenizers.java:366) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.lucene.analysis.TestGraphTokenizers.testDoubleMockGraphTokenFilterRandom Error Message: term
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13294/ Java: 64bit/jdk1.9.0-ea-b70 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom Error Message: some thread(s) failed Stack Trace: java.lang.RuntimeException: some thread(s) failed at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:531) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:433) at org.apache.lucene.analysis.TestGraphTokenizers.testMockGraphTokenFilterBeforeHolesRandom(TestGraphTokenizers.java:366) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.lucene.analysis.TestGraphTokenizers.testDoubleMockGraphTokenFilterRandom Error Message: term 0 expected:[ ] but was:[��] Stack Trace: org.junit.ComparisonFailure: term 0 expected:[ ] but
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611955#comment-14611955 ] Michael McCandless commented on LUCENE-6653: bq. Mike, are you also fine with the changes to the TermToBytesRefAttribute? Yes, big +1 to the new simpler API and to backport hard break to 5.x.
[jira] [Created] (LUCENE-6654) KNearestNeighborClassifier not taking in consideration Class ranking
Alessandro Benedetti created LUCENE-6654: Summary: KNearestNeighborClassifier not taking in consideration Class ranking Key: LUCENE-6654 URL: https://issues.apache.org/jira/browse/LUCENE-6654 Project: Lucene - Core Issue Type: Improvement Components: modules/classification Affects Versions: 5.2.1 Reporter: Alessandro Benedetti Priority: Minor Currently the KNN Classifier assigns the score for a ClassificationResult based only on the frequency of the class in the top K results. This is conceptually a simplification; the ranking must actually play a part. If not, this can happen: Top 4 1) Class1 2) Class1 3) Class2 4) Class2 As a result of this top 4, both classes will have the same score. But the expected result is that Class1 has a better score, as the MLT scores the documents accordingly.
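The change suggested by the issue can be sketched in a few lines: instead of counting class frequency in the top-k neighbors, sum each neighbor's retrieval score, so a class whose documents rank higher wins the tie. This is an illustrative sketch, not Lucene's classifier; the Neighbor record and scores are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class KnnScoring {
    // A retrieved neighbor: its class label and its MLT retrieval score.
    record Neighbor(String clazz, double score) {}

    // Score-weighted voting: each neighbor contributes its retrieval score,
    // not just a count of 1, to its class.
    static Map<String, Double> classScores(List<Neighbor> topK) {
        Map<String, Double> scores = new LinkedHashMap<>();
        for (Neighbor n : topK) {
            scores.merge(n.clazz(), n.score(), Double::sum);
        }
        return scores;
    }

    public static void main(String[] args) {
        // The top 4 from the issue: Class1 ranked 1st/2nd, Class2 ranked 3rd/4th.
        List<Neighbor> topK = List.of(
            new Neighbor("Class1", 4.0),
            new Neighbor("Class1", 3.0),
            new Neighbor("Class2", 2.0),
            new Neighbor("Class2", 1.0));
        Map<String, Double> s = classScores(topK);
        // Frequency alone would tie 2-2; score-weighted voting prefers Class1.
        assert s.get("Class1") > s.get("Class2");
    }
}
```

Other weighting schemes (e.g. 1/rank instead of the raw score) would also break the tie; the essential point is that rank information flows into the class score.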
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611997#comment-14611997 ] ASF subversion and git services commented on LUCENE-6653: - Commit 1688830 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1688830 ] LUCENE-6653, LUCENE-6652: Refactor TermToBytesRefAttribute; add oal.analysis.tokenattributes.BytesTermAttribute; remove code duplication in tests
[jira] [Created] (SOLR-7748) Fix bin/solr to work on IBM J9
Shai Erera created SOLR-7748: Summary: Fix bin/solr to work on IBM J9 Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk bin/solr doesn't work on IBM J9 because it sets -Xloggc flag, while J9 supports -Xverbosegclog. This prevents using bin/solr to start it on J9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Policeman Jenkins...
...is now running Java 9 b70 (we left out many in-between releases because of the bugs Robert found) and Java 8u60 b21. Of course it also runs the mainline 8u45 and 7u80. Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de
[jira] [Updated] (LUCENE-6646) make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free
[ https://issues.apache.org/jira/browse/LUCENE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated LUCENE-6646: Summary: make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free (was: support SortingMergePolicy-free use of EarlyTerminatingSortingCollector) make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free - Key: LUCENE-6646 URL: https://issues.apache.org/jira/browse/LUCENE-6646 Project: Lucene - Core Issue Type: Wish Reporter: Christine Poerschke Priority: Minor motivation and summary of proposed changes to follow via github pull request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request: LUCENE-6646: SortingMergePolicy-free Ear...
GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/178 LUCENE-6646: SortingMergePolicy-free EarlyTerminatingSortingCollector constructor for https://issues.apache.org/jira/i#browse/LUCENE-6646 You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-etsc-lucene Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/178.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #178 commit 7881a5931cf8db8a76f1aee9ca747f6b8de2a63a Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-6646: make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. * outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * made SortingMergePolicy's isSorted into a static function * made EarlyTerminatingSortingCollector's constructor SortingMergePolicy-free, class summary docs updated to match * adjusted EarlyTerminatingSortingCollector.canEarlyTerminate to be SortingMergePolicy-free also * corresponding changes to TestEarlyTerminatingSortingCollector * adjusted AnalyzingInfixSuggester's EarlyTerminatingSortingCollector constructor call --- If your project is set up for it, you can 
reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
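The adjusted canEarlyTerminate condition mentioned in the PR summary boils down to a prefix check: a segment's collection can stop early when the search sort is a prefix of the sort the merge policy used to sort the index. The sketch below models sorts as lists of field names; it is an illustration of the condition, not the Lucene implementation:

```java
import java.util.List;

public class EarlyTerminateCheck {
    // True when every field of the search sort matches the leading fields of
    // the index (merge) sort, i.e. documents already arrive in search order.
    static boolean canEarlyTerminate(List<String> searchSort, List<String> indexSort) {
        if (searchSort.size() > indexSort.size()) {
            return false; // search sort asks for more criteria than the index provides
        }
        return indexSort.subList(0, searchSort.size()).equals(searchSort);
    }

    public static void main(String[] args) {
        assert canEarlyTerminate(List.of("timestamp"), List.of("timestamp", "id"));
        assert !canEarlyTerminate(List.of("id"), List.of("timestamp", "id"));
    }
}
```

Making the check a static function of the two sorts (rather than a method needing a SortingMergePolicy instance) is what lets the collector be constructed without a reference to the merge policy.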
[jira] [Commented] (LUCENE-6646) make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free
[ https://issues.apache.org/jira/browse/LUCENE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611907#comment-14611907 ] ASF GitHub Bot commented on LUCENE-6646: GitHub user cpoerschke opened a pull request: https://github.com/apache/lucene-solr/pull/178 LUCENE-6646: SortingMergePolicy-free EarlyTerminatingSortingCollector constructor for https://issues.apache.org/jira/i#browse/LUCENE-6646 You can merge this pull request into a Git repository by running: $ git pull https://github.com/bloomberg/lucene-solr trunk-etsc-lucene Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/178.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #178 commit 7881a5931cf8db8a76f1aee9ca747f6b8de2a63a Author: Christine Poerschke cpoersc...@bloomberg.net Date: 2015-06-29T15:02:44Z LUCENE-6646: make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free motivation: * SOLR-5730 to make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr. 
* outline of draft SOLR-5730 changes: + SolrIndexWriter constructor calls SolrIndexConfig.toIndexWriterConfig (passing the result to its lucene.IndexWriter super class) + SolrIndexConfig.toIndexWriterConfig(SolrCore core) calls SolrIndexConfig.buildMergePolicy + SolrIndexConfig.buildMergePolicy(IndexSchema schema) calls the SortingMergePolicy constructor (using the IndexSchema's mergeSortSpec) + SolrIndexSearcher.buildAndRunCollectorChain calls the EarlyTerminatingSortingCollector constructor (using the IndexSchema's mergeSortSpec) summary of changes: * made SortingMergePolicy's isSorted into a static function * made EarlyTerminatingSortingCollector's constructor SortingMergePolicy-free, class summary docs updated to match * adjusted EarlyTerminatingSortingCollector.canEarlyTerminate to be SortingMergePolicy-free also * corresponding changes to TestEarlyTerminatingSortingCollector * adjusted AnalyzingInfixSuggester's EarlyTerminatingSortingCollector constructor call make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free - Key: LUCENE-6646 URL: https://issues.apache.org/jira/browse/LUCENE-6646 Project: Lucene - Core Issue Type: Wish Reporter: Christine Poerschke Priority: Minor motivation and summary of proposed changes to follow via github pull request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13294 - Failure!
On 02.07.2015 15:27, Uwe Schindler wrote: Looks like the fix for this one is not in b70: https://bugs.openjdk.java.net/browse/JDK-8086046 Yeah - it's marked as resolved on master (i.e. you can pull the code from master and do your own build with the fix to check it out). Once a new build is cut from master, it will be marked as resolved in that build. cheers, dalibor topic -- http://www.oracle.com Dalibor Topic | Principal Product Manager Phone: +494089091214 | Mobile: +491737185961 ORACLE Deutschland B.V. Co. KG | Kühnehöfe 5 | 22761 Hamburg ORACLE Deutschland B.V. Co. KG Hauptverwaltung: Riesstr. 25, D-80992 München Registergericht: Amtsgericht München, HRA 95603 Komplementärin: ORACLE Deutschland Verwaltung B.V. Hertogswetering 163/167, 3543 AS Utrecht, Niederlande Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697 Geschäftsführer: Alexander van der Ven, Astrid Kepper, Val Maher http://www.oracle.com/commitment Oracle is committed to developing practices and products that help protect the environment - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 13295 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13295/ Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test Error Message: Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([6323362784A2DADD:EB7709FD2A5EB725]:0) at org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:118) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 9374 lines...] [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent [junit4] 2 Creating dataDir:
[jira] [Commented] (LUCENE-6652) Remove tons of BytesRefAttribute/BytesRefAttributeImpl duplicates in tests
[ https://issues.apache.org/jira/browse/LUCENE-6652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612052#comment-14612052 ] ASF subversion and git services commented on LUCENE-6652: - Commit 1688845 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688845 ] Merged revision(s) 1688830 from lucene/dev/trunk: LUCENE-6653, LUCENE-6652: Refactor TermToBytesRefAttribute; add oal.analysis.tokenattributes.BytesTermAttribute; remove code duplication in tests Remove tons of BytesRefAttribute/BytesRefAttributeImpl duplicates in tests -- Key: LUCENE-6652 URL: https://issues.apache.org/jira/browse/LUCENE-6652 Project: Lucene - Core Issue Type: Test Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk While implementing LUCENE-6651, I found a lot of duplicates of the same class (in different variants) which is used by tests to generate binary terms. As we now have support for binary terms in the Field class itself, we should remove all of those attributes across tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612059#comment-14612059 ] Uwe Schindler commented on LUCENE-6651: --- So coming back to this one. After cleanup of TermToBytesRefAttribute and test code duplication (LUCENE-6652, LUCENE-6653), I can fix the attributes to implement reflectWith() and make the base method abstract in trunk. Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-MethodHandles.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. This breaks backwards for implementors of AttributeImpls - The second idea is to remove the whole reflectWith default impl and make the method abstract. 
This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
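The MethodHandles variant described above can be sketched roughly as follows. This is a hypothetical illustration, not Lucene's actual AttributeImpl code: the class names, reflectAsMap() and the map-based output are all invented here. The key mechanism is real, though: a Lookup created inside the subclass grants the base class access to the subclass's private fields without Field#setAccessible(true), because access checking is performed against the lookup class.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReflectSketch {

  // Stand-in for AttributeImpl (illustrative only, not Lucene's real class).
  abstract static class BaseAttr {
    private final MethodHandles.Lookup lookup;

    // Subclasses call super(MethodHandles.lookup()) so the base class gains
    // private-field access through the subclass's own lookup object.
    protected BaseAttr(MethodHandles.Lookup lookup) {
      this.lookup = lookup;
    }

    // Reads all declared instance fields without setAccessible(true): the
    // access check is done against the lookup class (the subclass itself).
    Map<String, Object> reflectAsMap() {
      Map<String, Object> out = new LinkedHashMap<>();
      try {
        for (Field f : getClass().getDeclaredFields()) {
          if (Modifier.isStatic(f.getModifiers()) || f.isSynthetic()) continue;
          MethodHandle getter = lookup.unreflectGetter(f);
          out.put(f.getName(), getter.invoke(this));
        }
      } catch (Throwable t) {
        throw new AssertionError(t);
      }
      return out;
    }
  }

  static final class MyTermAttr extends BaseAttr {
    private String term = "foo";
    private int frequency = 3;

    MyTermAttr() {
      super(MethodHandles.lookup()); // Lookup created inside the subclass
    }
  }

  public static void main(String[] args) {
    System.out.println(new MyTermAttr().reflectAsMap());
  }
}
```

This also shows the backwards-compatibility cost mentioned above: every existing AttributeImpl subclass would have to add the super(MethodHandles.lookup()) call to keep automatic reflection.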
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: (was: FiniteStringsIterator.patch) Optimized iteration of finite strings - Key: LUCENE-6365 URL: https://issues.apache.org/jira/browse/LUCENE-6365 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.0 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Attachments: FiniteStrings_reuse.patch Replaced Operations.getFiniteStrings() by an optimized FiniteStringIterator. Benefits: Avoid huge hash set of finite strings. Avoid massive object/array creation during processing. Downside: Iteration order changed, so when iterating with a limit, the result may differ slightly. Old: emit current node, if accept / recurse. New: recurse / emit current node, if accept. The old method Operations.getFiniteStrings() still exists, because it eases the tests. It is now implemented by use of the new FiniteStringIterator. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
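The benefit described above can be illustrated with a toy, non-Lucene sketch. Node, edge() and acceptedStrings() are invented names for illustration; the real patch works on Lucene's Automaton and emits IntsRef, not String. The point is the shape of the change: instead of materializing every accepted string into one huge collection, a depth-first iterator yields strings one at a time and reuses a single buffer for the current path.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class FiniteStringsSketch {

  // One state of a tiny acyclic toy "automaton" (illustrative only).
  static final class Node {
    final boolean accept;
    final List<Character> labels = new ArrayList<>();
    final List<Node> targets = new ArrayList<>();
    Node(boolean accept) { this.accept = accept; }
    Node edge(char label, Node target) { labels.add(label); targets.add(target); return this; }
  }

  // Lazy depth-first enumeration of accepted strings: no big result set,
  // and one reused StringBuilder instead of a fresh object per prefix.
  static Iterator<String> acceptedStrings(Node start) {
    return new Iterator<String>() {
      final Deque<Node> nodes = new ArrayDeque<>();
      final Deque<Integer> edge = new ArrayDeque<>();
      final StringBuilder path = new StringBuilder(); // reused buffer

      String next;
      { nodes.push(start); edge.push(0); next = start.accept ? "" : advance(); }

      private String advance() {
        while (!nodes.isEmpty()) {
          Node n = nodes.peek();
          int i = edge.pop();
          if (i < n.labels.size()) {
            edge.push(i + 1);                     // resume here after the subtree
            path.append((char) n.labels.get(i));
            nodes.push(n.targets.get(i));
            edge.push(0);
            if (nodes.peek().accept) return path.toString();
          } else {
            nodes.pop();                          // subtree done: backtrack
            if (path.length() > 0) path.deleteCharAt(path.length() - 1);
          }
        }
        return null;
      }

      @Override public boolean hasNext() { return next != null; }
      @Override public String next() {
        if (next == null) throw new NoSuchElementException();
        String r = next;
        next = advance();
        return r;
      }
    };
  }

  public static void main(String[] args) {
    // toy automaton accepting {"a", "ab", "b"}
    Node abEnd = new Node(true);
    Node a = new Node(true).edge('b', abEnd);
    Node root = new Node(false).edge('a', a).edge('b', new Node(true));
    for (Iterator<String> it = acceptedStrings(root); it.hasNext(); ) {
      System.out.println(it.next());
    }
  }
}
```

As the issue notes, a caller iterating with a limit may see a different subset than with the old collect-everything method, because emission order depends on the traversal.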
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: FiniteStrings_reuse.patch
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: (was: FiniteStringsIterator5.patch)
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: (was: FiniteStringsIterator2.patch)
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: (was: FiniteStringsIterator3.patch)
[jira] [Updated] (LUCENE-6365) Optimized iteration of finite strings
[ https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-6365: -- Attachment: (was: FiniteStringsIterator6.patch)
[jira] [Updated] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6651: -- Attachment: LUCENE-6651.patch Patch. This also adds a missing clone() to one of the attributes encountered. Attachments: LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612111#comment-14612111 ] Robert Muir commented on LUCENE-6651: - +1
[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9
[ https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612038#comment-14612038 ] Shawn Heisey commented on SOLR-7748: There are severe bugs that happen in Lucene when using IBM's Java. I seem to recall seeing something indicating that IBM is starting to take them seriously, but that might have been wishful thinking. It will be quite a while after IBM fixes the problem (if they ever do) before there is widespread penetration of fixed Java versions, so until that happens, the fact that bin/solr doesn't work right might actually be a good thing. As I understand it, the J9 problems occur because IBM is extremely aggressive at turning on optimizations in the JVM. The performance of IBM's Java is legendary as a result of this, but it also causes a lot of problems. I know that in at least one case of a bug with J9, there is an optimization that can be turned off to fix the problem ... there may be other bugs fixed by turning off certain optimizations. As an initial step, I was thinking about having the startup script detect J9 versions and abort with a message indicating serious JVM bugs (perhaps linking to the JavaBugs page on the Lucene wiki). We already have detection for Java 7u40 through 7u51, which enables the -XX:-UseSuperWord option for Java and prints a warning to the user about upgrading Java. As we learn more, we could start with commandline options to work around the problems, and then if IBM ever actually fixes the problems, run normally with those detected versions. The Java version detection currently happens in the script, which I think may be a little fragile. Perhaps a tiny little Java program could be written to detect a whole range of information about the JVM and return one or more known strings back to the script to tell the script what to do.
Those actions might include aborting because the java version is not new enough, issuing a warning because it's not 64-bit, turning on X and/or Y commandline options, etc. We might even be able to set the max heap according to the available memory, but do so less aggressively than Java itself does. This comment turned into quite a lot of text! I started off just writing a quick note about J9 bugs. Fix bin/solr to work on IBM J9 -- Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk Attachments: solr-7748.patch bin/solr doesn't work on IBM J9 because it sets -Xloggc flag, while J9 supports -Xverbosegclog. This prevents using bin/solr to start it on J9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
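The "tiny little Java program" idea floated above could look roughly like this. Everything here is a hypothetical sketch: the class name, the token format, and the J9 detection heuristic are invented, though the system properties themselves (java.vm.name, java.specification.version, sun.arch.data.model) are standard JVM properties. The script would run this class and branch on the printed tokens instead of parsing `java -version` output itself.

```java
// Hypothetical helper for bin/solr: emit machine-readable tokens describing
// the running JVM so the shell script can branch on them reliably.
public class JvmCheck {

  public static String describe() {
    String vmName = System.getProperty("java.vm.name", "");
    String spec = System.getProperty("java.specification.version", "");
    // sun.arch.data.model is not guaranteed on every JVM; fall back gracefully
    String bits = System.getProperty("sun.arch.data.model", "unknown");
    StringBuilder sb = new StringBuilder();
    sb.append("VENDOR=").append(vmName.contains("J9") ? "IBM_J9" : "OTHER");
    sb.append(" JAVA_SPEC=").append(spec);
    sb.append(" BITS=").append(bits);
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(describe());
  }
}
```

The script could then, for example, abort on VENDOR=IBM_J9 with a pointer to the JavaBugs wiki page, or warn on BITS=32, without any fragile version-string parsing in shell.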
[jira] [Commented] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612051#comment-14612051 ] ASF subversion and git services commented on LUCENE-6653: - Commit 1688845 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688845 ] Merged revision(s) 1688830 from lucene/dev/trunk: LUCENE-6653, LUCENE-6652: Refactor TermToBytesRefAttribute; add oal.analysis.tokenattributes.BytesTermAttribute; remove code duplication in tests Cleanup TermToBytesRefAttribute --- Key: LUCENE-6653 URL: https://issues.apache.org/jira/browse/LUCENE-6653 Project: Lucene - Core Issue Type: Improvement Components: modules/analysis Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6653.patch While working on LUCENE-6652, I figured out that there were so many tests with wrongly implemented TermsToBytesRefAttribute. In addition, the whole concept back from Lucene 4.0 was no longer correct: - We don't return the hash code anymore; it is calculated by BytesRefHash - The interface is horrible to use. It tends to reuse the BytesRef instance but the whole thing is not correct. Instead we should remove the fillBytesRef() method from the interface and let getBytesRef() populate and return the BytesRef. It does not matter if the attribute reuses the BytesRef or returns a new one. It just gets consumed like a standard CharTermAttribute. You get a BytesRef and can use it until you call incrementToken(). As the TermsToBytesRefAttribute is marked experimental, I see no reason why we should not change the semantics to be easier to understand and behave like all other attributes. I will add a note to the backwards incompatible changes in Lucene 5.3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
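The consumption pattern described above (getBytesRef() populates and returns, and the result is only valid until the next incrementToken()) can be mimicked with a toy, non-Lucene sketch. ToyStream and raw byte[] stand in for Lucene's TokenStream and BytesRef here; the names are invented for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BytesRefSketch {

  // Toy token stream: each incrementToken() overwrites the shared buffer,
  // mimicking an attribute that reuses its BytesRef between tokens.
  static final class ToyStream {
    private final String[] terms;
    private int pos = -1;
    private byte[] current = new byte[0];

    ToyStream(String... terms) { this.terms = terms; }

    boolean incrementToken() {
      if (++pos >= terms.length) return false;
      current = terms[pos].getBytes(StandardCharsets.UTF_8);
      return true;
    }

    // Populate-and-return, no separate fillBytesRef() step; the returned
    // bytes are only valid until the next incrementToken().
    byte[] getBytesRef() { return current; }
  }

  static List<String> consume(ToyStream ts) {
    List<String> out = new ArrayList<>();
    while (ts.incrementToken()) {
      // A consumer that keeps the value across tokens must copy it first.
      byte[] copy = Arrays.copyOf(ts.getBytesRef(), ts.getBytesRef().length);
      out.add(new String(copy, StandardCharsets.UTF_8));
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(consume(new ToyStream("foo", "bar")));
  }
}
```

This is the same contract as CharTermAttribute: use the value between calls to incrementToken(), copy it if you need it longer.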
[jira] [Resolved] (LUCENE-6652) Remove tons of BytesRefAttribute/BytesRefAttributeImpl duplicates in tests
[ https://issues.apache.org/jira/browse/LUCENE-6652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-6652. --- Resolution: Fixed Committed through LUCENE-6653.
[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)
[ https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612045#comment-14612045 ] Erick Erickson commented on SOLR-247: - 100 ms SLAs would be hard to meet if you wind up faceting on very many fields in the first place, so I'm not quite sure how this JIRA would solve your problem. Generally having that many fields indicates some design alternatives should be explored... FWIW Allow facet.field=* to facet on all fields (without knowing what they are) -- Key: SOLR-247 URL: https://issues.apache.org/jira/browse/SOLR-247 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Priority: Minor Labels: beginners, newdev Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, SOLR-247.patch, SOLR-247.patch I don't know if this is a good idea to include -- it is potentially a bad idea to use it, but that can be ok. This came out of trying to use faceting for the LukeRequestHandler top term collecting. http://www.nabble.com/Luke-request-handler-issue-tf3762155.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6653) Cleanup TermToBytesRefAttribute
[ https://issues.apache.org/jira/browse/LUCENE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-6653. --- Resolution: Fixed Thanks to [~rcmuir] and [~mikemccand] for review!
[jira] [Commented] (SOLR-6273) Cross Data Center Replication
[ https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612476#comment-14612476 ] Martin Grotzke commented on SOLR-6273: -- Hi all, we're currently evaluating how to expand our current single DC solrcloud to multi (2) DCs. This effort here looks very promising, great work! Assuming we'd test how it works for us, could we follow the documentation mentioned above (https://docs.google.com/document/d/1DZHUFM3z9OX171DeGjcLTRI9uULM-NB1KsCSpVL3Zy0/edit?usp=sharing)? Does it match the current implementation? Do you have any other suggestions for us if we'd test this? Thanks! Cross Data Center Replication - Key: SOLR-6273 URL: https://issues.apache.org/jira/browse/SOLR-6273 Project: Solr Issue Type: New Feature Reporter: Yonik Seeley Assignee: Erick Erickson Attachments: SOLR-6273-trunk-testfix1.patch, SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch This is the master issue for Cross Data Center Replication (CDCR) described at a high level here: http://heliosearch.org/solr-cross-data-center-replication/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612477#comment-14612477 ] ASF subversion and git services commented on LUCENE-6651: - Commit 1688903 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688903 ] LUCENE-6651: Try to fix test (somehow empty permissions grant all)
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612489#comment-14612489 ] ASF subversion and git services commented on LUCENE-6651: - Commit 1688905 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688905 ] LUCENE-6651: Comment out test for now (fails only on Linux, no idea why!) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. 
This breaks backwards compatibility for implementors of AttributeImpls. - The second idea is to remove the whole reflectWith default impl and make the method abstract. This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards compatibility, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
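The MethodHandles approach discussed above can be sketched roughly as follows; the class and field names here are invented for illustration and are not the actual Lucene API. The key point is that a Lookup created inside the subclass carries that class's access rights, so its private fields can be read without Field#setAccessible(true) or doPrivileged():

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

// Illustrative sketch: a subclass creates a Lookup in its own context (in the
// proposal this would be passed up via super(MethodHandles.lookup())), and that
// Lookup is later used to build getters for the subclass's private fields.
public class AttributeSketch {
    private final String term = "example";

    // Created inside AttributeSketch, so it may access AttributeSketch's privates.
    private static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();

    static String reflectTerm(AttributeSketch attr) {
        try {
            // findGetter succeeds here without setAccessible(true), because
            // LOOKUP originates from this very class.
            MethodHandle getter =
                LOOKUP.findGetter(AttributeSketch.class, "term", String.class);
            return (String) getter.invoke(attr);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(reflectTerm(new AttributeSketch()));  // prints "example"
    }
}
```

This illustrates why the approach avoids the security-manager problem entirely: no accessibility override happens, so no doPrivileged() block is needed.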
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b21) - Build # 13122 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13122/ Java: 64bit/jdk1.8.0_60-ea-b21 -XX:-UseCompressedOops -XX:+UseSerialGC 256 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.TestAssertions Error Message: access denied (java.io.FilePermission /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/J0 write) Stack Trace: java.security.AccessControlException: access denied (java.io.FilePermission /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/J0 write) at __randomizedtesting.SeedInfo.seed([9ACD37347FDE9699]:0) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) at java.security.AccessController.checkPermission(AccessController.java:884) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at java.lang.SecurityManager.checkWrite(SecurityManager.java:979) at sun.nio.fs.UnixPath.checkWrite(UnixPath.java:801) at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:376) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.ExtrasFS.createDirectory(ExtrasFS.java:55) at java.nio.file.Files.createDirectory(Files.java:674) at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) at java.nio.file.Files.createDirectories(Files.java:767) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.initializeJavaTempDir(TestRuleTemporaryFilesCleanup.java:187) at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:115) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.lucene.TestDemo Error Message: access denied (java.io.FilePermission /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/J0 write) Stack Trace: java.security.AccessControlException: access denied (java.io.FilePermission /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/J0 write) at __randomizedtesting.SeedInfo.seed([9ACD37347FDE9699]:0) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) at java.security.AccessController.checkPermission(AccessController.java:884) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at java.lang.SecurityManager.checkWrite(SecurityManager.java:979) at sun.nio.fs.UnixPath.checkWrite(UnixPath.java:801) at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:376) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at 
org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:133) at org.apache.lucene.mockfile.ExtrasFS.createDirectory(ExtrasFS.java:55) at java.nio.file.Files.createDirectory(Files.java:674) at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) at java.nio.file.Files.createDirectories(Files.java:767) at
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13299 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13299/ Java: 64bit/jdk1.9.0-ea-b70 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.lucene.analysis.TestLookaheadTokenFilter.testNeverCallingPeek Error Message: term 34 expected:xwijr[p] but was:xwijr[�] Stack Trace: org.junit.ComparisonFailure: term 34 expected:xwijr[p] but was:xwijr[�] at __randomizedtesting.SeedInfo.seed([7CA3DCFED9F3185E:8F466B993BA04CA1]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:186) at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:301) at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:818) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:617) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:515) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:439) at org.apache.lucene.analysis.TestLookaheadTokenFilter.testNeverCallingPeek(TestLookaheadTokenFilter.java:66) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13299 - Failure!
I reverted the JDK 9 back to b60, so tests pass with working arraycopy. - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] Sent: Thursday, July 02, 2015 10:51 PM To: dev@lucene.apache.org Subject: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b70) - Build # 13299 - Failure!
[jira] [Commented] (SOLR-7233) rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH
[ https://issues.apache.org/jira/browse/SOLR-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612509#comment-14612509 ] Jan Høydahl commented on SOLR-7233: --- If moving zkcli.sh to bin/, perhaps we can mature the script a bit as well? * Avoid unzipping {{solr.war}}, could we instead use a custom classloader that looks inside the war? * Get rid of {{log4j.properties}} by programmatically setting console logging defaults * Look in {{solr.in.sh}} for {{ZK_HOST}} and use that as default - one less thing to re-type rename zkcli.sh script it clashes with zkCli.sh from ZooKeeper on Mac when both are in $PATH Key: SOLR-7233 URL: https://issues.apache.org/jira/browse/SOLR-7233 Project: Solr Issue Type: Task Components: scripts and tools Affects Versions: 4.10 Reporter: Hari Sekhon Priority: Trivial Mac is case-insensitive on CLI search, so zkcli.sh clashes with zkCli.sh from ZooKeeper when both are in the $PATH, ruining commands for one or the other unless the script path is qualified. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b21) - Build # 13122 - Failure!
My fault... - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] Sent: Thursday, July 02, 2015 10:44 PM To: jpou...@apache.org; u...@thetaphi.de; mikemcc...@apache.org; dev@lucene.apache.org Subject: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b21) - Build # 13122 - Failure!
[jira] [Updated] (SOLR-7707) Add StreamExpression Support to RollupStream
[ https://issues.apache.org/jira/browse/SOLR-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dennis Gove updated SOLR-7707: -- Attachment: SOLR-7707.patch I found the problem. There is a test class called CountStream. In some of the test files (particularly solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig-streaming.xml) the function name count was mapped to that Stream. However, now with a count metric I was also mapping the count function name to CountMetric. For the moment I have corrected this by renaming CountStream to RecordCountStream and commented out the mapping in the solrconfig-streaming.xml file. I chose to change this one because it is a class in the test suite and not, apparently, used outside of testing. However, this brings up an interesting question. Should we allow conflicting names across streams and metrics? Right now the mappings from function name to Stream or Metric are stored in the same Map, and as such we are not allowing name conflicts - i.e., a stream and a metric cannot share the same function name. However, should we allow that? I believe the answer, for clarity, is no. If you assign the string count to CountMetric then you cannot also assign it to CountStream. This will allow users to know what count() means without having to know the context. For example, allowing count to map to both could result in confusion in the following {code} rollup( count(search()), min(fieldA), count(fieldB) ) {code} Add StreamExpression Support to RollupStream Key: SOLR-7707 URL: https://issues.apache.org/jira/browse/SOLR-7707 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Dennis Gove Priority: Minor Attachments: SOLR-7707.patch, SOLR-7707.patch This ticket is to add Stream Expression support to the RollupStream as discussed in SOLR-7560.
Proposed expression syntax for the RollupStream (copied from that ticket) {code} rollup( someStream(), over=fieldA, fieldB, fieldC, min(fieldA), max(fieldA), min(fieldB), mean(fieldD), sum(fieldC) ) {code} This requires making the *Metric types Expressible but I think that ends up as a good thing. Would make it real easy to support other options on metrics like excluding outliers, for example find the sum of values within 3 standard deviations from the mean could be {code} sum(fieldC, limit=standardDev(3)) {code} (note, how that particular calculation could be implemented is left as an exercise for the reader, I'm just using it as an example of adding additional options on a relatively simple metric). Another option example is what to do with null values. For example, in some cases a null should not impact a mean but in others it should. You could express those as {code} mean(fieldA, replace(null, 0)) // replace null values with 0 thus leading to an impact on the mean mean(fieldA, includeNull=true) // nulls are counted in the denominator but nothing added to numerator mean(fieldA, includeNull=false) // nulls neither counted in denominator nor added to numerator mean(fieldA, replace(null, fieldB), includeNull=true) // if fieldA is null replace it with fieldB, include null fieldB in mean {code} so on and so forth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
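The single shared name map described in the comment above can be sketched like this; FunctionNameRegistry and the Class arguments are hypothetical stand-ins, not the actual SolrJ StreamFactory API:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for one function-name map shared by streams and metrics:
// registering the same name twice with different implementations is rejected,
// so an expression like count(...) is always unambiguous.
public class FunctionNameRegistry {
    private final Map<String, Class<?>> functions = new HashMap<>();

    public void register(String name, Class<?> implementation) {
        Class<?> existing = functions.putIfAbsent(name, implementation);
        if (existing != null && !existing.equals(implementation)) {
            throw new IllegalArgumentException("Function name '" + name
                + "' is already mapped to " + existing.getSimpleName());
        }
    }

    public static void main(String[] args) {
        FunctionNameRegistry registry = new FunctionNameRegistry();
        registry.register("count", Long.class);       // stand-in for CountMetric
        try {
            registry.register("count", String.class); // stand-in for CountStream
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());       // conflict is rejected
        }
    }
}
```

Keeping one namespace means the parser never has to disambiguate count() by context, which is exactly the clarity argument made in the comment.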
[jira] [Issue Comment Deleted] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names
[ https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Otis Gospodnetic updated SOLR-7143: --- Comment: was deleted (was: Not sure how this ended up in my private e-mail. We were going to suggest they upgrade to 5.x so that, amongst other fixes and improvements, they can use this new MLT QueryParser to solve the problem with the non-functioning MLT handler in cloud mode ( https://issues.apache.org/jira/browse/SOLR-788), but now it seems that even 5 would have to be patched (if those patches work). On Thu, Jul 2, 2015 at 11:59 PM Otis Gospodnetic (JIRA) j...@apache.org ) MoreLikeThis Query Parser does not handle multiple field names -- Key: SOLR-7143 URL: https://issues.apache.org/jira/browse/SOLR-7143 Project: Solr Issue Type: Bug Components: query parsers Affects Versions: 5.0 Reporter: Jens Wille Assignee: Anshum Gupta Attachments: SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return any results when supplied with multiple fields in the {{qf}} parameter. To reproduce within the techproducts example, compare: {code} curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A' curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A' curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A' {code} The first two queries return 8 and 5 results, respectively. The third query doesn't return any results (not even the matched document).
In contrast, the MoreLikeThis Handler works as expected (accounting for the default {{mintf}} and {{mindf}} values in SimpleMLTQParser): {code} curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1' curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1' curl 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1' {code} After adding the following line to {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}: {code:language=XML} <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" /> {code} The first two queries return 7 and 4 results, respectively (excluding the matched document). The third query returns 7 results, as one would expect. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: svn commit: r1688487 - in /lucene/cms/trunk/content/solr: assets/images/book_as3ess.jpg assets/images/book_asess_3ed.jpg assets/images/book_s14ess.jpg resources.mdtext
Ah; I see you fixed it already. Thanks! On Thu, Jul 2, 2015 at 7:18 PM david.w.smi...@gmail.com david.w.smi...@gmail.com wrote: Palm-to-face! I'll fix later today. On Thu, Jul 2, 2015 at 5:29 PM Chris Hostetter hossman_luc...@fucit.org wrote: Ummm, david ... i hate to tell you this, but it looks like you forgot the L in Solr in the title of your own book. more than once. :) : Mitchell](https://www.linkedin.com/in/mattmitchell4) are proud to : *finally* announce the book “[Apache Sor Enterprise Search Server, ... : +a href=http://www.solrenterprisesearchserver.com;img alt=Apache : Sor Enterprise Search Server, Third Edition (cover) class=float-right -Hoss http://www.lucidworks.com/ - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6655) StandardTokenizerFactory constructor fails if the passed in map is immutable
Trejkaz created LUCENE-6655: --- Summary: StandardTokenizerFactory constructor fails if the passed in map is immutable Key: LUCENE-6655 URL: https://issues.apache.org/jira/browse/LUCENE-6655 Project: Lucene - Core Issue Type: Bug Affects Versions: 5.1 Reporter: Trejkaz One of our tests tried to initialise a StandardTokenizer by passing a version into the factory. Unfortunately, this required passing a map: {code} return new StandardTokenizerFactory(ImmutableMap.of( AbstractAnalysisFactory.LUCENE_MATCH_VERSION_PARAM, Version.LUCENE_4_6_1.toString() )).create(); {code} This then fails: {noformat} java.lang.UnsupportedOperationException at com.google.common.collect.ImmutableMap.remove(ImmutableMap.java:338) at org.apache.lucene.analysis.util.AbstractAnalysisFactory.get(AbstractAnalysisFactory.java:122) at org.apache.lucene.analysis.util.AbstractAnalysisFactory.init(AbstractAnalysisFactory.java:71) at org.apache.lucene.analysis.util.TokenizerFactory.init(TokenizerFactory.java:70) at org.apache.lucene.analysis.standard.StandardTokenizerFactory.init(StandardTokenizerFactory.java:42) {noformat} I suspect that someone put in a `remove` when it should have been a `get`... bit of a weird mistake to make, especially when you don't know whether the map will permit it. I haven't verified whether the same occurs in later versions but getting updated to 5.2.1 will probably be the next thing on my list. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
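A minimal sketch of the failure mode and a caller-side workaround, with a hypothetical consume() standing in for AbstractAnalysisFactory#get (no Lucene dependency assumed): since the factory mutates the args map via remove(), passing a mutable copy sidesteps the bug until the remove-vs-get question is resolved upstream.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: consume() mimics a factory method that removes the key it
// reads, which throws UnsupportedOperationException on an immutable map.
public class MutableArgsDemo {
    static String consume(Map<String, String> args, String key) {
        return args.remove(key);  // mutation is the crux of the bug
    }

    public static void main(String[] argv) {
        Map<String, String> immutable = Map.of("luceneMatchVersion", "4.6.1");

        try {
            consume(immutable, "luceneMatchVersion");
        } catch (UnsupportedOperationException expected) {
            System.out.println("immutable map rejects remove()");
        }

        // Workaround: hand the factory a mutable copy instead.
        String version = consume(new HashMap<>(immutable), "luceneMatchVersion");
        System.out.println(version);  // prints "4.6.1"
    }
}
```

The same copy trick would apply to the original reproduction: wrap the ImmutableMap in a HashMap before passing it to StandardTokenizerFactory.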
[jira] [Updated] (SOLR-3192) NettySolrClient (supported by netty/protobuf)
[ https://issues.apache.org/jira/browse/SOLR-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linbin Chen updated SOLR-3192: -- Description: solr support netty tcp, netty/tcp can handle asynchronous, efficient, keepalive ... it's used on solr cloud or solrj usage: start netty server: add netty.properties in solr_home (such as: server/solr) {code} port=8001 {code} client {code:java} public void use_netty_client_demo() throws IOException, SolrServerException { SolrClient solrClient = new NettySolrClient("localhost", 8001); SolrQuery query = new SolrQuery("*:*"); QueryResponse response = solrClient.query("collection1", query); System.out.println(response.getResults()); solrClient.close(); } {code} was: solr support netty tcp, netty/tcp can handle asynchronous, efficient, keepalive ... it's used on solr cloud or solrj solr.proto maybe {code:java} package org.apache.solr.client.solrj.impl.netty.protocol; option java_package = "org.apache.solr.client.solrj.impl.netty.protocol"; option java_outer_classname = "SolrProtocol"; option optimize_for = SPEED; message Param { required string key = 1; // string[] repeated string value = 2; } message ContentStream { optional string name = 1; optional string sourceInfo = 2; optional string contentType = 3; required int64 size = 4; required bytes stream = 5; } message SolrRequest { required int64 rid = 1; optional string collection = 2; required string path = 3; // multi param repeated Param param = 4; // multi content stream repeated ContentStream contentStream = 5; optional string method = 6; } message ResponseBody { required string contentType = 1; required bytes body = 2; } message KeyValue { required string key = 1; required string value = 2; } message ExceptionBody { required int32 code = 1; optional string message = 2; repeated KeyValue metadata = 3; optional string trace = 4; } message SolrResponse { required int64 rid = 1; optional ResponseBody responseBody = 2; //maybe multi Exception repeated ExceptionBody exceptionBody = 3; } {code} NettySolrClient (supported by netty/protobuf) - Key: SOLR-3192 URL: https://issues.apache.org/jira/browse/SOLR-3192 Project: Solr Issue Type: New Feature Affects Versions: 5.2.1 Reporter: Linbin Chen Labels: netty Fix For: Trunk, 5x Attachments: SOLR-3192-for-5_2.patch, SOLR-3192-for-5x.patch solr support netty tcp, netty/tcp can handle asynchronous, efficient, keepalive ... it's used on solr cloud or solrj usage: start netty server: add netty.properties in solr_home (such as: server/solr) {code} port=8001 {code} client {code:java} public void use_netty_client_demo() throws IOException, SolrServerException { SolrClient solrClient = new NettySolrClient("localhost", 8001); SolrQuery query = new SolrQuery("*:*"); QueryResponse response = solrClient.query("collection1", query); System.out.println(response.getResults()); solrClient.close(); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3192) NettySolrClient (supported by netty/protobuf)
[ https://issues.apache.org/jira/browse/SOLR-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linbin Chen updated SOLR-3192:
--
Attachment: solr.proto

NettySolrClient (supported by netty/protobuf)
-
Key: SOLR-3192
URL: https://issues.apache.org/jira/browse/SOLR-3192
Project: Solr
Issue Type: New Feature
Affects Versions: 5.2.1
Reporter: Linbin Chen
Labels: netty
Fix For: Trunk, 5x
Attachments: SOLR-3192-for-5_2.patch, SOLR-3192-for-5x.patch, solr.proto

Solr support for Netty TCP. Netty/TCP can handle asynchronous, efficient, keepalive connections ... it can be used in SolrCloud or from SolrJ.

usage:

start netty server: add netty.properties in solr_home (such as: server/solr)
{code}
port=8001
{code}

client
{code:java}
public void use_netty_client_demo() throws IOException, SolrServerException {
    SolrClient solrClient = new NettySolrClient("localhost", 8001);
    SolrQuery query = new SolrQuery("*:*");
    QueryResponse response = solrClient.query("collection1", query);
    System.out.println(response.getResults());
    solrClient.close();
}
{code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
Re: svn commit: r1688487 - in /lucene/cms/trunk/content/solr: assets/images/book_as3ess.jpg assets/images/book_asess_3ed.jpg assets/images/book_s14ess.jpg resources.mdtext
Palm-to-face! I'll fix it later today.

On Thu, Jul 2, 2015 at 5:29 PM Chris Hostetter hossman_luc...@fucit.org wrote:

Ummm, david ... i hate to tell you this, but it looks like you forgot the L in Solr in the title of your own book. more than once. :)

: Mitchell](https://www.linkedin.com/in/mattmitchell4) are proud to
: *finally* announce the book "[Apache Sor Enterprise Search Server,
...
: +<a href="http://www.solrenterprisesearchserver.com"><img alt="Apache
: Sor Enterprise Search Server, Third Edition (cover)" class="float-right"

-Hoss
http://www.lucidworks.com/
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7707) Add StreamExpression Support to RollupStream
[ https://issues.apache.org/jira/browse/SOLR-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612814#comment-14612814 ] Dennis Gove commented on SOLR-7707:
---
Looks like I cut my branch from trunk before those changes were committed. I'll go through some rebasing tomorrow and post up a new patch. Sorry about that.

Add StreamExpression Support to RollupStream
-
Key: SOLR-7707
URL: https://issues.apache.org/jira/browse/SOLR-7707
Project: Solr
Issue Type: Improvement
Components: SolrJ
Reporter: Dennis Gove
Priority: Minor
Attachments: SOLR-7707.patch, SOLR-7707.patch

This ticket is to add Stream Expression support to the RollupStream as discussed in SOLR-7560. Proposed expression syntax for the RollupStream (copied from that ticket)
{code}
rollup(
  someStream(),
  over=fieldA, fieldB, fieldC,
  min(fieldA),
  max(fieldA),
  min(fieldB),
  mean(fieldD),
  sum(fieldC)
)
{code}
This requires making the *Metric types Expressible, but I think that ends up as a good thing. It would make it really easy to support other options on metrics, like excluding outliers. For example, finding the sum of values within 3 standard deviations of the mean could be
{code}
sum(fieldC, limit=standardDev(3))
{code}
(note, how that particular calculation could be implemented is left as an exercise for the reader; I'm just using it as an example of adding additional options on a relatively simple metric). Another option example is what to do with null values. For example, in some cases a null should not impact a mean but in others it should. You could express those as
{code}
mean(fieldA, replace(null, 0))     // replace null values with 0, thus leading to an impact on the mean
mean(fieldA, includeNull=true)     // nulls are counted in the denominator but nothing added to the numerator
mean(fieldA, includeNull=false)    // nulls neither counted in the denominator nor added to the numerator
mean(fieldA, replace(null, fieldB), includeNull=true)  // if fieldA is null replace it with fieldB, include null fieldB in mean
{code}
so on and so forth.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7650) Allow wildcard on fl for Raw JSON/XML
[ https://issues.apache.org/jira/browse/SOLR-7650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612836#comment-14612836 ] Bill Bell commented on SOLR-7650:
-
thoughts?

Allow wildcard on fl for Raw JSON/XML
-
Key: SOLR-7650
URL: https://issues.apache.org/jira/browse/SOLR-7650
Project: Solr
Issue Type: Improvement
Affects Versions: 5.2
Reporter: Bill Bell

We would like to allow for * in the field list when using [json]. For example: http://hgsolr2devmstr:8983/solr/select?q=*:*&wt=json&fl=*_json:[json] This 400 errors.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612838#comment-14612838 ] Noble Paul commented on SOLR-7692:
--
Thanks for your comments

bq. This comment is misleading - probably left over from an earlier iteration.
The patch is Work in Progress, so the comments are from a former iteration.

bq. Please add a test case that uses the salt when authenticating.
The test case indeed checks with salt. There will be a test w/o salt as well.

bq. Do you think it would be reasonable to split out the dependency between BasicAuthPlugin and ZkAuthentication
Yes, that is the plan. I've separated the HTTP part and the authentication part into two distinct classes. You should be able to extend the {{BasicAuthPlugin}} to provide your own Authentication impl.

bq. The name might mislead users.
The names are subject to change. Suggestions are welcome.

bq. Can you separate out the 2 issues i.e. an authentication and an authorization?
There are a bunch of sub-tasks required: 1) Authentication 2) Authorization 3) API to manage the users/roles/permissions

Implement BasicAuth based impl for the new Authentication/Authorization APIs
-
Key: SOLR-7692
URL: https://issues.apache.org/jira/browse/SOLR-7692
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Attachments: SOLR-7692.patch

This involves various components

h2. Authentication

A basic auth based authentication filter. This should retrieve the user credentials from ZK. The user name and sha1 hash of the password should be stored in ZK.

sample authentication json
{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuth",
    "users": {
      "john": "09fljnklnoiuy98 buygujkjnlk",
      "david": "f678njfgfjnklno iuy9865ty",
      "pete": "87ykjnklndfhjh8 98uyiy98"
    }
  }
}
{code}

h2. authorization plugin

This would store the roles of various users and their privileges in ZK.

sample authorization.json
{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collectionadmin": { "roles": ["admin"] },
      "coreadmin": { "roles": ["admin"] },
      "config-api": { "roles": ["admin"] },   // all collections
      "schema-api": { "roles": ["admin"] },
      "update": { "roles": null },            // all collections
      "query": { "roles": null },
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "roles": ["somebody"]  // create a dir called /keys/somebody and put in usr.pwd files
      }
    }
  }
}
{code}

We will also need to provide APIs to create users and assign them roles

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
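The "sha1 hash of the password" mentioned above can be produced with the JDK alone. This is an illustrative sketch of computing such a hex digest, not code from the SOLR-7692 patch; the exact salting and encoding scheme the patch uses is not specified here, and the class name is made up:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical helper: hex-encoded SHA-1 of a password string,
// the kind of value that would be stored per-user in ZK.
public class Sha1Hash {
    static String sha1Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));  // two lowercase hex chars per byte
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-1 is guaranteed by the JDK", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sha1Hex("password"));
        // -> 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
    }
}
```

A real deployment would combine the password with a per-user salt before digesting, as the comment thread above discusses.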
[jira] [Commented] (SOLR-7707) Add StreamExpression Support to RollupStream
[ https://issues.apache.org/jira/browse/SOLR-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612810#comment-14612810 ] Joel Bernstein commented on SOLR-7707:
--
Looks like your patch is a commit or two behind svn trunk. Take a look at: https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/SQLHandler.java You'll see it already has the MultipleFieldComparator, StreamComparator incorporated. Wondering if the git repo is falling too far behind.

Add StreamExpression Support to RollupStream
-
Key: SOLR-7707
URL: https://issues.apache.org/jira/browse/SOLR-7707
Project: Solr
Issue Type: Improvement
Components: SolrJ
Reporter: Dennis Gove
Priority: Minor
Attachments: SOLR-7707.patch, SOLR-7707.patch

This ticket is to add Stream Expression support to the RollupStream as discussed in SOLR-7560. Proposed expression syntax for the RollupStream (copied from that ticket)
{code}
rollup(
  someStream(),
  over=fieldA, fieldB, fieldC,
  min(fieldA),
  max(fieldA),
  min(fieldB),
  mean(fieldD),
  sum(fieldC)
)
{code}
This requires making the *Metric types Expressible, but I think that ends up as a good thing. It would make it really easy to support other options on metrics, like excluding outliers. For example, finding the sum of values within 3 standard deviations of the mean could be
{code}
sum(fieldC, limit=standardDev(3))
{code}
(note, how that particular calculation could be implemented is left as an exercise for the reader; I'm just using it as an example of adding additional options on a relatively simple metric). Another option example is what to do with null values. For example, in some cases a null should not impact a mean but in others it should. You could express those as
{code}
mean(fieldA, replace(null, 0))     // replace null values with 0, thus leading to an impact on the mean
mean(fieldA, includeNull=true)     // nulls are counted in the denominator but nothing added to the numerator
mean(fieldA, includeNull=false)    // nulls neither counted in the denominator nor added to the numerator
mean(fieldA, replace(null, fieldB), includeNull=true)  // if fieldA is null replace it with fieldB, include null fieldB in mean
{code}
so on and so forth.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
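The two includeNull behaviours discussed above can be sketched in plain Java. The class and method names here are illustrative only, not part of the proposed Solr metrics API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the includeNull semantics from the SOLR-7707 discussion:
// includeNull=true  -> nulls count in the denominator, contribute nothing to the numerator
// includeNull=false -> nulls are skipped entirely
public class NullMean {
    static double mean(List<Double> values, boolean includeNull) {
        double sum = 0;
        int count = 0;
        for (Double v : values) {
            if (v != null) {
                sum += v;
                count++;
            } else if (includeNull) {
                count++;  // null inflates the denominator only
            }
        }
        return count == 0 ? Double.NaN : sum / count;
    }

    public static void main(String[] args) {
        List<Double> vals = Arrays.asList(2.0, null, 4.0);
        System.out.println(mean(vals, true));   // (2+4)/3 = 2.0
        System.out.println(mean(vals, false));  // (2+4)/2 = 3.0
    }
}
```

The `replace(null, 0)` variant would instead substitute the replacement value before summing, making it equivalent to `includeNull=true` for a replacement of 0.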
[jira] [Commented] (SOLR-7539) Add a QueryAutofilteringComponent for query introspection using indexed metadata
[ https://issues.apache.org/jira/browse/SOLR-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612831#comment-14612831 ] Bill Bell commented on SOLR-7539:
-
+1

Add a QueryAutofilteringComponent for query introspection using indexed metadata
-
Key: SOLR-7539
URL: https://issues.apache.org/jira/browse/SOLR-7539
Project: Solr
Issue Type: New Feature
Reporter: Ted Sullivan
Priority: Minor
Fix For: Trunk
Attachments: SOLR-7539.patch, SOLR-7539.patch, SOLR-7539.patch

The Query Autofiltering Component provides a method of inferring user intent by matching noun phrases that are typically used for faceted-navigation into Solr filter or boost queries (depending on configuration settings) so that more precise user queries can be met with more precise results. The algorithm uses a longest contiguous phrase match strategy which allows it to disambiguate queries where single terms are ambiguous but phrases are not. It will work when there is structured information in the form of String fields that are normally used for faceted navigation. It works across fields by building a map of search term to index field using the Lucene FieldCache (UninvertingReader). This enables users to create free text, multi-term queries that combine attributes across facet fields - as if they had searched and then navigated through several facet layers. To address the problem of exact-match only semantics of String fields, support for synonyms (including multi-term synonyms) and stemming was added.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3963) SOLR: map() does not allow passing sub-functions in 4,5 parameters
[ https://issues.apache.org/jira/browse/SOLR-3963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612854#comment-14612854 ] Hoss Man commented on SOLR-3963:
-
bq. This is still a valid enhancement

...but as far as i can tell, the only patch available is still the same one i reviewed in this comment (2012-Nov-13), and still has the same problems/bugs i noted at that time...

https://issues.apache.org/jira/browse/SOLR-3963?focusedCommentId=13496777&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13496777

SOLR: map() does not allow passing sub-functions in 4,5 parameters
--
Key: SOLR-3963
URL: https://issues.apache.org/jira/browse/SOLR-3963
Project: Solr
Issue Type: Improvement
Affects Versions: 4.0
Reporter: Bill Bell
Assignee: Hoss Man
Priority: Minor
Attachments: SOLR-3963.2.patch

I want to do: boost=map(achievement_count,1,1000,recip(achievement_count,-.5,10,25),1) I want to return recip(achievement_count,-.5,10,25) if achievement_count is between 1 and 1,000. For any other values I want to return 1. I cannot get it to work. I get the error below. Interesting this does work: boost=recip(map(achievement_count,0,0,-200),-.5,10,25) It almost appears that map() cannot take a function. Specified argument was out of the range of valid values. Parameter name: value Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: value Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [ArgumentOutOfRangeException: Specified argument was out of the range of valid values. 
Parameter name: value] System.Web.HttpResponse.set_StatusDescription(String value) +5200522 FacilityService.Controllers.FacilityController.ActionCompleted(String actionName, IFacilityResults results) +265 FacilityService.Controllers.FacilityController.SearchByPointCompleted(IFacilityResults results) +25 lambda_method(Closure , ControllerBase , Object[] ) +114 System.Web.Mvc.Async.c__DisplayClass7.BeginExecuteb__5(IAsyncResult asyncResult) +283 System.Web.Mvc.Async.c__DisplayClass41.BeginInvokeAsynchronousActionMethodb__40(IAsyncResult asyncResult) +22 System.Web.Mvc.Async.c__DisplayClass3b.BeginInvokeActionMethodWithFiltersb__35() +120 System.Web.Mvc.Async.c__DisplayClass51.InvokeActionMethodFilterAsynchronouslyb__4b() +452 System.Web.Mvc.Async.c__DisplayClass39.BeginInvokeActionMethodWithFiltersb__38(IAsyncResult asyncResult) +15 System.Web.Mvc.Async.c__DisplayClass2c.BeginInvokeActionb__22() +33 System.Web.Mvc.Async.c__DisplayClass27.BeginInvokeActionb__24(IAsyncResult asyncResult) +240 System.Web.Mvc.c__DisplayClass19.BeginExecuteCoreb__14(IAsyncResult asyncResult) +28 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.Mvc.AsyncController.EndExecuteCore(IAsyncResult asyncResult) +63 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.Mvc.c__DisplayClassb.BeginProcessRequestb__4(IAsyncResult asyncResult) +42 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar) +282 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13301 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13301/ Java: 64bit/jdk1.9.0-ea-b60 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.search.TestSearcherReuse.test Error Message: expected same:Searcher@42b5497e[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):c1) Uninverting(_1(6.0.0):c2)))} was not:Searcher@4be1bccb[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):c1) Uninverting(_1(6.0.0):c2)))} Stack Trace: java.lang.AssertionError: expected same:Searcher@42b5497e[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):c1) Uninverting(_1(6.0.0):c2)))} was not:Searcher@4be1bccb[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):c1) Uninverting(_1(6.0.0):c2)))} at __randomizedtesting.SeedInfo.seed([73844F94448439CC:FBD0704EEA785434]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotSame(Assert.java:641) at org.junit.Assert.assertSame(Assert.java:580) at org.junit.Assert.assertSame(Assert.java:593) at org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247) at org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
[jira] [Commented] (LUCENE-6655) StandardTokenizerFactory constructor fails if the passed in map is immutable
[ https://issues.apache.org/jira/browse/LUCENE-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612804#comment-14612804 ] Ryan Ernst commented on LUCENE-6655:
-
This is intentional. All of the analysis factories do this: remove the parameters they have processed, and then error if there are unknown parameters left at the end.

StandardTokenizerFactory constructor fails if the passed in map is immutable
-
Key: LUCENE-6655
URL: https://issues.apache.org/jira/browse/LUCENE-6655
Project: Lucene - Core
Issue Type: Bug
Affects Versions: 5.1
Reporter: Trejkaz

One of our tests tried to initialise a StandardTokenizer by passing a version into the factory. Unfortunately, this required passing a map:
{code}
return new StandardTokenizerFactory(ImmutableMap.of(
    AbstractAnalysisFactory.LUCENE_MATCH_VERSION_PARAM,
    Version.LUCENE_4_6_1.toString()
)).create();
{code}
This then fails:
{noformat}
java.lang.UnsupportedOperationException
    at com.google.common.collect.ImmutableMap.remove(ImmutableMap.java:338)
    at org.apache.lucene.analysis.util.AbstractAnalysisFactory.get(AbstractAnalysisFactory.java:122)
    at org.apache.lucene.analysis.util.AbstractAnalysisFactory.<init>(AbstractAnalysisFactory.java:71)
    at org.apache.lucene.analysis.util.TokenizerFactory.<init>(TokenizerFactory.java:70)
    at org.apache.lucene.analysis.standard.StandardTokenizerFactory.<init>(StandardTokenizerFactory.java:42)
{noformat}
I suspect that someone put in a `remove` when it should have been a `get`... bit of a weird mistake to make, especially when you don't know whether the map will permit it. I haven't verified whether the same occurs in later versions but getting updated to 5.2.1 will probably be the next thing on my list.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
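The consume-then-check pattern described above (each factory removes the parameters it processes, then errors if anything unknown remains) can be sketched as follows, along with the practical workaround for callers: hand the factory a mutable copy of the map. The names below are illustrative, not the actual Lucene factory API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the analysis-factory parameter handling
// discussed in LUCENE-6655; not the real AbstractAnalysisFactory.
public class ParamConsumer {
    // Factories remove() each parameter as they process it...
    static String consume(Map<String, String> args, String key) {
        return args.remove(key);
    }

    public static void main(String[] args) {
        // Workaround: wrap an immutable map in a mutable HashMap before
        // passing it in, so remove() cannot throw UnsupportedOperationException.
        Map<String, String> params = new HashMap<>();
        params.put("luceneMatchVersion", "4.6.1");

        String version = consume(params, "luceneMatchVersion");

        // ...and error out if anything unrecognised is left at the end.
        if (!params.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + params);
        }
        System.out.println(version);
    }
}
```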
[jira] [Created] (LUCENE-6656) StandardTokenizer.close() can leave the object in an uncloseable state
Trejkaz created LUCENE-6656:
---
Summary: StandardTokenizer.close() can leave the object in an uncloseable state
Key: LUCENE-6656
URL: https://issues.apache.org/jira/browse/LUCENE-6656
Project: Lucene - Core
Issue Type: Bug
Affects Versions: 5.1, 4.10.4
Reporter: Trejkaz

The following pair of tests shows that if a reader throws IOException from the close() method, StandardTokenizer is left in an inconsistent state where it thinks you didn't call close on the tokeniser, even though you did. To make matters worse, it holds onto the reader so that any subsequent attempts to close the tokeniser will also fail.

Possible workarounds:
1. Don't reuse tokenisers.
2. Still reuse tokenisers, but if close() throws anything, discard that tokeniser and create a new one.
3. Wrap every reader you pass in to ensure that close() can't throw an exception.

Code follows:
{code}
public class TestStandardTokenizerCloseIssue {
    @Test
    public void testStreamReuse() throws Exception {
        // Attempts to verify that consumeAndClose itself is not broken.
        try (Tokenizer stream = new StandardTokenizer()) {
            stream.setReader(new StringReader("reader #1"));
            assertThat(consumeAndClose(stream), contains("reader", "1"));
            stream.setReader(new StringReader("reader 2"));
            assertThat(consumeAndClose(stream), contains("reader", "2"));
        }
    }

    @Test
    public void testStreamReuseAfterFailure() throws Exception {
        class FailingReader extends Reader {
            @Override
            public int read(@NotNull char[] buffer, int off, int len) throws IOException {
                throw new IOException("Synthetic exception");
            }
            @Override
            public void close() throws IOException {
                throw new IOException("Synthetic exception");
            }
        }

        // Simulating sharing the instance inside some factory.
        try (Tokenizer stream = new StandardTokenizer()) {
            try {
                stream.setReader(new FailingReader());
                consumeAndClose(stream);
                fail("Expected IOException");
            } catch (IOException e) {
                // Expected
            }
            stream.setReader(new StringReader("working reader"));
            // Test fails here - even though the consumeAndClose above
            // did close the tokeniser, the tokeniser didn't clear its reference to
            // the reader.
            assertThat(consumeAndClose(stream), contains("working", "reader"));
        }
    }

    // Attempts to implement the correct workflow for consuming a TokenStream.
    private List<String> consumeAndClose(TokenStream stream) throws Exception {
        ImmutableList.Builder<String> tokens = ImmutableList.builder();
        // The consumer calls reset().
        stream.reset();
        try {
            // The consumer retrieves attributes from the stream and stores
            // local references to all attributes it wants to access.
            CharTermAttribute termAttribute = stream.getAttribute(CharTermAttribute.class);
            // The consumer calls incrementToken() until it returns false,
            // consuming the attributes after each call.
            while (stream.incrementToken()) {
                tokens.add(termAttribute.toString());
            }
            // The consumer calls end() so that any end-of-stream operations
            // can be performed.
            stream.end();
        } finally {
            // The consumer calls close() to release any resource when finished
            // using the TokenStream.
            stream.close();
        }
        return tokens.build();
    }
}
{code}

Originally discovered on 4.10.4. The code has been ported to work on 5.1 since initially created, and sooner or later I'll get to test 5.2.1, but I don't see anyone else having reported a similar issue yet, so I'm guessing it won't be fixed yet.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
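Workaround 3 above ("wrap every reader you pass in to ensure that close() can't throw an exception") could look roughly like the sketch below; the class name is made up for illustration and is not part of the report:

```java
import java.io.FilterReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Hypothetical wrapper implementing workaround 3 from LUCENE-6656:
// close() swallows IOException so the tokeniser can always release the reader.
public class QuietCloseReader extends FilterReader {
    public QuietCloseReader(Reader in) {
        super(in);
    }

    @Override
    public void close() {
        try {
            super.close();
        } catch (IOException e) {
            // Deliberately swallowed: the tokeniser's close() can now never fail
            // because of the underlying reader. Log it in real code.
        }
    }

    public static void main(String[] args) throws IOException {
        QuietCloseReader r = new QuietCloseReader(new StringReader("working reader"));
        char[] buf = new char[7];
        int n = r.read(buf);          // reads "working"
        r.close();                    // guaranteed not to throw
        System.out.println(new String(buf, 0, n));
    }
}
```

This trades silent loss of a close-time error for the guarantee that the tokenizer never gets stuck holding a dead reader.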
[jira] [Commented] (SOLR-7136) Add an AutoPhrasing TokenFilter
[ https://issues.apache.org/jira/browse/SOLR-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612829#comment-14612829 ] Bill Bell commented on SOLR-7136:
-
+1 Let's get it committed

Add an AutoPhrasing TokenFilter
---
Key: SOLR-7136
URL: https://issues.apache.org/jira/browse/SOLR-7136
Project: Solr
Issue Type: New Feature
Reporter: Ted Sullivan
Attachments: SOLR-7136.patch, SOLR-7136.patch, SOLR-7136.patch

Adds an 'autophrasing' token filter which is designed to enable noun phrases that represent a single entity to be tokenized in a singular fashion. Adds support for ManagedResources and Query parser auto-phrasing support given LUCENE-2605. The rationale for this Token Filter and its use in solving the long standing multi-term synonym problem in Lucene/Solr has been documented online. http://lucidworks.com/blog/automatic-phrase-tokenization-improving-lucene-search-precision-by-more-precise-linguistic-analysis/ https://lucidworks.com/blog/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6655) All analysis factory constructors fail if the passed in map is immutable
[ https://issues.apache.org/jira/browse/LUCENE-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Trejkaz updated LUCENE-6655:
-
Summary: All analysis factory constructors fail if the passed in map is immutable (was: StandardTokenizerFactory constructor fails if the passed in map is immutable)

There are ways to track that which don't require mutating a map that doesn't belong to you. If all the analysis factories do that by mutating *my* map, then they're all broken. I'll update the summary accordingly.

All analysis factory constructors fail if the passed in map is immutable
-
Key: LUCENE-6655
URL: https://issues.apache.org/jira/browse/LUCENE-6655
Project: Lucene - Core
Issue Type: Bug
Affects Versions: 5.1
Reporter: Trejkaz

One of our tests tried to initialise a StandardTokenizer by passing a version into the factory. Unfortunately, this required passing a map:
{code}
return new StandardTokenizerFactory(ImmutableMap.of(
    AbstractAnalysisFactory.LUCENE_MATCH_VERSION_PARAM,
    Version.LUCENE_4_6_1.toString()
)).create();
{code}
This then fails:
{noformat}
java.lang.UnsupportedOperationException
    at com.google.common.collect.ImmutableMap.remove(ImmutableMap.java:338)
    at org.apache.lucene.analysis.util.AbstractAnalysisFactory.get(AbstractAnalysisFactory.java:122)
    at org.apache.lucene.analysis.util.AbstractAnalysisFactory.<init>(AbstractAnalysisFactory.java:71)
    at org.apache.lucene.analysis.util.TokenizerFactory.<init>(TokenizerFactory.java:70)
    at org.apache.lucene.analysis.standard.StandardTokenizerFactory.<init>(StandardTokenizerFactory.java:42)
{noformat}
I suspect that someone put in a `remove` when it should have been a `get`... bit of a weird mistake to make, especially when you don't know whether the map will permit it. I haven't verified whether the same occurs in later versions but getting updated to 5.2.1 will probably be the next thing on my list. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3963) SOLR: map() does not allow passing sub-functions in 4,5 parameters
[ https://issues.apache.org/jira/browse/SOLR-3963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612839#comment-14612839 ] Bill Bell commented on SOLR-3963:
-
This is still a valid enhancement

SOLR: map() does not allow passing sub-functions in 4,5 parameters
--
Key: SOLR-3963
URL: https://issues.apache.org/jira/browse/SOLR-3963
Project: Solr
Issue Type: Improvement
Affects Versions: 4.0
Reporter: Bill Bell
Assignee: Hoss Man
Priority: Minor
Attachments: SOLR-3963.2.patch

I want to do: boost=map(achievement_count,1,1000,recip(achievement_count,-.5,10,25),1) I want to return recip(achievement_count,-.5,10,25) if achievement_count is between 1 and 1,000. For any other values I want to return 1. I cannot get it to work. I get the error below. Interesting this does work: boost=recip(map(achievement_count,0,0,-200),-.5,10,25) It almost appears that map() cannot take a function. Specified argument was out of the range of valid values. Parameter name: value Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: value Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [ArgumentOutOfRangeException: Specified argument was out of the range of valid values. 
Parameter name: value] System.Web.HttpResponse.set_StatusDescription(String value) +5200522 FacilityService.Controllers.FacilityController.ActionCompleted(String actionName, IFacilityResults results) +265 FacilityService.Controllers.FacilityController.SearchByPointCompleted(IFacilityResults results) +25 lambda_method(Closure , ControllerBase , Object[] ) +114 System.Web.Mvc.Async.c__DisplayClass7.BeginExecuteb__5(IAsyncResult asyncResult) +283 System.Web.Mvc.Async.c__DisplayClass41.BeginInvokeAsynchronousActionMethodb__40(IAsyncResult asyncResult) +22 System.Web.Mvc.Async.c__DisplayClass3b.BeginInvokeActionMethodWithFiltersb__35() +120 System.Web.Mvc.Async.c__DisplayClass51.InvokeActionMethodFilterAsynchronouslyb__4b() +452 System.Web.Mvc.Async.c__DisplayClass39.BeginInvokeActionMethodWithFiltersb__38(IAsyncResult asyncResult) +15 System.Web.Mvc.Async.c__DisplayClass2c.BeginInvokeActionb__22() +33 System.Web.Mvc.Async.c__DisplayClass27.BeginInvokeActionb__24(IAsyncResult asyncResult) +240 System.Web.Mvc.c__DisplayClass19.BeginExecuteCoreb__14(IAsyncResult asyncResult) +28 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.Mvc.AsyncController.EndExecuteCore(IAsyncResult asyncResult) +63 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.Mvc.c__DisplayClassb.BeginProcessRequestb__4(IAsyncResult asyncResult) +42 System.Web.Mvc.Async.c__DisplayClass4.MakeVoidDelegateb__3(IAsyncResult ar) +15 System.Web.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar) +282 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6651: -- Attachment: LUCENE-6651.patch Minor updates, adding the Java 8 @FunctionalInterface to AttributeReflector. Will commit this soon and provide a patch for branch_5x, with all attributes but preserving the reflection inside an AccessController.doPrivileged(). Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow access to private fields if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build method handles that can be executed to report the fields. Downside: We have to require subclasses that want this automatic reflection to pass a Lookup object in the ctor's {{super(MethodHandles.lookup())}} call. 
This breaks backwards compatibility for implementors of AttributeImpls. - The second idea is to remove the whole reflectWith default impl and make the method abstract. This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards compatibility, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged().
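The MethodHandles.Lookup pattern described above can be sketched as follows (a hedged illustration, not the actual Lucene API — class names here are invented): the base class receives a Lookup created inside the subclass, which grants it private access to that subclass's fields without any setAccessible call.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

abstract class ReflectableAttribute {
    private final MethodHandles.Lookup lookup;

    // Subclasses must call super(MethodHandles.lookup()) so the Lookup
    // carries their own private-access rights.
    protected ReflectableAttribute(MethodHandles.Lookup lookup) {
        this.lookup = lookup;
    }

    Map<String, Object> reflectFields() {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Field f : getClass().getDeclaredFields()) {
            try {
                // Access is checked against the subclass's Lookup, so the
                // private field is readable without setAccessible(true).
                MethodHandle getter = lookup.unreflectGetter(f);
                out.put(f.getName(), getter.invoke(this));
            } catch (Throwable t) {
                throw new RuntimeException(t);
            }
        }
        return out;
    }
}

class CharTermSketch extends ReflectableAttribute {
    private String term = "lucene";

    CharTermSketch() {
        super(MethodHandles.lookup()); // must be created in this class
    }

    public static void main(String[] args) {
        System.out.println(new CharTermSketch().reflectFields()); // prints {term=lucene}
    }
}
```

A Lookup created in one class cannot be used to pry open another class's privates, which is exactly why the constructor argument has to come from the subclass itself.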
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612169#comment-14612169 ] Uwe Schindler commented on LUCENE-6651: --- The only special case is CharTermAttributeImpl, because it needs to comply with CharSequence API. This is also one reason why we added explicit attribute reflection back in 3.x. Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch
[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_45) - Build # 4864 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4864/ Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.lucene.store.TestRateLimiter.testPause Error Message: we should sleep less than 2 seconds but did: 2869 millis Stack Trace: java.lang.AssertionError: we should sleep less than 2 seconds but did: 2869 millis at __randomizedtesting.SeedInfo.seed([E311D588FB15F5B7:85B191B67038ACB1]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.store.TestRateLimiter.testPause(TestRateLimiter.java:41) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 829 lines...]
[junit4] Suite: org.apache.lucene.store.TestRateLimiter
[junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestRateLimiter -Dtests.method=testPause -Dtests.seed=E311D588FB15F5B7 -Dtests.slow=true -Dtests.locale=pt_PT -Dtests.timezone=Asia/Kuala_Lumpur -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[junit4] FAILURE 2.65s J0 | TestRateLimiter.testPause
[junit4]    > Throwable #1: java.lang.AssertionError: we should sleep less than 2 seconds but did: 2869 millis
[junit4]    > at __randomizedtesting.SeedInfo.seed([E311D588FB15F5B7:85B191B67038ACB1]:0)
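For context, the arithmetic this test exercises can be sketched roughly as follows (a simplified illustration, not Lucene's actual SimpleRateLimiter): a writer must pause long enough that the bytes it wrote stay within the configured MB/sec budget, and the failure above is the test's guard tripping when the observed sleep overshoots that budget on a loaded machine.

```java
class RateLimitPauseSketch {

    // Nanoseconds to pause so that `bytes` written stay within mbPerSec.
    static long pauseNanos(long bytes, double mbPerSec) {
        double seconds = bytes / (mbPerSec * 1024 * 1024);
        return (long) (seconds * 1_000_000_000L);
    }

    public static void main(String[] args) {
        // Writing 40 MB at 10 MB/sec should pause about 4 seconds in total.
        System.out.println(pauseNanos(40L * 1024 * 1024, 10.0)); // 4000000000
    }
}
```

Thread.sleep only guarantees a minimum pause, so on a busy VM the actual sleep can exceed the computed one — which is why assertions like "should sleep less than 2 seconds" are inherently flaky on shared CI hosts.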
[jira] [Commented] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612259#comment-14612259 ] Anshum Gupta commented on SOLR-7692: Thanks Noble! This is much needed! I am yet to look at this, but can you separate out the two issues, i.e. authentication and authorization? Also, let's not call them zkAuth* plugins, as they don't authenticate ZK but just use ZK for the implementation. The name might mislead users. I'll take a look at the actual code over the weekend. Implement BasicAuth based impl for the new Authentication/Authorization APIs Key: SOLR-7692 URL: https://issues.apache.org/jira/browse/SOLR-7692 Project: Solr Issue Type: New Feature Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-7692.patch This involves various components h2. Authentication A basic-auth based authentication filter. This should retrieve the user credentials from ZK. The user name and sha1 hash of the password should be stored in ZK. Sample authentication json: {code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuth",
    "users": {
      "john": "09fljnklnoiuy98 buygujkjnlk",
      "david": "f678njfgfjnklno iuy9865ty",
      "pete": "87ykjnklndfhjh8 98uyiy98"
    }
  }
}
{code} h2. authorization plugin This would store the roles of various users and their privileges in ZK. Sample authorization.json: {code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collectionadmin": { "roles": ["admin"] },
      "coreadmin": { "roles": ["admin"] },
      "config-api": { "roles": ["admin"] },  // all collections
      "schema-api": { "roles": ["admin"] },
      "update": { "roles": null },  // all collections
      "query": { "roles": null },
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "roles": ["somebody"]  // create a dir called /keys/somebody and put in usr.pwd files
      }
    }
  }
}
{code} We will also need to provide APIs to create users and assign them roles.
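The credential check the description implies can be sketched like this (an illustrative sketch, not the actual BasicAuthPlugin code — names and the salt handling are assumptions): the value stored in ZK is a sha1 hash, and a salted variant, which the review below asks to cover with a test, would hash salt + password.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

class BasicAuthSketch {

    static String sha1Base64(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            return Base64.getEncoder()
                         .encodeToString(md.digest(input.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is always available in the JDK
        }
    }

    // Compare the stored hash against the presented password in constant time.
    static boolean authenticate(String storedHash, String salt, String password) {
        byte[] computed = sha1Base64(salt + password).getBytes(StandardCharsets.UTF_8);
        byte[] stored = storedHash.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(stored, computed);
    }

    public static void main(String[] args) {
        String stored = sha1Base64("pepper" + "s3cret");
        System.out.println(authenticate(stored, "pepper", "s3cret")); // true
        System.out.println(authenticate(stored, "pepper", "wrong"));  // false
    }
}
```

MessageDigest.isEqual is used instead of String.equals to avoid a timing side channel on the hash comparison.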
[jira] [Created] (SOLR-7749) Schema API: commands (e.g. add-field-type) should fail if unknown params are supplied
Steve Rowe created SOLR-7749: Summary: Schema API: commands (e.g. add-field-type) should fail if unknown params are supplied Key: SOLR-7749 URL: https://issues.apache.org/jira/browse/SOLR-7749 Project: Solr Issue Type: Bug Reporter: Steve Rowe Priority: Minor On the solr-user mailing list, Søren reported trying to add a field type via the Schema API. The command partially succeeded by ignoring mistyped params - below I reproduced the problem using data_driven_schema_configs: {noformat}
PROMPT$ curl -X POST http://localhost:8983/solr/gettingstarted/schema -H 'Content-type: application/json' -d '{
  "add-field-type": {
    "name": "myTxtField",
    "class": "solr.TextField",
    "positionIncrementGap": 100,
    "analyzer": {
      "charFilter": { "class": "solr.MappingCharFilterFactory", "mapping": "mapping-ISOLatin1Accent.txt" },
      "filter": { "class": "solr.LowerCaseFilterFactory" },
      "tokenizer": { "class": "solr.StandardTokenizerFactory" }
    }
  }
}'
{ "responseHeader": { "status": 0, "QTime": 68 }}
PROMPT$ curl http://localhost:8983/solr/gettingstarted/schema/fieldtypes/myTxtField
{
  "responseHeader": { "status": 0, "QTime": 123 },
  "fieldType": {
    "name": "myTxtField",
    "class": "solr.TextField",
    "positionIncrementGap": 100,
    "analyzer": { "tokenizer": { "class": "solr.StandardTokenizerFactory" }},
    "fields": [],
    "dynamicFields": []
  }
}
{noformat} Only the tokenizer is included in the field type, because charFilter and filter are misspelled and have the wrong value type: both should be plural and should have array values. The above request succeeded by ignoring the misspelled params - no charFilter or filter was created in the analyzer. It really should have failed and sent back an error explaining the problem.
The following succeeds for me (after first issuing a {{delete-field-type}} command and copying {{mapping-ISOLatin1Accent.txt}} into the {{gettingstarted/conf/}} directory): {noformat}
curl -X POST http://localhost:8983/solr/gettingstarted/schema -H 'Content-type: application/json' -d '{
  "add-field-type": {
    "name": "myTxtField",
    "class": "solr.TextField",
    "positionIncrementGap": 100,
    "analyzer": {
      "charFilters": [{ "class": "solr.MappingCharFilterFactory", "mapping": "mapping-ISOLatin1Accent.txt" }],
      "tokenizer": { "class": "solr.StandardTokenizerFactory" },
      "filters": [{ "class": "solr.LowerCaseFilterFactory" }]
    }
  }
}'
{noformat}
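The strict checking this issue asks for amounts to rejecting unknown keys instead of dropping them; a minimal sketch (illustrative names, not Solr's actual validation code) looks like:

```java
import java.util.Map;
import java.util.Set;

class StrictAnalyzerParams {

    static final Set<String> KNOWN_KEYS = Set.of("tokenizer", "filters", "charFilters");

    // Throws on the first unknown analyzer key rather than silently ignoring it.
    static void validate(Map<String, Object> analyzerSpec) {
        for (String key : analyzerSpec.keySet()) {
            if (!KNOWN_KEYS.contains(key)) {
                throw new IllegalArgumentException(
                        "Unknown analyzer parameter '" + key + "'; expected one of " + KNOWN_KEYS);
            }
        }
    }

    public static void main(String[] args) {
        try {
            // The misspelled singular key from the failing request above.
            validate(Map.of("tokenizer", "solr.StandardTokenizerFactory",
                            "charFilter", "solr.MappingCharFilterFactory"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With this kind of check, the original request would have failed fast with a message naming "charFilter" instead of returning status 0 and building a field type missing its char filters.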
[jira] [Comment Edited] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612125#comment-14612125 ] Uwe Schindler edited comment on LUCENE-6651 at 7/2/15 4:02 PM: --- Minor updates, adding the Java 8 @FunctionalInterface to AttributeReflector. Will commit this soon and provide a patch for branch_5x, with all implementations but preserving the reflection inside an AccessController.doPrivileged() for backwards compatibility. was (Author: thetaphi): Minor updates, adding the Java 8 @FunctionalInterface to AttributeReflector. Will commit this soon and provide a patch for branch_5x, with all attributes but preserving the reflection inside an AccessController.doPrivileged(). Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612168#comment-14612168 ] Uwe Schindler commented on LUCENE-6651: --- toString() should not be implemented, because it's done automatically using reflectWith(). Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch
[jira] [Updated] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6651: -- Attachment: LUCENE-6651-5x.patch Patch for 5.x, using AccessController.doPrivileged. If needed for Elasticsearch, I can backport just the changes in AttributeImpl to 5.2.2! Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch
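The 5.x fallback described here — keep the reflection-based default but run setAccessible inside a privileged block — can be sketched as follows (an illustrative sketch with invented class names, not the actual patch; note AccessController is deprecated on recent JDKs but still functional):

```java
import java.lang.reflect.AccessibleObject;
import java.lang.reflect.Field;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.LinkedHashMap;
import java.util.Map;

abstract class AttributeImplSketch {

    // Default reflectWith-style implementation: reflection over the
    // subclass's private fields, with only the setAccessible call
    // wrapped in a privileged block.
    Map<String, Object> reflectFields() {
        final Field[] fields = getClass().getDeclaredFields();
        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
            AccessibleObject.setAccessible(fields, true);
            return null;
        });
        Map<String, Object> out = new LinkedHashMap<>();
        try {
            for (Field f : fields) {
                out.put(f.getName(), f.get(this));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return out;
    }
}

class OffsetAttributeSketch extends AttributeImplSketch {
    private int startOffset = 3;
    private int endOffset = 7;

    public static void main(String[] args) {
        System.out.println(new OffsetAttributeSketch().reflectFields());
    }
}
```

Keeping the privileged scope as small as possible (just the setAccessible call, not the field reads) follows the usual doPrivileged guidance: only the operation that needs elevated permission runs under them.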
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2425 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2425/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestRebalanceLeaders.test Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:59851/o_ss, http://127.0.0.1:59857/o_ss, http://127.0.0.1:59863/o_ss, http://127.0.0.1:59843/o_ss, http://127.0.0.1:59835/o_ss] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:59851/o_ss, http://127.0.0.1:59857/o_ss, http://127.0.0.1:59863/o_ss, http://127.0.0.1:59843/o_ss, http://127.0.0.1:59835/o_ss] at __randomizedtesting.SeedInfo.seed([46F026DAD70DFE63:CEA4190079F1939B]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281) at org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:108) at org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 13119 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13119/ Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Captured an uncaught exception in thread: Thread[id=1656, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1656, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:41152: Could not find collection : awholynewstresscollection_collection0_0 at __randomizedtesting.SeedInfo.seed([6FECDD9611379631]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894) Build Log: [...truncated 10240 lines...] 
[junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest [junit4] 2 Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest_6FECDD9611379631-001/init-core-data-001 [junit4] 2 258258 INFO (SUITE-CollectionsAPIDistributedZkTest-seed#[6FECDD9611379631]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) [junit4] 2 258258 INFO (SUITE-CollectionsAPIDistributedZkTest-seed#[6FECDD9611379631]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: / [junit4] 2 258261 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2 258261 INFO (Thread-534) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2 258261 INFO (Thread-534) [] o.a.s.c.ZkTestServer Starting server [junit4] 2 258361 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.ZkTestServer start zk server on port:54698 [junit4] 2 258361 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 258362 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 258364 INFO (zkCallback-150-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@251bf193 name:ZooKeeperConnection Watcher:127.0.0.1:54698 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 258364 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 258364 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2 258364 INFO 
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2 258365 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 258366 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 258367 INFO (zkCallback-151-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@5c2e7fa3 name:ZooKeeperConnection Watcher:127.0.0.1:54698/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 258367 INFO (TEST-CollectionsAPIDistributedZkTest.test-seed#[6FECDD9611379631]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 258367 INFO
[jira] [Commented] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612251#comment-14612251 ] Mike Drob commented on SOLR-7692: - {code} + public static AuthorizationResponse OK = new AuthorizationResponse(200); + public static AuthorizationResponse FORBIDDEN = new AuthorizationResponse(403); + public static AuthorizationResponse PROMPT = new AuthorizationResponse(401); {code} Please make these final. {code} + private static Set<String> EMPTY_NULL_SET; {code} Also final. {code} + @Override + public void init(Map<String, Object> initInfo) { +mapping.put(null, new WildCardSupportMap()); +Map map = (Map) initInfo.get("roles"); +for (Object o : map.entrySet()) { + Map.Entry e = (Map.Entry) o; + String roleName = (String) e.getKey(); + usersVsRoles.put(roleName, readSet(map, roleName)); +} +map = (Map) initInfo.get("permissions"); +for (Object o : map.entrySet()) { + Map.Entry e = (Map.Entry) o; + Permission p = new Permission((String) e.getKey(), (Map) e.getValue()); + permissions.add(p); + add2Mapping(p); +} + } {code} Is it possible to use generic types instead of doing a bunch of casts? There's a bunch of other places with raw {{Map}} as well. {code} + //check permissions for a collection + //return true = allowed, false = not allowed, null = resource requires a principal but none available + private MatchStatus checkCollPerm(Map<String, List<Permission>> pathVsPerms, +AuthorizationContext context) { {code} This comment is misleading - probably left over from an earlier iteration. Please add a test case that uses the salt when authenticating. Do you think it would be reasonable to split out the dependency between BasicAuthPlugin and ZkAuthentication? I could imagine somebody wanting to do BasicAuth backed by a different store, were it available. Will continue to dive deeper in a bit. 
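To illustrate the generics suggestion above, here is a hypothetical rewrite of the patch's raw-Map loop — names mirror the excerpt, but this is a sketch, not the actual Solr code: the unchecked cast is pushed to one boundary and everything after it is fully typed.

```java
import java.util.*;

public class RoleInit {
    static final Map<String, Set<String>> usersVsRoles = new HashMap<>();

    @SuppressWarnings("unchecked")
    static void init(Map<String, Object> initInfo) {
        // single unchecked cast at the edge; the loop body needs no further casts
        Map<String, Object> roles = (Map<String, Object>) initInfo.get("roles");
        for (Map.Entry<String, Object> e : roles.entrySet()) {
            usersVsRoles.put(e.getKey(), readSet(e.getValue()));
        }
    }

    // tolerate a single value or a collection, as loosely-typed JSON configs do
    static Set<String> readSet(Object v) {
        if (v == null) return Collections.emptySet();
        if (v instanceof Collection) {
            Set<String> out = new LinkedHashSet<>();
            for (Object o : (Collection<?>) v) out.add(String.valueOf(o));
            return out;
        }
        return Collections.singleton(String.valueOf(v));
    }

    public static void main(String[] args) {
        Map<String, Object> info = new HashMap<>();
        info.put("roles", Map.of("admin", List.of("john")));
        init(info);
        System.out.println(usersVsRoles);
    }
}
```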
Implement BasicAuth based impl for the new Authentication/Authorization APIs Key: SOLR-7692 URL: https://issues.apache.org/jira/browse/SOLR-7692 Project: Solr Issue Type: New Feature Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-7692.patch This involves various components h2. Authentication A basic auth based authentication filter. This should retrieve the user credentials from ZK. The user name and sha1 hash of password should be stored in ZK sample authentication json {code:javascript} { "authentication": { "class": "solr.BasicAuth", "users": { "john": "09fljnklnoiuy98 buygujkjnlk", "david": "f678njfgfjnklno iuy9865ty", "pete": "87ykjnklndfhjh8 98uyiy98" } } } {code} h2. authorization plugin This would store the roles of various users and their privileges in ZK sample authorization.json {code:javascript} { "authorization": { "class": "solr.ZKAuthorization", "roles": { "admin": ["john"], "guest": ["john", "david", "pete"] }, "permissions": { "collectionadmin": { "roles": ["admin"] }, "coreadmin": { "roles": ["admin"] }, "config-api": { // all collections "roles": ["admin"] }, "schema-api": { "roles": ["admin"] }, "update": { // all collections "roles": null }, "query": { "roles": null }, "mycoll_update": { "collection": "mycoll", "path": ["/update/*"], "roles": ["somebody"] // create a dir called /keys/somebody and put in usr.pwd files } } } } {code} We will also need to provide APIs to create users and assign them roles -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
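The review above asks for a test that exercises the salt during authentication. A hedged sketch of the kind of credential check the issue describes — digest choice and encoding are assumptions for illustration (the issue text says SHA-1; names like `CredCheck` are invented here, not Solr's):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class CredCheck {
    // hash = base64(sha1(salt || password)); the stored value would live in ZK
    static String hash(String password, String salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(salt.getBytes(StandardCharsets.UTF_8));
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    // constant-time comparison of the stored hash against a re-hash of the presented password
    static boolean verify(String stored, String salt, String presented) throws Exception {
        return MessageDigest.isEqual(
            stored.getBytes(StandardCharsets.UTF_8),
            hash(presented, salt).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        String salt = "s3kr1t";
        String stored = hash("password", salt);
        System.out.println(verify(stored, salt, "password")); // true
        System.out.println(verify(stored, salt, "wrong"));    // false
    }
}
```

A salt test would simply assert that the same password with a different salt fails verification.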
[jira] [Resolved] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-6651. --- Resolution: Fixed Please reopen for backport of {{AttributeImpl#reflectWith(AttributeReflector)}} to 5.2.2! Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. This breaks backwards for implementors of AttributeImpls - The second idea is to remove the whole reflectWith default impl and make the method abstract. 
This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
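The MethodHandles idea from the issue above can be sketched roughly as follows — this is an assumption of the shape, not the committed Lucene API: the subclass creates a `Lookup` inside its own body and passes it up, giving the base class private-field access without `Field#setAccessible(true)`.

```java
import java.lang.invoke.MethodHandles;

public class LookupDemo {
    static abstract class Attr {
        private final MethodHandles.Lookup lookup;

        // the Lookup must be created inside the concrete subclass,
        // i.e. via super(MethodHandles.lookup()) in its constructor
        protected Attr(MethodHandles.Lookup lookup) { this.lookup = lookup; }

        // resolve a private field of the concrete subclass via its own lookup;
        // no setAccessible(true) and hence no doPrivileged() block needed
        Object read(String field, Class<?> type) throws Throwable {
            return lookup.findGetter(getClass(), field, type).invoke(this);
        }
    }

    static class CharAttr extends Attr {
        private int startOffset = 42; // private, yet readable through the lookup
        CharAttr() { super(MethodHandles.lookup()); }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(new CharAttr().read("startOffset", int.class)); // 42
    }
}
```

The "downside" Uwe mentions is visible here: every subclass must remember the `super(MethodHandles.lookup())` call, which breaks existing implementors.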
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612183#comment-14612183 ] ASF subversion and git services commented on LUCENE-6651: - Commit 1688863 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1688863 ] LUCENE-6651: AttributeImpl#reflectWith(AttributeReflector)'s default Impl was deprecated in 5.x. All code should implement this. In addition the default impl is now using AccessController.doPrivileged() to do the accessibility changes. Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-5x.patch, LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. 
Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. This breaks backwards for implementors of AttributeImpls - The second idea is to remove the whole reflectWith default impl and make the method abstract. This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4988 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4988/ Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.lucene.store.TestRateLimiter.testPause Error Message: we should sleep less than 2 seconds but did: 2220 millis Stack Trace: java.lang.AssertionError: we should sleep less than 2 seconds but did: 2220 millis at __randomizedtesting.SeedInfo.seed([F7DF3BD40A5FBA2C:917F7FEA8172E32A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.store.TestRateLimiter.testPause(TestRateLimiter.java:41) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 772 lines...] [junit4] Suite: org.apache.lucene.store.TestRateLimiter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestRateLimiter -Dtests.method=testPause -Dtests.seed=F7DF3BD40A5FBA2C -Dtests.slow=true -Dtests.locale=sr_ME -Dtests.timezone=America/Boa_Vista -Dtests.asserts=true -Dtests.file.encoding=Cp1252 [junit4] FAILURE 2.27s J0 | TestRateLimiter.testPause [junit4] > Throwable #1: java.lang.AssertionError: we should sleep less than 2 seconds but did: 2220 millis [junit4] > at __randomizedtesting.SeedInfo.seed([F7DF3BD40A5FBA2C:917F7FEA8172E32A]:0)
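The arithmetic behind the failing bound can be modeled in a few lines — a toy calculation, not Lucene's `SimpleRateLimiter`, and the byte counts here are illustrative assumptions: at a target rate of R MB/s, pausing for B megabytes should take about B/R seconds, so 2220 ms against a ~2000 ms expectation means the OS simply overslept, a common flake on loaded Windows build machines.

```java
public class PauseModel {
    // nominal pause in millis for `megabytes` of data at `mbPerSec`
    static long expectedPauseMs(double megabytes, double mbPerSec) {
        return Math.round(1000.0 * megabytes / mbPerSec);
    }

    public static void main(String[] args) {
        // e.g. 40 MB at 20 MB/s -> 2000 ms nominal; the test saw 2220 ms
        System.out.println(expectedPauseMs(40, 20));
    }
}
```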
timeAllowed parameter ignored edge-case bug?
Hello. Was looking at https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java code to better understand the various getDoc... methods and how collectors are combined. In the scenario outlined below, the second run's timeAllowed parameter is unexpectedly ignored. Could this be intentionally so somehow (q vs. fq processing?, Collector vs. LeafCollector?, DocList vs. DocSet?), or is it an edge-case bug? Regards, Christine

---

solrconfig characteristics:
* a queryResultsCache is configured
* no filterCache is configured

query characteristics:
* q parameter present
* at least one fq parameter present
* sort parameter present (and does not require the score field)
* GET_DOCSET flag is set e.g. via the StatsComponent i.e. stats=true parameter

runtime characteristics:
* first run of the query gets a queryResultsCache-miss and respects timeAllowed
* second run gets a queryResultsCache-hit and ignores timeAllowed (but still makes use of the lucene IndexSearcher)

code path execution details (first run):
* SolrIndexSearcher.search calls getDocListC
* getDocListC called queryResultCache.get which found nothing
* getDocListC calls getDocListAndSetNC
* getDocListAndSetNC calls buildAndRunCollectorChain
* buildAndRunCollectorChain constructs TimeLimitingCollector

code path execution details (second run):
* SolrIndexSearcher.search calls getDocListC
* getDocListC called queryResultCache.get which found something
* getDocListC calls getDocSet(List<Query> queries)
* getDocSet(List<Query> queries) iterates over IndexSearcher.leafContexts
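The mechanism behind the reported edge case can be shown with a minimal self-contained model — these are not Lucene's actual classes, just an illustration of why the time check disappears: the timeAllowed budget lives in a wrapping collector built by buildAndRunCollectorChain, so any code path that bypasses that chain (the cache-hit getDocSet path) never consults it.

```java
import java.util.concurrent.atomic.AtomicLong;

public class TimeAllowedDemo {

    interface SimpleCollector { void collect(int doc); }

    static class TimeExceeded extends RuntimeException {}

    // wraps a delegate and checks a fake millisecond clock on every hit,
    // analogous to what a time-limiting collector does
    static class TimeLimited implements SimpleCollector {
        final SimpleCollector delegate;
        final AtomicLong clock;
        final long deadline;
        TimeLimited(SimpleCollector delegate, AtomicLong clock, long timeAllowed) {
            this.delegate = delegate;
            this.clock = clock;
            this.deadline = clock.get() + timeAllowed;
        }
        public void collect(int doc) {
            if (clock.get() > deadline) throw new TimeExceeded();
            delegate.collect(doc);
        }
    }

    // run `docs` docs through the wrapped chain; each doc advances the clock 1 "ms"
    static int runLimited(long timeAllowed, int docs) {
        AtomicLong clock = new AtomicLong();
        int[] collected = {0};
        SimpleCollector chain = new TimeLimited(d -> collected[0]++, clock, timeAllowed);
        for (int doc = 0; doc < docs; doc++) {
            clock.incrementAndGet();
            try { chain.collect(doc); } catch (TimeExceeded e) { break; }
        }
        return collected[0];
    }

    public static void main(String[] args) {
        System.out.println(runLimited(10, 100)); // budget enforced on the wrapped path
        // a cache-hit path calling the inner collector directly would collect all
        // 100 docs, because the deadline check is never in its call chain
    }
}
```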
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612148#comment-14612148 ] ASF subversion and git services commented on LUCENE-6651: - Commit 1688855 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1688855 ] LUCENE-6651: AttributeImpl#reflectWith(AttributeReflector) was made abstract and has no reflection-based default implementation anymore. Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. 
This breaks backwards for implementors of AttributeImpls - The second idea is to remove the whole reflectWith default impl and make the method abstract. This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6651) Remove private field reflection (setAccessible) in AttributeImpl#reflectWith
[ https://issues.apache.org/jira/browse/LUCENE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612149#comment-14612149 ] Mike Drob commented on LUCENE-6651: --- Nit: {{ComputedRangesAttributeImpl c = (ComputedRangesAttributeImpl) super.clone();;}} double semi-colon. Why remove {{UniqueFieldAttributeImpl.toString}}? Seems like it would be safe to leave it even if it is not strictly necessary, yes? Remove private field reflection (setAccessible) in AttributeImpl#reflectWith Key: LUCENE-6651 URL: https://issues.apache.org/jira/browse/LUCENE-6651 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 5.2.1 Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.3, Trunk Attachments: LUCENE-6651-MethodHandles.patch, LUCENE-6651.patch, LUCENE-6651.patch In AttributeImpl we currently have a default implementation of reflectWith (which is used by toString() and other methods) that uses reflection to list all private fields of the implementation class and reports them to the AttributeReflector (used by Solr and Elasticsearch to show analysis output). Unfortunately this default implementation needs to access private fields of a subclass, which does not work without doing Field#setAccessible(true). And this is done without AccessController#doPrivileged()! There are 2 solutions to solve this: - Reimplement the whole thing with MethodHandles. MethodHandles allow to access private fields, if you have a MethodHandles.Lookup object created from inside the subclass. The idea is to add a protected constructor taking a Lookup object (must come from same class). This Lookup object is then used to build methodHandles that can be executed to report the fields. Backside: We have to require subclasses that want this automatic reflection to pass a Lookup object in ctor's {{super(MethodHandles.lookup())}} call. 
This breaks backwards for implementors of AttributeImpls - The second idea is to remove the whole reflectWith default impl and make the method abstract. This would require a bit more work in tons of AttributeImpl classes, but you already have to implement something like this for equals/hashCode, so it's just listing all fields. This would of course break backwards, too. So my plan would be to implement the missing methods everywhere (as if it were abstract), but keep the default implementation in 5.x. We just would do AccessController.doPrivileged(). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-7692: - Attachment: SOLR-7692.patch first patch with basic tests Implement BasicAuth based impl for the new Authentication/Authorization APIs Key: SOLR-7692 URL: https://issues.apache.org/jira/browse/SOLR-7692 Project: Solr Issue Type: New Feature Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-7692.patch This involves various components h2. Authentication A basic auth based authentication filter. This should retrieve the user credentials from ZK. The user name and sha1 hash of password should be stored in ZK sample authentication json {code:javascript} { "authentication": { "class": "solr.BasicAuth", "users": { "john": "09fljnklnoiuy98 buygujkjnlk", "david": "f678njfgfjnklno iuy9865ty", "pete": "87ykjnklndfhjh8 98uyiy98" } } } {code} h2. authorization plugin This would store the roles of various users and their privileges in ZK sample authorization.json {code:javascript} { "authorization": { "class": "solr.ZKAuthorization", "roles": { "admin": ["john"], "guest": ["john", "david", "pete"] }, "permissions": { "collectionadmin": { "roles": ["admin"] }, "coreadmin": { "roles": ["admin"] }, "config-api": { // all collections "roles": ["admin"] }, "schema-api": { "roles": ["admin"] }, "update": { // all collections "roles": null }, "query": { "roles": null }, "mycoll_update": { "collection": "mycoll", "path": ["/update/*"], "roles": ["somebody"] // create a dir called /keys/somebody and put in usr.pwd files } } } } {code} We will also need to provide APIs to create users and assign them roles -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2425 - Failure!
OK, maybe this is what Mark Miller was talking about in https://issues.apache.org/jira/browse/SOLR-6971. I see occasional issues like this in the lazy core tests. Both seem to have the no live servers error and I'm wondering if there's something underlying both (and, for that matter, other test failures) On Thu, Jul 2, 2015 at 1:00 PM, Policeman Jenkins Server jenk...@thetaphi.de wrote: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2425/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestRebalanceLeaders.test Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:59851/o_ss, http://127.0.0.1:59857/o_ss, http://127.0.0.1:59863/o_ss, http://127.0.0.1:59843/o_ss, http://127.0.0.1:59835/o_ss] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:59851/o_ss, http://127.0.0.1:59857/o_ss, http://127.0.0.1:59863/o_ss, http://127.0.0.1:59843/o_ss, http://127.0.0.1:59835/o_ss] at __randomizedtesting.SeedInfo.seed([46F026DAD70DFE63:CEA4190079F1939B]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281) at org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:108) at org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at
[jira] [Updated] (SOLR-7223) Tooltips admin panel get switched midway edismax
[ https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thanatos updated SOLR-7223: --- Attachment: SOLR-7223.patch Looks like the changes were not applied to the Angular UI so I rolled a new patch. Hints to test:
* {{git clone https://github.com/apache/lucene-solr}}
* {{cd lucene-solr/}}
* {{git checkout branch_5x}}
* {{git apply SOLR-7223.patch}}
* {{cd solr/}}
* {{ant server}}
* {{./bin/solr -e schemaless}}
* Go to both admin interfaces
** regular http://localhost:8983/solr/#/gettingstarted/query
** Angular http://localhost:8983/solr/index.html#/gettingstarted/query
* Make sure tooltips are set on both input boxes and labels

Maybe someone can confirm? I'll update the pull request on GitHub Tooltips admin panel get switched midway edismax Key: SOLR-7223 URL: https://issues.apache.org/jira/browse/SOLR-7223 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.1 Reporter: Jelle Janssens Priority: Trivial Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png When hovering over the tooltips in SOLR admin, in the edismax section, the tooltip gets switched from being set on the input box to the label. This happens between bf and uf. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7223) Tooltips admin panel get switched midway edismax
[ https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612385#comment-14612385 ] ASF GitHub Bot commented on SOLR-7223: -- GitHub user Fengtan opened a pull request: https://github.com/apache/lucene-solr/pull/179 Make tooltips consistent on admin panel (regular UI & Angular UI). https://issues.apache.org/jira/browse/SOLR-7223 You can merge this pull request into a Git repository by running: $ git pull https://github.com/Fengtan/lucene-solr SOLR-7223 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/179.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #179 commit 0717f11d46f13f1de102664dfdd9412962a3d8da Author: Fengtan feng...@847318.no-reply.drupal.org Date: 2015-07-02T19:01:34Z Make tooltips consistent on admin panel (regular UI & Angular UI). Tooltips admin panel get switched midway edismax Key: SOLR-7223 URL: https://issues.apache.org/jira/browse/SOLR-7223 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.1 Reporter: Jelle Janssens Priority: Trivial Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png When hovering over the tooltips in SOLR admin, in the edismax section, the tooltip gets switched from being set on the input box to the label. This happens between bf and uf. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6646) make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free
[ https://issues.apache.org/jira/browse/LUCENE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612411#comment-14612411 ] ASF subversion and git services commented on LUCENE-6646: - Commit 1688894 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1688894 ] LUCENE-6646: Make EarlyTerminatingCollector SortingMergePolicy-free. Close #175 Close #178 make the EarlyTerminatingSortingCollector constructor SortingMergePolicy-free - Key: LUCENE-6646 URL: https://issues.apache.org/jira/browse/LUCENE-6646 Project: Lucene - Core Issue Type: Wish Reporter: Christine Poerschke Priority: Minor motivation and summary of proposed changes to follow via github pull request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9
[ https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612332#comment-14612332 ] Shai Erera commented on SOLR-7748: -- Thanks Shawn. I actually started to work with the J9 team over the past few weeks, on different aspects such as establishing a process for us to report bugs and have them run Lucene/Solr tests on J9 builds in order to detect JVM issues. The team is also in contact with Uwe and Robert, and it seems like things are heading in the right direction. The issues that are known to cause corruption are suspected to have been fixed, but since they were not reproducible with the latest J9 builds, we can't say it for sure. Therefore the warnings are still found in the JavaBugs page, but I assume that after we have Jenkins passing builds for a while, we will at least document that these issues were not reproduced with version X and onwards. As for this particular issue I think that we should have bin/solr successfully start on J9. The problem that I've fixed is just passing the right gc log flag, which is different in J9. With this patch, we have a way to also detect particular build versions and block them if we know they are bad. I don't think that blocking J9 entirely is a good solution though, and definitely the script currently doesn't explicitly block J9, it just fails with a wrong flag passed error. From a community perspective, I feel that blocking a JVM vendor entirely is wrong, but maybe I'm biased. At least I can confirm that the team gives a high priority to resolving any outstanding Lucene/Solr issues, so blocking J9 in our scripts (and code) feels wrong. 
Fix bin/solr to work on IBM J9 -- Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk Attachments: solr-7748.patch bin/solr doesn't work on IBM J9 because it sets the -Xloggc flag, while J9 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.
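The root cause described above is a one-flag difference between JVMs: HotSpot-based JVMs accept `-Xloggc:<file>` for GC logging, while IBM J9 uses `-Xverbosegclog:<file>`. A minimal sketch of how a startup script could pick the right flag from `java -version` output (the function name and the version-string matching are illustrative assumptions; the actual fix is in solr-7748.patch, which is not reproduced here):

```shell
# Choose the GC-log flag appropriate for the detected JVM.
#   $1: output of `java -version 2>&1`
#   $2: path for the GC log file
gc_log_flag() {
  case "$1" in
    *"IBM J9"*) echo "-Xverbosegclog:$2" ;;  # J9 rejects -Xloggc
    *)          echo "-Xloggc:$2" ;;         # HotSpot and derivatives
  esac
}

# Example:
gc_log_flag 'IBM J9 VM (build 2.8, JRE 1.8.0)' /var/solr/logs/gc.log
# -> -Xverbosegclog:/var/solr/logs/gc.log
```

The same `case` statement could be extended with patterns for specific known-bad build strings, which is the "detect particular build versions and block them" capability Shai mentions above.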
[jira] [Comment Edited] (SOLR-7748) Fix bin/solr to work on IBM J9
[ https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612343#comment-14612343 ] Hoss Man edited comment on SOLR-7748 at 7/2/15 6:39 PM: bq. With this patch, we have a way to also detect particular build versions and block them if we know they are bad. I don't think that blocking J9 entirely is a good solution though, and the script currently doesn't explicitly block J9; it just fails with a "wrong flag passed" error. +1 Whether the script should explicitly fail/warn that IBM J9 is known to have problems running Solr on startup is a question that should be discussed/answered in its own issue -- it certainly makes sense to me to go ahead and fix the script to not fail in a weird and confusing way just -because you're- (edit) *by the coincidence of* _trying_ to use J9. (by corollary: we have IndexWriter tests that give up if you try running on J9 because we know those tests are unreliable on J9, but that doesn't mean the IndexWriter class itself throws a RuntimeException if you try to instantiate it on a J9 JVM) was (Author: hossman): bq. With this patch, we have a way to also detect particular build versions and block them if we know they are bad. I don't think that blocking J9 entirely is a good solution though, and the script currently doesn't explicitly block J9; it just fails with a "wrong flag passed" error. +1 Whether the script should explicitly fail/warn that IBM J9 is known to have problems running Solr on startup is a question that should be discussed/answered in its own issue -- it certainly makes sense to me to go ahead and fix the script to not fail in a weird and confusing way just because you're _trying_ to use J9.
(by corollary: we have IndexWriter tests that give up if you try running on J9 because we know those tests are unreliable on J9, but that doesn't mean the IndexWriter class itself throws a RuntimeException if you try to instantiate it on a J9 JVM) Fix bin/solr to work on IBM J9 -- Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk Attachments: solr-7748.patch bin/solr doesn't work on IBM J9 because it sets the -Xloggc flag, while J9 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.
[GitHub] lucene-solr pull request: SOLR-7223: Make tooltips consistent on a...
Github user Fengtan closed the pull request at: https://github.com/apache/lucene-solr/pull/151 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
[jira] [Commented] (SOLR-7223) Tooltips admin panel get switched midway edismax
[ https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612366#comment-14612366 ] ASF GitHub Bot commented on SOLR-7223: -- Github user Fengtan closed the pull request at: https://github.com/apache/lucene-solr/pull/151 Tooltips admin panel get switched midway edismax Key: SOLR-7223 URL: https://issues.apache.org/jira/browse/SOLR-7223 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.1 Reporter: Jelle Janssens Priority: Trivial Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png When hovering over the tooltips in SOLR admin, in the edismax section, the tooltip gets switched from being set on the input box to the label. This happens between bf and uf.
[jira] [Commented] (SOLR-7223) Tooltips admin panel get switched midway edismax
[ https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612365#comment-14612365 ] ASF GitHub Bot commented on SOLR-7223: -- Github user Fengtan commented on the pull request: https://github.com/apache/lucene-solr/pull/151#issuecomment-118124610 Closing pull request: will re-open a new one targeted at branch_5x instead of master Tooltips admin panel get switched midway edismax Key: SOLR-7223 URL: https://issues.apache.org/jira/browse/SOLR-7223 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.1 Reporter: Jelle Janssens Priority: Trivial Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png When hovering over the tooltips in SOLR admin, in the edismax section, the tooltip gets switched from being set on the input box to the label. This happens between bf and uf.
[GitHub] lucene-solr pull request: SOLR-7223: Make tooltips consistent on a...
Github user Fengtan commented on the pull request: https://github.com/apache/lucene-solr/pull/151#issuecomment-118124610 Closing pull request: will re-open a new one targeted at branch_5x instead of master
[GitHub] lucene-solr pull request: Make tooltips consistent on admin panel ...
GitHub user Fengtan opened a pull request: https://github.com/apache/lucene-solr/pull/179 Make tooltips consistent on admin panel (regular UI & Angular UI). https://issues.apache.org/jira/browse/SOLR-7223 You can merge this pull request into a Git repository by running: $ git pull https://github.com/Fengtan/lucene-solr SOLR-7223 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/179.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #179 commit 0717f11d46f13f1de102664dfdd9412962a3d8da Author: Fengtan feng...@847318.no-reply.drupal.org Date: 2015-07-02T19:01:34Z Make tooltips consistent on admin panel (regular UI & Angular UI).
[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9
[ https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612343#comment-14612343 ] Hoss Man commented on SOLR-7748: bq. With this patch, we have a way to also detect particular build versions and block them if we know they are bad. I don't think that blocking J9 entirely is a good solution though, and the script currently doesn't explicitly block J9; it just fails with a "wrong flag passed" error. +1 Whether the script should explicitly fail/warn that IBM J9 is known to have problems running Solr on startup is a question that should be discussed/answered in its own issue -- it certainly makes sense to me to go ahead and fix the script to not fail in a weird and confusing way just because you're _trying_ to use J9. (by corollary: we have IndexWriter tests that give up if you try running on J9 because we know those tests are unreliable on J9, but that doesn't mean the IndexWriter class itself throws a RuntimeException if you try to instantiate it on a J9 JVM) Fix bin/solr to work on IBM J9 -- Key: SOLR-7748 URL: https://issues.apache.org/jira/browse/SOLR-7748 Project: Solr Issue Type: Bug Components: Server Reporter: Shai Erera Assignee: Shai Erera Fix For: 5.3, Trunk Attachments: solr-7748.patch bin/solr doesn't work on IBM J9 because it sets the -Xloggc flag, while J9 supports -Xverbosegclog. This prevents using bin/solr to start it on J9.