Re: killing a Solr instance leaves the state in Zookeeper "active"
Is this documented anywhere outside of the JIRAs you mentioned Erick (or anyone else)? I can only speak for myself, but I don't think I would've expected/caught that as a potential Solr consumer, even though it is working as designed. If it doesn't make sense to actually change this, ensuring this is covered by the documentation might be a good compromise/follow-up. On Fri, Oct 23, 2015 at 1:55 PM, Erick Erickson wrote: > Not so much a problem as behavior I wasn't fully expecting. It does > seem a little trappy to have this thing that's supposed to be the > state of the collection but then require that another znode be > checked to see if state.json is telling the truth. > > In the particular case that came up, a monitoring system was trying > to generate alerts when a node went down by relying on the state.json > znode, but no alert was being generated in this case. > > BTW, this is 4.6, I suspect the eventual answer is to upgrade and > use the collections API CLUSTERSTATUS... > > I don't have strong feelings about this, mostly throwing it out for > discussion. I suppose the goal here is to keep any client from having > to directly look at the state.json file and provide APIs that conceal > this kind of thing. > > Your point about the complexity of publishing state for other nodes > is well taken... > > > On Fri, Oct 23, 2015 at 10:29 AM, Shalin Shekhar Mangar > wrote: >> This is expected and works as designed. We have enough complexity in >> publishing state for other nodes (LIR) and we shouldn't add any more. >> Besides what if the leader itself was killed, who changes the state >> then? >> >> What problem are you trying to solve?
>> >> On Fri, Oct 23, 2015 at 10:19 PM, Erick Erickson >> wrote: >>> If I kill a replica with -9, the state.json node never gets updated, >>> the node shows as "active" >>> >>> There is code around that checks the live_nodes to see whether the >>> state.json node can be believed, and Varun pointed me at Solr JIRAs >>> for making sure CLUSTERSTATUS consults live_nodes, indicating that >>> this is something that's expected. But it seems trappy. >>> >>> My question is whether it's worth raising a JIRA. The leader could >>> notice a mismatch and update state.json or something like that. >>> >>> I'll raise a JIRA if it seems like something that should be discussed. >>> >>> Let me know, >>> Erick >>> >>> - >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >>> For additional commands, e-mail: dev-h...@lucene.apache.org >>> >> >> >> >> -- >> Regards, >> Shalin Shekhar Mangar.
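The cross-check Erick describes — trusting a replica's published state only while its node still appears under /live_nodes — can be sketched as a small helper. This is a minimal illustration, not Solr's actual code; the class and method names are hypothetical:

```java
import java.util.Set;

// Hypothetical sketch: a replica's state from state.json is only believable
// if its node still has an ephemeral entry under /live_nodes. A node killed
// with -9 leaves its last published state ("active") behind in state.json.
public class EffectiveState {
    public static String effectiveState(String publishedState, String nodeName,
                                        Set<String> liveNodes) {
        // No live_nodes entry means the process is gone, whatever state.json says.
        return liveNodes.contains(nodeName) ? publishedState : "down";
    }
}
```

This is essentially the check the CLUSTERSTATUS JIRAs mentioned above add on the server side, so clients don't have to read state.json directly.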
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 830 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/830/ 4 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTests Error Message: There are still nodes recoverying - waited for 330 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds at __randomizedtesting.SeedInfo.seed([1DD2ACE0381532AA:15B2D9CC371B1AA1]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTestOps(CdcrReplicationDistributedZkTest.java:430) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTests(CdcrReplicationDistributedZkTest.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970679#comment-14970679 ] ASF subversion and git services commented on SOLR-8157: --- Commit 1710150 from [~upayavira] in branch 'dev/trunk' [ https://svn.apache.org/r1710150 ] SOLR-8157 Make links between nodes work correctly > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Assignee: Upayavira >Priority: Minor > Labels: angularjs > Attachments: SOLR-8157.patch > > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira resolved SOLR-8157. - Resolution: Fixed > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Assignee: Upayavira >Priority: Minor > Labels: angularjs > Attachments: SOLR-8157.patch > > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970680#comment-14970680 ] ASF subversion and git services commented on SOLR-8157: --- Commit 1710151 from [~upayavira] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1710151 ] SOLR-8157 Make links between nodes work correctly > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Assignee: Upayavira >Priority: Minor > Labels: angularjs > Attachments: SOLR-8157.patch > > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8074) LoadAdminUIServlet directly references admin.html
[ https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-8074. --- Resolution: Fixed Fix Version/s: Trunk > LoadAdminUIServlet directly references admin.html > - > > Key: SOLR-8074 > URL: https://issues.apache.org/jira/browse/SOLR-8074 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Mark Miller >Priority: Minor > Fix For: 5.4, Trunk > > Attachments: SOLR-8074.patch > > > The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning > it cannot be used in its current state to serve up the new admin UI. > An update is needed to this class to make it serve back whatever html file > was requested in the URL. There will, likely, only ever be two of them > mentioned in web.xml, but it would be really useful for changes to web.xml > not to require Java code changes also. > I'm hoping that someone with an up-and-running Java coding setup can make > this pretty trivial tweak. Any volunteers? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
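The tweak the issue asks for amounts to deriving the file to serve from the requested URL instead of hardcoding "admin.html". A hedged sketch of that path logic, with a made-up helper name (the real class is LoadAdminUIServlet; this is not its actual code):

```java
// Hypothetical sketch of the requested change: serve whichever HTML file the
// URL named, so web.xml can map new entry points without Java code changes.
public class AdminUiPath {
    public static String resourceFor(String servletPath) {
        int slash = servletPath.lastIndexOf('/');
        String name = servletPath.substring(slash + 1);
        // A bare "/" (empty file name) still falls back to the old default.
        return name.isEmpty() ? "admin.html" : name;
    }
}
```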
[jira] [Assigned] (SOLR-8113) Accept replacement strings in CloneFieldUpdateProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man reassigned SOLR-8113: -- Assignee: Hoss Man Gus: I've been out sick most of this week, and am now way behind on a bunch of stuff -- but this issue is on my radar, and I will try to review ASAP. > Accept replacement strings in CloneFieldUpdateProcessorFactory > -- > > Key: SOLR-8113 > URL: https://issues.apache.org/jira/browse/SOLR-8113 > Project: Solr > Issue Type: Improvement > Components: update >Affects Versions: 5.3 >Reporter: Gus Heck >Assignee: Hoss Man > Attachments: SOLR-8113.patch, SOLR-8113.patch > > > Presently CloneFieldUpdateProcessorFactory accepts regular expressions to > select source fields, which mirrors wildcards in the source for copyField in > the schema. This patch adds a counterpart to copyField's wildcards in the > dest attribute by interpreting the dest parameter as a regex replacement > string. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
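The proposed behavior — source fields selected by a regex, dest interpreted as a replacement string with group references — can be illustrated with plain java.util.regex. The pattern and replacement below are invented examples of the mechanism, not syntax taken from the patch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: mirrors copyField's wildcard-in-dest idea by treating
// the dest parameter as a regex replacement with capture-group references.
public class CloneFieldDest {
    public static String destFor(String sourceField, String srcRegex, String destReplacement) {
        Matcher m = Pattern.compile(srcRegex).matcher(sourceField);
        // Clone only fields whose whole name matches the source pattern.
        return m.matches() ? m.replaceAll(destReplacement) : null;
    }
}
```

For example, a source pattern of "(.+)_s" with a dest of "$1_t" would clone title_s into title_t.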
[jira] [Created] (SOLR-8195) IndexFetcher download trace to include bytes-downloaded[-per-second]
Christine Poerschke created SOLR-8195: - Summary: IndexFetcher download trace to include bytes-downloaded[-per-second] Key: SOLR-8195 URL: https://issues.apache.org/jira/browse/SOLR-8195 Project: Solr Issue Type: Wish Reporter: Christine Poerschke Assignee: Christine Poerschke patch against trunk with proposed changes to follow
[jira] [Updated] (SOLR-8195) IndexFetcher download trace to include bytes-downloaded[-per-second]
[ https://issues.apache.org/jira/browse/SOLR-8195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-8195: -- Attachment: SOLR-8195.patch attaching proposed patch against trunk > IndexFetcher download trace to include bytes-downloaded[-per-second] > > > Key: SOLR-8195 > URL: https://issues.apache.org/jira/browse/SOLR-8195 > Project: Solr > Issue Type: Wish >Reporter: Christine Poerschke >Assignee: Christine Poerschke > Attachments: SOLR-8195.patch > > > patch against trunk with proposed changes to follow -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
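The derived figure the issue proposes to log is a simple rate. The sketch below is illustrative only; the names are hypothetical and the real change is in the attached SOLR-8195.patch:

```java
// Illustrative sketch of the bytes-per-second figure to include in the
// IndexFetcher download trace, derived from bytes and elapsed wall-clock time.
public class DownloadRate {
    public static double bytesPerSecond(long bytes, long elapsedMillis) {
        // Guard a zero/negative elapsed time so a very fast fetch can't divide by zero.
        return elapsedMillis <= 0 ? 0.0 : bytes * 1000.0 / elapsedMillis;
    }
}
```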
[jira] [Created] (SOLR-8191) CloudSolrStream close method NullPointerException
Kevin Risden created SOLR-8191: -- Summary: CloudSolrStream close method NullPointerException Key: SOLR-8191 URL: https://issues.apache.org/jira/browse/SOLR-8191 Project: Solr Issue Type: Bug Components: SolrJ Affects Versions: Trunk Reporter: Kevin Risden CloudSolrStream doesn't check whether cloudSolrClient or solrStreams is null, yielding a NullPointerException in those cases when close() is called on it.
[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream
[ https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971072#comment-14971072 ] Kevin Risden commented on SOLR-8190: Found that CloudSolrStream yields two NullPointerExceptions when being closed during tests for solrStreams and cloudSolrClient being null. Filed SOLR-8191 to address it. > Implement Closeable on TupleStream > -- > > Key: SOLR-8190 > URL: https://issues.apache.org/jira/browse/SOLR-8190 > Project: Solr > Issue Type: Bug > Components: SolrJ >Affects Versions: Trunk >Reporter: Kevin Risden >Priority: Minor > > Implementing Closeable on TupleStream provides the ability to use > try-with-resources > (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) > in tests and in practice. This prevents TupleStreams from being left open > when there is an error in the tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
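Taken together, SOLR-8190 and SOLR-8191 amount to: implement Closeable so try-with-resources works, and make close() tolerate fields that are still null because open() never ran. A minimal sketch, with hypothetical field names modeled on the report (not the actual CloudSolrStream code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

// Hypothetical TupleStream-like class: Closeable enables try-with-resources,
// and close() guards fields that remain null until open() is called.
public class SketchStream implements Closeable {
    List<Closeable> solrStreams;   // null until open() succeeds
    Closeable cloudSolrClient;     // likewise

    @Override
    public void close() throws IOException {
        if (solrStreams != null) {
            for (Closeable s : solrStreams) {
                s.close();
            }
        }
        if (cloudSolrClient != null) {
            cloudSolrClient.close();
        }
    }
}
```

With this shape, `try (SketchStream s = new SketchStream()) { ... }` closes cleanly even when a test fails before anything was opened.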
Re: OOM on solr cloud 5.2.1, does not trigger oom_solr.sh
There have been a couple of threads lately discussing that some OOMs are not propagated appropriately and thus don't trigger the OOM killer. Does anyone think this should be a JIRA? On Fri, Oct 23, 2015 at 7:17 AM, Raja Pothuganti wrote: > Hi, > > Sometimes I see OOM happening on replicas, but it does not trigger the script > oom_solr.sh which was passed in as > -XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091. > > These OOMs happened while DIH was importing data from a database. Is this a known > issue? Is there any quick fix? Sent yesterday to the users group, no > response yet. > > Here are the stack traces from when the OOM happened > > > 1) > org.apache.solr.common.SolrException; null:java.lang.RuntimeException: > java.lang.OutOfMemoryError: Java heap space > at > org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java > :227) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java > :196) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle > r.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 > 3) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.jav > a:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.jav > a:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java > :185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java > :1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 > 1) > at > 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHan > dlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection > .java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java: > 97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java > :635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java: > 555) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.OutOfMemoryError: Java heap space > > > > 2) > org.apache.solr.common.SolrException; > org.apache.solr.common.SolrException: Exception writing document id > R277453962 to the index; possible analysis error. 
> at > org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.jav > a:167) > at > org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdatePro > cessorFactory.java:69) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRe > questProcessor.java:51) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(Dist > ributedUpdateProcessor.java:955) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Dist > ributedUpdateProcessor.java:1110) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(Dist > ributedUpdateProcessor.java:706) > at > org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdatePro > cessorFactory.java:104) > at > org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:10 > 1) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterM > ostDocIterator(JavaBinUpdateRequestCodec.java:179) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterat > or(JavaBinUpdateRequestCodec.java:135) > at > org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:241) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedL > ist(JavaBinUpdateRequestCodec.java:121) > at > org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:206) > at > org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:126) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(Ja > vaBinUpdateRequestCodec.java:186) > at >
[jira] [Created] (SOLR-8192) SubFacets allBuckets not working with measures on tokenized fields
Pablo Anzorena created SOLR-8192: Summary: SubFacets allBuckets not working with measures on tokenized fields Key: SOLR-8192 URL: https://issues.apache.org/jira/browse/SOLR-8192 Project: Solr Issue Type: Bug Reporter: Pablo Anzorena Subfacets are not working when you ask for allBuckets on a tokenized field with measures. Here is the request: { hs: { field: hs, type: terms, allBuckets:true, sort: "mostrar_bill_price desc", facet:{ mostrar_bill_price: "sum(mostrar_bill_price)" } } } Here is the response: { "responseHeader": { "status": 500, "QTime": 92, "params": { "indent": "true", "q": "*:*", "json.facet": "{ hs: { field: hs, type: terms, allBuckets:true, sort: \"mostrar_bill_price desc\", facet:{ mostrar_bill_price: \"sum(mostrar_bill_price)\" } } }", "wt": "json", "rows": "0" } }, "response": { "numFound": 35422188, "start": 0, "docs": [] }, "error": { "trace": "java.lang.ArrayIndexOutOfBoundsException\n", "code": 500 } } The hs field is defined as: mostrar_bill_price is defined as: Apart from text_ws, it also happens with text_classic (these are the only ones I've tested).
Re: OOM on solr cloud 5.2.1, does not trigger oom_solr.sh
Yes, let's create a JIRA ... looks like the OOM is getting wrapped, which prevents it from propagating correctly to trigger the oom script: org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space On Fri, Oct 23, 2015 at 8:28 AM, Erick Erickson wrote: > There have been a couple of threads lately discussing that some OOMs > are not propagated appropriately and thus don't trigger the OOM killer. > > Does anyone think this should be a JIRA? > > On Fri, Oct 23, 2015 at 7:17 AM, Raja Pothuganti > wrote: >> Hi, >> >> Sometimes I see OOM happening on replicas, but it does not trigger the script >> oom_solr.sh which was passed in as >> -XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091. >> >> These OOMs happened while DIH was importing data from a database. Is this a known >> issue? Is there any quick fix? Sent yesterday to the users group, no >> response yet. >> >> Here are the stack traces from when the OOM happened >> >> >> 1) >> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: >> java.lang.OutOfMemoryError: Java heap space >> at >> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593) >> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465) >> at >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java >> :227) >> at >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java >> :196) >> at >> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle >> r.java:1652) >> at >> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) >> at >> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 >> 3) >> at >> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) >> at >> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.jav >> a:223) >> at >> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.jav >> a:1127) >> at >> 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) >> at >> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java >> :185) >> at >> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java >> :1061) >> at >> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 >> 1) >> at >> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHan >> dlerCollection.java:215) >> at >> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection >> .java:110) >> at >> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java: >> 97) >> at org.eclipse.jetty.server.Server.handle(Server.java:497) >> at >> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) >> at >> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) >> at >> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) >> at >> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java >> :635) >> at >> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java: >> 555) >> at java.lang.Thread.run(Thread.java:745) >> Caused by: java.lang.OutOfMemoryError: Java heap space >> >> >> >> 2) >> org.apache.solr.common.SolrException; >> org.apache.solr.common.SolrException: Exception writing document id >> R277453962 to the index; possible analysis error. 
>> at >> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.jav >> a:167) >> at >> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdatePro >> cessorFactory.java:69) >> at >> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRe >> questProcessor.java:51) >> at >> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(Dist >> ributedUpdateProcessor.java:955) >> at >> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Dist >> ributedUpdateProcessor.java:1110) >> at >> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(Dist >> ributedUpdateProcessor.java:706) >> at >> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdatePro >> cessorFactory.java:104) >> at >> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:10 >> 1) >> at >> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterM >> ostDocIterator(JavaBinUpdateRequestCodec.java:179) >> at >> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterat >> or(JavaBinUpdateRequestCodec.java:135) >> at >>
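The diagnosis upthread — the OutOfMemoryError arriving wrapped inside a RuntimeException — means anything inspecting only the outermost exception misses it. One way a handler could still recognize the condition is to walk the cause chain. This is an illustrative sketch, not Solr's code; the class and method names are hypothetical:

```java
// Illustrative sketch: the stack trace in this thread shows
// "RuntimeException: java.lang.OutOfMemoryError", i.e. the OOM survives
// only as a cause. Walking getCause() links still finds it.
public class OomUnwrap {
    public static boolean causedByOom(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof OutOfMemoryError) {
                return true;
            }
        }
        return false;
    }
}
```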
[JENKINS] Solr-Artifacts-5.x - Build # 970 - Failure
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/970/ No tests ran. Build Log: [...truncated 13139 lines...] [javac] Compiling 856 source files to /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build/solr-core/classes/java [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/core/src/java/org/apache/solr/servlet/cache/HttpCacheHeaderUtil.java:59: error: incompatible types [javac] private static Map etagCoreCache = Collections.synchronizedMap(new WeakHashMap<>()); [javac] ^ [javac] required: Map [javac] found: Map
[jira] [Commented] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971221#comment-14971221 ] Shalin Shekhar Mangar commented on SOLR-7569: - Thanks Ishan but I think you missed the test in your latest patch? Its size has decreased from 36kb to 8kb. > Create an API to force a leader election between nodes > -- > > Key: SOLR-7569 > URL: https://issues.apache.org/jira/browse/SOLR-7569 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: difficulty-medium, impact-high > Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569_lir_down_state_test.patch > > > There are many reasons why Solr will not elect a leader for a shard e.g. all > replicas' last published state was recovery or due to bugs which cause a > leader to be marked as 'down'. While the best solution is that they never get > into this state, we need a manual way to fix this when it does get into this > state. Right now we can do a series of dance involving bouncing the node > (since recovery paths between bouncing and REQUESTRECOVERY are different), > but that is difficult when running a large cluster. Although it is possible > that such a manual API may lead to some data loss but in some cases, it is > the only possible option to restore availability. > This issue proposes to build a new collection API which can be used to force > replicas into recovering a leader while avoiding data loss on a best effort > basis. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-7569:
---
Attachment: SOLR-7569.patch
Based on an offline conversation with Shalin (and the discussion above), I've removed the extra handling of the situation where:
# there is no LIR involved
# all replicas are down
# there is no leader
This involved force-marking the replica at the head of the election queue as leader, which might have other unintended consequences. Hopefully this situation never occurs in the real world; if it does, we can tackle it in a separate issue.
The following situation is still taken care of:
# there is no LIR involved
# all replicas are down
[~shalinmangar] please review the changes. Thanks.
> Create an API to force a leader election between nodes
> Key: SOLR-7569
> URL: https://issues.apache.org/jira/browse/SOLR-7569
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (SOLR-8193) Data Import Handler OOM does not trigger the oom killer script
Erick Erickson created SOLR-8193: Summary: Data Import Handler OOM does not trigger the oom killer script Key: SOLR-8193 URL: https://issues.apache.org/jira/browse/SOLR-8193 Project: Solr Issue Type: Bug Affects Versions: 5.2.1 Reporter: Erick Erickson >From the user's list. Probably wrapping an OOM error like we've seen before. * Some times I see OOM happening on replicas,but does not trigger script oom_solr.sh which was passed in as -XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091. These OOM happened while DIH importing data from database. Is this known issue? is there any quick fix? Sent yesterday day to users group, no response yet. Here are stack traces when OOM happened 1) org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java :227) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java :196) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle r.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 3) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.jav a:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.jav a:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java :185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java :1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 1) at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHan dlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection .java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java: 97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java :635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java: 555) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError: Java heap space 2) org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: Exception writing document id R277453962 to the index; possible analysis error. at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.jav a:167) at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdatePro cessorFactory.java:69) at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRe questProcessor.java:51) at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(Dist ributedUpdateProcessor.java:955) at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Dist ributedUpdateProcessor.java:1110) at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(Dist ributedUpdateProcessor.java:706) at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdatePro cessorFactory.java:104) at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:10 1) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterM ostDocIterator(JavaBinUpdateRequestCodec.java:179) at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterat or(JavaBinUpdateRequestCodec.java:135) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:241) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedL ist(JavaBinUpdateRequestCodec.java:121) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:206) at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:126) at
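The first trace above is the crux of the report: the OutOfMemoryError surfaces wrapped inside a RuntimeException, so by the time it is logged it is no longer a bare OOM. A monitoring hook that wants to detect such wrapped OOMs can walk the cause chain; a minimal sketch (hypothetical helper, not the eventual Solr fix):

```java
// Sketch: detect an OutOfMemoryError buried inside wrapper exceptions,
// mimicking the "RuntimeException: java.lang.OutOfMemoryError" trace above.
public class OomUnwrap {
    /** Walk the cause chain looking for an OutOfMemoryError anywhere in it. */
    static boolean causedByOom(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof OutOfMemoryError) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable wrapped = new RuntimeException(new OutOfMemoryError("Java heap space"));
        System.out.println(causedByOom(wrapped));                         // true
        System.out.println(causedByOom(new RuntimeException("no OOM")));  // false
    }
}
```

A dispatch filter or update handler that catches exceptions could run such a check and invoke the same action as the oom killer script when the check fires, instead of relying on the error propagating uncaught.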
[jira] [Commented] (LUCENE-6854) Provide extraction of more metrics from confusion matrix
[ https://issues.apache.org/jira/browse/LUCENE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971212#comment-14971212 ] ASF subversion and git services commented on LUCENE-6854: - Commit 1710249 from [~teofili] in branch 'dev/trunk' [ https://svn.apache.org/r1710249 ] LUCENE-6854 - added precision, recall, f1 measure metrics to ConfusionMatrix > Provide extraction of more metrics from confusion matrix > > > Key: LUCENE-6854 > URL: https://issues.apache.org/jira/browse/LUCENE-6854 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/classification >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: 6.0 > > > {{ConfusionMatrix}} only provides a general accuracy measure while it'd be > good to be able to extract more metrics from it, for specific classes, like > precision, recall, f-measure, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
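For reference, the precision, recall, and F1 metrics added by this commit derive from per-class confusion-matrix counts. A stand-alone sketch (illustrative names, not Lucene's actual ConfusionMatrix API):

```java
// Per-class metrics from confusion-matrix counts:
// tp = true positives, fp = false positives, fn = false negatives.
public class BinaryMetrics {
    static double precision(int tp, int fp) { return tp + fp == 0 ? 0 : (double) tp / (tp + fp); }
    static double recall(int tp, int fn)    { return tp + fn == 0 ? 0 : (double) tp / (tp + fn); }
    // F1 is the harmonic mean of precision and recall.
    static double f1(double p, double r)    { return p + r == 0 ? 0 : 2 * p * r / (p + r); }

    public static void main(String[] args) {
        // Example counts: 8 true positives, 2 false positives, 4 false negatives.
        double p = precision(8, 2); // 0.8
        double r = recall(8, 4);    // ~0.667
        System.out.printf("precision=%.3f recall=%.3f f1=%.3f%n", p, r, f1(p, r));
    }
}
```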
[jira] [Commented] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971226#comment-14971226 ] Mark Miller commented on SOLR-7569:
---
It seems like what we really want is to make sure the last published state for each replica does not prevent it from becoming the leader?
> Create an API to force a leader election between nodes
> Key: SOLR-7569
> URL: https://issues.apache.org/jira/browse/SOLR-7569
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: OOM on solr cloud 5.2.1, does not trigger oom_solr.sh
done: https://issues.apache.org/jira/browse/SOLR-8193 On Fri, Oct 23, 2015 at 8:16 AM, Timothy Potterwrote: > Yes, let's create a JIRA ... looks like the OOM is getting wrapped > which prevents it from propagating correctly to trigger the oom > script: > > org.apache.solr.common.SolrException; null:java.lang.RuntimeException: > java.lang.OutOfMemoryError: Java heap space > > On Fri, Oct 23, 2015 at 8:28 AM, Erick Erickson > wrote: >> There have been a couple of threads lately discussing that some OOMs >> are not propagated appropriately and thus don't trigger the OOM killer. >> >> Does anyone think this should be a JIRA? >> >> On Fri, Oct 23, 2015 at 7:17 AM, Raja Pothuganti >> wrote: >>> Hi, >>> >>> Some times I see OOM happening on replicas,but does not trigger script >>> oom_solr.sh which was passed in as >>> -XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091. >>> >>> These OOM happened while DIH importing data from database. Is this known >>> issue? is there any quick fix? Sent yesterday day to users group, no >>> response yet. 
>>> >>> Here are stack traces when OOM happened >>> >>> >>> 1) >>> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: >>> java.lang.OutOfMemoryError: Java heap space >>> at >>> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593) >>> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465) >>> at >>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java >>> :227) >>> at >>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java >>> :196) >>> at >>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle >>> r.java:1652) >>> at >>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) >>> at >>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 >>> 3) >>> at >>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) >>> at >>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.jav >>> a:223) >>> at >>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.jav >>> a:1127) >>> at >>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) >>> at >>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java >>> :185) >>> at >>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java >>> :1061) >>> at >>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:14 >>> 1) >>> at >>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHan >>> dlerCollection.java:215) >>> at >>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection >>> .java:110) >>> at >>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java: >>> 97) >>> at org.eclipse.jetty.server.Server.handle(Server.java:497) >>> at >>> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) >>> at >>> 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) >>> at >>> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) >>> at >>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java >>> :635) >>> at >>> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java: >>> 555) >>> at java.lang.Thread.run(Thread.java:745) >>> Caused by: java.lang.OutOfMemoryError: Java heap space >>> >>> >>> >>> 2) >>> org.apache.solr.common.SolrException; >>> org.apache.solr.common.SolrException: Exception writing document id >>> R277453962 to the index; possible analysis error. >>> at >>> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.jav >>> a:167) >>> at >>> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdatePro >>> cessorFactory.java:69) >>> at >>> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRe >>> questProcessor.java:51) >>> at >>> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(Dist >>> ributedUpdateProcessor.java:955) >>> at >>> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Dist >>> ributedUpdateProcessor.java:1110) >>> at >>> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(Dist >>> ributedUpdateProcessor.java:706) >>> at >>> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdatePro >>> cessorFactory.java:104) >>> at >>> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:10 >>> 1) >>> at >>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterM
[jira] [Commented] (SOLR-8173) CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover as well as lose updates that should have been reco
[ https://issues.apache.org/jira/browse/SOLR-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971207#comment-14971207 ] Matteo Grolla commented on SOLR-8173:
-
Yes:
- unpacked the zip
- cloned the server folder
- started a 2-node cluster using the bin/solr script
- created the 'schemaless' collection using the bin/solr script
and ran the described test.
> CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover, as well as lose updates that should have been recovered.
> ---
>
> Key: SOLR-8173
> URL: https://issues.apache.org/jira/browse/SOLR-8173
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Reporter: Matteo Grolla
> Assignee: Mark Miller
> Priority: Critical
> Labels: leader, recovery
> Fix For: 5.2.1
> Attachments: solr_8983.log, solr_8984.log
>
> I'm doing this test: collection 'test' is replicated on two Solr nodes running on 8983 and 8984, using an external ZK.
> 1) turn off solr 8984
> 2) add and commit a doc x on solr 8983
> 3) turn off solr 8983
> 4) turn on solr 8984
> 5) shortly after (leader still not elected) turn on solr 8983
> 6) 8984 is elected as leader
> 7) doc x is present on 8983 but not on 8984 (check by issuing a query)
> Attached are the solr.log files of both instances.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6854) Provide extraction of more metrics from confusion matrix
Tommaso Teofili created LUCENE-6854: --- Summary: Provide extraction of more metrics from confusion matrix Key: LUCENE-6854 URL: https://issues.apache.org/jira/browse/LUCENE-6854 Project: Lucene - Core Issue Type: Improvement Components: modules/classification Reporter: Tommaso Teofili Assignee: Tommaso Teofili Fix For: 6.0 {{ConfusionMatrix}} only provides a general accuracy measure while it'd be good to be able to extract more metrics from it, for specific classes, like precision, recall, f-measure, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971223#comment-14971223 ] Mark Miller commented on SOLR-7569:
---
bq. // Marking all live nodes as active.
Why do we do this manually like this? Shouldn't we allow this to happen naturally?
> Create an API to force a leader election between nodes
> Key: SOLR-7569
> URL: https://issues.apache.org/jira/browse/SOLR-7569
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (LUCENE-6479) Create utility to generate Classifier's confusion matrix
[ https://issues.apache.org/jira/browse/LUCENE-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tommaso Teofili resolved LUCENE-6479.
-
Resolution: Fixed
Fix Version/s: (was: Trunk) 6.0
> Create utility to generate Classifier's confusion matrix
> --
>
> Key: LUCENE-6479
> URL: https://issues.apache.org/jira/browse/LUCENE-6479
> Project: Lucene - Core
> Issue Type: Improvement
> Components: modules/classification
> Reporter: Tommaso Teofili
> Assignee: Tommaso Teofili
> Fix For: 6.0
>
> In order to debug and compare the accuracy of {{Classifiers}} it's often useful to print the related [confusion matrix|http://en.wikipedia.org/wiki/Confusion_matrix], so it'd be good to provide such a utility class/method.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6239) HttpSolrServer: connection still allocated
[ https://issues.apache.org/jira/browse/SOLR-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970976#comment-14970976 ] David Smiley commented on SOLR-6239:
N/P. Technically one synchronizes on a specific instance of an object; a field or variable is simply a pointer to an object. The primitive "true" resolves to Boolean.TRUE when assigned to a Boolean variable. Given that Boolean.TRUE is a global object instance that comes with the JDK, it's a terrible choice to synchronize on.
> HttpSolrServer: connection still allocated
> --
>
> Key: SOLR-6239
> URL: https://issues.apache.org/jira/browse/SOLR-6239
> Project: Solr
> Issue Type: Bug
> Components: clients - java
> Affects Versions: 4.9
> Reporter: Sergio Fernández
> Priority: Minor
>
> In scenarios where concurrency is aggressive, this exception can easily appear:
> {quote}
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Invalid use of BasicClientConnManager: connection still allocated.
> Make sure to release the connection before allocating another one.
> at > org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > {quote} > I wonder if there is any solution for it? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6239) HttpSolrServer: connection still allocated
[ https://issues.apache.org/jira/browse/SOLR-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970989#comment-14970989 ] Shawn Heisey commented on SOLR-6239:
An audit of my code revealed that this was the only place I was synchronizing on a Boolean (or any other object that might be globally defined), so it was likely safe ... but based on what you said it's a bad practice, so I used "private static Object initSync;" instead.
> HttpSolrServer: connection still allocated
> Key: SOLR-6239
> URL: https://issues.apache.org/jira/browse/SOLR-6239
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
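The hazard Shawn's fix addresses can be shown in a small stand-alone sketch (hypothetical class; the field names echo the comment above): autoboxing a `true` literal into a `Boolean` field yields the JVM-wide cached `Boolean.TRUE` instance, so synchronizing on it shares a monitor with any other code that does the same.

```java
public class LockChoice {
    // Risky: autoboxing makes this field point at the JVM-wide cached
    // Boolean.TRUE, so unrelated code that synchronizes on any boxed
    // "true" contends on (or deadlocks against) the very same monitor.
    static final Boolean badLock = true;

    // Safe: a dedicated object no other code can reach, mirroring the
    // "private static Object initSync" fix quoted above.
    static final Object initSync = new Object();

    public static void main(String[] args) {
        // Boxing of the literal resolves to the cached global instance.
        System.out.println(badLock == Boolean.TRUE); // prints "true"
        synchronized (initSync) {
            // critical section guarded by a lock only this class can see
        }
    }
}
```

The same argument applies to interned Strings and small boxed Integers: a lock object should always be an instance the enclosing class controls exclusively.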
[jira] [Commented] (SOLR-8189) eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
[ https://issues.apache.org/jira/browse/SOLR-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971032#comment-14971032 ] ASF subversion and git services commented on SOLR-8189: --- Commit 1710219 from sha...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1710219 ] SOLR-8189: eTag calculation during HTTP Cache Validation uses unsynchronized WeakHashMap causing threads to be stuck in runnable state > eTag calculation during http Cache Validation uses unsynchronized WeakHashMap > - > > Key: SOLR-8189 > URL: https://issues.apache.org/jira/browse/SOLR-8189 > Project: Solr > Issue Type: Bug > Components: search >Affects Versions: 4.10.4, 5.3 >Reporter: Shalin Shekhar Mangar > Labels: difficulty-easy, impact-low > Fix For: 5.4, Trunk > > > I found this while looking into a recent jenkins failure where > TestDynamicLoading leaked 5 threads: > http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/ > {code} > Stack Trace: > com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from > SUITE scope at org.apache.solr.core.TestDynamicLoading: >1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) > at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) >2) Thread[id=11445, name=qtp85907293-11445, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at >
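The stuck-in-RUNNABLE threads in the trace above, all spinning inside `WeakHashMap.get`, are the classic symptom of an unsynchronized hash map read racing a concurrent modification. A minimal sketch of the shape of the fix (hypothetical `EtagCache`, not Solr's actual `HttpCacheHeaderUtil`): wrap the `WeakHashMap` so every access goes through one mutex while keys stay weakly referenced.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class EtagCache {
    // WeakHashMap is not thread-safe; concurrent get/put can corrupt its
    // internal table and leave readers spinning forever in get(). The
    // synchronized wrapper serializes all access; keys remain weakly
    // referenced, so entries still vanish when the key is collected.
    private static final Map<Object, String> etags =
        Collections.synchronizedMap(new WeakHashMap<>());

    static String etagFor(Object core, String indexVersion) {
        // computeIfAbsent on a synchronizedMap runs under the wrapper's lock.
        return etags.computeIfAbsent(core,
                k -> "\"" + Integer.toHexString(indexVersion.hashCode()) + "\"");
    }

    public static void main(String[] args) {
        Object core = new Object();
        // Repeated lookups for the same key return the cached eTag.
        System.out.println(etagFor(core, "5.4").equals(etagFor(core, "5.4"))); // true
    }
}
```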
[jira] [Commented] (SOLR-6239) HttpSolrServer: connection still allocated
[ https://issues.apache.org/jira/browse/SOLR-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970960#comment-14970960 ] Shawn Heisey commented on SOLR-6239:
Interesting. Good to know; I was not aware that synchronization would follow the object rabbit hole down to the bottom rather than use the specific instance (in this case, firstInstance). Thanks for the pointer, I will fix.
> HttpSolrServer: connection still allocated
> Key: SOLR-6239
> URL: https://issues.apache.org/jira/browse/SOLR-6239
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 146 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/146/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([160856778C0F4672:D1E9A455E7BB7EDD]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance(BooleanPerceptronClassifierTest.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 5504 lines...] [junit4] Suite: org.apache.lucene.classification.BooleanPerceptronClassifierTest [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=BooleanPerceptronClassifierTest -Dtests.method=testPerformance -Dtests.seed=160856778C0F4672 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=CET -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 1.05s J0 | BooleanPerceptronClassifierTest.testPerformance <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at
[jira] [Commented] (LUCENE-6852) Add DimensionalFormat to Codec
[ https://issues.apache.org/jira/browse/LUCENE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971043#comment-14971043 ] ASF subversion and git services commented on LUCENE-6852: - Commit 1710221 from [~mikemccand] in branch 'dev/branches/lucene6852' [ https://svn.apache.org/r1710221 ] LUCENE-6852: starting patch > Add DimensionalFormat to Codec > -- > > Key: LUCENE-6852 > URL: https://issues.apache.org/jira/browse/LUCENE-6852 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk > > > This is phase 2 for adding dimensional indexing in Lucene, so we can > (eventually) do efficient numeric range filtering, BigInteger/Decimal and > IPv6 support, and "point in shape" spatial searching (2D or 3D). > It's the follow-on from LUCENE-6825 (phase 1). > This issue "just" adds DimensionalFormat (and Reader/Writer) to Codec and the > IndexReader hierarchy, to IndexWriter and merging, and to document API > (DimensionalField). > I also implemented dimensional support for SimpleTextCodec, and added a test > case showing that you can in fact use SimpleTextCodec to do multidimensional > shape intersection (seems to pass a couple times!). > Phase 3 will be adding support to the default codec as well ("just" wrapping > BKDWriter/Reader), phase 4 is then fixing places that use the > sandbox/spatial3d BKD tree to use the codec instead and maybe exposing > sugar for numerics, things like BigInteger/Decimal, etc. > There are many nocommits still, but I think it's close-ish ... I'll commit to > a branch and iterate there. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6239) HttpSolrServer: connection still allocated
[ https://issues.apache.org/jira/browse/SOLR-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970943#comment-14970943 ] David Smiley commented on SOLR-6239: You are synchronizing on the "firstInstance" field, which will point to Boolean.TRUE. You *really* shouldn't synchronize on that. Instead, create an object just for synchronizing, and then, when you hold that lock, and only then, should a firstInstance (primitive) boolean be examined. That fixes this problem, although other simplifications could likely be done to clarify the pattern here, which looks like simple thread-safe lazy initialization, something that has been done many times before. > HttpSolrServer: connection still allocated > -- > > Key: SOLR-6239 > URL: https://issues.apache.org/jira/browse/SOLR-6239 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 4.9 >Reporter: Sergio Fernández >Priority: Minor > > In scenarios where concurrency is aggressive, this exception could easily > appear: > {quote} > org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Invalid > use of BasicClientConnManager: connection still allocated. > Make sure to release the connection before allocating another one. 
> at > org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > {quote} > I wonder if there is any solution for it? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-6239) HttpSolrServer: connection still allocated
[ https://issues.apache.org/jira/browse/SOLR-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970989#comment-14970989 ] Shawn Heisey edited comment on SOLR-6239 at 10/23/15 1:52 PM: -- An audit of my code revealed that this was the only place I was synchronizing on a Boolean (or any other object that might be globally defined), so it was likely safe ... but based on what you said it's a bad practice, so I used "private static Object initSync;" instead. Edit: After further thought I realized that the code wasn't safe even though it's the only instance where I'm using Boolean, because of the possibility of some instances synchronizing on Boolean.TRUE and others synchronizing on Boolean.FALSE. The initialization code isn't currently multi-threaded so that situation can't come up, but I wanted the object to be thread-safe, and it wasn't. Thank you for bringing my coding problem to my attention. was (Author: elyograg): An audit of my code revealed that this was the only place I was synchronizing on a Boolean (or any other object that might be globally defined), so it was likely safe ... but based on what you said it's a bad practice, so I used "private static Object initSync;" instead. > HttpSolrServer: connection still allocated > -- > > Key: SOLR-6239 > URL: https://issues.apache.org/jira/browse/SOLR-6239 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 4.9 >Reporter: Sergio Fernández >Priority: Minor > > In scenarios where concurrency is aggressive, this exception could easily > appear: > {quote} > org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Invalid > use of BasicClientConnManager: connection still allocated. > Make sure to release the connection before allocating another one. 
> at > org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at > org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102) > ~[solr-solrj-4.9.0.jar:4.9.0 1604085 - rmuir - 2014-06-20 06:34:04] > {quote} > I wonder if there is any solution for it? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
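The pattern discussed above can be sketched in isolation. This is a minimal illustration, not the actual patch (only the `initSync` name comes from Heisey's comment; the class and field names are hypothetical): synchronize on a dedicated private lock object and test a plain boolean flag, rather than synchronizing on a `Boolean` field, because autoboxed booleans are the interned `Boolean.TRUE`/`Boolean.FALSE` instances shared across the whole JVM.

```java
// Thread-safe lazy initialization with a dedicated lock object.
// Synchronizing on a Boolean field would mean locking on the interned
// Boolean.TRUE or Boolean.FALSE, which any other code could also lock on --
// and reassigning the field would silently change which monitor is held.
public class LazyInit {
    private static final Object initSync = new Object(); // lock object only
    private static boolean initialized = false;          // guarded by initSync
    private static String resource;                      // lazily built state

    public static String getResource() {
        synchronized (initSync) {
            if (!initialized) {
                resource = "expensive-to-build";         // stand-in for real work
                initialized = true;
            }
            return resource;
        }
    }
}
```

Declaring `initSync` as `final` also guards against accidental reassignment, which would reintroduce the changing-monitor problem the pattern is meant to avoid.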
[jira] [Commented] (SOLR-8189) eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
[ https://issues.apache.org/jira/browse/SOLR-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971029#comment-14971029 ] ASF subversion and git services commented on SOLR-8189: --- Commit 1710218 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1710218 ] SOLR-8189: eTag calculation during HTTP Cache Validation uses unsynchronized WeakHashMap causing threads to be stuck in runnable state > eTag calculation during http Cache Validation uses unsynchronized WeakHashMap > - > > Key: SOLR-8189 > URL: https://issues.apache.org/jira/browse/SOLR-8189 > Project: Solr > Issue Type: Bug > Components: search >Affects Versions: 4.10.4, 5.3 >Reporter: Shalin Shekhar Mangar > Labels: difficulty-easy, impact-low > Fix For: 5.4, Trunk > > > I found this while looking into a recent jenkins failure where > TestDynamicLoading leaked 5 threads: > http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/ > {code} > Stack Trace: > com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from > SUITE scope at org.apache.solr.core.TestDynamicLoading: >1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) > at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) >2) Thread[id=11445, name=qtp85907293-11445, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at >
[jira] [Commented] (LUCENE-6852) Add DimensionalFormat to Codec
[ https://issues.apache.org/jira/browse/LUCENE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971040#comment-14971040 ] ASF subversion and git services commented on LUCENE-6852: - Commit 1710220 from [~mikemccand] in branch 'dev/branches/lucene6852' [ https://svn.apache.org/r1710220 ] LUCENE-6852: make branch > Add DimensionalFormat to Codec > -- > > Key: LUCENE-6852 > URL: https://issues.apache.org/jira/browse/LUCENE-6852 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk > > > This is phase 2 for adding dimensional indexing in Lucene, so we can > (eventually) do efficient numeric range filtering, BigInteger/Decimal and > IPv6 support, and "point in shape" spatial searching (2D or 3D). > It's the follow-on from LUCENE-6825 (phase 1). > This issue "just" adds DimensionalFormat (and Reader/Writer) to Codec and the > IndexReader hierarchy, to IndexWriter and merging, and to document API > (DimensionalField). > I also implemented dimensional support for SimpleTextCodec, and added a test > case showing that you can in fact use SimpleTextCodec to do multidimensional > shape intersection (seems to pass a couple times!). > Phase 3 will be adding support to the default codec as well ("just" wrapping > BKDWriter/Reader), phase 4 is then fixing places that use the > sandbox/spatial3d BKD tree to use the codec instead and maybe exposing > sugar for numerics, things like BigInteger/Decimal, etc. > There are many nocommits still, but I think it's close-ish ... I'll commit to > a branch and iterate there. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6852) Add DimensionalFormat to Codec
Michael McCandless created LUCENE-6852: -- Summary: Add DimensionalFormat to Codec Key: LUCENE-6852 URL: https://issues.apache.org/jira/browse/LUCENE-6852 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk This is phase 2 for adding dimensional indexing in Lucene, so we can (eventually) do efficient numeric range filtering, BigInteger/Decimal and IPv6 support, and "point in shape" spatial searching (2D or 3D). It's the follow-on from LUCENE-6825 (phase 1). This issue "just" adds DimensionalFormat (and Reader/Writer) to Codec and the IndexReader hierarchy, to IndexWriter and merging, and to document API (DimensionalField). I also implemented dimensional support for SimpleTextCodec, and added a test case showing that you can in fact use SimpleTextCodec to do multidimensional shape intersection (seems to pass a couple times!). Phase 3 will be adding support to the default codec as well ("just" wrapping BKDWriter/Reader), phase 4 is then fixing places that use the sandbox/spatial3d BKD tree to use the codec instead and maybe exposing sugar for numerics, things like BigInteger/Decimal, etc. There are many nocommits still, but I think it's close-ish ... I'll commit to a branch and iterate there. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7466) Allow optional leading wildcards in complexphrase
[ https://issues.apache.org/jira/browse/SOLR-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970963#comment-14970963 ] Jon Kjær Amundsen commented on SOLR-7466: - Hi Andy, I'm from Denmark, where we excel in compound words, so your plugin could really be of use to me. If it's ready for testing, let me know. > Allow optional leading wildcards in complexphrase > - > > Key: SOLR-7466 > URL: https://issues.apache.org/jira/browse/SOLR-7466 > Project: Solr > Issue Type: Improvement > Components: query parsers >Affects Versions: 4.8 >Reporter: Andy hardin > Labels: complexPhrase, query-parser, wildcards > > Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms > in a phrase, but does not allow leading wildcards. I would like the option > to be able to search for terms with both trailing and leading wildcards. > For example with: > {!complexphrase allowLeadingWildcard=true} "j* *th" > would match "John Smith", "Jim Smith", but not "John Schmitt" -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
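The matching semantics requested in the issue can be illustrated with a plain-regex stand-in. This is not the complexphrase implementation, just a hypothetical sketch of which names the example patterns should accept: `j*` is a leading-anchored prefix term, `*th` a trailing-anchored suffix term, and the two-term "phrase" requires each term to match its token in order.

```java
import java.util.regex.Pattern;

public class WildcardPhrase {
    // Translate a simple wildcard term (* = any run of characters) into a
    // whole-token regex. Hypothetical helper, not the complexphrase parser.
    static Pattern toPattern(String wildcard) {
        return Pattern.compile(wildcard.replace("*", ".*"),
                Pattern.CASE_INSENSITIVE);
    }

    // A two-term phrase matches a two-token name when each term fully
    // matches its token in order, mirroring the "j* *th" example above.
    static boolean phraseMatches(String first, String second, String name) {
        String[] tokens = name.split(" ");
        return tokens.length == 2
                && toPattern(first).matcher(tokens[0]).matches()
                && toPattern(second).matcher(tokens[1]).matches();
    }
}
```

Under this reading, `"j* *th"` accepts "John Smith" and "Jim Smith" but rejects "John Schmitt", since "Schmitt" does not end in "th" — matching the expectations stated in the issue description.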
[jira] [Created] (SOLR-8189) eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
Shalin Shekhar Mangar created SOLR-8189: --- Summary: eTag calculation during http Cache Validation uses unsynchronized WeakHashMap Key: SOLR-8189 URL: https://issues.apache.org/jira/browse/SOLR-8189 Project: Solr Issue Type: Bug Components: search Affects Versions: 5.3, 4.10.4 Reporter: Shalin Shekhar Mangar Fix For: 5.4, Trunk I found this while looking into a recent jenkins failure where TestDynamicLoading leaked 5 threads: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/ {code} Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.core.TestDynamicLoading: 1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, group=TGRP-TestDynamicLoading] at java.util.WeakHashMap.get(WeakHashMap.java:403) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:499) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:745) 2) Thread[id=11445, name=qtp85907293-11445, state=RUNNABLE, group=TGRP-TestDynamicLoading] at java.util.WeakHashMap.get(WeakHashMap.java:403) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at
Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14630 - Failure!
This is a race condition in HttpCacheHeaderUtil. I opened SOLR-8189 https://issues.apache.org/jira/browse/SOLR-8189 On Fri, Oct 23, 2015 at 8:21 AM, Policeman Jenkins Serverwrote: > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/ > Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC > > 4 tests failed. > FAILED: junit.framework.TestSuite.org.apache.solr.core.TestDynamicLoading > > Error Message: > 5 threads leaked from SUITE scope at org.apache.solr.core.TestDynamicLoading: > 1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, > group=TGRP-TestDynamicLoading] at > java.util.WeakHashMap.get(WeakHashMap.java:403) at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) >at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) >at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > 
at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745)2) Thread[id=11445, > name=qtp85907293-11445, state=RUNNABLE, group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) >at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) >at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at >
[jira] [Resolved] (SOLR-8189) eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
[ https://issues.apache.org/jira/browse/SOLR-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-8189. - Resolution: Fixed Assignee: Shalin Shekhar Mangar > eTag calculation during http Cache Validation uses unsynchronized WeakHashMap > - > > Key: SOLR-8189 > URL: https://issues.apache.org/jira/browse/SOLR-8189 > Project: Solr > Issue Type: Bug > Components: search >Affects Versions: 4.10.4, 5.3 >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: difficulty-easy, impact-low > Fix For: 5.4, Trunk > > > I found this while looking into a recent jenkins failure where > TestDynamicLoading leaked 5 threads: > http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/ > {code} > Stack Trace: > com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from > SUITE scope at org.apache.solr.core.TestDynamicLoading: >1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) > at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) >2) Thread[id=11445, name=qtp85907293-11445, state=RUNNABLE, > group=TGRP-TestDynamicLoading] > at java.util.WeakHashMap.get(WeakHashMap.java:403) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102) > at > org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) > at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) > at >
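The failure mode in this thread can be shown in miniature. `java.util.WeakHashMap` is not thread-safe: a `get()` racing with a concurrent resize can follow a corrupted bucket chain and spin forever while RUNNABLE, which is consistent with the stuck `qtp` threads in the traces above. The conventional fix (the actual SOLR-8189 commit may differ) is to serialize access, for example with `Collections.synchronizedMap`. The class and method names below are a hypothetical stand-in for the cache in `HttpCacheHeaderUtil`:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class EtagCache {
    // WeakHashMap lets unused keys be garbage-collected, but it is not
    // thread-safe: a reader racing a resize can loop forever. Wrapping it
    // with Collections.synchronizedMap makes every access hold one monitor.
    private static final Map<String, String> CACHE =
            Collections.synchronizedMap(new WeakHashMap<>());

    public static String etagFor(String key) {
        // computeIfAbsent on a synchronizedMap runs under the map's lock,
        // so the compute-and-insert is atomic with respect to other callers.
        return CACHE.computeIfAbsent(key, k -> "\"" + k.hashCode() + "\"");
    }
}
```

The synchronized wrapper trades some contention for safety; since eTag calculation is cheap relative to request handling, that is usually an acceptable cost compared to threads wedged in an infinite loop.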
[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14636 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14636/ Java: 32bit/jdk1.9.0-ea-b85 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([2A1980DAFF0EDEC8:EDF872F894BAE667]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance(BooleanPerceptronClassifierTest.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:520) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:747) Build Log: [...truncated 5532 lines...] [junit4] Suite: org.apache.lucene.classification.BooleanPerceptronClassifierTest [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=BooleanPerceptronClassifierTest -Dtests.method=testPerformance -Dtests.seed=2A1980DAFF0EDEC8 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ga -Dtests.timezone=Etc/GMT-12 -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.93s J2 | BooleanPerceptronClassifierTest.testPerformance <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at
[jira] [Commented] (LUCENE-6479) Create utility to generate Classifier's confusion matrix
[ https://issues.apache.org/jira/browse/LUCENE-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970891#comment-14970891 ] ASF subversion and git services commented on LUCENE-6479: - Commit 1710197 from [~teofili] in branch 'dev/trunk' [ https://svn.apache.org/r1710197 ] LUCENE-6479 - added a raw accuracy calculation to confusion matrix, minor adjustments to splitter > Create utility to generate Classifier's confusion matrix > > > Key: LUCENE-6479 > URL: https://issues.apache.org/jira/browse/LUCENE-6479 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/classification >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: Trunk > > > In order to debug and compare accuracy of {{Classifiers}} it's often useful > to print the related [confusion > matrix|http://en.wikipedia.org/wiki/Confusion_matrix] so it'd be good to > provide such an utility class/method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
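The "raw accuracy" added in that commit is, generically, the number of correctly classified documents (the diagonal of the confusion matrix) divided by the total count. A minimal sketch of that calculation — a hypothetical helper, not the actual Lucene classification module API:

```java
import java.util.Map;

public class ConfusionMatrixSketch {
    // Hypothetical accuracy helper: counts maps actual class -> (predicted class -> count).
    // Raw accuracy = diagonal sum / total sum.
    public static double accuracy(Map<String, Map<String, Long>> counts) {
        long correct = 0, total = 0;
        for (Map.Entry<String, Map<String, Long>> row : counts.entrySet()) {
            for (Map.Entry<String, Long> cell : row.getValue().entrySet()) {
                total += cell.getValue();
                if (row.getKey().equals(cell.getKey())) {
                    correct += cell.getValue(); // actual == predicted: on the diagonal
                }
            }
        }
        return total == 0 ? 0d : (double) correct / total;
    }
}
```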
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2826 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2826/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([5F3AE9BB0F41B5BC:D8BB54160B61CFB8]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 1008 lines...] [junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestMergeSchedulerExternal -Dtests.method=testSubclassConcurrentMergeScheduler -Dtests.seed=5F3AE9BB0F41B5BC -Dtests.slow=true -Dtests.locale=da_DK -Dtests.timezone=Europe/Copenhagen -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] FAILURE 1.11s J1 | TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at
[jira] [Updated] (SOLR-8129) HdfsChaosMonkeyNothingIsSafeTest failures
[ https://issues.apache.org/jira/browse/SOLR-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley updated SOLR-8129: --- Attachment: fail.151005_064958 OK, here's another failure I've been analyzing. It comes down to this: 1) leader is shutdown (CoreContainer.shutdown is called) 2) a single doc is sent from the leader to one replica successfully, but unsuccessfully to a different replica (rejected task from shutdown executor that the client is using to send) 3) tons of other updates are still being accepted and sent by the leader 4) much later, a peersync sees everything as OK since recent versions match up. One mystery is why ConcurrentUpdateSolrClient is trying to create a new Runner when there is obviously another runner already running (since it still accepts and sends new updates after that point). A general way to fix this is to make sure that shutdown happens much more quickly... we should stop reading and processing updates. {code} // good-doc comes into leader 53975 2> 43204 DEBUG (qtp1536362844-206) [n:127.0.0.1:53975__%2Fzl c:collection1 s:shard2 r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-476} {wt=javabin=2} // bad-doc comes into leader 53975 2> 43204 DEBUG (qtp1536362844-206) [n:127.0.0.1:53975__%2Fzl c:collection1 s:shard2 r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-477} {wt=javabin=2} // good-doc added to shard 57414 2> 43273 DEBUG (qtp702407469-354) [n:127.0.0.1:57414__%2Fzl c:collection1 s:shard2 r:core_node5 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-476} {update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} 2> 43280 INFO (qtp702407469-354) [n:127.0.0.1:57414__%2Fzl c:collection1 s:shard2 r:core_node5 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp=/_/zl path=/update params={update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} {add=[ft1-467 
(1514187847514456064), ft1-470 (1514187847746191360), ft1-473 (1514187847754579968), ft1-476 (1514187847755628544)]} 0 42 // the leader is going to be stopped in the future 2> 43345 INFO (Thread-272) [] o.a.s.c.ChaosMonkey monkey: stop shard! 53975 2> 43345 INFO (Thread-272) [] o.a.s.c.CoreContainer Shutting down CoreContainer instance=171681388 // overseer gets state:down for leader 53975 2> 43378 INFO (OverseerStateUpdate-94636738141945860-127.0.0.1:49439__%2Fzl-n_00) [n:127.0.0.1:49439__%2Fzl] o.a.s.c.Overseer processMessage: queueSize: 1, message = { // BUT... 53975 appears to keep processing updates... there are ~736 more updates like the following, continuing another couple of seconds through to time 4 fail.151005_064958: 2> 43381 DEBUG (qtp1536362844-204) [n:127.0.0.1:53975__%2Fzl c:collection1 s:shard2 r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-1165} {wt=javabin=2} // good-doc is added to replica 44323 2> 43449 DEBUG (qtp1776514246-272) [n:127.0.0.1:44323__%2Fzl c:collection1 s:shard2 r:core_node3 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-476} {update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} 2> 43456 INFO (qtp1776514246-272) [n:127.0.0.1:44323__%2Fzl c:collection1 s:shard2 r:core_node3 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp=/_/zl path=/update params={update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} {add=[ft1-473 (1514187847754579968), ft1-476 (1514187847755628544)]} 0 108 // more signs of node being stopped 2> 43453 WARN (qtp1536362844-205) [n:127.0.0.1:53975__%2Fzl c:collection1 s:shard2 r:core_node1 x:collection1] o.e.j.s.ServletHandler /_/zl/collection1/update 2> org.apache.solr.common.SolrException: Error processing the request. CoreContainer is either not initialized or shutting down. 
// bad-doc is added to replica 44323 2> 43471 DEBUG (qtp1776514246-273) [n:127.0.0.1:44323__%2Fzl c:collection1 s:shard2 r:core_node3 x:collection1] o.a.s.u.p.LogUpdateProcessor PRE_UPDATE add{,id=ft1-477} {update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} 2> 43501 INFO (qtp1776514246-273) [n:127.0.0.1:44323__%2Fzl c:collection1 s:shard2 r:core_node3 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp=/_/zl path=/update params={update.distrib=FROMLEADER=http://127.0.0.1:53975/_/zl/collection1/=javabin=2} {add=[ft1-477 (1514187847778697216)]} 0 30 // This is the same update thread that has our bad-doc on the leader... it can't send because the update executor has been shut down 2> 43472 ERROR (qtp1536362844-206) [n:127.0.0.1:53975__%2Fzl c:collection1 s:shard2 r:core_node1 x:collection1] o.a.s.u.SolrCmdDistributor java.util.concurrent.RejectedExecutionException: Task
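The fix direction Yonik suggests — shut down quickly and stop reading and processing updates — boils down to checking a shutdown flag before accepting each request, so updates are rejected up front rather than failing later inside an already-closed executor. A hypothetical sketch of that guard (not Solr's actual CoreContainer code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownGuard {
    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    public void shutdown() {
        // Flip the flag first, so no new update slips in while resources close.
        shuttingDown.set(true);
    }

    // Reject new updates immediately instead of part-way through distribution.
    public String handleUpdate(String docId) {
        if (shuttingDown.get()) {
            throw new IllegalStateException(
                "Shutting down; rejecting update " + docId);
        }
        return "accepted:" + docId;
    }
}
```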
killing a Solr instance leaves the state in Zookeeper "active"
If I kill a replica with -9, the state.json node never gets updated, the node shows as "active" There is code around that checks the live_nodes to see whether the state.json node can be believed, and Varun pointed me at Solr JIRAs for making sure CLUSTERSTATUS consults live_nodes, indicating that this is something that's expected. But it seems trappy. My question is whether it's worth raising a JIRA. The leader could notice a mismatch and update state.json or something like that. I'll raise a JIRA if it seems like something that should be discussed. Let me know, Erick - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8074) LoadAdminUIServlet directly references admin.html
[ https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971402#comment-14971402 ] ASF subversion and git services commented on SOLR-8074: --- Commit 1710271 from [~markrmil...@gmail.com] in branch 'dev/trunk' [ https://svn.apache.org/r1710271 ] SOLR-8074: LoadAdminUIServlet directly references admin.html > LoadAdminUIServlet directly references admin.html > - > > Key: SOLR-8074 > URL: https://issues.apache.org/jira/browse/SOLR-8074 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Mark Miller >Priority: Minor > Fix For: 5.4 > > Attachments: SOLR-8074.patch > > > The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning > it cannot be used in its current state to serve up the new admin UI. > An update is needed to this class to make it serve back whatever html file > was requested in the URL. There will, likely, only ever be two of them > mentioned in web.xml, but it would be really useful for changes to web.xml > not to require Java code changes also. > I'm hoping that someone with an up-and-running Java coding setup can make > this pretty trivial tweak. Any volunteers? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
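The change described — serve back whatever html file the URL requested instead of a hard-coded admin.html — implies mapping the request URI to a file name, with a guard against serving arbitrary paths. A hypothetical sketch of such a resolver (assumed names; not the actual LoadAdminUIServlet code):

```java
public class AdminFileResolver {
    // Hypothetical resolver: take the last path segment of the request URI,
    // serve it if it looks like a plain html file, otherwise fall back to admin.html.
    public static String resolve(String requestUri) {
        int slash = requestUri.lastIndexOf('/');
        String name = requestUri.substring(slash + 1);
        if (name.isEmpty() || !name.matches("[a-zA-Z0-9_-]+\\.html")) {
            return "admin.html"; // fallback; also rejects traversal attempts
        }
        return name;
    }
}
```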
Re: killing a Solr instance leaves the state in Zookeeper "active"
This is expected and works as designed. We have enough complexity in publishing state for other nodes (LIR) and we shouldn't add any more. Besides what if the leader itself was killed, who changes the state then? What problem are you trying to solve? On Fri, Oct 23, 2015 at 10:19 PM, Erick Erickson wrote: > If I kill a replica with -9, the state.json node never gets updated, > the node shows as "active" > > There is code around that checks the live_nodes to see whether the > state.json node can be believed, and Varun pointed me at Solr JIRAs > for making sure CLUSTERSTATUS consults live_nodes, indicating that > this is something that's expected. But it seems trappy. > > My question is whether it's worth raising a JIRA. The leader could > notice a mismatch and update state.json or something like that. > > I'll raise a JIRA if it seems like something that should be discussed. > > Let me know, > Erick > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > -- Regards, Shalin Shekhar Mangar. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: killing a Solr instance leaves the state in Zookeeper "active"
Not so much a problem as behavior I wasn't fully expecting. It does seem a little trappy to have this thing that's supposed to be the state of the collection but then require that another znode be checked to see if state.json is telling the truth. In the particular case that came up, a monitoring system was trying to generate alerts when a node went down by relying on the state.json znode, but no alert was being generated in this case. BTW, this is 4.6, I suspect the eventual answer is to upgrade and use the collections API CLUSTERSTATUS... I don't have strong feelings about this, mostly throwing it out for discussion. I suppose the goal here is to keep any client from having to directly look at the state.json file and provide APIs that conceal this kind of thing. Your point about the complexity of publishing state for other nodes is well taken... On Fri, Oct 23, 2015 at 10:29 AM, Shalin Shekhar Mangar wrote: > This is expected and works as designed. We have enough complexity in > publishing state for other nodes (LIR) and we shouldn't add any more. > Besides what if the leader itself was killed, who changes the state > then? > > What problem are you trying to solve? > > On Fri, Oct 23, 2015 at 10:19 PM, Erick Erickson > wrote: >> If I kill a replica with -9, the state.json node never gets updated, >> the node shows as "active" >> >> There is code around that checks the live_nodes to see whether the >> state.json node can be believed, and Varun pointed me at Solr JIRAs >> for making sure CLUSTERSTATUS consults live_nodes, indicating that >> this is something that's expected. But it seems trappy. >> >> My question is whether it's worth raising a JIRA. The leader could >> notice a mismatch and update state.json or something like that. >> >> I'll raise a JIRA if it seems like something that should be discussed. 
>> >> Let me know, >> Erick >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > > > > -- > Regards, > Shalin Shekhar Mangar. > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
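As the thread concludes, on 4.x a client that reads cluster state directly has to combine state.json with live_nodes itself: a replica published as "active" is only really active if its node also holds an ephemeral znode under /live_nodes. A hedged sketch of that check (hypothetical helper; a real monitoring client would read both znodes from ZooKeeper):

```java
import java.util.Set;

public class ReplicaStateCheck {
    // state.json can still say "active" for a replica whose process was killed -9,
    // because a dead process publishes nothing. The ephemeral /live_nodes entry
    // disappearing is the reliable signal, so intersect the two.
    public static String effectiveState(String publishedState, String nodeName,
                                        Set<String> liveNodes) {
        if (!liveNodes.contains(nodeName)) {
            return "down"; // node is gone regardless of what state.json claims
        }
        return publishedState;
    }
}
```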
[jira] [Assigned] (SOLR-8194) Improve error reporting UpdateRequest
[ https://issues.apache.org/jira/browse/SOLR-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward reassigned SOLR-8194: --- Assignee: Alan Woodward > Improve error reporting UpdateRequest > - > > Key: SOLR-8194 > URL: https://issues.apache.org/jira/browse/SOLR-8194 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Markus Jelsma >Assignee: Alan Woodward >Priority: Trivial > Fix For: 5.4 > > > SolrJ throws NPE if null documents are added to UpdateRequest. It should > report a proper error message so i don't get confused the next time i skrew > up. Please see: > https://www.mail-archive.com/solr-user@lucene.apache.org/msg115074.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4854) Query elevation [elevated] field always false with java binary communication
[ https://issues.apache.org/jira/browse/SOLR-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971421#comment-14971421 ] Shalin Shekhar Mangar commented on SOLR-4854: - Thanks Ray, can you add a test? Or give some steps for us to reproduce the problem? > Query elevation [elevated] field always false with java binary communication > > > Key: SOLR-4854 > URL: https://issues.apache.org/jira/browse/SOLR-4854 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 4.3 > Environment: tomcat 6.0.33, java 1.6.0_26_x64, solrj 4.3 >Reporter: Istvan Hegedus > Attachments: SOLR-4854.patch > > > With XMLResponseParser there is no problem, but with default > BinaryResponseWriter [elevated] is always false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8194) Improve error reporting UpdateRequest
[ https://issues.apache.org/jira/browse/SOLR-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971422#comment-14971422 ] Alan Woodward commented on SOLR-8194: - Throwing NPE is fine, I think, but we should do that when add() is called, rather than later during processing. > Improve error reporting UpdateRequest > - > > Key: SOLR-8194 > URL: https://issues.apache.org/jira/browse/SOLR-8194 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Markus Jelsma >Assignee: Alan Woodward >Priority: Trivial > Fix For: 5.4 > > > SolrJ throws NPE if null documents are added to UpdateRequest. It should > report a proper error message so i don't get confused the next time i skrew > up. Please see: > https://www.mail-archive.com/solr-user@lucene.apache.org/msg115074.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
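Alan's suggestion — fail at add() time with a clear message rather than with a bare NPE later during processing — can be sketched as follows (a hypothetical illustration, not the SolrJ UpdateRequest source):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class UpdateRequestSketch {
    private final List<Object> docs = new ArrayList<>();

    // Validate eagerly so the stack trace points at the caller's bad add(),
    // not at some later stage of request serialization.
    public void add(Object doc) {
        docs.add(Objects.requireNonNull(doc, "Cannot add a null document to an UpdateRequest"));
    }

    public int size() {
        return docs.size();
    }
}
```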
[jira] [Commented] (SOLR-4854) Query elevation [elevated] field always false with java binary communication
[ https://issues.apache.org/jira/browse/SOLR-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971426#comment-14971426 ] Gopal Patwa commented on SOLR-4854: --- Thanks Shalin, we will add a test and steps to reproduce soon > Query elevation [elevated] field always false with java binary communication > > > Key: SOLR-4854 > URL: https://issues.apache.org/jira/browse/SOLR-4854 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 4.3 > Environment: tomcat 6.0.33, java 1.6.0_26_x64, solrj 4.3 >Reporter: Istvan Hegedus > Attachments: SOLR-4854.patch > > > With XMLResponseParser there is no problem, but with default > BinaryResponseWriter [elevated] is always false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-7569: --- Attachment: SOLR-7569.patch Ah, missed out the test in my last patch. Here it is. > Create an API to force a leader election between nodes > -- > > Key: SOLR-7569 > URL: https://issues.apache.org/jira/browse/SOLR-7569 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: difficulty-medium, impact-high > Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569_lir_down_state_test.patch > > > There are many reasons why Solr will not elect a leader for a shard e.g. all > replicas' last published state was recovery or due to bugs which cause a > leader to be marked as 'down'. While the best solution is that they never get > into this state, we need a manual way to fix this when it does get into this > state. Right now we can do a series of dance involving bouncing the node > (since recovery paths between bouncing and REQUESTRECOVERY are different), > but that is difficult when running a large cluster. Although it is possible > that such a manual API may lead to some data loss but in some cases, it is > the only possible option to restore availability. > This issue proposes to build a new collection API which can be used to force > replicas into recovering a leader while avoiding data loss on a best effort > basis. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8194) Improve error reporting UpdateRequest
Markus Jelsma created SOLR-8194: --- Summary: Improve error reporting UpdateRequest Key: SOLR-8194 URL: https://issues.apache.org/jira/browse/SOLR-8194 Project: Solr Issue Type: Bug Affects Versions: 5.3 Reporter: Markus Jelsma Priority: Trivial Fix For: 5.4 SolrJ throws NPE if null documents are added to UpdateRequest. It should report a proper error message so I don't get confused the next time I screw up. Please see: https://www.mail-archive.com/solr-user@lucene.apache.org/msg115074.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8074) LoadAdminUIServlet directly references admin.html
[ https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971407#comment-14971407 ] ASF subversion and git services commented on SOLR-8074: --- Commit 1710272 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1710272 ] SOLR-8074: LoadAdminUIServlet directly references admin.html > LoadAdminUIServlet directly references admin.html > - > > Key: SOLR-8074 > URL: https://issues.apache.org/jira/browse/SOLR-8074 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Mark Miller >Priority: Minor > Fix For: 5.4 > > Attachments: SOLR-8074.patch > > > The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning > it cannot be used in its current state to serve up the new admin UI. > An update is needed to this class to make it serve back whatever html file > was requested in the URL. There will, likely, only ever be two of them > mentioned in web.xml, but it would be really useful for changes to web.xml > not to require Java code changes also. > I'm hoping that someone with an up-and-running Java coding setup can make > this pretty trivial tweak. Any volunteers? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8129) HdfsChaosMonkeyNothingIsSafeTest failures
[ https://issues.apache.org/jira/browse/SOLR-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971436#comment-14971436 ] Yonik Seeley commented on SOLR-8129: {quote} One mystery is why ConcurrentUpdateSolrClient is trying to create a new Runner when there is obviously another runner already running (since it still accepts and sends new updates after that point). {quote} Mark pointed me at this comment in ConcurrentUpdateSolrClient: // see if queue is half full and we can add more runners // special case: if only using a threadCount of 1 and the queue // is filling up, allow 1 add'l runner to help process the queue [~thelabdude] It looks like you added that comment... but it's not clear to me how the code implements that special case. Thoughts? > HdfsChaosMonkeyNothingIsSafeTest failures > - > > Key: SOLR-8129 > URL: https://issues.apache.org/jira/browse/SOLR-8129 > Project: Solr > Issue Type: Bug >Reporter: Yonik Seeley > Attachments: fail.151005_064958, fail.151005_080319 > > > New HDFS chaos test in SOLR-8123 hits a number of types of failures, > including shard inconsistency. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
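The comment Yonik quotes describes a policy along these lines: spawn an additional runner while the queue is at least half full, capped at threadCount, except that a threadCount of 1 is allowed one extra helper runner. A sketch of that described policy as a pure predicate (an illustration of the comment, not the actual ConcurrentUpdateSolrClient logic — which, per Yonik's question, may not implement it):

```java
public class RunnerPolicy {
    // Sketch of the quoted comment's intent: add a runner while the queue is
    // at least half full, up to threadCount runners, with the special case
    // that threadCount == 1 may borrow one additional helper runner.
    public static boolean shouldAddRunner(int queueSize, int queueCapacity,
                                          int activeRunners, int threadCount) {
        boolean halfFull = queueSize * 2 >= queueCapacity;
        if (!halfFull) {
            return false;
        }
        int maxRunners = (threadCount == 1) ? 2 : threadCount;
        return activeRunners < maxRunners;
    }
}
```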
[jira] [Updated] (SOLR-4854) Query elevation [elevated] field always false with java binary communication
[ https://issues.apache.org/jira/browse/SOLR-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray updated SOLR-4854: -- Attachment: SOLR-4854.patch Uploaded a patch; it works in our environment > Query elevation [elevated] field always false with java binary communication > > > Key: SOLR-4854 > URL: https://issues.apache.org/jira/browse/SOLR-4854 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 4.3 > Environment: tomcat 6.0.33, java 1.6.0_26_x64, solrj 4.3 >Reporter: Istvan Hegedus > Attachments: SOLR-4854.patch > > > With XMLResponseParser there is no problem, but with default > BinaryResponseWriter [elevated] is always false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7569) Create an API to force a leader election between nodes
[ https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971406#comment-14971406 ] Shalin Shekhar Mangar commented on SOLR-7569: - bq. It seems like what we really want is to make sure the last published state for each replica does not prevent it from becoming the leader? Do you mean that removing blockers like LIR is enough? > Create an API to force a leader election between nodes > -- > > Key: SOLR-7569 > URL: https://issues.apache.org/jira/browse/SOLR-7569 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: difficulty-medium, impact-high > Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, > SOLR-7569.patch, SOLR-7569.patch, SOLR-7569_lir_down_state_test.patch > > > There are many reasons why Solr will not elect a leader for a shard e.g. all > replicas' last published state was recovery or due to bugs which cause a > leader to be marked as 'down'. While the best solution is that they never get > into this state, we need a manual way to fix this when it does get into this > state. Right now we can do a series of dance involving bouncing the node > (since recovery paths between bouncing and REQUESTRECOVERY are different), > but that is difficult when running a large cluster. Although it is possible > that such a manual API may lead to some data loss but in some cases, it is > the only possible option to restore availability. > This issue proposes to build a new collection API which can be used to force > replicas into recovering a leader while avoiding data loss on a best effort > basis. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7858: Attachment: SOLR-7858-2.patch correct "warning" patch > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858.patch, new ui link.png, > original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972294#comment-14972294 ] ASF subversion and git services commented on SOLR-7858: --- Commit 1710300 from [~upayavira] in branch 'dev/trunk' [ https://svn.apache.org/r1710300 ] SOLR-7858 Add a warning message to the angular UI > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858.patch, new ui link.png, > original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7858: Attachment: (was: SOLR-7858-2.patch) > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858.patch, new ui link.png, original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-8074) LoadAdminUIServlet directly references admin.html
[ https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira closed SOLR-8074. --- All good - works perfectly so I could complete the next stage of SOLR-7858. > LoadAdminUIServlet directly references admin.html > - > > Key: SOLR-8074 > URL: https://issues.apache.org/jira/browse/SOLR-8074 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Mark Miller >Priority: Minor > Fix For: 5.4, Trunk > > Attachments: SOLR-8074.patch > > > The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning > it cannot be used in its current state to serve up the new admin UI. > An update is needed to this class to make it serve back whatever html file > was requested in the URL. There will, likely, only ever be two of them > mentioned in web.xml, but it would be really useful for changes to web.xml > not to require Java code changes also. > I'm hoping that someone with an up-and-running Java coding setup can make > this pretty trivial tweak. Any volunteers? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7858: Attachment: SOLR-7858-2.patch Patch that adds this "warning" message to the top of the new UI, so as to distinguish it more clearly from the original one: "This is an experimental UI. Report bugs _here_. For the old UI click _here_" > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858.patch, new ui link.png, > original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7584) Add Joins to the Streaming API and Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972324#comment-14972324 ] Dennis Gove commented on SOLR-7584: --- Could you describe your use-case for joining on facets? I can imagine that a HashJoin (SOLR-8188) would be good for something like that because it removes the sort requirement. Yes, you can apply functions like sum and average on the joined data by wrapping the resulting joined stream in a RollupStream and using metrics. > Add Joins to the Streaming API and Streaming Expressions > > > Key: SOLR-7584 > URL: https://issues.apache.org/jira/browse/SOLR-7584 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Reporter: Dennis Gove >Priority: Minor > Labels: Streaming > Attachments: SOLR-7584.patch, SOLR-7584.patch, SOLR-7584.patch, > SOLR-7584.patch, SOLR-7584.patch > > > Add InnerJoinStream, LeftOuterJoinStream, and supporting classes to the > Streaming API to allow for joining between sub-streams. > At its basic, it would look something like this > {code} > innerJoin( > search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...), > search(collection2, q=*:*, fl="fieldA, fieldD, fieldE", ...), > on="fieldA=fieldA" > ) > {code} > or with multi-field on clauses > {code} > innerJoin( > search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...), > search(collection2, q=*:*, fl="fieldA, fieldD, fieldE", ...), > on="fieldA=fieldA, fieldB=fieldD" > ) > {code} > I'd also like to support the option of doing a hash join instead of the > default merge join but I haven't yet figured out the best way to express > that. I'd like to let the user tell us which sub-stream should be hashed (the > least-cost one). > Also, I've been thinking about field aliasing and might want to add a > SelectStream which serves the purpose of allowing us to limit the fields > coming out and rename fields. 
> Depends on SOLR-7554 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
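The merge join that the innerJoin expression above describes can be sketched in plain Java. This is not Solr's InnerJoinStream, just a self-contained illustration of the algorithm; both inputs must already be sorted on the join key, which is exactly the sort requirement a HashJoin removes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative merge inner join over tuples represented as maps.
// Both input lists must be sorted ascending on the join key.
public class MergeInnerJoin {
    static List<Map<String, Object>> innerJoin(
            List<Map<String, Object>> left,
            List<Map<String, Object>> right,
            String key) {
        List<Map<String, Object>> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            Comparable lk = (Comparable) left.get(i).get(key);
            Comparable rk = (Comparable) right.get(j).get(key);
            int cmp = lk.compareTo(rk);
            if (cmp < 0) { i++; }
            else if (cmp > 0) { j++; }
            else {
                // Emit the cross product of the equal-key runs on both sides.
                int jStart = j;
                for (; i < left.size() && ((Comparable) left.get(i).get(key)).compareTo(lk) == 0; i++) {
                    for (j = jStart; j < right.size() && ((Comparable) right.get(j).get(key)).compareTo(lk) == 0; j++) {
                        Map<String, Object> merged = new HashMap<>(right.get(j));
                        merged.putAll(left.get(i)); // left side wins on field-name clashes
                        out.add(merged);
                    }
                }
            }
        }
        return out;
    }

    // Small helper to build a tuple from alternating key/value arguments.
    static Map<String, Object> tuple(Object... kv) {
        Map<String, Object> m = new HashMap<>();
        for (int i = 0; i < kv.length; i += 2) m.put((String) kv[i], kv[i + 1]);
        return m;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> l = Arrays.asList(
                tuple("fieldA", 1, "fieldB", "x"),
                tuple("fieldA", 2, "fieldB", "y"));
        List<Map<String, Object>> r = Arrays.asList(
                tuple("fieldA", 2, "fieldD", "p"),
                tuple("fieldA", 3, "fieldD", "q"));
        // Only fieldA=2 exists on both sides, so one merged tuple is emitted.
        System.out.println(innerJoin(l, r, "fieldA"));
    }
}
```

Because each side only needs to be scanned forward, the join streams rather than buffering, which is why the on-clause fields must match the sort order of both sub-streams.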
[jira] [Commented] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972325#comment-14972325 ] ASF subversion and git services commented on SOLR-7858: --- Commit 1710304 from [~upayavira] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1710304 ] SOLR-7858 Switch index.html to use LoadAdminUIServlet on 5x branch > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858-3.patch, SOLR-7858-4.patch, > SOLR-7858.patch, new ui link.png, original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972323#comment-14972323 ] ASF subversion and git services commented on SOLR-7858: --- Commit 1710303 from [~upayavira] in branch 'dev/trunk' [ https://svn.apache.org/r1710303 ] SOLR-7858 Make Angular UI default in trunk > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858-3.patch, SOLR-7858-4.patch, > SOLR-7858.patch, new ui link.png, original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7993) json stopped working on 5.3.0
[ https://issues.apache.org/jira/browse/SOLR-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bill Bell updated SOLR-7993: Attachment: SOLR-7993-test.patch Test for [json] > json stopped working on 5.3.0 > - > > Key: SOLR-7993 > URL: https://issues.apache.org/jira/browse/SOLR-7993 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3, 5.3.1 >Reporter: Bill Bell > Attachments: SOLR-7993-test.patch, SOLR-7993.patch > > > This stopped working: > http://localhost:8983/solr/provider/select?q=*%3A*=json=provider_json:[json] > It now does not show the field 5.2.1 worked fine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8198) Change ReducerStream to use StreamEqualitor instead of StreamComparator
Dennis Gove created SOLR-8198: - Summary: Change ReducerStream to use StreamEqualitor instead of StreamComparator Key: SOLR-8198 URL: https://issues.apache.org/jira/browse/SOLR-8198 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Dennis Gove Priority: Minor Currently the ReducerStream uses a StreamComparator to determine whether tuples are equal. StreamEqualitors are a simplified version of a comparator in that they do not require a sort to be provided. Using the function getStreamSort we are still able to validate the incoming stream's sort and pass that on up to any parent stream which might require it. This will simplify the use of the ReducerStream in join scenarios where the reducer is used to find like records. Such a scenario exists with Inner/Outer JoinStream, ComplementStream, and [Outer]HashJoinStreams. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
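The distinction the ticket draws can be illustrated without any Solr classes: reducing a sorted stream into groups of like records only needs an equality test between adjacent tuples, not a full ordering. A minimal sketch (the names below are illustrative, not Solr's):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiPredicate;

// Groups adjacent elements of an already-sorted list using only an
// equality predicate (an "equalitor") - no Comparator required.
public class ReduceByEquality {
    static <T> List<List<T>> reduce(List<T> sorted, BiPredicate<T, T> eq) {
        List<List<T>> groups = new ArrayList<>();
        for (T t : sorted) {
            // Start a new group when t is not "equal" to the current group.
            if (groups.isEmpty() || !eq.test(groups.get(groups.size() - 1).get(0), t)) {
                groups.add(new ArrayList<>());
            }
            groups.get(groups.size() - 1).add(t);
        }
        return groups;
    }

    public static void main(String[] args) {
        // Tuples as int[]{key, value}, sorted on the key field.
        List<int[]> tuples = Arrays.asList(new int[]{1, 10}, new int[]{1, 11}, new int[]{2, 12});
        List<List<int[]>> groups = reduce(tuples, (a, b) -> a[0] == b[0]);
        System.out.println(groups.size()); // two groups: key 1 and key 2
    }
}
```

The sort guarantee still has to come from somewhere - in the real ReducerStream it is validated via the incoming stream's getStreamSort and passed up to parent streams, as the ticket notes.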
[jira] [Updated] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7858: Attachment: SOLR-7858-4.patch Patch to make index.html use LoadAdminUIServlet. This gets it ${version} replacement and protection against click-jacking. > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858-3.patch, SOLR-7858-4.patch, > SOLR-7858.patch, new ui link.png, original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)
[ https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972307#comment-14972307 ] songwanging commented on SOLR-8033: --- Hi Christine Poerschke, could you help create a patch for this issue? Thanks a lot - I am not familiar with the process of creating one. > useless if branch (commented out log.debug in HdfsTransactionLog constructor) > - > > Key: SOLR-8033 > URL: https://issues.apache.org/jira/browse/SOLR-8033 > Project: Solr > Issue Type: Improvement >Affects Versions: 5.0, 5.1 >Reporter: songwanging >Assignee: Christine Poerschke >Priority: Minor > > In method HdfsTransactionLog() of class HdfsTransactionLog > (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java) > The if branch presented in the following code snippet performs no action; we > should either add code to handle this case or delete the if branch. > HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection > globalStrings, boolean openExisting) { > ... > try { > if (debug) { > //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + > tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + > openExisting); > } > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7584) Add Joins to the Streaming API and Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dennis Gove updated SOLR-7584: -- Attachment: SOLR-7584.patch Part of this ticket is a change in comparators and equalitors to support differing field names on either side of the comparison (ie, fieldA = fieldB). Due to changes that have come into trunk between the creation of this patch and now it was required that I propagate those changes to a couple of other files. Note, I originally included this change in SOLR-7669 but realized today that it's actually necessary in this patch. Here's me regretting the decision to not create a separate ticket for the equalitor/comparator changes but this patch does also add support for distributed joins so there's that. Either way, description of change is below. Required a couple of changes in the SQL and FacetStream areas related to FieldComparator. The FieldComparator has been changed to support different field names on the left and right side. The SQL and FacetStream areas use FieldComparator for sorting (a totally valid use case) but do expect the left and right side field names to be equal. The changes I made go through and validate that assumption. In the future I think I may circle back around and create a new FieldComparator with a single field name so that on construction that assumption can be enforced. All tests pass. > Add Joins to the Streaming API and Streaming Expressions > > > Key: SOLR-7584 > URL: https://issues.apache.org/jira/browse/SOLR-7584 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Reporter: Dennis Gove >Priority: Minor > Labels: Streaming > Attachments: SOLR-7584.patch, SOLR-7584.patch, SOLR-7584.patch, > SOLR-7584.patch, SOLR-7584.patch > > > Add InnerJoinStream, LeftOuterJoinStream, and supporting classes to the > Streaming API to allow for joining between sub-streams. 
> At its basic, it would look something like this > {code} > innerJoin( > search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...), > search(collection2, q=*:*, fl="fieldA, fieldD, fieldE", ...), > on="fieldA=fieldA" > ) > {code} > or with multi-field on clauses > {code} > innerJoin( > search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...), > search(collection2, q=*:*, fl="fieldA, fieldD, fieldE", ...), > on="fieldA=fieldA, fieldB=fieldD" > ) > {code} > I'd also like to support the option of doing a hash join instead of the > default merge join but I haven't yet figured out the best way to express > that. I'd like to let the user tell us which sub-stream should be hashed (the > least-cost one). > Also, I've been thinking about field aliasing and might want to add a > SelectStream which serves the purpose of allowing us to limit the fields > coming out and rename fields. > Depends on SOLR-7554 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
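The comparator change described above - comparing one field name on the left tuple against a different field name on the right tuple, as in on="fieldA=fieldD" - can be sketched independently of Solr's FieldComparator (the class and field names here are illustrative, not the patch's):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Illustrative tuple comparator that reads a different field from each side,
// as needed for join on-clauses like on="fieldA=fieldD".
public class TwoFieldComparator implements Comparator<Map<String, Comparable>> {
    private final String leftField;
    private final String rightField;

    TwoFieldComparator(String leftField, String rightField) {
        this.leftField = leftField;
        this.rightField = rightField;
    }

    @Override
    @SuppressWarnings("unchecked")
    public int compare(Map<String, Comparable> left, Map<String, Comparable> right) {
        return left.get(leftField).compareTo(right.get(rightField));
    }

    public static void main(String[] args) {
        Map<String, Comparable> l = new HashMap<>();
        l.put("fieldA", 5);
        Map<String, Comparable> r = new HashMap<>();
        r.put("fieldD", 7);
        // Negative result: left's fieldA (5) sorts before right's fieldD (7).
        System.out.println(new TwoFieldComparator("fieldA", "fieldD").compare(l, r));
    }
}
```

Using such an asymmetric comparator to sort a single stream only makes sense when both field names are the same, which is exactly the assumption the SQL and FacetStream changes described above validate.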
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b85) - Build # 14644 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14644/ Java: 64bit/jdk1.9.0-ea-b85 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.client.solrj.embedded.JettyWebappTest.testAdminUI Error Message: Stack Trace: java.lang.NullPointerException at __randomizedtesting.SeedInfo.seed([BF1D2DE333F380A3:87CFCEB054981B1C]:0) at org.apache.solr.client.solrj.embedded.JettyWebappTest.testAdminUI(JettyWebappTest.java:115) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:520) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:747) Build Log: [...truncated 11482 lines...] [junit4] Suite: org.apache.solr.client.solrj.embedded.JettyWebappTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.JettyWebappTest_BF1D2DE333F380A3-001/init-core-data-001 [junit4] 2> 46214 INFO (SUITE-JettyWebappTest-seed#[BF1D2DE333F380A3]-worker) [] o.a.s.SolrTestCaseJ4
[jira] [Commented] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972295#comment-14972295 ] ASF subversion and git services commented on SOLR-7858: --- Commit 1710301 from [~upayavira] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1710301 ] SOLR-7858 Add a warning message to the angular UI > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858.patch, new ui link.png, > original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7858) Make Angular UI default
[ https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira updated SOLR-7858: Attachment: SOLR-7858-3.patch Patch that makes angular UI default in trunk. Before committing, another is needed that uses LoadAdminUIServlet to load index.html in 5.x, but doesn't (yet) make it the default. > Make Angular UI default > --- > > Key: SOLR-7858 > URL: https://issues.apache.org/jira/browse/SOLR-7858 > Project: Solr > Issue Type: Bug > Components: web gui >Reporter: Upayavira >Assignee: Upayavira >Priority: Minor > Attachments: SOLR-7858-2.patch, SOLR-7858-3.patch, SOLR-7858.patch, > new ui link.png, original UI link.png > > > Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, > it should function well in most cases. I propose that, as soon as 5.3 has > been released, we make the Angular UI default, ready for the 5.4 release. We > can then fix any more bugs as they are found, but more importantly start > working on the features that were the reason for doing this work in the first > place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8195) IndexFetcher download trace to include bytes-downloaded[-per-second]
[ https://issues.apache.org/jira/browse/SOLR-8195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971674#comment-14971674 ] Shalin Shekhar Mangar commented on SOLR-8195: - +1 LGTM > IndexFetcher download trace to include bytes-downloaded[-per-second] > > > Key: SOLR-8195 > URL: https://issues.apache.org/jira/browse/SOLR-8195 > Project: Solr > Issue Type: Wish >Reporter: Christine Poerschke >Assignee: Christine Poerschke > Attachments: SOLR-8195.patch > > > patch against trunk with proposed changes to follow -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8196) TestMiniSolrCloudCluster.testStopAllStartAll case plus necessary MiniSolrCloudCluster tweak
Christine Poerschke created SOLR-8196: - Summary: TestMiniSolrCloudCluster.testStopAllStartAll case plus necessary MiniSolrCloudCluster tweak Key: SOLR-8196 URL: https://issues.apache.org/jira/browse/SOLR-8196 Project: Solr Issue Type: Test Reporter: Christine Poerschke Assignee: Christine Poerschke Background to this seemingly boring {{TestMiniSolrCloudCluster.testStopAllStartAll}} case is trying to reproduce leadership/election issues observed whilst evaluating 4.10.4 - neither branch_5x nor trunk had the issues but {{MiniSolrCloudCluster}} needed a little tweak to make the test case work: if the same solr/jetty home directory is used for multiple jetties then stopping and starting resulted in them all discovering the same cores ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8196) TestMiniSolrCloudCluster.testStopAllStartAll case plus necessary MiniSolrCloudCluster tweak
[ https://issues.apache.org/jira/browse/SOLR-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-8196: -- Attachment: SOLR-8196.patch Attaching proposed patch against trunk. The changes to {{MiniSolrCloudCluster}} are kept to a minimum. > TestMiniSolrCloudCluster.testStopAllStartAll case plus necessary > MiniSolrCloudCluster tweak > --- > > Key: SOLR-8196 > URL: https://issues.apache.org/jira/browse/SOLR-8196 > Project: Solr > Issue Type: Test >Reporter: Christine Poerschke >Assignee: Christine Poerschke > Attachments: SOLR-8196.patch > > > Background to this seemingly boring > {{TestMiniSolrCloudCluster.testStopAllStartAll}} case is trying to reproduce > leadership/election issues observed whilst evaluating 4.10.4 - neither > branch_5x nor trunk had the issues but {{MiniSolrCloudCluster}} needed a > little tweak to make the test case work: if the same solr/jetty home > directory is used for multiple jetties then stopping and starting resulted in > them all discovering the same cores ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
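The tweak described - giving each jetty its own home directory instead of a shared one, so a stop/start cycle cannot make every node discover the same cores - can be sketched with plain JDK file APIs. This is illustrative only, not the actual MiniSolrCloudCluster change:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative: create one distinct home directory per server under a
// common base dir, rather than sharing a single home across all servers.
public class PerServerHomes {
    static List<Path> createHomes(Path baseDir, int numServers) throws IOException {
        List<Path> homes = new ArrayList<>();
        for (int i = 0; i < numServers; i++) {
            // "node-<i>" is a made-up naming scheme for this sketch.
            homes.add(Files.createDirectories(baseDir.resolve("node-" + i)));
        }
        return homes;
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("minicluster");
        List<Path> homes = createHomes(base, 3);
        System.out.println(homes.size()); // three distinct home directories
    }
}
```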
[jira] [Commented] (SOLR-8129) HdfsChaosMonkeyNothingIsSafeTest failures
[ https://issues.apache.org/jira/browse/SOLR-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971697#comment-14971697 ] Shalin Shekhar Mangar commented on SOLR-8129: - Thanks for the details, Yonik. bq. A general way to fix this is to make sure that shutdown happens much more quickly... we should stop reading and processing updates. maybe HttpSolrCall can return an error immediately if container has been shutdown? > HdfsChaosMonkeyNothingIsSafeTest failures > - > > Key: SOLR-8129 > URL: https://issues.apache.org/jira/browse/SOLR-8129 > Project: Solr > Issue Type: Bug >Reporter: Yonik Seeley > Attachments: fail.151005_064958, fail.151005_080319 > > > New HDFS chaos test in SOLR-8123 hits a number of types of failures, > including shard inconsistency. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)
[ https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke reassigned SOLR-8033: - Assignee: Christine Poerschke > useless if branch (commented out log.debug in HdfsTransactionLog constructor) > - > > Key: SOLR-8033 > URL: https://issues.apache.org/jira/browse/SOLR-8033 > Project: Solr > Issue Type: Improvement >Affects Versions: 5.0, 5.1 >Reporter: songwanging >Assignee: Christine Poerschke >Priority: Minor > > In method HdfsTransactionLog() of class HdfsTransactionLog > (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java) > The if branch presented in the following code snippet performs no actions, we > should add more code to handle this or directly delete this if branch. > HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection > globalStrings, boolean openExisting) { > ... > try { > if (debug) { > //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + > tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + > openExisting); > } > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)
[ https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971710#comment-14971710 ] Christine Poerschke commented on SOLR-8033: --- Hello [~songwang] - if you'd like to create and attach a patch for this change (removal or re-instatement), I'd be happy to apply and commit it. cc/fyi [~markrmil...@gmail.com] if you have any thoughts on removal vs. re-instatement of this logging, please let us know. Thank you. > useless if branch (commented out log.debug in HdfsTransactionLog constructor) > - > > Key: SOLR-8033 > URL: https://issues.apache.org/jira/browse/SOLR-8033 > Project: Solr > Issue Type: Improvement >Affects Versions: 5.0, 5.1 >Reporter: songwanging >Priority: Minor > > In method HdfsTransactionLog() of class HdfsTransactionLog > (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java) > The if branch presented in the following code snippet performs no actions, we > should add more code to handle this or directly delete this if branch. > HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection > globalStrings, boolean openExisting) { > ... > try { > if (debug) { > //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + > tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + > openExisting); > } > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
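If the choice lands on re-instating the logging, the usual fix is a guarded debug call so the message string is only built when debug logging is enabled. Solr itself uses slf4j (log.isDebugEnabled() / log.debug(...)); the self-contained sketch below uses java.util.logging instead so it runs without extra jars, but the guard pattern is the same:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative guarded debug logging, mirroring the intent of the
// commented-out log.debug in the HdfsTransactionLog constructor.
public class GuardedDebugLog {
    static final Logger log = Logger.getLogger(GuardedDebugLog.class.getName());

    static void logNewTlog(String tlogFile, boolean exists, long size, boolean openExisting) {
        // isLoggable(FINE) plays the role of slf4j's isDebugEnabled():
        // skip string concatenation entirely when debug logging is off.
        if (log.isLoggable(Level.FINE)) {
            log.fine("New TransactionLog file=" + tlogFile + ", exists=" + exists
                    + ", size=" + size + ", openExisting=" + openExisting);
        }
    }

    public static void main(String[] args) {
        log.setLevel(Level.FINE);
        logNewTlog("tlog.0000000000000000006", true, 4096L, false);
    }
}
```

With slf4j the guard can also be replaced by a parameterized call, e.g. log.debug("New TransactionLog file={}, openExisting={}", tlogFile, openExisting), which defers formatting the same way.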
[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 527 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/527/ 1 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest Error Message: Captured an uncaught exception in thread: Thread[id=3464, name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, group=TGRP-CdcrReplicationHandlerTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=3464, name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, group=TGRP-CdcrReplicationHandlerTest] at __randomizedtesting.SeedInfo.seed([E157718DBCF3C95:A951CFBCB6742F2C]:0) Caused by: org.apache.solr.common.cloud.ZooKeeperException: at __randomizedtesting.SeedInfo.seed([E157718DBCF3C95]:0) at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:232) Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_E157718DBCF3C95-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.006.1515850526787371008 (No such file or directory) at org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244) at org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173) at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079) at org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579) at org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610) at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877) at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:534) at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225) Caused by: java.io.FileNotFoundException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_E157718DBCF3C95-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.006.1515850526787371008 (No such file or directory) at java.io.RandomAccessFile.open0(Native Method) at java.io.RandomAccessFile.open(RandomAccessFile.java:316) at java.io.RandomAccessFile.(RandomAccessFile.java:243) at org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236) ... 7 more Build Log: [...truncated 9847 lines...] [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest [junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_E157718DBCF3C95-001/init-core-data-001 [junit4] 2> 425262 INFO (SUITE-CdcrReplicationHandlerTest-seed#[E157718DBCF3C95]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) [junit4] 2> 425262 INFO (SUITE-CdcrReplicationHandlerTest-seed#[E157718DBCF3C95]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /_/ev [junit4] 2> 425283 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 425284 INFO (Thread-2191) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 425284 INFO (Thread-2191) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 425384 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.ZkTestServer start zk server on port:33429 [junit4] 2> 425384 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 425397 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 425426 INFO (zkCallback-91-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@2efc863d name:ZooKeeperConnection Watcher:127.0.0.1:33429 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 425426 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 425426 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2> 425426 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2> 425433 INFO (TEST-CdcrReplicationHandlerTest.doTest-seed#[E157718DBCF3C95]) [] o.a.s.c.c.SolrZkClient
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b85) - Build # 14639 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14639/ Java: 64bit/jdk1.9.0-ea-b85 -XX:+UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt Error Message: 5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 1) Thread[id=3073, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:747)2) Thread[id=3072, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:516) at java.util.TimerThread.mainLoop(Timer.java:526) at java.util.TimerThread.run(Timer.java:505)3) Thread[id=3076, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:747)4) Thread[id=3075, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:747)5) Thread[id=3074, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:747) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 1) Thread[id=3073, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at
[jira] [Commented] (SOLR-4854) Query elevation [elevated] field always false with java binary communication
[ https://issues.apache.org/jira/browse/SOLR-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14971792#comment-14971792 ] Ray commented on SOLR-4854: --- Here are the test steps: 1. Enable the QueryElevationComponent in the Solr server; please refer to the wiki: https://cwiki.apache.org/confluence/display/solr/The+Query+Elevation+Component 2. Call Solr with the following parameters (I am using id as the uniqueKey in my schema): q=foo=id,[elevated]=true=true==javabin=2 and make sure the document exists in your index. Check the response to see whether the [elevated] value is true for that document. We also need to verify another case, with elevate.xml defined instead of passing elevateIds in the API. Let me know if you need more information. > Query elevation [elevated] field always false with java binary communication > > > Key: SOLR-4854 > URL: https://issues.apache.org/jira/browse/SOLR-4854 > Project: Solr > Issue Type: Bug > Components: clients - java > Affects Versions: 4.3 > Environment: tomcat 6.0.33, java 1.6.0_26_x64, solrj 4.3 > Reporter: Istvan Hegedus > Attachments: SOLR-4854.patch > > > With XMLResponseParser there is no problem, but with the default > BinaryResponseWriter [elevated] is always false. 
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 995 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/995/ 5 tests failed. FAILED: org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR Error Message: Captured an uncaught exception in thread: Thread[id=484, name=coreZkRegister-215-thread-1, state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=484, name=coreZkRegister-215-thread-1, state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest] Caused by: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([604EA955F55DFDE5]:0) at org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133) at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:434) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346) at org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1113) at org.apache.solr.cloud.ZkController.register(ZkController.java:926) at org.apache.solr.cloud.ZkController.register(ZkController.java:881) at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:183) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ERROR: SolrIndexSearcher opens=51 closes=50 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50 at __randomizedtesting.SeedInfo.seed([604EA955F55DFDE5]:0) at 
org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:468) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: file handle leaks:
[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14642 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14642/ Java: 32bit/jdk1.9.0-ea-b85 -server -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest Error Message: There are still nodes recoverying - waited for 330 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds at __randomizedtesting.SeedInfo.seed([1D23C88E1F853101:BA67702A723E22B8]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319) at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplication(CdcrReplicationHandlerTest.java:86) at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:51) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:520) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-8192) SubFacets allBuckets not working with measures on tokenized fields
[ https://issues.apache.org/jira/browse/SOLR-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14972263#comment-14972263 ] Yonik Seeley commented on SOLR-8192: Thanks Pablo, I've reproduced this issue and am looking into a fix. > SubFacets allBuckets not working with measures on tokenized fields > -- > > Key: SOLR-8192 > URL: https://issues.apache.org/jira/browse/SOLR-8192 > Project: Solr > Issue Type: Bug > Reporter: Pablo Anzorena > > Subfacets are not working when you ask for allBuckets on a tokenized field > with measures. > Here is the request: > { > hs: { > field: hs, > type: terms, > allBuckets: true, > sort: "mostrar_bill_price desc", > facet: { > mostrar_bill_price: "sum(mostrar_bill_price)" > } > } > } > Here is the response: > { > "responseHeader": { > "status": 500, > "QTime": 92, > "params": { > "indent": "true", > "q": "*:*", > "json.facet": "{ hs: { field: hs, type: terms, allBuckets:true, sort: > \"mostrar_bill_price desc\", facet:{ mostrar_bill_price: > \"sum(mostrar_bill_price)\" } } }", > "wt": "json", > "rows": "0" > } > }, > "response": { > "numFound": 35422188, > "start": 0, > "docs": [] > }, > "error": { > "trace": "java.lang.ArrayIndexOutOfBoundsException\n", > "code": 500 > } > } > The hs field is defined as: > required="false" multiValued="false" /> > mostrar_bill_price is defined as: > stored="false" required="false" multiValued="false" /> > Apart from text_ws, it also happens with text_classic (these are the only > ones I've tested). 
OOM on solr cloud 5.2.1, does not trigger oom_solr.sh
Hi, Sometimes I see an OOM happening on replicas, but it does not trigger the script oom_solr.sh, which was passed in as -XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091. These OOMs happened while DIH was importing data from a database. Is this a known issue? Is there any quick fix? I sent this to the users group yesterday, no response yet. Here are the stack traces from when the OOM happened:

1) org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:497)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

2) org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: Exception writing document id R277453962 to the index; possible analysis error.
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:167)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:101)
at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:241)
at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:206)
at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:126)
at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:111)
at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) at
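One low-level detail worth ruling out while debugging the missing hook: the -XX:OnOutOfMemoryError value (script path plus its port argument) must reach the JVM as a single token, so the whole value has to be quoted on the command line. A small sketch, using the placeholder path and port from the message above, that only assembles and prints the command line rather than starting Solr:

```shell
# Placeholder path/port from the message above; this does not launch anything,
# it just shows the quoting: the script and its "8091" argument form one token.
OOM_HOOK='-XX:OnOutOfMemoryError=/actualLocation/solr/bin/oom_solr.sh 8091'
CMD="java -Xmx512m \"$OOM_HOOK\" -jar start.jar"
echo "$CMD"
```

If the value is split into two tokens instead, the JVM never registers the hook, which would match the symptom of oom_solr.sh silently not firing.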
[jira] [Created] (SOLR-8190) Implement Closeable on TupleStream
Kevin Risden created SOLR-8190: -- Summary: Implement Closeable on TupleStream Key: SOLR-8190 URL: https://issues.apache.org/jira/browse/SOLR-8190 Project: Solr Issue Type: Bug Components: SolrJ Affects Versions: Trunk Reporter: Kevin Risden Priority: Minor Implementing Closeable on TupleStream provides the ability to use try-with-resources (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) in tests and in practice. This prevents TupleStreams from being left open when there is an error in the tests.
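For readers unfamiliar with the pattern the issue refers to, here is a minimal, self-contained sketch of what implementing Closeable buys. The class below is a stand-in (the real TupleStream lives in SolrJ and is not reproduced here); the point is that once a class implements Closeable, try-with-resources closes it automatically, even if the body throws, which is exactly what keeps test failures from leaking open streams.

```java
import java.io.Closeable;

public class TryWithResourcesSketch {
    // Minimal stand-in for TupleStream, purely illustrative.
    static class FakeTupleStream implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
        int read() { return 0; }  // pretend to read a tuple
    }

    public static void main(String[] args) {
        FakeTupleStream s = new FakeTupleStream();
        // close() runs automatically when the try block exits, normally or not.
        try (FakeTupleStream auto = s) {
            auto.read();
        }
        System.out.println(s.closed);  // the stream was closed for us
    }
}
```

Without Closeable, every test would need an explicit try/finally around each stream to get the same guarantee.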
[jira] [Updated] (SOLR-8190) Implement Closeable on TupleStream
[ https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-8190: --- Attachment: SOLR-8190.patch > Implement Closeable on TupleStream > -- > > Key: SOLR-8190 > URL: https://issues.apache.org/jira/browse/SOLR-8190 > Project: Solr > Issue Type: Bug > Components: SolrJ >Affects Versions: Trunk >Reporter: Kevin Risden >Priority: Minor > Attachments: SOLR-8190.patch > > > Implementing Closeable on TupleStream provides the ability to use > try-with-resources > (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) > in tests and in practice. This prevents TupleStreams from being left open > when there is an error in the tests.
[jira] [Resolved] (LUCENE-6829) OfflineSorter should use Directory API
[ https://issues.apache.org/jira/browse/LUCENE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-6829. Resolution: Fixed > OfflineSorter should use Directory API > -- > > Key: LUCENE-6829 > URL: https://issues.apache.org/jira/browse/LUCENE-6829 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk > > Attachments: LUCENE-6829.patch, LUCENE-6829.patch, LUCENE-6829.patch, > LUCENE-6829.patch > > > I think this is a blocker for LUCENE-6825, because the block KD-tree makes > heavy use of OfflineSorter and we don't want to fill up tmp space ... > This should be a straightforward cutover, but there are some challenges, e.g. > the test was failing because virus checker blocked deleting of files.
Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14636 - Failure!
logged LUCENE-6853 and currently disabled the failing check.

Tommaso

2015-10-23 16:16 GMT+02:00 Policeman Jenkins Server:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14636/
> Java: 32bit/jdk1.9.0-ea-b85 -server -XX:+UseG1GC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance
>
> Error Message:
>
>
> Stack Trace:
> java.lang.AssertionError
>         at __randomizedtesting.SeedInfo.seed([2A1980DAFF0EDEC8:EDF872F894BAE667]:0)
>         at org.junit.Assert.fail(Assert.java:92)
>         at org.junit.Assert.assertTrue(Assert.java:43)
>         at org.junit.Assert.assertTrue(Assert.java:54)
>         at org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance(BooleanPerceptronClassifierTest.java:97)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:520)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
>         at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>         at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>         at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>         at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>         at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
>         at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>         at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>         at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
>         at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>         at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>         at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>         at java.lang.Thread.run(Thread.java:747)
>
>
> Build Log:
> [...truncated 5532 lines...]
>    [junit4] Suite: org.apache.lucene.classification.BooleanPerceptronClassifierTest
>    [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=BooleanPerceptronClassifierTest -Dtests.method=testPerformance -Dtests.seed=2A1980DAFF0EDEC8 -Dtests.multiplier=3 -Dtests.slow=true
[jira] [Commented] (LUCENE-6853) Boolean perceptron classifier is too sensitive to threshold
[ https://issues.apache.org/jira/browse/LUCENE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971129#comment-14971129 ]

ASF subversion and git services commented on LUCENE-6853:
---------------------------------------------------------

Commit 1710230 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1710230 ]

LUCENE-6853 - disabled accuracy check in BPC performance test until this is fixed

> Boolean perceptron classifier is too sensitive to threshold
> -----------------------------------------------------------
>
>                 Key: LUCENE-6853
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6853
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/classification
>    Affects Versions: 4.10.4, 5.3
>            Reporter: Tommaso Teofili
>            Assignee: Tommaso Teofili
>             Fix For: 6.0
>
>
> {{BooleanPerceptronClassifier}} is too sensitive to the value of its
> {{threshold}}, that should be weighted and adjusted against the classifier
> inputs instead.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6853) Boolean perceptron classifier is too sensitive to threshold
Tommaso Teofili created LUCENE-6853:
---------------------------------------

             Summary: Boolean perceptron classifier is too sensitive to threshold
                 Key: LUCENE-6853
                 URL: https://issues.apache.org/jira/browse/LUCENE-6853
             Project: Lucene - Core
          Issue Type: Bug
          Components: modules/classification
    Affects Versions: 5.3, 4.10.4
            Reporter: Tommaso Teofili
            Assignee: Tommaso Teofili
             Fix For: 6.0


{{BooleanPerceptronClassifier}} is too sensitive to the value of its {{threshold}}, that should be weighted and adjusted against the classifier inputs instead.
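A minimal sketch of the problem the issue describes (this is illustrative code, not Lucene's `BooleanPerceptronClassifier`): a fixed decision threshold on a perceptron's raw score flips its answer when the input magnitudes change scale, whereas a threshold "weighted against the classifier inputs" — here expressed as a fraction of the total input mass — survives rescaling.

```java
public class ThresholdDemo {
    // dot product of weights and inputs
    static double score(double[] w, double[] x) {
        double s = 0;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s;
    }

    // fixed threshold: fragile, the decision depends on input scale
    static boolean classifyFixed(double[] w, double[] x, double threshold) {
        return score(w, x) > threshold;
    }

    // threshold weighted against the inputs: compare the score to a
    // fraction of the total input mass, so rescaling all inputs by a
    // constant does not flip the decision
    static boolean classifyScaled(double[] w, double[] x, double fraction) {
        double mass = 0;
        for (double v : x) mass += Math.abs(v);
        return score(w, x) > fraction * mass;
    }

    public static void main(String[] args) {
        double[] w   = {1.0, 1.0};
        double[] x   = {2.0, 2.0};    // score = 4
        double[] x10 = {20.0, 20.0};  // same direction, 10x scale: score = 40
        System.out.println(classifyFixed(w, x, 30.0));    // false
        System.out.println(classifyFixed(w, x10, 30.0));  // true: flipped by rescaling
        System.out.println(classifyScaled(w, x, 0.5));    // true (4 > 2)
        System.out.println(classifyScaled(w, x10, 0.5));  // true (40 > 20): stable
    }
}
```

The specific normalization Lucene ends up using may differ; the point is only that the threshold must be relative to the inputs, not absolute.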
[jira] [Commented] (SOLR-8189) eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
[ https://issues.apache.org/jira/browse/SOLR-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971165#comment-14971165 ]

ASF subversion and git services commented on SOLR-8189:
-------------------------------------------------------

Commit 1710240 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1710240 ]

SOLR-8189: Fixed java7 compile issue

> eTag calculation during http Cache Validation uses unsynchronized WeakHashMap
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-8189
>                 URL: https://issues.apache.org/jira/browse/SOLR-8189
>             Project: Solr
>          Issue Type: Bug
>          Components: search
>    Affects Versions: 4.10.4, 5.3
>            Reporter: Shalin Shekhar Mangar
>            Assignee: Shalin Shekhar Mangar
>              Labels: difficulty-easy, impact-low
>             Fix For: 5.4, Trunk
>
>
> I found this while looking into a recent jenkins failure where
> TestDynamicLoading leaked 5 threads:
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/
> {code}
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.core.TestDynamicLoading:
>    1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, group=TGRP-TestDynamicLoading]
>         at java.util.WeakHashMap.get(WeakHashMap.java:403)
>         at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102)
>         at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224)
>         at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
>         at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>         at org.eclipse.jetty.server.Server.handle(Server.java:499)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>         at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>         at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>         at java.lang.Thread.run(Thread.java:745)
>    2) Thread[id=11445, name=qtp85907293-11445, state=RUNNABLE, group=TGRP-TestDynamicLoading]
>         at java.util.WeakHashMap.get(WeakHashMap.java:403)
>         at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102)
>         at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224)
>         at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
>         at
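The leaked threads above are all RUNNABLE inside `WeakHashMap.get`: `WeakHashMap` is not thread-safe, and concurrent modification can corrupt a bucket chain so that `get()` loops forever. The standard remedy, and the shape of the fix this issue describes, is to wrap the map with `Collections.synchronizedMap`. The sketch below is illustrative (the field and method names are mine, not Solr's actual `HttpCacheHeaderUtil` code):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class EtagCacheSketch {
    // Unsafe when shared across request threads:
    //   private static final Map<Object, String> cache = new WeakHashMap<>();
    // Safe: every access goes through one monitor, so no thread ever sees
    // a half-rehashed table.
    private static final Map<Object, String> cache =
        Collections.synchronizedMap(new WeakHashMap<Object, String>());

    // Hypothetical eTag lookup keyed by a core-like object; entries vanish
    // once the key is no longer strongly referenced (WeakHashMap semantics).
    static String etagFor(Object core) {
        String etag = cache.get(core);
        if (etag == null) {
            etag = "W/\"" + System.identityHashCode(core) + "\"";
            cache.put(core, etag);
        }
        return etag;
    }

    public static void main(String[] args) {
        Object core = new Object();
        // the cached value is stable across lookups for the same key
        System.out.println(etagFor(core).equals(etagFor(core)));
    }
}
```

Note that the synchronized wrapper makes individual operations atomic; the get-then-put above is still a benign race (two threads may compute the same etag), which is acceptable for a cache.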
[jira] [Updated] (SOLR-8191) CloudSolrStream close method NullPointerException
[ https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Risden updated SOLR-8191:
-------------------------------
    Attachment: SOLR-8191.patch

> CloudSolrStream close method NullPointerException
> -------------------------------------------------
>
>                 Key: SOLR-8191
>                 URL: https://issues.apache.org/jira/browse/SOLR-8191
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrJ
>    Affects Versions: Trunk
>            Reporter: Kevin Risden
>         Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null,
> yielding a NullPointerException in those cases when close() is called on it.
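The failure mode is `close()` being called before `open()` ever ran, so the client and per-shard streams are still null. A null-guarded `close()` of the kind the patch describes looks roughly like this (a sketch with made-up field names, not the actual `CloudSolrStream` code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public class NullSafeCloseSketch implements Closeable {
    Closeable client;                  // stays null if open() never ran
    List<? extends Closeable> streams; // likewise

    @Override
    public void close() throws IOException {
        // guard each resource: close() must be safe to call at any point
        // in the object's lifecycle, including before open()
        if (streams != null) {
            for (Closeable s : streams) {
                s.close();
            }
        }
        if (client != null) {
            client.close();
        }
    }

    public static void main(String[] args) throws IOException {
        new NullSafeCloseSketch().close(); // no NPE even though nothing was opened
        System.out.println("closed cleanly");
    }
}
```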
[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b85) - Build # 14347 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14347/
Java: 64bit/jdk1.9.0-ea-b85 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 9024 lines...]
    [javac] Compiling 856 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
    [javac] warning: [options] bootstrap class path not set in conjunction with -source 1.7
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/cache/HttpCacheHeaderUtil.java:59: error: incompatible types: Map
[jira] [Commented] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?
[ https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971090#comment-14971090 ]

Nicholas Knize commented on LUCENE-6780:
----------------------------------------

++. Merged to trunk and 5.4

> GeoPointDistanceQuery doesn't work with a large radius?
> -------------------------------------------------------
>
>                 Key: LUCENE-6780
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6780
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Michael McCandless
>         Attachments: LUCENE-6780-heap-used-hack.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances,
> so I modified the test to sometimes do so (like TestBKDTree) and hit test
> failures.
[jira] [Resolved] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?
[ https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless resolved LUCENE-6780.
----------------------------------------
       Resolution: Fixed
    Fix Version/s: 5.4
                   Trunk

> GeoPointDistanceQuery doesn't work with a large radius?
> -------------------------------------------------------
>
>                 Key: LUCENE-6780
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6780
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Michael McCandless
>             Fix For: Trunk, 5.4
>
>         Attachments: LUCENE-6780-heap-used-hack.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch,
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances,
> so I modified the test to sometimes do so (like TestBKDTree) and hit test
> failures.
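For context on why "large radius" is a hard case: once a query radius approaches half the Earth's circumference, the circle covers most of the globe, wrapping the poles and the dateline, so simple bounding-box prefiltering breaks down. A hedged, self-contained sketch of the underlying great-circle (haversine) distance — not Lucene's actual `GeoPointDistanceQuery` math, which operates on encoded points:

```java
public class HaversineSketch {
    static final double EARTH_RADIUS_KM = 6371.0; // mean radius, an approximation

    // great-circle distance between two lat/lon points, in kilometers
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // the farthest any point can be from another is the antipode,
        // half the circumference (~20,015 km); any radius near this
        // value means the query matches nearly everything
        System.out.println(haversineKm(0, 0, 0, 180));
    }
}
```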
Re: [JENKINS] Solr-Artifacts-5.x - Build # 970 - Failure
Sorry about that. Didn't test with Java7. I committed a fix.

On Fri, Oct 23, 2015 at 8:54 PM, Apache Jenkins Server wrote:

> Build: https://builds.apache.org/job/Solr-Artifacts-5.x/970/
>
> No tests ran.
>
> Build Log:
> [...truncated 13139 lines...]
>     [javac] Compiling 856 source files to /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build/solr-core/classes/java
>     [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/core/src/java/org/apache/solr/servlet/cache/HttpCacheHeaderUtil.java:59: error: incompatible types
>     [javac]     private static Map etagCoreCache = Collections.synchronizedMap(new WeakHashMap<>());
>     [javac]                                                                    ^
>     [javac]   required: Map
>     [javac]   found:    Map
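The generic type parameters in the javac output above appear to have been stripped by the mail archive, but the failure pattern is a known Java 7 limitation: under `-source 1.7` a diamond (`<>`) in a method-argument position is inferred without target typing (poly expressions arrived in Java 8), so `new WeakHashMap<>()` passed to `Collections.synchronizedMap` infers `WeakHashMap<Object,Object>` and the assignment fails. Spelling out the type arguments compiles on both Java 7 and later. A sketch with assumed type parameters (the real field's types are not visible in this thread):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class Java7DiamondFix {
    // Fails under -source 1.7 (diamond in argument position is not
    // target-typed there, so the RHS infers Map<Object,Object>):
    //   private static final Map<Object, String> cache =
    //       Collections.synchronizedMap(new WeakHashMap<>());
    //
    // Compiles everywhere: explicit type arguments.
    private static final Map<Object, String> cache =
        Collections.synchronizedMap(new WeakHashMap<Object, String>());

    static String roundTrip(String key, String value) {
        cache.put(key, value);
        return cache.get(key);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("k", "v"));
    }
}
```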
[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream
[ https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971077#comment-14971077 ]

Kevin Risden commented on SOLR-8190:
------------------------------------

Fixed stream tests to use assertEquals methods instead of assertTrue(boolean condition), since most of the conditions were equality checks.

> Implement Closeable on TupleStream
> ----------------------------------
>
>                 Key: SOLR-8190
>                 URL: https://issues.apache.org/jira/browse/SOLR-8190
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrJ
>    Affects Versions: Trunk
>            Reporter: Kevin Risden
>            Priority: Minor
>         Attachments: SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use
> try-with-resources
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
> in tests and in practice. This prevents TupleStreams from being left open
> when there is an error in the tests.
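What implementing `Closeable` buys is shown by this small sketch: a `TupleStream`-like class (the `DemoStream` name is illustrative, not SolrJ's API) can sit in a try-with-resources header, so `close()` runs automatically when the block exits, including when the body throws.

```java
import java.io.Closeable;

public class TryWithResourcesSketch {
    // stand-in for a TupleStream once it implements Closeable
    static class DemoStream implements Closeable {
        boolean closed = false;
        String read() { return "tuple"; }
        @Override public void close() { closed = true; }
    }

    // returns whether the stream was closed after the try block
    static boolean demo() {
        DemoStream leaked;
        try (DemoStream s = new DemoStream()) {
            leaked = s;
            s.read();
        } // close() is invoked here automatically, even on exception
        return leaked.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```

This is exactly the pattern the issue wants tests to use, so a failing assertion mid-test no longer leaves streams open.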