[jira] [Commented] (SOLR-3284) StreamingUpdateSolrServer swallows exceptions

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728020#comment-15728020
 ] 

Mark Miller commented on SOLR-3284:
---

You can use those hacks for specific use cases, but the only great solution for 
the general user client is really doing the work of efficiently returning error 
information for what could be tons of failed updates. 

It's not a bad idea to offer the option of quitting on the first error. 
I'd make it a required construction param. Most users I've seen who want 
this, though, want to count on updates stopping after the first failure so 
they can reason about how to handle the situation. In practice, a few 
updates beyond the failing one can still slip in, so it's not as great as it 
sounds even when you do want that kind of behavior. 
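The "stop after the first failure" behavior described above can be sketched with a minimal single-threaded model of a queue/runner client (plain JDK, not SolrJ; all names here are illustrative assumptions, not real Solr code):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy model of a queue/runner update client with an "abort on first error" flag.
// The caller has already queued documents by the time the runner hits the
// failure; in a real multi-threaded client, updates already in flight on other
// runner threads may still complete, which is why a few updates can land past
// the first failure. Here the single runner at least stops draining the queue.
public class AbortOnFirstErrorModel {
    static int processedCount(int totalDocs, int failingDoc) {
        Queue<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < totalDocs; i++) {
            queue.add(i); // caller keeps queueing; it has no idea a failure is coming
        }
        AtomicBoolean abort = new AtomicBoolean(false);
        int processed = 0;
        while (!queue.isEmpty() && !abort.get()) {
            int doc = queue.poll();
            processed++;
            if (doc == failingDoc) {
                abort.set(true); // first error: stop draining, report to caller
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        // 10 docs queued, doc 3 fails: docs 0-3 are attempted, 4-9 never run
        System.out.println(processedCount(10, 3)); // prints 4
    }
}
```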



> StreamingUpdateSolrServer swallows exceptions
> -
>
> Key: SOLR-3284
> URL: https://issues.apache.org/jira/browse/SOLR-3284
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 3.5, 4.0-ALPHA
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: SOLR-3284.patch
>
>
> StreamingUpdateSolrServer eats exceptions thrown by lower level code, such as 
> HttpClient, when doing adds.  It may happen with other methods, though I know 
> that query and deleteByQuery will throw exceptions.  I believe that this is a 
> result of the queue/Runner design.  That's what makes SUSS perform better, 
> but it means you sacrifice the ability to programmatically determine that 
> there was a problem with your update.  All errors are logged via slf4j, but 
> that's not terribly helpful except with determining what went wrong after the 
> fact.
> When using CommonsHttpSolrServer, I've been able to rely on getting an 
> exception thrown by pretty much any error, letting me use try/catch to detect 
> problems.
> There's probably enough dependent code out there that it would not be a good 
> idea to change the design of SUSS, unless there were alternate constructors 
> or additional methods available to configure new/old behavior.  Fixing this 
> is probably not trivial, so it's probably a better idea to come up with a new 
> server object based on CHSS.  This is outside my current skillset.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9706) fetchIndex blocks incoming queries when issued on a replica in SolrCloud

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727984#comment-15727984
 ] 

Mark Miller commented on SOLR-9706:
---

This has just never been a supported operation for the user. It's really an 
abuse of the system unless someone thinks through and designs support and tests 
for this type of thing.

> fetchIndex blocks incoming queries when issued on a replica in SolrCloud
> 
>
> Key: SOLR-9706
> URL: https://issues.apache.org/jira/browse/SOLR-9706
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>
> This is something of an edge case, but it's perfectly possible to issue a 
> fetchIndex command through the core admin API to a replica in SolrCloud. 
> While the fetch is going on, incoming queries are blocked. Then when the 
> fetch completes, all the queued-up queries execute.
> In the normal case, this is probably the proper behavior, as a fetchIndex 
> during "normal" SolrCloud operation indicates that the replica's index is too 
> far out of date and _shouldn't_ serve queries; this, however, is a special case.
> Why would one want to do this? Well, in _extremely_ high indexing throughput 
> situations, the additional time taken for the leader forwarding the query on 
> to a follower is too high. So there is an indexing cluster and a search 
> cluster and an external process that issues a fetchIndex to each replica in 
> the search cluster periodically.
> What do people think about an "expert" option for fetchIndex that would cause 
> a replica to behave like the old master/slave days and continue serving 
> queries while the fetchindex was going on? Or another solution?
> FWIW, here are the stack traces where the blocking is going on (6.3, roughly). 
> This is not hard to reproduce if you introduce an artificial delay in the 
> fetch command, then submit a fetchIndex and try to query.
> Blocked query thread(s)
> DefaultSolrCoreState.lock(159)
> DefaultSolrCoreState.getIndexWriter (104)
> SolrCore.openNewSearcher(1781)
> SolrCore.getSearcher(1931)
> SolrCore.getSearchers(1677)
> SolrCore.getSearcher(1577)
> SolrQueryRequestBase.getSearcher(115)
> QueryComponent.process(308).
> The stack trace that releases this is
> DefaultSolrCoreState.createMainIndexWriter(240)
> DefaultSolrCoreState.changeWriter(203)
> DefaultSolrCoreState.openIndexWriter(228) // LOCK RELEASED 2 lines later
> IndexFetcher.fetchLatestIndex(493) (approx, I have debugging code in there. 
> It's in the "finally" clause anyway.)
> IndexFetcher.fetchLatestIndex(251).
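The blocking shown in the stack traces above can be modeled with a plain-JDK read/write lock sketch (illustrative only, not Solr's actual DefaultSolrCoreState code): while the "fetch" holds the writer lock, query threads queue up, and they all execute once the lock is released.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy model: query threads need a read lock (think getIndexWriter /
// openNewSearcher); fetchIndex holds the write lock for the whole fetch.
public class FetchIndexBlockingModel {
    /** Returns {queries served while the write lock was held, queries served after release}. */
    public static int[] run(int queryThreads) {
        ReentrantReadWriteLock iwLock = new ReentrantReadWriteLock();
        AtomicInteger served = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(queryThreads);

        iwLock.writeLock().lock(); // "fetchIndex": old writer closed, lock held
        for (int i = 0; i < queryThreads; i++) {
            new Thread(() -> {
                iwLock.readLock().lock(); // query thread blocks here
                try { served.incrementAndGet(); }
                finally { iwLock.readLock().unlock(); done.countDown(); }
            }).start();
        }
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        int duringFetch = served.get();  // still 0: every query thread is blocked
        iwLock.writeLock().unlock();     // new writer created, lock released
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return new int[] { duringFetch, served.get() }; // all queued queries ran
    }

    public static void main(String[] args) {
        int[] counts = run(4);
        System.out.println(counts[0] + " " + counts[1]); // prints "0 4"
    }
}
```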






[jira] [Commented] (SOLR-9818) Solr admin UI rapidly retries any request(s) if it loses connection with the server

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727942#comment-15727942
 ] 

Mark Miller commented on SOLR-9818:
---

+1, not great behavior. A lot of these commands are not idempotent, and 
depending on the failure, the browser doesn't know what happened. 

> Solr admin UI rapidly retries any request(s) if it loses connection with the 
> server
> ---
>
> Key: SOLR-9818
> URL: https://issues.apache.org/jira/browse/SOLR-9818
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Ere Maijala
>
> It seems that whenever the Solr admin UI loses connection with the server, 
> whether because the server is too slow to answer or because it has gone away 
> completely, it starts hammering the server with the previous request until it 
> gets a success response. That can be especially bad if the last 
> attempted action was something like collection reload with a SolrCloud 
> instance. The admin UI will quickly add hundreds of reload commands to 
> overseer/collection-queue-work, which may essentially cause the replicas to 
> get overloaded when they're trying to handle all the reload commands.
> I believe the UI should never retry the previous command blindly when the 
> connection is lost, but instead just ping the server until it responds again.
> Steps to reproduce:
> 1.) Fire up Solr
> 2.) Open the admin UI in browser
> 3.) Open a web console in the browser to see the requests it sends
> 4.) Stop solr
> 5.) Try an action in the admin UI
> 6.) Observe the web console in browser quickly fill up with repeats of the 
> originally attempted request
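The suggested fix (ping until the server responds, never blindly replay the command) can be sketched as follows. This is an illustrative plain-Java sketch, not the admin UI's actual JavaScript; `waitForServer` and its parameters are assumed names:

```java
import java.util.function.Supplier;

// Sketch of the proposed behavior: on connection loss, poll a cheap idempotent
// ping until the server answers, and never automatically replay the original
// (possibly non-idempotent) command such as a collection reload.
public class ReconnectPolicy {
    /** Polls `ping` up to maxAttempts times; returns the attempt count on success, -1 on give-up. */
    public static int waitForServer(Supplier<Boolean> ping, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (ping.get()) {
                return attempt; // server is back: let the user decide whether to retry their action
            }
            // a real UI would sleep with backoff here instead of hammering the server
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] failuresLeft = {3}; // simulate a server that answers on the 4th ping
        int attempts = waitForServer(() -> failuresLeft[0]-- <= 0, 10);
        System.out.println(attempts); // prints 4: three failed pings, then success
    }
}
```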






[jira] [Commented] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727933#comment-15727933
 ] 

Mark Miller commented on SOLR-9824:
---

Hanging around for 250 ms after an update is pretty ugly. I'd like to think we 
can continue to find a better path. It's not reasonable in the other direction 
either: a single 10ms update shouldn't take 250ms. I refuse to believe this 
is what we have to do to fix this :)

There is normally a default of 250 because using that class *means* you want to 
blast documents in big batches. We are using it for large batches, small 
batches, and single updates, though - the same default is not appropriate for 
all of those cases.

> Documents indexed in bulk are replicated using too many HTTP requests
> -
>
> Key: SOLR-9824
> URL: https://issues.apache.org/jira/browse/SOLR-9824
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: David Smiley
>
> This takes awhile to explain; bear with me. While working on bulk indexing 
> small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
> shards would see an /update log message every ~6ms which is *way* too much.  
> These are requests from one shard (that isn't a leader/replica for these docs 
> but the recipient from my client) to the target shard leader (no additional 
> replicas).  One might ask why I'm not sending docs to the right shard in the 
> first place; I have a reason but it's beside the point -- there's a real 
> Solr perf problem here and this probably applies equally to 
> replicationFactor>1 situations too.  I could turn off the logs but that would 
> hide useful stuff, and it's disconcerting to me that so many short-lived HTTP 
> requests are happening, somehow at the behest of DistributedUpdateProcessor. 
>  After lots of analysis and debugging and hair pulling, I finally figured it 
> out.  
> In SOLR-7333 ([~tpot]) introduced an optimization called 
> {{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
> poll with a '0' timeout to the internal queue, so that it can close the 
> connection without it hanging around any longer than needed.  This part makes 
> sense to me.  Currently the only spot that has the smarts to set this flag is 
> {{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the 
> last document.  So if a shard received docs in a javabin stream (but not 
> other formats) one would expect the _last_ document to have this flag.  
> There's even a test.  Docs without this flag get the default poll time; for 
> javabin it's 25ms.  Okay.
> I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
> javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  
> I didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
> DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
> (defaulting to javabin) to send each document separately without any leading 
> marker or trailing marker.  For the XML format by comparison, there is a 
> leading and trailing marker ( ... ).  Since there's no outer 
> container for the javabin unmarshalling to detect the last document, it marks 
> _every_ document as {{req.lastDocInBatch()}}!  Ouch!
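The effect described above can be modeled with a small sketch of the runner loop: each document carries a last-in-batch flag, and the runner closes the connection whenever it sees the flag set. This is a toy illustration, not the real ConcurrentUpdateSolrClient code, and `connectionsUsed` is an assumed name:

```java
// Toy model of the consequence of marking every doc as lastDocInBatch:
// with the flag on every document, the runner's queue poll('0' timeout)
// comes up empty after each doc and the connection is closed, so a batch
// of N docs costs N HTTP requests instead of 1.
public class LastDocInBatchModel {
    /** flags[i] = lastDocInBatch for doc i; returns how many connections get opened. */
    public static int connectionsUsed(boolean[] flags) {
        int connections = 0;
        boolean open = false;
        for (boolean last : flags) {
            if (!open) { connections++; open = true; } // open a new HTTP request
            if (last) { open = false; }                // poll times out immediately -> close it
        }
        return connections;
    }

    public static void main(String[] args) {
        boolean[] correct = {false, false, false, true}; // only the true last doc flagged
        boolean[] buggy   = {true, true, true, true};    // javabin marks every doc
        System.out.println(connectionsUsed(correct)); // prints 1
        System.out.println(connectionsUsed(buggy));   // prints 4
    }
}
```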






[jira] [Commented] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727897#comment-15727897
 ] 

Mark Miller commented on SOLR-9824:
---

Nice find.







[jira] [Commented] (SOLR-9815) Verbose Garbage Collection logging is on by default

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727866#comment-15727866
 ] 

Mark Miller commented on SOLR-9815:
---

+1

> Verbose Garbage Collection logging is on by default
> ---
>
> Key: SOLR-9815
> URL: https://issues.apache.org/jira/browse/SOLR-9815
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 6.3
>Reporter: Gethin James
>Priority: Minor
>
> There have been some excellent logging fixes in 6.3 
> (http://www.cominvent.com/2016/11/07/solr-logging-just-got-better/).  However 
> now, by default, Solr is logging a great deal of garbage collection 
> information.
> It seems that this logging is excessive, can we make the default logging to 
> not be verbose?
> For linux/mac setting GC_LOG_OPTS="" in solr.in.sh seems to work around the 
> issue, but looking at solr.cmd I don't think that will work for windows.






[jira] [Commented] (SOLR-9826) Shutting down leader when it's sending updates makes another active node go into recovery

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727854#comment-15727854
 ] 

Mark Miller commented on SOLR-9826:
---

We should probably either stop interrupting the update executor on shutdown or 
special case an interruption so that the leader doesn't consider it a failed 
update to a replica. We probably want to fail the update to the user in that 
case. 
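The special-casing suggested above could look roughly like the following sketch. All names here are illustrative assumptions, not the actual DistributedUpdateProcessor API; the point is only to show classifying an interruption during shutdown differently from a genuine replica failure:

```java
import java.nio.channels.ClosedByInterruptException;

// Sketch: during shutdown, an interrupted update to a replica should fail the
// update back to the user rather than being treated as a replica failure
// (which would put a healthy replica into recovery).
public class ShutdownErrorClassifier {
    public enum Outcome { REPLICA_FAILED, SHUTDOWN_INTERRUPTED }

    public static Outcome classify(Throwable t, boolean shuttingDown) {
        // Walk the cause chain looking for an interruption that happened
        // because we are shutting down the update executor.
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (shuttingDown
                    && (c instanceof InterruptedException
                        || c instanceof ClosedByInterruptException)) {
                return Outcome.SHUTDOWN_INTERRUPTED; // fail to the user, don't recover the replica
            }
        }
        return Outcome.REPLICA_FAILED; // genuine failure: leader asks replica to recover
    }

    public static void main(String[] args) {
        Throwable t = new RuntimeException(new InterruptedException());
        System.out.println(classify(t, true));  // prints SHUTDOWN_INTERRUPTED
        System.out.println(classify(t, false)); // prints REPLICA_FAILED
    }
}
```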

> Shutting down leader when it's sending updates makes another active node go 
> into recovery
> -
>
> Key: SOLR-9826
> URL: https://issues.apache.org/jira/browse/SOLR-9826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Ere Maijala
>  Labels: solrcloud
> Attachments: failure.log
>
>
> If the leader in SolrCloud is sending updates to a follower when it's shut 
> down, it forces the replica it can't communicate with (due to being shut 
> down, I assume) to go into recovery. I'll attach a log excerpt that shows the 
> related messages.






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-12-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727716#comment-15727716
 ] 

Hoss Man commented on SOLR-5944:


bq. I saw consistent failures on...

I'm seeing consistent failures from most of the randomized tests.

bq. I haven't looked deeply into why it could be failing, but a preliminary look 
at the logs led me to believe that it is a test problem.

can you elaborate on what in the logs gave you that impression?

If it were a test bug -- ie: a bug in tracking the model state compared to the 
in-place atomic updates -- I would expect the failures to reproduce if you 
switched the test to use a regular (indexed+stored) long field instead of a DVO 
field -- ie: use the classic atomic update code instead of the inplace update 
code.

But when I tried toggling the field used (see comments in 
{{checkRandomReplay}}) I couldn't reproduce any failures.

I added some hackish logging to {{checkRandomReplay}} to get it to dump a short 
sequence that failed and turned that into a new test method 
({{testReplay_nocommit}}) and then I distilled what seems to be the key 
problematic bits into an even shorter test: 
{{testReplay_SetOverriddenWithNoValueThenInc}} ...

{code}
  public void testReplay_SetOverriddenWithNoValueThenInc() throws Exception {
    final String inplaceField = "inplace_l_dvo";
    // final String inplaceField = "inplace_nocommit_not_really_l"; // nocommit: "inplace_l_dvo"

    checkReplay(inplaceField,
                //
                sdoc("id", "1", inplaceField, map("set", 555L)),
                SOFTCOMMIT,
                sdoc("id", "1", "regular_l", 666L), // NOTE: no inplaceField, regular add w/overwrite
                sdoc("id", "1", inplaceField, map("inc", -77)),
                HARDCOMMIT);
  }
{code}

...all of that is now on the branch.

If you toggle the above code to use regular atomic updates, then it passes -- 
but as written, using the new in-place update code paths, it fails like so...

{noformat}
   [junit4] FAILURE 0.54s | TestInPlaceUpdatesStandalone.testReplay_SetOverriddenWithNoValueThenInc <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<-77> but was:<478>
   [junit4]>at __randomizedtesting.SeedInfo.seed([9D6E895FCBA28315:6DDD2091B324AFF2]:0)
   [junit4]>at org.apache.solr.update.TestInPlaceUpdatesStandalone.checkReplay(TestInPlaceUpdatesStandalone.java:920)
   [junit4]>at org.apache.solr.update.TestInPlaceUpdatesStandalone.testReplay_SetOverriddenWithNoValueThenInc(TestInPlaceUpdatesStandalone.java:590)
{noformat}

...looks like a genuine bug to me: when a regular update overwrites a doc that 
had a DVO field value, a subsequent "inc" operation on the DVO field is 
picking up the _old_ value instead of operating against an implicit default of 
0.

(This kind of corner case is what makes randomized testing totally worth the 
time and effort)
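The expected semantics of that failing sequence can be modeled in a few lines of plain Java (an illustrative model of the replay, not Solr's update code; the map stands in for the document):

```java
import java.util.HashMap;
import java.util.Map;

// Models the expected semantics of the failing sequence above: a "set" writes
// the field, a regular add with overwrite replaces the whole document (so the
// field is gone), and a later "inc" must therefore start from the implicit
// default of 0 rather than the pre-overwrite value.
public class InPlaceReplayModel {
    public static long replaySetOverwriteInc() {
        Map<String, Long> doc = new HashMap<>();
        doc.put("inplace_l_dvo", 555L);                    // atomic {set: 555}
        doc = new HashMap<>();                             // regular add w/overwrite: field dropped
        doc.put("regular_l", 666L);
        long base = doc.getOrDefault("inplace_l_dvo", 0L); // implicit default, NOT the old 555
        doc.put("inplace_l_dvo", base + (-77));            // atomic {inc: -77}
        return doc.get("inplace_l_dvo");                   // expected result: -77
    }

    public static void main(String[] args) {
        System.out.println(replaySetOverwriteInc()); // prints -77
    }
}
```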

bq. Btw, do you know how to enable commit notifications to show up here for the 
jira/solr-5944 branch?

IIRC comments about commits to jira/* branches are suppressed intentionally as 
noise, because it's expected that there will be lots of iteration on the 
branches, some of which might be thrown away, and for posterity what matters is 
only commits to mainline dev branches.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> 

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-12-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727635#comment-15727635
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


[~hossman], I've created a branch (jira/solr-5944) for this, and committed your 
latest patch to the branch. Also, I've fixed the ClassCastException bug that 
you saw and also fixed a javadoc bug and removed unused imports. 
(https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c7a2a6). However, 
I saw consistent failures on 
TestInPlaceUpdatesStandalone#testReplay_Random_FewDocsManyShortSequences which 
you recently added. I haven't looked deeply into why it could be failing, but a 
preliminary look at the logs led me to believe that it is a test problem.

Btw, do you know how to enable commit notifications to show up here for the 
jira/solr-5944 branch?

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, defensive-checks.log.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-9829) Solr cannot provide index service after a large GC pause but core state in ZK is still active

2016-12-06 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727630#comment-15727630
 ] 

Varun Thacker commented on SOLR-9829:
-

Hi Mark,

I think you are referring to SOLR-7956, which was fixed in Solr 5.4? This issue 
is marked as affecting 5.3.2. 

> Solr cannot provide index service after a large GC pause but core state in ZK 
> is still active
> -
>
> Key: SOLR-9829
> URL: https://issues.apache.org/jira/browse/SOLR-9829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 5.3.2
> Environment: Redhat enterprise server 64bit 
>Reporter: Forest Soup
>
> When Solr meets a large GC pause like 
> https://issues.apache.org/jira/browse/SOLR-9828 , the collections on it 
> cannot provide service and never come back until restart. 
> But in the ZooKeeper, the cores on that server still shows active. 
> Some /update requests got http 500 due to "IndexWriter is closed". Some got 
> http 400 due to "possible analysis error.", whose root cause is still 
> "IndexWriter is closed" and which we think should return 500 
> instead (documented in https://issues.apache.org/jira/browse/SOLR-9825).
> Our questions in this JIRA are:
> 1. Should Solr mark cores as down in ZK when it cannot provide index service?
> 2. Is it possible for Solr to re-open the IndexWriter to provide index service again?
> solr log snippets:
> 2016-11-22 20:47:37.274 ERROR (qtp2011912080-76) [c:collection12 s:shard1 
> r:core_node1 x:collection12_shard1_replica1] o.a.s.c.SolrCore 
> org.apache.solr.common.SolrException: Exception writing document id 
> Q049dXMxYjMtbWFpbDg4L089bGxuX3VzMQ==20841350!270CE4F9C032EC26002580730061473C 
> to the index; possible analysis error.
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:167)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.CloneFieldUpdateProcessorFactory$1.processAdd(CloneFieldUpdateProcessorFactory.java:231)
>   at 
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:143)
>   at 
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:113)
>   at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:672)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:463)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:199)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> 

[jira] [Commented] (SOLR-9829) Solr cannot provide index service after a large GC pause but core state in ZK is still active

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727609#comment-15727609
 ] 

Mark Miller commented on SOLR-9829:
---

We should be more resilient in the face of some of these types of IO errors, 
but I'm surprised that "Caused by: java.nio.channels.ClosedByInterruptException" 
happens in 5.3. We shouldn't be interrupting Lucene index code anymore, but 
perhaps it crept back in, or I'm not remembering well and it was fixed after 5.3.

> Solr cannot provide index service after a large GC pause but core state in ZK 
> is still active
> -
>
> Key: SOLR-9829
> URL: https://issues.apache.org/jira/browse/SOLR-9829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 5.3.2
> Environment: Redhat enterprise server 64bit 
>Reporter: Forest Soup
>
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:199)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> 

[jira] [Commented] (SOLR-4690) Highlighting Doesn't works when boost is used along with query

2016-12-06 Thread Vladimir Strugatsky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727574#comment-15727574
 ] 

Vladimir Strugatsky commented on SOLR-4690:
---

I was able to work around this problem by replacing {!type=edismax} with 
{!type=dismax}, but this assumes you don't use edismax-specific features (other 
than boost).

> Highlighting Doesn't works when boost is used along with query
> --
>
> Key: SOLR-4690
> URL: https://issues.apache.org/jira/browse/SOLR-4690
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.1
> Environment: windows and unix both
>Reporter: lukes shaw
>Priority: Critical
>
> Hi everyone, recently I was trying to use boost in the query with 
> highlighting enabled at the same time. But if I have the boost, highlighting 
> doesn't work; the moment I remove the boost, highlighting starts working again.
> Below is the request I am sending.
> http://localhost:8983/solr/collection1/select?q=%2B_query_%3A%22
> {!type%3Dedismax+qf%3D%27body^1.0+title^10.0%27+pf%3D%27body^2%27+ps%3D36+pf2%3D%27body^2%27+pf3%3D%27body^2%27+v%3D%27apple%27+mm%3D100}%22=true=content_group_id_k=true=3=id%2Clanguage_k%2Clast_modified_date_dt%2Ctitle=20=1=200=body=title=true=%2B_query_%3A%22{!type%3Dedismax+qf%3D%27body^1.0+title^10.0%27+pf%3D%27body^2%27+ps%3D36+pf2%3D%27body^2%27+pf3%3D%27body^2%27+v%3D%27apple%27+mm%3D100}
> %22=true=json=true=1=200=body=title=true=boost_weight
> OR
> http://localhost:8983/solr/collection1/select?q=%2B_query_%3A%22
> {!type%3Dedismax+qf%3D%27body^1.0+title^10.0%27+pf%3D%27body^2%27+ps%3D36+pf2%3D%27body^2%27+pf3%3D%27body^2%27+v%3D%27apple%27+mm%3D100}
> %22=true=content_group_id_k=true=3=id%2Clanguage_k%2Clast_modified_date_dt%2Ctitle=20=1=200=body=title=true=true=json=true=1=200=body=title=true=boost_weight
> But if I send the above two requests without the boost, or use bf (additive) 
> instead of boost (multiplicative), things work, but then I don't get the 
> multiplicative boost.
> I am using Solr 4.1.0.
> Any help with this is really appreciated.
> Regards,
> Lukes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3695 - Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3695/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([4C8F30C31FFB4969:8D47ED85BE9D98C0]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:652)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability(TestLBHttpSolrClient.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 607 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/607/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([CA8AF033B1AAC1FB:82FF8487B799EE6E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Mismatch in counts 

[jira] [Reopened] (LUCENE-7575) UnifiedHighlighter: add requireFieldMatch=false support

2016-12-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened LUCENE-7575:
--

6x backport (commit 4e7a7dbf) wasn't clean -- it added a 7.0.0 section to 
CHANGES.txt ... not sure if anything else came along for the ride that wasn't 
supposed to.

> UnifiedHighlighter: add requireFieldMatch=false support
> ---
>
> Key: LUCENE-7575
> URL: https://issues.apache.org/jira/browse/LUCENE-7575
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: LUCENE-7575.patch, LUCENE-7575.patch, LUCENE-7575.patch
>
>
> The UnifiedHighlighter (like the PostingsHighlighter) only supports 
> highlighting queries for the same fields that are being highlighted.  The 
> original Highlighter and FVH support loosening this, AKA 
> requireFieldMatch=false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:branch_6x: LUCENE-7575: Add UnifiedHighlighter field matcher predicate (AKA requireFieldMatch=false)

2016-12-06 Thread Chris Hostetter

David: something went haywire with your backport -- it added a 7.0.0 
section to CHANGES.txt, which is breaking the smoketester jenkins

: Date: Mon,  5 Dec 2016 21:21:19 + (UTC)
: From: dsmi...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: lucene-solr:branch_6x: LUCENE-7575: Add UnifiedHighlighter field
: matcher predicate (AKA requireFieldMatch=false)
: 
: Repository: lucene-solr
: Updated Branches:
:   refs/heads/branch_6x cdce62108 -> 4e7a7dbf9
: 
: 
: LUCENE-7575: Add UnifiedHighlighter field matcher predicate (AKA 
requireFieldMatch=false)
: 
: (cherry picked from commit 2e948fe)
: 
: 
: Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
: Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4e7a7dbf
: Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4e7a7dbf
: Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4e7a7dbf
: 
: Branch: refs/heads/branch_6x
: Commit: 4e7a7dbf9a56468f41e89f5289833081b27f1b14
: Parents: cdce621
: Author: David Smiley 
: Authored: Mon Dec 5 16:11:57 2016 -0500
: Committer: David Smiley 
: Committed: Mon Dec 5 16:21:12 2016 -0500
: 
: --
:  lucene/CHANGES.txt  |  56 
:  .../uhighlight/MemoryIndexOffsetStrategy.java   |  10 +-
:  .../uhighlight/MultiTermHighlighting.java   |  37 +--
:  .../lucene/search/uhighlight/PhraseHelper.java  | 158 ---
:  .../search/uhighlight/UnifiedHighlighter.java   |  64 +++--
:  .../uhighlight/TestUnifiedHighlighter.java  | 275 +++
:  .../TestUnifiedHighlighterExtensibility.java|   3 +-
:  7 files changed, 519 insertions(+), 84 deletions(-)
: --
: 
: 
: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e7a7dbf/lucene/CHANGES.txt
: --
: diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
: index b0a5f9c..853f171 100644
: --- a/lucene/CHANGES.txt
: +++ b/lucene/CHANGES.txt
: @@ -3,6 +3,57 @@ Lucene Change Log
:  For more information on past and future Lucene versions, please see:
:  http://s.apache.org/luceneversions
:  
: +=== Lucene 7.0.0 ===
: +
: +API Changes
: +
: +* LUCENE-2605: Classic QueryParser no longer splits on whitespace by default.
: +  Use setSplitOnWhitespace(true) to get the old behavior.  (Steve Rowe)
: +
: +* LUCENE-7369: Similarity.coord and BooleanQuery.disableCoord are removed.
: +  (Adrien Grand)
: +
: +* LUCENE-7368: Removed query normalization. (Adrien Grand)
: +
: +* LUCENE-7355: AnalyzingQueryParser has been removed as its functionality has
: +  been folded into the classic QueryParser. (Adrien Grand)
: +
: +* LUCENE-7407: Doc values APIs have been switched from random access
: +  to iterators, enabling future codec compression improvements. (Mike
: +  McCandless)
: +
: +* LUCENE-7475: Norms now support sparsity, allowing to pay for what is
: +  actually used. (Adrien Grand)
: +
: +* LUCENE-7494: Points now have a per-field API, like doc values. (Adrien 
Grand)
: +
: +Bug Fixes
: +
: +Improvements
: +
: +* LUCENE-7489: Better storage of sparse doc-values fields with the default
: +  codec. (Adrien Grand)
: +
: +Optimizations
: +
: +* LUCENE-7416: BooleanQuery optimizes queries that have queries that occur 
both
: +  in the sets of SHOULD and FILTER clauses, or both in MUST/FILTER and 
MUST_NOT
: +  clauses. (Spyros Kapnissis via Adrien Grand, Uwe Schindler)
: +
: +* LUCENE-7506: FastTaxonomyFacetCounts should use CPU in proportion to
: +  the size of the intersected set of hits from the query and documents
: +  that have a facet value, so sparse faceting works as expected
: +  (Adrien Grand via Mike McCandless)
: +
: +* LUCENE-7519: Add optimized APIs to compute browse-only top level
: +  facets (Mike McCandless)
: +
: +Other
: +
: +* LUCENE-7328: Remove LegacyNumericEncoding from GeoPointField. (Nick Knize)
: +
: +* LUCENE-7360: Remove Explanation.toHtml() (Alan Woodward)
: +
:  === Lucene 6.4.0 ===
:  
:  API Changes
: @@ -73,6 +124,11 @@ Improvements
:  * LUCENE-7537: Index time sorting now supports multi-valued sorts
:using selectors (MIN, MAX, etc.) (Jim Ferenczi via Mike McCandless)
:  
: +* LUCENE-7575: UnifiedHighlighter can now highlight fields with queries that 
don't
: +  necessarily refer to that field (AKA requireFieldMatch==false). Disabled 
by default.
: +  See UH get/setFieldMatcher. (Jim Ferenczi via David Smiley)
: +
: +
:  Optimizations
:  
:  * LUCENE-7568: Optimize merging when index sorting is used but the
: 
: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e7a7dbf/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/MemoryIndexOffsetStrategy.java
: 

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 201 - Failure

2016-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/201/

No tests ran.

Build Log:
[...truncated 40550 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (13.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1472, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1416, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1451, in smokeTest
   [smoker] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 370, in checkSigs
   [smoker] testChanges(project, version, changesURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 418, in testChanges
   [smoker] checkChangesContent(s, version, changesURL, project, True)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 475, in checkChangesContent
   [smoker] raise RuntimeError('Future release %s is greater than %s in %s' 
% (release, version, name))
   [smoker] RuntimeError: Future release 7.0.0 is greater than 6.4.0 in 
file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene/changes/Changes.html

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/build.xml:561: 
exec returned: 1

Total time: 10 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
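
The smoke tester aborts when Changes.html mentions a release newer than the version being built; the guard is essentially a numeric comparison of dotted version strings. A minimal sketch of that idea in Java (hypothetical class for illustration; the actual check lives in the Python script dev-tools/scripts/smokeTestRelease.py):

```java
import java.util.Arrays;

public class FutureReleaseCheck {
    // Compare dotted version strings numerically, e.g. "7.0.0" vs "6.4.0".
    static int compare(String a, String b) {
        int[] va = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] vb = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(va.length, vb.length); i++) {
            int x = i < va.length ? va[i] : 0;
            int y = i < vb.length ? vb[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    public static void main(String[] args) {
        String building = "6.4.0";
        for (String release : new String[]{"6.4.0", "7.0.0"}) {
            if (compare(release, building) > 0) {
                // Mirrors the smoke tester's RuntimeError for future releases
                System.out.println("Future release " + release
                        + " is greater than " + building);
            }
        }
    }
}
```

With a stray 7.0.0 section in the 6.x CHANGES, the comparison flags 7.0.0 as a future release and the build fails, which is exactly the RuntimeError in the log above.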




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6272 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6272/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001\testThreadSafety-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001\testThreadSafety-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_B687D1961DBCD97B-001

at __randomizedtesting.SeedInfo.seed([B687D1961DBCD97B]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([C82C79CF1F12C814:A0934CE5CF88DAF8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 

[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727033#comment-15727033
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit fcccd317ddb44a742a0b3265fcf32923649f38cd in lucene-solr's branch 
refs/heads/apiv2 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcccd31 ]

LUCENE-7576: detect when special case automaton is passed to Terms.intersect


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727034#comment-15727034
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit 8cbcbc9d956754de1fab2c626705aa6d6ab9f910 in lucene-solr's branch 
refs/heads/apiv2 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cbcbc9 ]

LUCENE-7576: fix other codecs to detect when special case automaton is passed 
to Terms.intersect


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...






[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727035#comment-15727035
 ] 

ASF subversion and git services commented on SOLR-9819:
---

Commit 39c2f3d80fd585c7ae4a4a559d53a19a3f100061 in lucene-solr's branch 
refs/heads/apiv2 from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=39c2f3d ]

SOLR-9819: Add new line to the end of SHA


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (LUCENE-7563) BKD index should compress unused leading bytes

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727037#comment-15727037
 ] 

ASF subversion and git services commented on LUCENE-7563:
-

Commit bd8b191505d92c89a483a6189497374238476a00 in lucene-solr's branch 
refs/heads/apiv2 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd8b191 ]

LUCENE-7563: remove redundant array copy in PackedIndexTree.clone


> BKD index should compress unused leading bytes
> --
>
> Key: LUCENE-7563
> URL: https://issues.apache.org/jira/browse/LUCENE-7563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7563-prefixlen-unary.patch, 
> LUCENE-7563-prefixlen-unary.patch, LUCENE-7563.patch, LUCENE-7563.patch, 
> LUCENE-7563.patch, LUCENE-7563.patch
>
>
> Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per 
> dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom 
> two bytes in a given segment, we shouldn't store all those leading 0s in the 
> index.






[jira] [Commented] (SOLR-9827) Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just SolrException for remote errors

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727040#comment-15727040
 ] 

ASF subversion and git services commented on SOLR-9827:
---

Commit c164f7e35e45d0bfa844cd450ffb4865c27fc4d5 in lucene-solr's branch 
refs/heads/apiv2 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c164f7e ]

SOLR-9827: Make ConcurrentUpdateSolrClient create RemoteSolrExceptions in case 
of remote errors instead of SolrException


> Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just 
> SolrException for remote errors
> --
>
> Key: SOLR-9827
> URL: https://issues.apache.org/jira/browse/SOLR-9827
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-9827.patch
>
>
> Also, improve the exception message to include the remote error message when 
> present. Specially when Solr is logging these errors (e.g. 
> DistributedUpdateProcessor), this should make it easier to understand that 
> the error was in the remote host and not in the one logging this exception. 






[jira] [Commented] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727039#comment-15727039
 ] 

ASF subversion and git services commented on SOLR-9832:
---

Commit bf3a3137be8a70ceed884e87c3ada276e82b187b in lucene-solr's branch 
refs/heads/apiv2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf3a313 ]

SOLR-9832: Schema modifications are not immediately visible on the coordinating 
node


> Schema modifications are not immediately visible on the coordinating node
> -
>
> Key: SOLR-9832
> URL: https://issues.apache.org/jira/browse/SOLR-9832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9832.patch
>
>
> As noted on SOLR-9751, 
> {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing 
> on Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of 
> the time.  E.g. from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest 
> -Dtests.method=testAdd2Fields -Dtests.seed=CD72F125201C0C76 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=is 
> -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>   [junit4] ERROR   0.09s J0 | 
> PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
>   [junit4]> Throwable #1: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
> https://127.0.0.1:33343/solr/managed-preanalyzed]
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
>   [junit4]>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4]> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
> /schema/fields/field2
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> {noformat}






[jira] [Commented] (LUCENE-7563) BKD index should compress unused leading bytes

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727036#comment-15727036
 ] 

ASF subversion and git services commented on LUCENE-7563:
-

Commit 5e8db2e068f2549b9619d5ac48a50c8032fc292b in lucene-solr's branch 
refs/heads/apiv2 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e8db2e ]

LUCENE-7563: use a compressed format for the in-heap BKD index


> BKD index should compress unused leading bytes
> --
>
> Key: LUCENE-7563
> URL: https://issues.apache.org/jira/browse/LUCENE-7563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7563-prefixlen-unary.patch, 
> LUCENE-7563-prefixlen-unary.patch, LUCENE-7563.patch, LUCENE-7563.patch, 
> LUCENE-7563.patch, LUCENE-7563.patch
>
>
> Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per 
> dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom 
> two bytes in a given segment, we shouldn't store all those leading 0s in the 
> index.






[jira] [Commented] (LUCENE-7575) UnifiedHighlighter: add requireFieldMatch=false support

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727038#comment-15727038
 ] 

ASF subversion and git services commented on LUCENE-7575:
-

Commit 2e948fea300f883b7dfb586e303d5720d09b3210 in lucene-solr's branch 
refs/heads/apiv2 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e948fe ]

LUCENE-7575: Add UnifiedHighlighter field matcher predicate (AKA 
requireFieldMatch=false)


> UnifiedHighlighter: add requireFieldMatch=false support
> ---
>
> Key: LUCENE-7575
> URL: https://issues.apache.org/jira/browse/LUCENE-7575
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: LUCENE-7575.patch, LUCENE-7575.patch, LUCENE-7575.patch
>
>
> The UnifiedHighlighter (like the PostingsHighlighter) only supports 
> highlighting queries for the same fields that are being highlighted.  The 
> original Highlighter and FVH support loosening this, AKA 
> requireFieldMatch=false.






[jira] [Resolved] (SOLR-5043) hostname lookup in SystemInfoHandler should be refactored so it's possible to not block core (re)load for long periods on misconfigured systems

2016-12-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5043.

   Resolution: Fixed
 Assignee: Hoss Man
Fix Version/s: 6.4
   master (7.0)

bq.  Is anything keeping you from pushing this into one of the next updates? 

Sorry -- I lost track of it and didn't see your previous comment verifying that 
the patch worked out for you.

bq. The only question I have is why you don't set the host name to the inet 
address (or maybe even the result of getHostName?) in the case when the DNS 
lookup is suppressed. ...

Well, two reasons...

#1) Conceptually, I don't like the idea of redefining what SystemInfoHandler 
reports as the {{host}} value ... this has always meant "either the canonical 
hostname, or null if it can't be determined" and I don't really like the idea 
that _sometimes_ it's something else -- particularly when the primary use case 
that might lead to "sometimes" is misconfigured DNS -- I don't want to give 
users the impression "The (canonical) hostname of this Solr node is {{foobar}}" 
when {{foobar}} is just some locally configured hostname and that name can't 
actually be used to connect to the Solr node.

Adding distinct SystemInfoHandler variables for the IP addr, or the (locally 
configured) hostname, might be conceptually OK -- but personally I don't see 
much value in it, which leads me to...

#2) The way the InetAddress API is designed, just calling 
{{InetAddress.getLocalHost()}} causes a DNS lookup to happen -- leading to the 
same potential long-pause delay this issue was opened about (perhaps not in the 
exact misconfiguration situation you face, but it could happen in other 
misconfiguration situations). Likewise, {{getHostName()}} will do a reverse 
lookup in some situations if there isn't any locally configured hostname.

The bottom line: since the entire premise of this issue is "sometimes people 
have badly configured DNS and/or hostname settings, and we should give them a 
way to make life less painful," I didn't want to make too many assumptions 
about the specific ways their DNS and/or hostname settings might be badly 
configured, or introduce similar problems or more complexity just trying to get 
the IP addr.
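For what it's worth, the non-blocking approach being discussed here can be sketched in plain Java. This is only a minimal illustration of the idea -- the {{LazyHostname}} class and its methods are hypothetical, not Solr's actual SystemInfoHandler code: kick off the potentially slow canonical-hostname lookup on a background thread at construction time, and report null until it completes.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: start the (possibly slow) DNS lookup asynchronously
// so construction never blocks, and expose "either the canonical hostname
// or null if it can't be determined (yet)".
public class LazyHostname {
    final CompletableFuture<String> hostname =
        CompletableFuture.supplyAsync(() -> {
            try {
                // This call may block for several seconds on hosts with
                // misconfigured DNS -- which is why it runs off-thread.
                return InetAddress.getLocalHost().getCanonicalHostName();
            } catch (UnknownHostException e) {
                return null; // hostname simply can't be determined
            }
        });

    /** Non-blocking: returns the hostname, or null if not resolved yet. */
    public String get() {
        return hostname.getNow(null);
    }

    /** Blocking accessor with a timeout, for callers that can wait. */
    public String await(long seconds) throws Exception {
        return hostname.get(seconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        LazyHostname h = new LazyHostname(); // returns immediately
        System.out.println("resolved: " + h.await(30));
    }
}
```

A real patch would also have to decide what the UI shows while the lookup is still pending; the point of the sketch is only that handler init no longer waits on nameservice availability.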


> hostname lookup in SystemInfoHandler should be refactored so it's possible to 
> not block core (re)load for long periods on misconfigured systems
> 
>
> Key: SOLR-5043
> URL: https://issues.apache.org/jira/browse/SOLR-5043
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-5043-lazy.patch, SOLR-5043.patch, SOLR-5043.patch
>
>
> SystemInfoHandler currently looks up the hostname of the machine on its init, 
> and caches it for its lifecycle -- there is a comment to the effect that the 
> reason for this is that on some machines (notably ones with wacky DNS 
> settings) looking up the hostname can take a very long time in some JVMs...
> {noformat}
>   // on some platforms, resolving canonical hostname can cause the thread
>   // to block for several seconds if nameservices aren't available
>   // so resolve this once per handler instance 
>   //(ie: not static, so core reload will refresh)
> {noformat}
> But as we move forward with a lot more multi-core, SolrCloud, dynamically 
> updated instances, even paying this cost per core reload is expensive.
> We should refactor this so that SystemInfoHandler instances init immediately, 
> with some kind of lazy loading of the hostname info in a background thread 
> (especially since the only real point of having that info here is for UI use, 
> so you can keep track of what machine you are looking at).






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18460 - Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18460/
Java: 32bit/jdk-9-ea+140 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([75B1E9D0B0BEB361:82C2078876561C87]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1329)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-5043) hostname lookup in SystemInfoHandler should be refactored so it's possible to not block core (re)load for long periods on misconfigured systems

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726818#comment-15726818
 ] 

ASF subversion and git services commented on SOLR-5043:
---

Commit 135604f6327032d0258227aaa524369203d40822 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=135604f ]

SOLR-5043: New solr.dns.prevent.reverse.lookup system property that can be used 
to prevent long core (re)load delays on systems with missconfigured hostname/DNS

(cherry picked from commit 8b98b158ff9cc2a71216e12c894ca14352d31f0e)


> hostname lookup in SystemInfoHandler should be refactored so it's possible to 
> not block core (re)load for long periods on misconfigured systems
> 
>
> Key: SOLR-5043
> URL: https://issues.apache.org/jira/browse/SOLR-5043
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-5043-lazy.patch, SOLR-5043.patch, SOLR-5043.patch
>
>
> SystemInfoHandler currently looks up the hostname of the machine on its init, 
> and caches it for its lifecycle -- there is a comment to the effect that the 
> reason for this is that on some machines (notably ones with wacky DNS 
> settings) looking up the hostname can take a very long time in some JVMs...
> {noformat}
>   // on some platforms, resolving canonical hostname can cause the thread
>   // to block for several seconds if nameservices aren't available
>   // so resolve this once per handler instance 
>   //(ie: not static, so core reload will refresh)
> {noformat}
> But as we move forward with a lot more multi-core, SolrCloud, dynamically 
> updated instances, even paying this cost per core reload is expensive.
> We should refactor this so that SystemInfoHandler instances init immediately, 
> with some kind of lazy loading of the hostname info in a background thread 
> (especially since the only real point of having that info here is for UI use, 
> so you can keep track of what machine you are looking at).






[jira] [Commented] (SOLR-5043) hostname lookup in SystemInfoHandler should be refactored so it's possible to not block core (re)load for long periods on misconfigured systems

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726819#comment-15726819
 ] 

ASF subversion and git services commented on SOLR-5043:
---

Commit 8b98b158ff9cc2a71216e12c894ca14352d31f0e in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b98b15 ]

SOLR-5043: New solr.dns.prevent.reverse.lookup system property that can be used 
to prevent long core (re)load delays on systems with missconfigured hostname/DNS


> hostname lookup in SystemInfoHandler should be refactored so it's possible to 
> not block core (re)load for long periods on misconfigured systems
> 
>
> Key: SOLR-5043
> URL: https://issues.apache.org/jira/browse/SOLR-5043
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-5043-lazy.patch, SOLR-5043.patch, SOLR-5043.patch
>
>
> SystemInfoHandler currently looks up the hostname of the machine on its init, 
> and caches it for its lifecycle -- there is a comment to the effect that the 
> reason for this is that on some machines (notably ones with wacky DNS 
> settings) looking up the hostname can take a very long time in some JVMs...
> {noformat}
>   // on some platforms, resolving canonical hostname can cause the thread
>   // to block for several seconds if nameservices aren't available
>   // so resolve this once per handler instance 
>   //(ie: not static, so core reload will refresh)
> {noformat}
> But as we move forward with a lot more multi-core, SolrCloud, dynamically 
> updated instances, even paying this cost per core reload is expensive.
> We should refactor this so that SystemInfoHandler instances init immediately, 
> with some kind of lazy loading of the hostname info in a background thread 
> (especially since the only real point of having that info here is for UI use, 
> so you can keep track of what machine you are looking at).






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 220 - Still Unstable

2016-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/220/

4 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([3A4D9D42047D4C9D:186244177DD98D7A]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries(CdcrReplicationDistributedZkTest.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9833) Update GenericHadoopAuthPlugin tests to use Hadoop 3 minikdc

2016-12-06 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9833:
--

 Summary: Update GenericHadoopAuthPlugin tests to use Hadoop 3 
minikdc
 Key: SOLR-9833
 URL: https://issues.apache.org/jira/browse/SOLR-9833
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hrishikesh Gadre









[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 995 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/995/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.spelling.suggest.SuggesterTSTTest.testReload

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([D36B4F7EB370AC70:149B377D79335462]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:818)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:785)
at 
org.apache.solr.spelling.suggest.SuggesterTest.testReload(SuggesterTest.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='suggestions']/lst[@name='ac']/int[@name='numFound'][.='2']
xml response was: 

04


request 
was:q=ac=/suggest_tst=true=2=xml
at 

[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726604#comment-15726604
 ] 

ASF GitHub Bot commented on SOLR-9817:
--

Github user hgadre closed the pull request at:

https://github.com/apache/lucene-solr/pull/121


> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.
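
The requested change amounts to resolving the working directory from configuration before falling back to the hardcoded default. A minimal sketch of the idea, assuming a hypothetical {{solr.server.dir}} system property (not an option Solr actually exposes):

```java
// Hypothetical sketch: resolve the startup working directory from an
// override before falling back to the hardcoded <install>/server default
// currently used by bin/solr. "solr.server.dir" is an assumed name.
public class ServerDirResolver {
    static String resolveServerDir(String installDir) {
        String override = System.getProperty("solr.server.dir");
        return (override != null && !override.isEmpty())
                ? override
                : installDir + "/server"; // current hardcoded behavior
    }

    public static void main(String[] args) {
        System.out.println(resolveServerDir("/opt/solr"));
    }
}
```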






[GitHub] lucene-solr pull request #121: [SOLR-9817] Make "working directory" for Solr...

2016-12-06 Thread hgadre
Github user hgadre closed the pull request at:

https://github.com/apache/lucene-solr/pull/121


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9827) Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just SolrException for remote errors

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726475#comment-15726475
 ] 

ASF subversion and git services commented on SOLR-9827:
---

Commit fdec7871a144fd30e98f141e83b24538da6f6fc8 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fdec787 ]

SOLR-9827: Make ConcurrentUpdateSolrClient create RemoteSolrExceptions in case 
of remote errors instead of SolrException


> Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just 
> SolrException for remote errors
> --
>
> Key: SOLR-9827
> URL: https://issues.apache.org/jira/browse/SOLR-9827
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-9827.patch
>
>
> Also, improve the exception message to include the remote error message when 
> present. Especially when Solr is logging these errors (e.g. 
> DistributedUpdateProcessor), this should make it easier to understand that 
> the error occurred on the remote host and not on the one logging this exception. 






[jira] [Commented] (SOLR-9513) Introduce a generic authentication plugin which delegates all functionality to Hadoop authentication framework

2016-12-06 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726472#comment-15726472
 ] 

Hrishikesh Gadre commented on SOLR-9513:


[~ichattopadhyaya] Thanks for the review. I have addressed all the comments and 
updated the PR. Can you please take a look?

> Introduce a generic authentication plugin which delegates all functionality 
> to Hadoop authentication framework
> --
>
> Key: SOLR-9513
> URL: https://issues.apache.org/jira/browse/SOLR-9513
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>
> Currently Solr kerberos authentication plugin delegates the core logic to 
> Hadoop authentication framework. But the configuration parameters required by 
> the Hadoop authentication framework are hardcoded in the plugin code itself. 
> https://github.com/apache/lucene-solr/blob/5b770b56d012279d334f41e4ef7fe652480fd3cf/solr/core/src/java/org/apache/solr/security/KerberosPlugin.java#L119
> The problem with this approach is that we need to make code changes in Solr 
> to expose new capabilities added in Hadoop authentication framework. e.g. 
> HADOOP-12082
> We should implement a generic Solr authentication plugin which will accept 
> configuration parameters via security.json (in Zookeeper) and delegate them 
> to the Hadoop authentication framework. This will allow new features in 
> Hadoop to be used without code changes in Solr.
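
The pass-through idea described above can be sketched with plain maps: whatever keys appear in the plugin's section of security.json get forwarded verbatim to the authentication framework's init parameters, so a new framework option needs no Solr code change. All names here are illustrative, not Solr's or Hadoop's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: copy plugin configuration (as parsed from
// security.json) into the init parameters handed to the underlying
// authentication framework, instead of hardcoding each parameter.
public class AuthConfigPassThrough {
    static Map<String, String> buildFilterConfig(Map<String, Object> pluginConfig) {
        Map<String, String> filterParams = new HashMap<>();
        for (Map.Entry<String, Object> e : pluginConfig.entrySet()) {
            // forward every configured key verbatim; new framework options
            // then work without touching the plugin code
            filterParams.put(e.getKey(), String.valueOf(e.getValue()));
        }
        return filterParams;
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put("type", "kerberos");
        conf.put("proxyuser.solr.hosts", "*"); // illustrative key names
        System.out.println(buildFilterConfig(conf).get("proxyuser.solr.hosts"));
    }
}
```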






[jira] [Commented] (SOLR-9831) New Solr Admin UI logging tab does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726413#comment-15726413
 ] 

Shawn Heisey commented on SOLR-9831:


Here's the javadoc for the getMDC method used above:

https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/spi/LoggingEvent.html#getMDC(java.lang.String)

I wonder if maybe the logging event doesn't have a copy of the context at the 
time of the event creation, and therefore searches the current thread's MDC ... 
which won't have any core info.  If that's the case, then I do not know how to 
fix it.
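
The hypothesis can be illustrated with a plain {{ThreadLocal}}, which is how per-thread diagnostic context like log4j 1.2's MDC is commonly backed: a value set on the request thread is invisible to any other thread that looks it up later. This only demonstrates the suspected mechanism, not Solr's actual logging path:

```java
// Stdlib-only demonstration of the hypothesized failure mode: if the
// logging event does not capture a copy of the context at creation time,
// a lookup performed later on a different thread finds nothing.
public class ThreadContextDemo {
    static final ThreadLocal<String> CORE = new ThreadLocal<>();

    // returns what a freshly spawned thread observes for CORE
    static String valueSeenByOtherThread() {
        final String[] seen = { "unset" };
        Thread t = new Thread(() -> seen[0] = CORE.get());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        CORE.set("collection1_shard1_replica1"); // set on the "request" thread
        System.out.println("request thread sees: " + CORE.get());
        System.out.println("other thread sees: " + valueSeenByOtherThread()); // null
    }
}
```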

> New Solr Admin UI logging tab does not include core name, has mysterious 
> "false" in the "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: SOLR-9831.patch, bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column; please see the attached images of 
> the new UI versus the original UI.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 561 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/561/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [NRTCachingDirectory, 
NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
  at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
  at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [NRTCachingDirectory, NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
at java.lang.Thread.run(Thread.java:745)

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([D44CB9DFF96BC971]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9822) Improve faceting performance with FieldCache

2016-12-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726360#comment-15726360
 ] 

Yonik Seeley commented on SOLR-9822:


I tried yet another approach of bulk gathering the id+ord in an array and then 
looping over that in the calling code, but it was much slower than the lambda 
(although still faster than current master w/o patching by 10%).  Still slow 
enough I won't bother attaching the patch.

In the spirit of progress over perfection, we should probably just commit the 
first approach (since it gives a 50% speedup in those cases), but limited to 
the two call sites in that patch (in FacetFieldProcessorByArrayDV).  We 
shouldn't over-generalize the results found here.  It may be that a lambda-type 
approach will work better in other contexts, and those will need to be tested.  
It's also the case that encapsulating this logic will make it easier to 
introduce/maintain additional optimizations, such as actually using the 
skipping of the docvalues iterator when it's sparse vs. our domain set.
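
For readers following along, the two call shapes being compared look roughly like this: pushing each (doc, ord) pair through a lambda versus bulk-gathering the pairs into arrays and looping at the call site. The names below are made up for illustration; the real code lives in FacetFieldProcessorByArrayDV:

```java
// Illustrative-only sketch of the two iteration styles discussed above.
public class OrdGatherSketch {
    interface DocOrdConsumer { void accept(int doc, int ord); }

    // style 1: invoke a lambda per matching (doc, ord) pair
    static void forEachOrd(int[] docs, int[] ords, DocOrdConsumer f) {
        for (int i = 0; i < docs.length; i++) {
            f.accept(docs[i], ords[i]);
        }
    }

    // style 2: bulk-gather into parallel arrays, then loop in the caller
    static int[][] gather(int[] docs, int[] ords) {
        return new int[][] { docs.clone(), ords.clone() };
    }

    public static void main(String[] args) {
        int[] docs = {1, 4, 9};
        int[] ords = {0, 2, 2};
        long[] counts = new long[3];
        // count facet hits per ordinal via the lambda style
        forEachOrd(docs, ords, (doc, ord) -> counts[ord]++);
        System.out.println(counts[2]); // 2
    }
}
```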


> Improve faceting performance with FieldCache
> 
>
> Key: SOLR-9822
> URL: https://issues.apache.org/jira/browse/SOLR-9822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
> Attachments: SOLR-9822.patch, SOLR-9822_OrdValues.patch, 
> SOLR-9822_lambda.patch
>
>
> This issue will try to specifically address the performance regressions of 
> faceting on FieldCache fields observed in SOLR-9599.






[jira] [Commented] (SOLR-9831) New Solr Admin UI logging tab does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726348#comment-15726348
 ] 

Shawn Heisey commented on SOLR-9831:


The information there is pulled from the /admin/info/logging handler, where the 
"core" value is empty.  That info handler ultimately gets the info from 
Log4JWatcher#toSolrDocument, where it is populated with this code:

{code:java}
doc.setField("core", event.getMDC(ZkStateReader.CORE_NAME_PROP));
doc.setField("collection", event.getMDC(ZkStateReader.COLLECTION_PROP));
doc.setField("replica", event.getMDC(ZkStateReader.REPLICA_PROP));
doc.setField("shard", event.getMDC(ZkStateReader.SHARD_ID_PROP));
{code}

I do not know why this isn't working, but I have to admit that I don't know 
very much about log4j.


> New Solr Admin UI logging tab does not include core name, has mysterious 
> "false" in the "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: SOLR-9831.patch, bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column; please see the attached images of 
> the new UI versus the original UI.






[jira] [Commented] (SOLR-9827) Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just SolrException for remote errors

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726315#comment-15726315
 ] 

ASF subversion and git services commented on SOLR-9827:
---

Commit c164f7e35e45d0bfa844cd450ffb4865c27fc4d5 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c164f7e ]

SOLR-9827: Make ConcurrentUpdateSolrClient create RemoteSolrExceptions in case 
of remote errors instead of SolrException


> Make ConcurrentUpdateSolrClient create RemoteSolrException instead of just 
> SolrException for remote errors
> --
>
> Key: SOLR-9827
> URL: https://issues.apache.org/jira/browse/SOLR-9827
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-9827.patch
>
>
> Also, improve the exception message to include the remote error message when 
> present. Especially when Solr is logging these errors (e.g. 
> DistributedUpdateProcessor), this should make it easier to understand that 
> the error occurred on the remote host and not on the one logging this exception. 






[jira] [Updated] (SOLR-9831) New Solr Admin UI logging tab does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9831:
---
Summary: New Solr Admin UI logging tab does not include core name, has 
mysterious "false" in the "Level" column  (was: New Solr Admin UI does not 
include core name, has mysterious "false" in the "Level" column)

> New Solr Admin UI logging tab does not include core name, has mysterious 
> "false" in the "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: SOLR-9831.patch, bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column; please see the attached images of 
> the new UI versus the original UI.






[jira] [Resolved] (SOLR-9751) PreAnalyzedField can cause managed schema corruption

2016-12-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-9751.
--
Resolution: Fixed

The test and fix in SOLR-9832 don't involve PreAnalyzedField, so this issue can 
be resolved.

> PreAnalyzedField can cause managed schema corruption
> 
>
> Key: SOLR-9751
> URL: https://issues.apache.org/jira/browse/SOLR-9751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 6.2, 6.3
>Reporter: liuyang
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9751.patch, SOLR-9751.patch, SOLR-9751.patch
>
>
> The exception as follows:
> Caused by: org.apache.solr.common.SolrException: Could not load conf for core 
> test_shard1_replica1: Can't load schema managed-schema: Plugin init failure 
> for [schema.xml] fieldType "preanalyzed": Cannot load analyzer: 
> org.apache.solr.schema.PreAnalyzedField$PreAnalyzedAnalyzer
> at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:85)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1031)
> ... 6 more
> Caused by: org.apache.solr.common.SolrException: Can't load schema 
> managed-schema: Plugin init failure for [schema.xml] fieldType "preanalyzed": 
> Cannot load analyzer: 
> org.apache.solr.schema.PreAnalyzedField$PreAnalyzedAnalyzer
> at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:600)
> at org.apache.solr.schema.IndexSchema.&lt;init&gt;(IndexSchema.java:183)
> at 
> org.apache.solr.schema.ManagedIndexSchema.&lt;init&gt;(ManagedIndexSchema.java:104)
> at 
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
> at 
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
> at 
> org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
> at 
> org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
> at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
> ... 7 more
> Test procedure:
> 1.create collection using sample_techproducts_configs;
> 2.add field in Solr web view;
> 3.add field again in Solr web view.
> managed-schema is modified as follows:
> 
>   
>   
> 






[jira] [Commented] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726256#comment-15726256
 ] 

ASF subversion and git services commented on SOLR-9832:
---

Commit bf3a3137be8a70ceed884e87c3ada276e82b187b in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf3a313 ]

SOLR-9832: Schema modifications are not immediately visible on the coordinating 
node


> Schema modifications are not immediately visible on the coordinating node
> -
>
> Key: SOLR-9832
> URL: https://issues.apache.org/jira/browse/SOLR-9832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9832.patch
>
>
> As noted on SOLR-9751, 
> {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing 
> on Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of 
> the time.  E.g. from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest 
> -Dtests.method=testAdd2Fields -Dtests.seed=CD72F125201C0C76 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=is 
> -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>   [junit4] ERROR   0.09s J0 | 
> PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
>   [junit4]> Throwable #1: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
> https://127.0.0.1:33343/solr/managed-preanalyzed]
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
>   [junit4]>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4]> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
> /schema/fields/field2
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> {noformat}






[jira] [Commented] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726245#comment-15726245
 ] 

Steve Rowe commented on SOLR-9832:
--

Staring at logs, coordinating ports/timings/zkversions, imagining transient 
failure modes, and a frosty beverage: heaven!!!

> Schema modifications are not immediately visible on the coordinating node
> -
>
> Key: SOLR-9832
> URL: https://issues.apache.org/jira/browse/SOLR-9832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9832.patch
>
>
> As noted on SOLR-9751, 
> {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing 
> on Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of 
> the time.  E.g. from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest 
> -Dtests.method=testAdd2Fields -Dtests.seed=CD72F125201C0C76 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=is 
> -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>   [junit4] ERROR   0.09s J0 | 
> PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
>   [junit4]> Throwable #1: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
> https://127.0.0.1:33343/solr/managed-preanalyzed]
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
>   [junit4]>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4]> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
> /schema/fields/field2
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726184#comment-15726184
 ] 

David Smiley commented on SOLR-9832:


Nice investigation; I bet this was a doozy to figure out.

> Schema modifications are not immediately visible on the coordinating node
> -
>
> Key: SOLR-9832
> URL: https://issues.apache.org/jira/browse/SOLR-9832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9832.patch
>
>
> As noted on SOLR-9751, 
> {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing 
> on Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of 
> the time.  E.g. from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest 
> -Dtests.method=testAdd2Fields -Dtests.seed=CD72F125201C0C76 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=is 
> -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>   [junit4] ERROR   0.09s J0 | 
> PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
>   [junit4]> Throwable #1: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
> https://127.0.0.1:33343/solr/managed-preanalyzed]
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
>   [junit4]>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4]> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
> /schema/fields/field2
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> {noformat}






[jira] [Updated] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9832:
-
Attachment: SOLR-9832.patch

Patch with a new test that fails about 4% of the time on my Jenkins box, and a 
fix.  The new test doesn't involve PreAnalyzedField(Type).

The issue appears to be that a schema modification request will persist the new 
schema to ZooKeeper, then wait up to a configurable amount of time (10 minutes 
by default) for all replicas to get the new schema, and then return success.  A 
ZooKeeper watch on the coordinating core's config directory eventually triggers 
a core reload for the persisted schema changes, but sometimes a request sent to 
a different node will trigger a reload for an older schema.  As a result, after 
returning success for a schema modification, the coordinating node will briefly 
host an *older* schema.

The attached patch triggers an immediate core reload after schema changes are 
successfully persisted to ZooKeeper, and once that's finished, checks that 
other replicas have the new schema.  The patch also:

  * forces a schema staleness check & update when the ZK watch is created for 
the new core's schema in {{ManagedIndexSchemaFactory.inform()}}
  * removes an unnecessary schema download just prior to reloading the core in 
the listener created in {{SolrCore.getConfListener()}}

I beasted the new test for 500 iterations with the patch, and it did not fail.  
I also beasted {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} for 
300 iterations with the patch, and it did not fail.

Precommit and all Solr tests pass with the patch; I think it's ready.

I'll commit to master and let it soak for a couple days.
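The persist-then-verify flow described above can be sketched as a generic polling loop. All names below are hypothetical (this is not Solr's actual API); it only illustrates the shape of "reload locally, then wait until every replica reports the new schema version, bounded by a timeout":

```java
import java.util.List;
import java.util.function.IntSupplier;

// Hypothetical sketch of the coordinating node's verify step: after persisting
// a new schema version, poll each replica until all of them report at least
// that version, or a timeout expires. IntSupplier stands in for a real
// "ask the replica for its schema version" call.
public class Main {

  /** Returns true if all replicas reached expectedVersion within timeoutMs. */
  static boolean waitForReplicas(List<IntSupplier> replicaVersions,
                                 int expectedVersion, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      boolean allCurrent = true;
      for (IntSupplier replica : replicaVersions) {
        if (replica.getAsInt() < expectedVersion) {
          allCurrent = false;
          break;
        }
      }
      if (allCurrent) {
        return true;   // safe to report success to the client
      }
      Thread.sleep(10); // back off briefly before re-polling
    }
    return false;       // replicas did not converge in time
  }

  public static void main(String[] args) throws InterruptedException {
    // Two stub "replicas": both current in the first case, one stale in the second.
    List<IntSupplier> upToDate = List.of(() -> 2, () -> 2);
    List<IntSupplier> stale = List.of(() -> 2, () -> 1);
    System.out.println(waitForReplicas(upToDate, 2, 200)); // prints true
    System.out.println(waitForReplicas(stale, 2, 100));    // prints false
  }
}
```

The key property is that success is only returned to the client after the coordinating node itself has reloaded, which is what closes the window where it briefly hosts an older schema.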

> Schema modifications are not immediately visible on the coordinating node
> -
>
> Key: SOLR-9832
> URL: https://issues.apache.org/jira/browse/SOLR-9832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9832.patch
>
>
> As noted on SOLR-9751, 
> {{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing 
> on Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of 
> the time.  E.g. from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest 
> -Dtests.method=testAdd2Fields -Dtests.seed=CD72F125201C0C76 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=is 
> -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>   [junit4] ERROR   0.09s J0 | 
> PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
>   [junit4]> Throwable #1: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
> https://127.0.0.1:33343/solr/managed-preanalyzed]
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
>   [junit4]>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
>   [junit4]>   at 
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4]> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
> /schema/fields/field2
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   [junit4]>   at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
>   [junit4]>   at 
> 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 2355 - Still unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2355/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([3A32EC387A633C72:B266D3E2D49F518A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1143)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:609)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:595)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:294)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Michael Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726076#comment-15726076
 ] 

Michael Suzuki commented on SOLR-9831:
--

I have attached a patch to remove the tracing and fix the layout of the table. 
I can raise another issue relating to {code}{ event.core } {code} not 
displaying a value.

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: SOLR-9831.patch, bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Updated] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Michael Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-9831:
-
Attachment: SOLR-9831.patch

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: SOLR-9831.patch, bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Comment Edited] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726031#comment-15726031
 ] 

Shawn Heisey edited comment on SOLR-9831 at 12/6/16 4:55 PM:
-

I saw this problem in Solr 6.3.0.

At first I thought the reporter's problem might be because the class doing the 
logging (AbstractTracker) is not part of Solr -- it's part of Alfresco ... but 
I duplicated the issue on a completely stock Solr.  Will attach screenshot.  In 
addition to the "Core" column showing nothing, the "Level" column has both 
"ERROR" and "false".


was (Author: elyograg):
I saw this problem in Solr 6.3.0.

At first I thought the reporter's problem might be because the class doing the 
logging (AbstractTracker) is not part of Solr -- it's part of Alfresco ... but 
I duplicated the issue on a completely stock Solr.  Will attach screenshot.

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Updated] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9831:
---
Component/s: web gui

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Updated] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9831:
---
Attachment: new-ui-6.3-missing-core.png

Screenshot showing the issue in Solr 6.3.0.  The core was named "foo".  In the 
old UI, the core name shows "null" instead of "foo".

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: bug-ui.png, new-ui-6.3-missing-core.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Commented] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726031#comment-15726031
 ] 

Shawn Heisey commented on SOLR-9831:


I saw this problem in Solr 6.3.0.

At first I thought the reporter's problem might be because the class doing the 
logging (AbstractTracker) is not part of Solr -- it's part of Alfresco ... but 
I duplicated the issue on a completely stock Solr.  Will attach screenshot.

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: bug-ui.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Updated] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9831:
---
Affects Version/s: 6.3

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Michael Suzuki
> Attachments: bug-ui.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Updated] (SOLR-9831) New Solr Admin UI does not include core name, has mysterious "false" in the "Level" column

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9831:
---
Summary: New Solr Admin UI does not include core name, has mysterious 
"false" in the "Level" column  (was: New Solr Admin UI Logging Displaying Trace)

> New Solr Admin UI does not include core name, has mysterious "false" in the 
> "Level" column
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Suzuki
> Attachments: bug-ui.png
>
>
> The logging screen does not have the table layout set correctly and it does 
> not show the value for the core column, please see attached image of new ui 
> versus original UI






[jira] [Created] (SOLR-9832) Schema modifications are not immediately visible on the coordinating node

2016-12-06 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-9832:


 Summary: Schema modifications are not immediately visible on the 
coordinating node
 Key: SOLR-9832
 URL: https://issues.apache.org/jira/browse/SOLR-9832
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


As noted on SOLR-9751, 
{{PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields()}} has been failing on 
Jenkins.  When I beast this test on my Jenkins box, it fails about 1% of the 
time.  E.g. from [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2247/]:

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=PreAnalyzedFieldManagedSchemaCloudTest -Dtests.method=testAdd2Fields 
-Dtests.seed=CD72F125201C0C76 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=is -Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
  [junit4] ERROR   0.09s J0 | 
PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields <<<
  [junit4]> Throwable #1: org.apache.solr.client.solrj.SolrServerException: 
No live SolrServers available to handle this 
request:[https://127.0.0.1:39011/solr/managed-preanalyzed, 
https://127.0.0.1:33343/solr/managed-preanalyzed]
  [junit4]> at 
__randomizedtesting.SeedInfo.seed([CD72F125201C0C76:656743CEFC1A9F80]:0)
  [junit4]> at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
  [junit4]> at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
  [junit4]> at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
  [junit4]> at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
  [junit4]> at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
  [junit4]> at 
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
  [junit4]> at 
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
  [junit4]> at java.lang.Thread.run(Thread.java:745)
  [junit4]> Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:39011/solr/managed-preanalyzed: No such path 
/schema/fields/field2
  [junit4]> at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
  [junit4]> at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
  [junit4]> at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
  [junit4]> at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
  [junit4]> at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
{noformat}







[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726001#comment-15726001
 ] 

Michael McCandless commented on LUCENE-7583:


bq. I think ByteArrayDataOutput is always a good idea to create "small" blobs 
of structured data.

Yeah I'm leaning towards just doing this for {{BKDWriter}} at this point.  I'll 
clean up that approach and post a patch.
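The per-leaf buffering being discussed can be sketched in plain Java (illustrative names, not Lucene's actual BKDWriter/RAMOutputStream API): encode the block's many tiny vInt writes into an in-memory scratch buffer first, then push the encoded bytes to the real output in 1 KB chunks, so only a few large writes cross the IndexOutput layers.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of buffering a leaf block in memory before flushing it in chunks.
public class Main {

  /** Lucene-style variable-length int: 7 payload bits per byte, high bit = "more follows". */
  static void writeVInt(OutputStream out, int i) throws IOException {
    while ((i & ~0x7F) != 0) {
      out.write((i & 0x7F) | 0x80);
      i >>>= 7;
    }
    out.write(i);
  }

  /** Encode values into a scratch buffer, then flush to `target` in 1 KB chunks. */
  static void writeLeafBlock(int[] values, OutputStream target) throws IOException {
    ByteArrayOutputStream scratch = new ByteArrayOutputStream();
    for (int v : values) {
      writeVInt(scratch, v);            // many tiny writes hit only the in-memory buffer
    }
    byte[] encoded = scratch.toByteArray();
    for (int off = 0; off < encoded.length; off += 1024) {
      int len = Math.min(1024, encoded.length - off);
      target.write(encoded, off, len);  // few large writes reach the actual output
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream target = new ByteArrayOutputStream();
    writeLeafBlock(new int[] {5, 300, 70000}, target);
    // vInt sizes: 5 -> 1 byte, 300 -> 2 bytes, 70000 -> 3 bytes
    System.out.println(target.size()); // prints 6
  }
}
```

This trades one extra in-memory copy of each leaf block for far fewer calls through the deep write path, which matches the ~6% win reported in the issue description.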

> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, 
> LUCENE-7583.fork-FastOutputStream.patch, LUCENE-7583.patch, 
> LUCENE-7583.private-IndexOutput.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.






[jira] [Commented] (SOLR-9822) Improve faceting performance with FieldCache

2016-12-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725997#comment-15725997
 ] 

Yonik Seeley commented on SOLR-9822:


bq. I'm more interested in faceting performance without the FieldCache

I'm afraid that probably less can be done there.  The crux of the issue is that 
what was one virtual method call to get an ord for a document is now two.

> Improve faceting performance with FieldCache
> 
>
> Key: SOLR-9822
> URL: https://issues.apache.org/jira/browse/SOLR-9822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
> Attachments: SOLR-9822.patch, SOLR-9822_OrdValues.patch, 
> SOLR-9822_lambda.patch
>
>
> This issue will try to specifically address the performance regressions of 
> faceting on FieldCache fields observed in SOLR-9599.






[jira] [Updated] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7583:
---
Attachment: LUCENE-7583.fork-FastOutputStream.patch

OK, I forked Solrj's {{FastOutputStream.java}} into oal.store, and it
gets similar performance to forking {{BufferedOutputStream}} and
removing its {{synchronized}} keywords:

Sparse:

  * total: 61.584 sec

Dense:

  * total: 59.602 sec
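The forked-stream idea can be sketched as a plain single-threaded buffer with no locking on the per-byte path (illustrative names, not the actual oal.store class): same shape as {{java.io.BufferedOutputStream}}, minus the {{synchronized}} keywords, since an IndexOutput is only ever written by one thread.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal unsynchronized buffered stream: the hot write(int) path is a single
// array store in the common case, with no monitor acquisition.
public class Main {
  static final class UnsyncBufferedOutput extends OutputStream {
    private final OutputStream delegate;
    private final byte[] buffer;
    private int count;

    UnsyncBufferedOutput(OutputStream delegate, int size) {
      this.delegate = delegate;
      this.buffer = new byte[size];
    }

    @Override
    public void write(int b) throws IOException {
      if (count == buffer.length) {
        flushBuffer();              // spill only when the buffer fills
      }
      buffer[count++] = (byte) b;   // common case: one plain array store
    }

    private void flushBuffer() throws IOException {
      if (count > 0) {
        delegate.write(buffer, 0, count);
        count = 0;
      }
    }

    @Override
    public void flush() throws IOException {
      flushBuffer();
      delegate.flush();
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    UnsyncBufferedOutput out = new UnsyncBufferedOutput(sink, 8);
    for (int i = 0; i < 20; i++) {
      out.write(i);                 // spills after every 8 bytes
    }
    out.flush();
    System.out.println(sink.size()); // prints 20
  }
}
```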


> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, 
> LUCENE-7583.fork-FastOutputStream.patch, LUCENE-7583.patch, 
> LUCENE-7583.private-IndexOutput.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.
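The buffering idea quoted above can be sketched as follows, with hypothetical names in place of Lucene's actual {{RAMOutputStream}}/{{IndexOutput}} classes: accumulate each leaf block in memory, then hand it to the real output in 1 KB chunks so the underlying stream sees a few large writes instead of many tiny ones.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// A minimal sketch of per-leaf buffering; names are illustrative.
public class LeafBufferSketch {
  static final int CHUNK = 1024;

  // Copy the in-memory leaf block to the real output, 1 KB at a time,
  // then reset the buffer for reuse by the next leaf.
  static void flushLeaf(ByteArrayOutputStream leaf, OutputStream out) throws IOException {
    byte[] bytes = leaf.toByteArray();
    for (int off = 0; off < bytes.length; off += CHUNK) {
      out.write(bytes, off, Math.min(CHUNK, bytes.length - off));
    }
    leaf.reset();
  }
}
```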






[jira] [Commented] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It never return to normal unless restart the Solr JVM

2016-12-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725965#comment-15725965
 ] 

Mark Miller commented on SOLR-9830:
---

bq. Too many open files in system

What to do, though? Just keep trying to open a new IndexWriter? That's not 
likely to work until someone addresses the file descriptor limit or something 
similar, right?

> Once IndexWriter is closed due to some RunTimeException like 
> FileSystemException, It never return to normal unless restart the Solr JVM
> ---
>
> Key: SOLR-9830
> URL: https://issues.apache.org/jira/browse/SOLR-9830
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.2
> Environment: Red Hat 4.4.7-3,SolrCloud 
>Reporter: Daisy.Yuan
>
> 1. Collection coll_test has 9 shards, each with two replicas on different 
> Solr instances.
> 2. While updating documents to the collection using SolrJ, inject an 
> exhausted-handle fault into one Solr instance, e.g. solr1.
> 3. Updates to col_test_shard3_replica1 (the leader) fail due to a 
> FileSystemException, and the IndexWriter is closed.
> 4. After the fault is cleared, col_test_shard3_replica1 (still the leader) 
> can no longer accept document updates, and its numDocs stays lower than the 
> standby replica's.
> 5. After the Solr instance is restarted, it can update documents again and 
> numDocs is consistent between the two replicas.
> I think that in SolrCloud mode Solr should recover by itself in this case, 
> without requiring a restart to restore the SolrCore update function.
>  2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit exception updating document | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
> updateDocument | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: all running merges have aborted | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: done finish merges | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
> numDocs=3798 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: done abort success=true | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 
> finishFullFlush success=false | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: 
> infos=_4g7(6.2.0):C59169/23684:delGen=4 _4gq(6.2.0):C67474/11636:delGen=1 
> _4gg(6.2.0):C64067/15664:delGen=2 _4gr(6.2.0):C13131 _4gs(6.2.0):C966 
> _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 _4gv(6.2.0):C2544 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [IW][commitScheduler-46-thread-1]: 

[jira] [Updated] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7583:
---
Attachment: LUCENE-7583.private-IndexOutput.patch

OK I made the 2 places where we hang onto the {{IndexOutput}} instance in a 
class instance private (see attached patch) but it looks like this didn't 
really help:

Sparse:

  * total: 64.457 sec

Dense:

  * total: 62.412 sec


> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch, 
> LUCENE-7583.private-IndexOutput.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.






[jira] [Resolved] (SOLR-9829) Solr cannot provide index service after a large GC pause but core state in ZK is still active

2016-12-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-9829.
--
Resolution: Invalid

Please bring things like this up on the user's list before opening a JIRA.

1> You have to look at both the live_nodes list and state.json to know 
whether a node is really up or not.

2> The error you're asking about has been seen many times as something that 
gets reported without being the actual root cause; you _must_ look in the 
Solr logs.

> Solr cannot provide index service after a large GC pause but core state in ZK 
> is still active
> -
>
> Key: SOLR-9829
> URL: https://issues.apache.org/jira/browse/SOLR-9829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 5.3.2
> Environment: Redhat enterprise server 64bit 
>Reporter: Forest Soup
>
> When Solr hits a large GC pause like 
> https://issues.apache.org/jira/browse/SOLR-9828 , the collections on that 
> server cannot provide service and never come back until a restart. 
> But in ZooKeeper, the cores on that server still show as active. 
> Some /update requests got HTTP 500 due to "IndexWriter is closed". Some got 
> HTTP 400 due to "possible analysis error", whose root cause is still 
> "IndexWriter is closed" and which we think should return 500 instead 
> (documented in https://issues.apache.org/jira/browse/SOLR-9825).
> Our questions in this JIRA are:
> 1. Should Solr mark cores as down in ZK when it cannot provide index service?
> 2. Is it possible for Solr to re-open the IndexWriter and provide index 
> service again?
> solr log snippets:
> 2016-11-22 20:47:37.274 ERROR (qtp2011912080-76) [c:collection12 s:shard1 
> r:core_node1 x:collection12_shard1_replica1] o.a.s.c.SolrCore 
> org.apache.solr.common.SolrException: Exception writing document id 
> Q049dXMxYjMtbWFpbDg4L089bGxuX3VzMQ==20841350!270CE4F9C032EC26002580730061473C 
> to the index; possible analysis error.
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:167)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.CloneFieldUpdateProcessorFactory$1.processAdd(CloneFieldUpdateProcessorFactory.java:231)
>   at 
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:143)
>   at 
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:113)
>   at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:672)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:463)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:235)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:199)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> 

[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725878#comment-15725878
 ] 

Uwe Schindler commented on LUCENE-7583:
---

I think ByteArrayDataOutput is always a good choice for creating "small" blobs 
of structured data. You have full control of the buffer, and there are almost 
no checks or multi-buffer handling involved. It just writes to a byte array 
that you can reuse later or write to the IndexOutput as a block.
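The pattern described above can be sketched with a hypothetical stand-in for Lucene's ByteArrayDataOutput (method names are illustrative): write structured data straight into a reusable fixed byte[], tracking only a position, then hand the filled prefix to the output as one block.

```java
// Illustrative scratch writer: a reusable byte[] with a position cursor.
public class ScratchWriterSketch {
  final byte[] buf;
  int pos;

  ScratchWriterSketch(int size) { buf = new byte[size]; }

  void reset() { pos = 0; }                      // reuse the same array per block
  void writeByte(byte b) { buf[pos++] = b; }
  void writeVInt(int v) {                        // standard vInt encoding
    while ((v & ~0x7F) != 0) { buf[pos++] = (byte) ((v & 0x7F) | 0x80); v >>>= 7; }
    buf[pos++] = (byte) v;
  }
  int length() { return pos; }                   // bytes to hand to the output
}
```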

> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.






[jira] [Updated] (SOLR-9822) Improve faceting performance with FieldCache

2016-12-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9822:
---
Attachment: SOLR-9822_OrdValues.patch

Here's another approach (attached): an OrdValues wrapper (and factory method) 
that returns specialized instances.  The loop is kept in the "client" code.  
Quick performance testing shows only about a 3% improvement over the lambda 
approach.

> Improve faceting performance with FieldCache
> 
>
> Key: SOLR-9822
> URL: https://issues.apache.org/jira/browse/SOLR-9822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
> Attachments: SOLR-9822.patch, SOLR-9822_OrdValues.patch, 
> SOLR-9822_lambda.patch
>
>
> This issue will try to specifically address the performance regressions of 
> faceting on FieldCache fields observed in SOLR-9599.






[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725835#comment-15725835
 ] 

Michael McCandless commented on LUCENE-7583:


I also tried with {{ByteArrayDataOutput}} and it gets the fastest result, ~9.6% 
faster than trunk today:

Sparse:
  * total: 58.503 sec

Dense:
  * total: 57.227 sec


> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.






[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725830#comment-15725830
 ] 

Michael McCandless commented on LUCENE-7583:


bq. Are we sure that we do not open the IndexOutput in one thread and hand it 
over to another one? 

Yeah, the {{IndexOutput}} is opened in {{Lucene60PointsWriter}}, and then that 
same thread goes and writes all points via {{writeField}}.  At IW flush time 
it's an indexing thread, and at merge time it's a merge thread, but it should 
only ever be a single thread touching that {{IndexOutput}}.  The benchmark I'm 
running only ever uses a single thread anyway ...

bq. we should also make all references to the IndexOutput private, so it cannot 
escape the current thread (to help hotspot). This means: no non-private fields 
holding the reference to the stream.

I'll try to do this; there's at least one place where it's protected, but 
that's way high up in the stack ({{Lucene60PointsWriter}}).

bq. If we are really required to fork the buffered stream, we may use: 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/FastOutputStream.java
 (but without the DataOutput interface impl).

I'll test that too.

Thanks [~thetaphi].

> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.






[jira] [Updated] (LUCENE-7563) BKD index should compress unused leading bytes

2016-12-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7563:
-
Attachment: LUCENE-7563-prefixlen-unary.patch

Here is an updated patch that makes BKDWriter ensure each dimension has 16 
bytes at most, plus some minor tweaks to get a few bits back on average.

> BKD index should compress unused leading bytes
> --
>
> Key: LUCENE-7563
> URL: https://issues.apache.org/jira/browse/LUCENE-7563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7563-prefixlen-unary.patch, 
> LUCENE-7563-prefixlen-unary.patch, LUCENE-7563.patch, LUCENE-7563.patch, 
> LUCENE-7563.patch, LUCENE-7563.patch
>
>
> Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per 
> dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom 
> two bytes in a given segment, we shouldn't store all those leading 0s in the 
> index.
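The idea in the description can be sketched as follows (method names are illustrative, not the patch's): compute how many leading bytes are identical across all packed values in a dimension, so the index can store that shared prefix once instead of repeating it per value.

```java
// Illustrative helper: number of leading bytes shared by every value.
public class LeadingBytesSketch {
  static int commonPrefixLength(byte[][] values, int numBytes) {
    int prefix = numBytes;
    for (int i = 1; i < values.length && prefix > 0; i++) {
      int j = 0;
      while (j < prefix && values[i][j] == values[0][j]) j++;
      prefix = j;  // shrink to the longest prefix shared so far
    }
    return prefix;
  }
}
```

For a segment of LongPoint values that only use the bottom two bytes, the six shared leading zero bytes would be stored once rather than per value.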






[jira] [Updated] (SOLR-9831) New Solr Admin UI Logging Displaying Trace

2016-12-06 Thread Michael Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-9831:
-
Attachment: bug-ui.png

> New Solr Admin UI Logging Displaying Trace
> --
>
> Key: SOLR-9831
> URL: https://issues.apache.org/jira/browse/SOLR-9831
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Suzuki
> Attachments: bug-ui.png
>
>
> The logging screen does not have the table layout set correctly and does 
> not show the value for the core column; please see the attached image of 
> the new UI versus the original UI.






[jira] [Created] (SOLR-9831) New Solr Admin UI Logging Displaying Trace

2016-12-06 Thread Michael Suzuki (JIRA)
Michael Suzuki created SOLR-9831:


 Summary: New Solr Admin UI Logging Displaying Trace
 Key: SOLR-9831
 URL: https://issues.apache.org/jira/browse/SOLR-9831
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Michael Suzuki


The logging screen does not have the table layout set correctly and does not 
show the value for the core column; please see the attached image of the new 
UI versus the original UI.








[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 606 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/606/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Captured an uncaught exception in thread: Thread[id=41849, name=Thread-3228, 
state=RUNNABLE, group=TGRP-CdcrBootstrapTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=41849, name=Thread-3228, state=RUNNABLE, 
group=TGRP-CdcrBootstrapTest]
at 
__randomizedtesting.SeedInfo.seed([1B13F576121EEFC2:CF56BE2FF5485C39]:0)
Caused by: java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([1B13F576121EEFC2]:0)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:191)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1361)
at 
org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:705)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:945)
at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:765)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrBootstrapTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:332)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:641)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:847)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:775)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:779)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:88)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:377)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:365)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:156)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:152)
  at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:110)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
  at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
  at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
  at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
  at org.eclipse.jetty.server.Server.handle(Server.java:534)
  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
  at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
  at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:202)
  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
  at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
  at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 

[jira] [Comment Edited] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725796#comment-15725796
 ] 

Uwe Schindler edited comment on LUCENE-7583 at 12/6/16 3:19 PM:


If we are really required to fork the buffered stream, we may use: 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/FastOutputStream.java
 (but without the DataOutput interface impl).


was (Author: thetaphi):
If we are really required to fork the buffered stream, we may use: 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/FastOutputStream.java
 (bt without the DataOutput interface impl).

> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.
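The buffering approach in the description — collect the many tiny leaf-block writes in memory, then push them to the real output in 1 KB chunks — can be sketched with plain `java.io` streams. This is an illustration only, not the actual patch: `ByteArrayOutputStream` stands in for Lucene's `RAMOutputStream`, a bare `OutputStream` for `IndexOutput`, and all class and method names are hypothetical.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the buffering idea: many tiny writes (e.g. vints) land in an
// in-memory buffer; the buffer is dumped to the underlying output in
// fixed 1 KB chunks, so the real stream sees few, large writes.
public class ChunkedLeafWriter {
    private static final int CHUNK = 1024;
    private final ByteArrayOutputStream leafBuffer = new ByteArrayOutputStream();

    void writeVInt(int v) { // tiny writes go to memory, not to disk
        while ((v & ~0x7F) != 0) {
            leafBuffer.write((v & 0x7F) | 0x80);
            v >>>= 7;
        }
        leafBuffer.write(v);
    }

    void flushLeaf(OutputStream out) throws IOException {
        byte[] bytes = leafBuffer.toByteArray();
        for (int off = 0; off < bytes.length; off += CHUNK) {
            out.write(bytes, off, Math.min(CHUNK, bytes.length - off));
        }
        leafBuffer.reset(); // ready for the next leaf block
    }

    public static void main(String[] args) throws IOException {
        ChunkedLeafWriter w = new ChunkedLeafWriter();
        for (int i = 0; i < 10_000; i++) w.writeVInt(i);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        w.flushLeaf(sink);
        System.out.println(sink.size()); // 128 one-byte + 9872 two-byte vints
    }
}
```

The win comes from replacing thousands of per-byte calls through the `BufferedOutputStream` call stack with a handful of bulk array writes.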



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725796#comment-15725796
 ] 

Uwe Schindler commented on LUCENE-7583:
---

If we are really required to fork the buffered stream, we may use: 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/FastOutputStream.java
 (but without the DataOutput interface impl).




[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725736#comment-15725736
 ] 

Uwe Schindler commented on LUCENE-7583:
---

Are we sure that we do not open the IndexOutput in one thread and hand it over 
to another one? We should also make all references to the IndexOutput private, 
so it cannot escape the current thread (to help HotSpot). This means: no 
non-private fields holding the reference to the stream.
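The reference-confinement advice above can be illustrated in a few lines (illustrative code only, not actual Lucene source; the class and method names are made up):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustration of keeping the stream reference confined: a private final
// field that no other class can see, copied into a local variable for the
// hot loop. Confinement like this is what lets HotSpot's escape analysis
// prove the object never leaves the thread and elide its monitor locks.
public class ConfinedWriter {
    private final OutputStream out; // private: the reference cannot escape

    ConfinedWriter(OutputStream out) { this.out = out; }

    int writeAll(byte[] data) throws IOException {
        OutputStream local = out;   // hot loop reads a local, not a field
        for (byte b : data) local.write(b);
        return data.length;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        int n = new ConfinedWriter(sink).writeAll(new byte[]{1, 2, 3});
        System.out.println(n + " " + sink.size());
    }
}
```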




[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725732#comment-15725732
 ] 

Uwe Schindler commented on LUCENE-7583:
---

So this looks like a problem of the HotSpot VM not fully removing the needless 
{{synchronized}} on this call stack. This should not happen, because many VM 
optimizations exist precisely to fix this for IO: every Input/OutputStream has 
an internal synchronized lock, and BufferedOutputStream just adds an 
additional one.




[jira] [Commented] (LUCENE-7582) "Cannot commit index writer" in some cases on windows

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725683#comment-15725683
 ] 

Uwe Schindler commented on LUCENE-7582:
---

In addition, the code that *writes* and *commits* indexes is the same for all 3 
directory types (mmap, nio and simple).

> "Cannot commit index writer" in some cases on windows
> -
>
> Key: LUCENE-7582
> URL: https://issues.apache.org/jira/browse/LUCENE-7582
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3.1
> Environment: Windows 10, 32 bits JVM
>Reporter: Kevin Senechal
>
> Hi!
> I get an error using Lucene on Windows. I already posted a question on the 
> ModeShape forum (https://developer.jboss.org/thread/273070), and it looks 
> like NIOFSDirectory does not work well on Windows, as described in the Java 
> documentation of this class.
> {quote}NOTE: NIOFSDirectory is not recommended on Windows because of a bug in 
> how FileChannel.read is implemented in Sun's JRE. Inside of the 
> implementation the position is apparently synchronized. See here for 
> details.{quote}
> After reading the linked Java issue 
> (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6265734), it seems that 
> there is a workaround: use an AsynchronousFileChannel.
> Was it a deliberate choice not to use AsynchronousFileChannel, or would it 
> be a good fix?
> You'll find the complete stacktrace below:
> {code:java}
> Caused by: org.modeshape.jcr.index.lucene.LuceneIndexException: Cannot commit index writer
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:155) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.spi.index.provider.IndexChangeAdapter.completeWorkspaceChanges(IndexChangeAdapter.java:104) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.cache.change.ChangeSetAdapter.notify(ChangeSetAdapter.java:157) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.spi.index.provider.IndexProvider$AtomicIndex.notify(IndexProvider.java:1493) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.bus.RepositoryChangeBus.notify(RepositoryChangeBus.java:190) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.cache.document.WorkspaceCache.changed(WorkspaceCache.java:333) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.txn.SynchronizedTransactions.updateCache(SynchronizedTransactions.java:223) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.cache.document.WritableSessionCache.save(WritableSessionCache.java:751) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.JcrSession.save(JcrSession.java:1171) ~[dsdk-launcher.jar:na]
>   ... 19 common frames omitted
> Caused by: java.nio.file.FileSystemException: C:\Users\Christopher\Infiltrea3CLOUDTEST8\christop...@dooapp.com\indexes\default\nodesByPath\_dc_Lucene50_0.doc: The process cannot access the file because it is being used by another process.
>   at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) ~[na:1.8.0_92]
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) ~[na:1.8.0_92]
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) ~[na:1.8.0_92]
>   at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) ~[na:1.8.0_92]
>   at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_92]
>   at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_92]
>   at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:393) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:281) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:226) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.store.LockValidatingDirectoryWrapper.sync(LockValidatingDirectoryWrapper.java:62) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4456) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2874) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2977) ~[dsdk-launcher.jar:na]
>   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2944) ~[dsdk-launcher.jar:na]
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:152) ~[dsdk-launcher.jar:na]
> {code}
> Thank you in advance for your help




[jira] [Commented] (LUCENE-7582) "Cannot commit index writer" in some cases on windows

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725636#comment-15725636
 ] 

Uwe Schindler commented on LUCENE-7582:
---

I checked the above stack trace and agree with Mike. The issue you see is 
completely unrelated to NIOFSDir; it may also happen with other directory 
implementations. It looks more like "the" general Windows issue with open 
files. Virus checkers are my first guess. Lucene needs full control of the 
files it opens for full commit safety; any other process (like a virus 
scanner) that prevents files from being deleted or opened may cause this. The 
problem is that Lucene creates and deletes files in quick succession, so it is 
quite likely that Lucene creates a file, the virus checker starts scanning it, 
and at the same time Lucene deletes it or renames it for committing.

In general, for server-side installations the recommendation is to exclude all 
directories where Lucene writes its indexes (e.g. the Solr installation dir) 
from virus checking. Of course this is an issue for desktop applications, but 
that's not a common Lucene use case.
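A common mitigation for transient "file is in use" errors on Windows is to retry the failing operation with a short backoff, on the theory that the scanner releases the file within milliseconds. The sketch below is a generic illustration of that idea, not code from Lucene or ModeShape; the class and method names are made up.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Generic retry-with-backoff for transient sharing violations on Windows
// (e.g. a virus scanner briefly holding a file Lucene wants to delete).
// Illustrative only: real engines must decide which IOExceptions are safe
// to retry and which indicate genuine corruption or permission problems.
public class RetryingDelete {
    static boolean deleteWithRetry(Path p, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            try {
                Files.deleteIfExists(p);
                return true;              // deleted (or already gone)
            } catch (IOException e) {     // possibly a sharing violation
                Thread.sleep(sleepMs);    // give the other process time
            }
        }
        return false;                     // still locked after all attempts
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("retry-demo", ".bin");
        System.out.println(deleteWithRetry(tmp, 5, 10));
    }
}
```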

[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725624#comment-15725624
 ] 

Michael McCandless commented on LUCENE-7583:


Hmm, this is interesting:

I temporarily forked {{BufferedOutputStream.java}} into oal.store as a
package private class, and changed {{OutputStreamIndexOutput}} to use
that version.  I then ran the benchmark again, and the times were the
same.

Then I removed {{synchronized}} from the 3 methods that have it now
(flush, and the two write methods), and the times improved quite a bit:

Sparse:
  * total: 61.591 sec

Dense:
  * total: 59.739 sec

Not quite as fast as the patch (to use {{RAMOutputStream}} to buffer writes) 
but close (~4.8% faster vs ~5.8%).
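The forked, lock-free buffer described above would look roughly like the sketch below. This is a minimal illustration of removing {{synchronized}} from the write/flush path, not the actual oal.store fork; it is safe only because Lucene writes an IndexOutput from a single thread.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// BufferedOutputStream-style buffer with the synchronized keyword removed
// from write/flush, as in the experiment above. No monitor is acquired on
// the per-byte path, so HotSpot has nothing to elide in the hot loop.
public final class UnsyncBufferedOutputStream extends OutputStream {
    private final OutputStream out;
    private final byte[] buf;
    private int count;

    UnsyncBufferedOutputStream(OutputStream out, int size) {
        this.out = out;
        this.buf = new byte[size];
    }

    @Override
    public void write(int b) throws IOException { // unsynchronized
        if (count >= buf.length) flushBuffer();
        buf[count++] = (byte) b;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        if (len >= buf.length) {   // large writes bypass the buffer
            flushBuffer();
            out.write(b, off, len);
            return;
        }
        if (len > buf.length - count) flushBuffer();
        System.arraycopy(b, off, buf, count, len);
        count += len;
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            out.write(buf, 0, count);
            count = 0;
        }
    }

    @Override
    public void flush() throws IOException { // also unsynchronized
        flushBuffer();
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        UnsyncBufferedOutputStream s = new UnsyncBufferedOutputStream(sink, 8);
        for (int i = 0; i < 100; i++) s.write(i);
        s.flush();
        System.out.println(sink.size());
    }
}
```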




[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725607#comment-15725607
 ] 

Michael McCandless commented on LUCENE-7583:


bq. Do we call flush at some places? I am sure you checked this, but maybe we 
missed some place.

That's a good idea (I hadn't checked for it), but the only place we call 
{{OutputStream.flush}} is in {{OutputStreamIndexOutput.close}}.





[jira] [Closed] (SOLR-9828) Very long young generation stop the world GC pause

2016-12-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-9828.
--
Resolution: Invalid

Support requests should happen on the mailing list or the IRC channel.  If 
discussion there determines that there really is a bug in Solr, then we can 
reopen the issue.

http://lucene.apache.org/solr/resources.html#mailing-lists
https://wiki.apache.org/solr/IRCChannels


> Very long young generation stop the world GC pause 
> ---
>
> Key: SOLR-9828
> URL: https://issues.apache.org/jira/browse/SOLR-9828
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2
> Environment: Linux Redhat 64bit
>Reporter: Forest Soup
>
> We are using oracle jdk8u92 64bit.
> The jvm memory related options:
> -Xms32768m 
> -Xmx32768m 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/mnt/solrdata1/log 
> -XX:+UseG1GC 
> -XX:+PerfDisableSharedMem 
> -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=8m 
> -XX:MaxGCPauseMillis=100 
> -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+AggressiveOpts 
> -XX:+AlwaysPreTouch 
> -XX:ConcGCThreads=16 
> -XX:ParallelGCThreads=18 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/mnt/solrdata1/log 
> -verbose:gc 
> -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails 
> -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution 
> -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/mnt/solrdata1/log/solr_gc.log
> It usually works fine. But recently we met very long stop the world young 
> generation GC pause. Some snippets of the gc log are as below:
> 2016-11-22T20:43:16.436+: 2942054.483: Total time for which application 
> threads were stopped: 0.0005510 seconds, Stopping threads took: 0.894 
> seconds
> 2016-11-22T20:43:16.463+: 2942054.509: Total time for which application 
> threads were stopped: 0.0029195 seconds, Stopping threads took: 0.804 
> seconds
> {Heap before GC invocations=2246 (full 0):
>  garbage-first heap   total 26673152K, used 4683965K [0x7f0c1000, 
> 0x7f0c108065c0, 0x7f141000)
>   region size 8192K, 162 young (1327104K), 17 survivors (139264K)
>  Metaspace   used 56487K, capacity 57092K, committed 58368K, reserved 
> 59392K
> 2016-11-22T20:43:16.555+: 2942054.602: [GC pause (G1 Evacuation Pause) 
> (young)
> Desired survivor size 88080384 bytes, new threshold 15 (max 15)
> - age   1:   28176280 bytes,   28176280 total
> - age   2:5632480 bytes,   33808760 total
> - age   3:9719072 bytes,   43527832 total
> - age   4:6219408 bytes,   49747240 total
> - age   5:4465544 bytes,   54212784 total
> - age   6:3417168 bytes,   57629952 total
> - age   7:5343072 bytes,   62973024 total
> - age   8:2784808 bytes,   65757832 total
> - age   9:6538056 bytes,   72295888 total
> - age  10:6368016 bytes,   78663904 total
> - age  11: 695216 bytes,   79359120 total
> , 97.2044320 secs]
>[Parallel Time: 19.8 ms, GC Workers: 18]
>   [GC Worker Start (ms): Min: 2942054602.1, Avg: 2942054604.6, Max: 
> 2942054612.7, Diff: 10.6]
>   [Ext Root Scanning (ms): Min: 0.0, Avg: 2.4, Max: 6.7, Diff: 6.7, Sum: 
> 43.5]
>   [Update RS (ms): Min: 0.0, Avg: 3.0, Max: 15.9, Diff: 15.9, Sum: 54.0]
>  [Processed Buffers: Min: 0, Avg: 10.7, Max: 39, Diff: 39, Sum: 192]
>   [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.6]
>   [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 
> 0.0]
>   [Object Copy (ms): Min: 0.1, Avg: 9.2, Max: 13.4, Diff: 13.3, Sum: 
> 165.9]
>   [Termination (ms): Min: 0.0, Avg: 2.5, Max: 2.7, Diff: 2.7, Sum: 44.1]
>  [Termination Attempts: Min: 1, Avg: 1.5, Max: 3, Diff: 2, Sum: 27]
>   [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 
> 0.6]
>   [GC Worker Total (ms): Min: 9.0, Avg: 17.1, Max: 19.7, Diff: 10.6, Sum: 
> 308.7]
>   [GC Worker End (ms): Min: 2942054621.8, Avg: 2942054621.8, Max: 
> 2942054621.8, Diff: 0.0]
>[Code Root Fixup: 0.1 ms]
>[Code Root Purge: 0.0 ms]
>[Clear CT: 0.2 ms]
>[Other: 97184.3 ms]
>   [Choose CSet: 0.0 ms]
>   [Ref Proc: 8.5 ms]
>   [Ref Enq: 0.2 ms]
>   [Redirty Cards: 0.2 ms]
>   [Humongous Register: 0.1 ms]
>   [Humongous Reclaim: 0.1 ms]
>   [Free CSet: 0.4 ms]
>[Eden: 1160.0M(1160.0M)->0.0B(1200.0M) Survivors: 136.0M->168.0M Heap: 
> 4574.2M(25.4G)->3450.8M(26.8G)]
> Heap after GC invocations=2247 (full 0):
>  garbage-first heap   total 28049408K, used 3533601K [0x7f0c1000, 
> 0x7f0c10806b00, 0x7f141000)
>   region size 8192K, 21 young (172032K), 21 survivors (172032K)
>  Metaspace   used 56487K, capacity 57092K, committed 58368K, reserved 
> 

[jira] [Commented] (SOLR-9828) Very long young generation stop the world GC pause

2016-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725578#comment-15725578
 ] 

Shawn Heisey commented on SOLR-9828:


What exactly do you want the Solr project to do about this?  Long garbage 
collections are a fact of life in a Java program with a large heap.  32GB is a 
large heap.  

You are not running with the GC settings that version 5.3.2 shipped with.  If 
you change the GC tuning, you're on your own.  Any problems that result are not 
an indication of a bug in Solr; at most they are a bug in the Java 
implementation you're running.  This problem should have been raised on the 
mailing list or IRC channel, not in this issue tracker, so I'm going to close 
this issue.  A bug in Solr is VERY unlikely, and this issue tracker is not the 
correct place for support.

Part of the GC tuning that you've done is to assign 34 threads to the garbage 
collector -- 16 for the concurrent collector, 18 for the parallel collector.  
How many CPU cores do you have in the machine (not counting hyperthreading)?  
Typical CPU counts for a modern server are somewhere between 4 and 16.  It's 
possible that all your CPU cores are involved in the garbage collection, so CPU 
usage of 100 percent would not be surprising.  Even with a CPU count of 32, the 
thread counts you've configured might cause this.
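As a quick sanity check when sizing GC thread counts, you can ask the JVM how many logical processors it sees (this includes hyperthreads); a minimal sketch, not part of Solr itself:

```java
public class CpuCount {
    public static void main(String[] args) {
        // Number of logical processors visible to the JVM (includes
        // hyperthreading). Combined GC thread counts well above this
        // number mean GC work will oversubscribe the CPUs.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors = " + cpus);
    }
}
```

Comparing this value against ConcGCThreads + ParallelGCThreads shows immediately whether the configured 34 GC threads can even run concurrently on the box.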

I have created a wiki page with some GC tuning parameters for Solr, both G1 and 
CMS:

https://wiki.apache.org/solr/ShawnHeisey

One thing you can do here that might help is to decrease your heap to 31GB.  
The effective amount of usable heap memory would actually probably go UP, 
because Java can then use compressed 32-bit object pointers (compressed oops).  
At a 32GB heap, Java must use 64-bit pointers, so each object reference takes 
twice as much memory.
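Whether a particular JVM instance actually ended up with compressed oops enabled can be checked at runtime; a minimal sketch using HotSpot's diagnostic MXBean (HotSpot-specific, not part of the standard Java API):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class OopsCheck {
    public static void main(String[] args) {
        // HotSpot-only diagnostic bean; reports the effective value of the
        // UseCompressedOops flag for this JVM (true below ~32GB max heap,
        // false at or above it on typical configurations).
        HotSpotDiagnosticMXBean bean = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("UseCompressedOops = "
                + bean.getVMOption("UseCompressedOops").getValue());
    }
}
```

Running this under `-Xmx31g` versus `-Xmx32g` should show the flag flipping, which is the effect described above.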

I'm not sure your heap needs to be that high at all.  The GC log above shows 
that the long collection reduced the heap from 4.5GB to 3.4GB.  If these 
numbers are typical after Solr has been running for a while, a heap size of 
8GB, maybe less, might be enough.  You won't know for sure without long-term 
experimentation.

The following page discusses a way to determine how much heap is needed using 
jconsole graphing:

https://wiki.apache.org/solr/SolrPerformanceProblems
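Besides watching jconsole, the same idea can be scripted: the used-heap figure right after a full collection approximates the live data set the heap actually needs to hold. A rough, indicative-only sketch (System.gc-style collection requests are hints, so treat the number as an estimate):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapSample {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Request a full GC so "used" approximates the live object set
        // rather than live objects plus uncollected garbage.
        mem.gc();
        long usedMb = mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
        System.out.println("live heap ~" + usedMb + " MB");
    }
}
```

Sampling this periodically under production load gives the long-term baseline mentioned above; a max heap a few times that baseline is usually a reasonable starting point.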

> Very long young generation stop the world GC pause 
> ---
>
> Key: SOLR-9828
> URL: https://issues.apache.org/jira/browse/SOLR-9828
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2
> Environment: Linux Redhat 64bit
>Reporter: Forest Soup
>
> We are using oracle jdk8u92 64bit.
> The jvm memory related options:
> -Xms32768m 
> -Xmx32768m 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/mnt/solrdata1/log 
> -XX:+UseG1GC 
> -XX:+PerfDisableSharedMem 
> -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=8m 
> -XX:MaxGCPauseMillis=100 
> -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+AggressiveOpts 
> -XX:+AlwaysPreTouch 
> -XX:ConcGCThreads=16 
> -XX:ParallelGCThreads=18 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/mnt/solrdata1/log 
> -verbose:gc 
> -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails 
> -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution 
> -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/mnt/solrdata1/log/solr_gc.log
> It usually works fine. But recently we met very long stop the world young 
> generation GC pause. Some snippets of the gc log are as below:
> 2016-11-22T20:43:16.436+: 2942054.483: Total time for which application 
> threads were stopped: 0.0005510 seconds, Stopping threads took: 0.894 
> seconds
> 2016-11-22T20:43:16.463+: 2942054.509: Total time for which application 
> threads were stopped: 0.0029195 seconds, Stopping threads took: 0.804 
> seconds
> {Heap before GC invocations=2246 (full 0):
>  garbage-first heap   total 26673152K, used 4683965K [0x7f0c1000, 
> 0x7f0c108065c0, 0x7f141000)
>   region size 8192K, 162 young (1327104K), 17 survivors (139264K)
>  Metaspace   used 56487K, capacity 57092K, committed 58368K, reserved 
> 59392K
> 2016-11-22T20:43:16.555+: 2942054.602: [GC pause (G1 Evacuation Pause) 
> (young)
> Desired survivor size 88080384 bytes, new threshold 15 (max 15)
> - age   1:   28176280 bytes,   28176280 total
> - age   2:5632480 bytes,   33808760 total
> - age   3:9719072 bytes,   43527832 total
> - age   4:6219408 bytes,   49747240 total
> - age   5:4465544 bytes,   54212784 total
> - age   6:3417168 bytes,   57629952 total
> - age   7:5343072 bytes,   62973024 total
> - age   8:2784808 bytes,   65757832 total
> - age   9:6538056 bytes,   72295888 total
> - age  10:6368016 bytes,   78663904 total
> - age  11: 695216 bytes,   79359120 total
> , 

[jira] [Updated] (LUCENE-7582) "Cannot commit index writer" in some cases on windows

2016-12-06 Thread Kevin Senechal (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Senechal updated LUCENE-7582:
---
Summary: "Cannot commit index writer" in some cases on windows  (was: 
NIOFSDirectory sometime doesn't work on windows)

> "Cannot commit index writer" in some cases on windows
> -
>
> Key: LUCENE-7582
> URL: https://issues.apache.org/jira/browse/LUCENE-7582
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3.1
> Environment: Windows 10, 32 bits JVM
>Reporter: Kevin Senechal
>
> Hi!
> I'm hitting an error using Lucene on Windows. I already posted a question on 
> the ModeShape forum (https://developer.jboss.org/thread/273070), and it looks 
> like NIOFSDirectory does not work well on Windows, as described in the Java 
> documentation of that class.
> {quote}NOTE: NIOFSDirectory is not recommended on Windows because of a bug in 
> how FileChannel.read is implemented in Sun's JRE. Inside of the 
> implementation the position is apparently synchronized. See here for 
> details.{quote}
> After reading the linked Java issue 
> (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6265734), it seems 
> there is a workaround: use an AsynchronousFileChannel.
> Was not using AsynchronousFileChannel a deliberate choice, or would it be a 
> good fix?
> You'll find the complete stacktrace below:
> {code:java}
> Caused by: org.modeshape.jcr.index.lucene.LuceneIndexException: Cannot commit 
> index writer  
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:155) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.spi.index.provider.IndexChangeAdapter.completeWorkspaceChanges(IndexChangeAdapter.java:104)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.change.ChangeSetAdapter.notify(ChangeSetAdapter.java:157)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.spi.index.provider.IndexProvider$AtomicIndex.notify(IndexProvider.java:1493)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.bus.RepositoryChangeBus.notify(RepositoryChangeBus.java:190)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.document.WorkspaceCache.changed(WorkspaceCache.java:333)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.txn.SynchronizedTransactions.updateCache(SynchronizedTransactions.java:223)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.document.WritableSessionCache.save(WritableSessionCache.java:751)
>  ~[dsdk-launcher.jar:na]  
>   at org.modeshape.jcr.JcrSession.save(JcrSession.java:1171) 
> ~[dsdk-launcher.jar:na]  
>   ... 19 common frames omitted  
> Caused by: java.nio.file.FileSystemException: 
> C:\Users\Christopher\Infiltrea3CLOUDTEST8\christop...@dooapp.com\indexes\default\nodesByPath\_dc_Lucene50_0.doc:
>  The process cannot access the file because it is being used by another 
> process.  
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
>  ~[na:1.8.0_92]  
>   at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_92]  
>   at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_92]  
>   at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:393) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:281) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:226) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.sync(LockValidatingDirectoryWrapper.java:62)
>  ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4456) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2874)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2977) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2944) 
> ~[dsdk-launcher.jar:na]  
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:152) 
> ~[dsdk-launcher.jar:na] 
> {code}
> Thank you in advance for your help
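For context, the quoted stack trace fails inside Lucene's fsync path (IOUtils.fsync opening a FileChannel and forcing it to disk). A simplified, self-contained sketch of that pattern — a hypothetical temp file, not Lucene's actual implementation:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncSketch {
    // Open the file for writing and force its contents and metadata to
    // stable storage. On Windows, this open is the step that can fail
    // with FileSystemException ("being used by another process") when
    // another process holds the file without share permissions.
    static void fsync(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fsync-sketch", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3});
        fsync(tmp);
        System.out.println("synced " + tmp);
        Files.delete(tmp);
    }
}
```

This illustrates why the commit fails even though Lucene itself never deletes or moves the file: the re-open for fsync is what collides with the other process's lock.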



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 2354 - Failure!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2354/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 6119 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/32bit/jdk1.8.0_102/jre/bin/java -server 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=42646D525CBAE03 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.4.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=6.4.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/test/J2
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager -classpath 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/classes/test:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/test-framework/classes/java:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/classes/java:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/core/classes/java:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.4.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/home/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInst
allation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/home/jenkins/tools/java/32bit/jdk1.8.0_102/lib/tools.jar:/home/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.4.0.jar
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/test/temp/junit4-J2-20161206_135401_486.events
 
@/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/codecs/test/temp/junit4-J2-20161206_135401_486.suites
 -stdin
   [junit4] ERROR: JVM J2 ended with an exception: Quit event not received from 
the forked 

[jira] [Commented] (LUCENE-7582) NIOFSDirectory sometime doesn't work on windows

2016-12-06 Thread Kevin Senechal (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725549#comment-15725549
 ] 

Kevin Senechal commented on LUCENE-7582:


Thank you for your answers.

Considering [~mikemccand]'s answer, I'll rename this issue to point at the 
right problem. I'll also share a link to this issue with the ModeShape team, 
which implemented the lucene-modeshape connector, and keep you posted.

Thank you

> NIOFSDirectory sometime doesn't work on windows
> ---
>
> Key: LUCENE-7582
> URL: https://issues.apache.org/jira/browse/LUCENE-7582
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Affects Versions: 5.3.1
> Environment: Windows 10, 32 bits JVM
>Reporter: Kevin Senechal
>
> Hi!
> I'm hitting an error using Lucene on Windows. I already posted a question on 
> the ModeShape forum (https://developer.jboss.org/thread/273070), and it looks 
> like NIOFSDirectory does not work well on Windows, as described in the Java 
> documentation of that class.
> {quote}NOTE: NIOFSDirectory is not recommended on Windows because of a bug in 
> how FileChannel.read is implemented in Sun's JRE. Inside of the 
> implementation the position is apparently synchronized. See here for 
> details.{quote}
> After reading the linked Java issue 
> (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6265734), it seems 
> there is a workaround: use an AsynchronousFileChannel.
> Was not using AsynchronousFileChannel a deliberate choice, or would it be a 
> good fix?
> You'll find the complete stacktrace below:
> {code:java}
> Caused by: org.modeshape.jcr.index.lucene.LuceneIndexException: Cannot commit 
> index writer  
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:155) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.spi.index.provider.IndexChangeAdapter.completeWorkspaceChanges(IndexChangeAdapter.java:104)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.change.ChangeSetAdapter.notify(ChangeSetAdapter.java:157)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.spi.index.provider.IndexProvider$AtomicIndex.notify(IndexProvider.java:1493)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.bus.RepositoryChangeBus.notify(RepositoryChangeBus.java:190)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.document.WorkspaceCache.changed(WorkspaceCache.java:333)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.txn.SynchronizedTransactions.updateCache(SynchronizedTransactions.java:223)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.modeshape.jcr.cache.document.WritableSessionCache.save(WritableSessionCache.java:751)
>  ~[dsdk-launcher.jar:na]  
>   at org.modeshape.jcr.JcrSession.save(JcrSession.java:1171) 
> ~[dsdk-launcher.jar:na]  
>   ... 19 common frames omitted  
> Caused by: java.nio.file.FileSystemException: 
> C:\Users\Christopher\Infiltrea3CLOUDTEST8\christop...@dooapp.com\indexes\default\nodesByPath\_dc_Lucene50_0.doc:
>  The process cannot access the file because it is being used by another 
> process.  
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
> ~[na:1.8.0_92]  
>   at 
> sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
>  ~[na:1.8.0_92]  
>   at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_92]  
>   at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_92]  
>   at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:393) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:281) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:226) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.sync(LockValidatingDirectoryWrapper.java:62)
>  ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4456) 
> ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2874)
>  ~[dsdk-launcher.jar:na]  
>   at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2977) 
> ~[dsdk-launcher.jar:na]  
>   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2944) 
> ~[dsdk-launcher.jar:na]  
>   at org.modeshape.jcr.index.lucene.LuceneIndex.commit(LuceneIndex.java:152) 
> ~[dsdk-launcher.jar:na] 
> {code}
> Thank you in advance for your help





[jira] [Commented] (SOLR-9699) CoreStatus requests can fail if executed during a core reload

2016-12-06 Thread Daisy.Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725545#comment-15725545
 ] 

Daisy.Yuan commented on SOLR-9699:
--

I'm having the same problem. It would be better for it to return to normal 
automatically by itself; a manual reload is the second option.

> CoreStatus requests can fail if executed during a core reload
> -
>
> Key: SOLR-9699
> URL: https://issues.apache.org/jira/browse/SOLR-9699
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>
> CoreStatus requests delegate some of their response down to a core's 
> IndexWriter.  If the core is being reloaded, then there's a race between 
> these calls and the IndexWriter being closed, which can lead to the request 
> failing with an AlreadyClosedException.
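Until the race is fixed server-side, a client can simply retry the status request when it loses the race; a minimal hedged sketch with a generic supplier (not Solr code — the hypothetical `IllegalStateException` below stands in for the AlreadyClosedException surfaced to the caller):

```java
import java.util.function.Supplier;

public class RetrySketch {
    // Retry a request a few times when it fails with a transient
    // RuntimeException, e.g. while a core is mid-reload. Purely
    // illustrative; real code would also back off between attempts.
    static <T> T withRetry(Supplier<T> request, int attempts) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return request.get();
            } catch (RuntimeException e) {
                last = e; // reload may still be in progress; try again
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated endpoint that fails twice, then succeeds.
        String status = withRetry(() -> {
            if (calls[0]++ < 2) throw new IllegalStateException("already closed");
            return "OK";
        }, 5);
        System.out.println(status + " after " + calls[0] + " calls"); // OK after 3 calls
    }
}
```

This does not remove the underlying race between the status call and the IndexWriter close; it only hides it from the caller.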






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6271 - Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6271/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=1661 sum(shards)=1660 cloudClient=1660

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=1661 
sum(shards)=1660 cloudClient=1660
at 
__randomizedtesting.SeedInfo.seed([D9B744CAEBE2F8EA:51E37B10451E9512]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1323)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Updated] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It never return to normal unless restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisy.Yuan updated SOLR-9830:
-
Summary: Once IndexWriter is closed due to some RunTimeException like 
FileSystemException, It never return to normal unless restart the Solr JVM  
(was: Once IndexWriter is closed due to some RunTimeException like 
FileSystemException, It never update unless restart the Solr JVM )

> Once IndexWriter is closed due to some RunTimeException like 
> FileSystemException, It never return to normal unless restart the Solr JVM
> ---
>
> Key: SOLR-9830
> URL: https://issues.apache.org/jira/browse/SOLR-9830
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.2
> Environment: Red Hat 4.4.7-3,SolrCloud 
>Reporter: Daisy.Yuan
>
> 1. Collection coll_test has 9 shards, each with two replicas in different 
> Solr instances.
> 2. While updating documents to the collection using SolrJ, inject an 
> exhausted-handle fault into one Solr instance, e.g. solr1.
> 3. Updates to col_test_shard3_replica1 (the leader) fail due to a 
> FileSystemException, and the IndexWriter is closed.
> 4. After clearing the fault, col_test_shard3_replica1 (the leader) still 
> cannot accept document updates, and its numDocs is always less than the 
> standby replica's.
> 5. After the Solr instance restarts, it can update documents again and 
> numDocs is consistent between the two replicas.
> I think that in SolrCloud mode it should recover by itself, rather than 
> requiring a restart to restore the SolrCore update function.
>  2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit exception updating document | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
> updateDocument | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: all running merges have aborted | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: done finish merges | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
> numDocs=3798 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: done abort success=true | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 
> finishFullFlush success=false | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: 
> infos=_4g7(6.2.0):C59169/23684:delGen=4 _4gq(6.2.0):C67474/11636:delGen=1 
> _4gg(6.2.0):C64067/15664:delGen=2 _4gr(6.2.0):C13131 _4gs(6.2.0):C966 
> _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 _4gv(6.2.0):C2544 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | 

[jira] [Updated] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It never update unless restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisy.Yuan updated SOLR-9830:
-
Summary: Once IndexWriter is closed due to some RunTimeException like 
FileSystemException, It never update unless restart the Solr JVM   (was: Once 
IndexWriter is closed due to some RunTimeException like FileSystemException, It 
should restart the Solr JVM )

> Once IndexWriter is closed due to some RunTimeException like 
> FileSystemException, It never update unless restart the Solr JVM 
> --
>
> Key: SOLR-9830
> URL: https://issues.apache.org/jira/browse/SOLR-9830
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.2
> Environment: Red Hat 4.4.7-3,SolrCloud 
>Reporter: Daisy.Yuan
>
> 1. Collection coll_test has 9 shards, each with two replicas in different 
> Solr instances.
> 2. While updating documents to the collection using SolrJ, inject an 
> exhausted-handle fault into one Solr instance, e.g. solr1.
> 3. Updates to col_test_shard3_replica1 (the leader) fail due to a 
> FileSystemException, and the IndexWriter is closed.
> 4. After clearing the fault, col_test_shard3_replica1 (the leader) still 
> cannot accept document updates, and its numDocs is always less than the 
> standby replica's.
> 5. After the Solr instance restarts, it can update documents again and 
> numDocs is consistent between the two replicas.
> I think that in SolrCloud mode it should recover by itself, rather than 
> requiring a restart to restore the SolrCore update function.
>  2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit exception updating document | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
> updateDocument | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: all running merges have aborted | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: done finish merges | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
> numDocs=3798 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: done abort success=true | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 
> finishFullFlush success=false | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: 
> infos=_4g7(6.2.0):C59169/23684:delGen=4 _4gq(6.2.0):C67474/11636:delGen=1 
> _4gg(6.2.0):C64067/15664:delGen=2 _4gr(6.2.0):C13131 _4gs(6.2.0):C966 
> _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 _4gv(6.2.0):C2544 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> 

[jira] [Commented] (SOLR-9513) Introduce a generic authentication plugin which delegates all functionality to Hadoop authentication framework

2016-12-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725467#comment-15725467
 ] 

Ishan Chattopadhyaya commented on SOLR-9513:


Sorry, it took me a while to review the patch. I think it looks good. Here are 
a few observations/suggestions:

# In GenericHadoopAuthPlugin, Class.forName() is used for loading the client 
builder. However, we have traditionally used SolrResourceLoader.newInstance() 
for loading resources from the classpath (for reference, see CoreContainer's 
initializeAuthorizationPlugin() method).
# GenericHadoopAuthPlugin implements HttpClientBuilderPlugin, and hence 
requires a specified client builder factory for internode communication. This 
is fine in many cases; however, it removes the possibility of using the 
internal PKIAuthentication for internode communication. Consider a scenario 
where a cluster needs to be configured to use a hadoop-auth based 
authentication mechanism for user < - > solr communication, but simple PKI 
based authentication for solr < - > solr communication.
I think we should give users the option to use the default authentication for 
internal communication (PKI authentication) or to use a client builder. One 
way to do this is to make the client builder factory optional and fall back to 
PKI based authentication when such a parameter is not passed in. This might 
mean having two concrete classes: one that implements HttpClientBuilderPlugin 
and one that doesn't.
# The Hadoop based tests tend not to work well on Windows. Unless you've 
tested on Windows and found them to be working well, I suggest we disable them 
there (TestSolrCloudWithHadoopAuthPlugin, TestDelegationWithHadoopAuth). 
Please see SOLR-9460 for reference.
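For point #1, the two loading styles can be sketched with plain JDK reflection. This is only an illustration: Solr's SolrResourceLoader is not reproduced here, so a stock context ClassLoader stands in for it, and the helper and class names are hypothetical.

```java
import java.util.List;

// Hedged sketch of the "load by name, then cast to the expected interface"
// pattern discussed in point #1. A plain context ClassLoader stands in for
// SolrResourceLoader, so this illustrates the shape, not Solr's actual API.
public class PluginLoader {

    /** Loads className via the context ClassLoader and casts it to the expected type. */
    static <T> T newInstance(String className, Class<T> expectedType) throws Exception {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        Class<?> clazz = Class.forName(className, true, loader);
        return expectedType.cast(clazz.getDeclaredConstructor().newInstance());
    }

    public static void main(String[] args) throws Exception {
        // java.util.ArrayList stands in for a hypothetical client builder class.
        List<?> instance = newInstance("java.util.ArrayList", List.class);
        System.out.println(instance.getClass().getName());
    }
}
```

Whichever loader is used, the important part is resolving the class through the same classpath that loads the rest of the plugin's resources.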

> Introduce a generic authentication plugin which delegates all functionality 
> to Hadoop authentication framework
> --
>
> Key: SOLR-9513
> URL: https://issues.apache.org/jira/browse/SOLR-9513
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>
> Currently Solr kerberos authentication plugin delegates the core logic to 
> Hadoop authentication framework. But the configuration parameters required by 
> the Hadoop authentication framework are hardcoded in the plugin code itself. 
> https://github.com/apache/lucene-solr/blob/5b770b56d012279d334f41e4ef7fe652480fd3cf/solr/core/src/java/org/apache/solr/security/KerberosPlugin.java#L119
> The problem with this approach is that we need to make code changes in Solr 
> to expose new capabilities added in Hadoop authentication framework. e.g. 
> HADOOP-12082
> We should implement a generic Solr authentication plugin which will accept 
> configuration parameters via security.json (in Zookeeper) and delegate them 
> to Hadoop authentication framework. This will allow to utilize new features 
> in Hadoop without code changes in Solr.
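The proposal above might look roughly like the following security.json. This is a hedged sketch only: the plugin class name and the pass-through parameter block (`authConfigs` and the keys inside it) are hypothetical, since the generic plugin described here is exactly what the issue proposes to build.

```json
{
  "authentication": {
    "class": "org.apache.solr.security.GenericHadoopAuthPlugin",
    "authConfigs": {
      "type": "kerberos",
      "kerberos.principal": "HTTP/host@EXAMPLE.COM",
      "kerberos.keytab": "/etc/security/solr.keytab"
    }
  }
}
```

The point of the design is that everything under the pass-through block is handed to the Hadoop authentication framework unmodified, so new Hadoop-side options need no Solr code changes.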



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5043) hostname lookup in SystemInfoHandler should be refactored so it's possible to not block core (re)load for long periods on misconfigured systems

2016-12-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725453#comment-15725453
 ] 

Robert Krüger commented on SOLR-5043:
-

Is anything keeping you from pushing this into one of the next updates? It is 
still a big issue for us with no known workaround.

> hostname lookup in SystemInfoHandler should be refactored so it's possible to 
> not block core (re)load for long periods on misconfigured systems
> 
>
> Key: SOLR-5043
> URL: https://issues.apache.org/jira/browse/SOLR-5043
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-5043-lazy.patch, SOLR-5043.patch, SOLR-5043.patch
>
>
> SystemInfoHandler currently looks up the hostname of the machine in its init, 
> and caches it for its lifecycle -- there is a comment to the effect that the 
> reason for this is that on some machines (notably ones with wacky DNS 
> settings) looking up the hostname can take a very long time in some JVMs...
> {noformat}
>   // on some platforms, resolving canonical hostname can cause the thread
>   // to block for several seconds if nameservices aren't available
>   // so resolve this once per handler instance 
>   //(ie: not static, so core reload will refresh)
> {noformat}
> But as we move forward with a lot more multi-core, solr-cloud, dynamically 
> updated instances, even paying this cost per core-reload is expensive.
> We should refactor this so that SystemInfoHandler instances init 
> immediately, with some kind of lazy loading of the hostname info in a 
> background thread (especially since the only real point of having that info 
> here is for UI use, so you can keep track of which machine you are looking at).
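The lazy-loading idea in the description above can be sketched in plain Java. This is an illustrative stand-in, not Solr's actual SystemInfoHandler code: the class and method names are hypothetical, and the handler becomes usable immediately with a placeholder while a daemon thread does the potentially slow DNS lookup.

```java
import java.net.InetAddress;
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch of lazy hostname resolution: construction returns at once
// with a placeholder, and a daemon background thread fills in the canonical
// hostname, so a slow DNS lookup never blocks core (re)load.
public class LazyHostname {
    private final AtomicReference<String> hostname = new AtomicReference<>("(resolving)");
    private final Thread resolver;

    public LazyHostname() {
        resolver = new Thread(() -> {
            try {
                hostname.set(InetAddress.getLocalHost().getCanonicalHostName());
            } catch (Exception e) {
                hostname.set("(unknown)"); // a DNS failure must not break the handler
            }
        });
        resolver.setDaemon(true); // never keep the JVM alive just for a lookup
        resolver.start();
    }

    /** Non-blocking: returns the placeholder until resolution completes. */
    public String get() {
        return hostname.get();
    }

    /** Blocking variant, mainly useful for tests. */
    public String await() throws InterruptedException {
        resolver.join();
        return hostname.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LazyHostname h = new LazyHostname();
        System.out.println("resolved: " + h.await());
    }
}
```

Because the value is only used by the UI, callers can tolerate briefly seeing the placeholder; that tolerance is what makes the background-thread approach safe here.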



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It should restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisy.Yuan updated SOLR-9830:
-
Description: 
1. Collection coll_test has 9 shards, each with two replicas on different Solr 
instances.
2. While updating documents to the collection with SolrJ, inject a 
handle-exhaustion fault into one Solr instance, e.g. solr1.
3. Updates to col_test_shard3_replica1 (the leader) fail with 
FileSystemException, and the IndexWriter is closed.
4. After the fault is cleared, col_test_shard3_replica1 (still the leader) can 
no longer be updated with documents, and its numDocs stays lower than the 
standby replica's.
5. After the Solr instance restarts, it can index documents again and numDocs 
is consistent between the two replicas.

I think that in this case, in SolrCloud mode, the core should recover by 
itself; a restart should not be required to restore the SolrCore's update 
function.

 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: now abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: done abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit exception updating document | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
updateDocument | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: all running merges have aborted | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback: done finish merges | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
[DW][http-nio-21101-exec-20]: abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
numDocs=3798 | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: now abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: done abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
[DW][http-nio-21101-exec-20]: done abort success=true | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 finishFullFlush 
success=false | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback: infos=_4g7(6.2.0):C59169/23684:delGen=4 
_4gq(6.2.0):C67474/11636:delGen=1 _4gg(6.2.0):C64067/15664:delGen=2 
_4gr(6.2.0):C13131 _4gs(6.2.0):C966 _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 
_4gv(6.2.0):C2544 | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[IW][commitScheduler-46-thread-1]: hit exception during NRT reader | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,967 | INFO  | http-nio-21101-exec-20 | 
[col_test_shard3_replica1]  webapp=/solr path=/update 
params={wt=javabin=2}{add=[55 (1552493084330164224), 245 
(1552493084330164225), 285 (1552493084331212800), 325 
(1552493084331212801), 445 (1552493084331212802), 465 
(1552493084331212803), 645 (1552493084331212804), 945 
(1552493084331212805), 1005 (1552493084331212806), 1195 
(1552493084331212807), ... (74 adds)]} 0 43 | 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:187)


at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2143)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:695)
at 

RE: Solr outer join / not join query

2016-12-06 Thread Akshay Vadher
With a little modification, it worked:

fq=-{!join from=aId to=id fromIndex=b}-statusId:1

Wrapping -statusId:1 in () did not work.

Thanks

Akshay



From: Mikhail Khludnev [mailto:m...@apache.org]
Sent: Tuesday, December 06, 2016 5:12 PM
To: dev@lucene.apache.org
Subject: Re: Solr outer join / not join query



q=*:* -{!join from=aId to=id fromIndex=b}(-statusId:1)



On Tue, Dec 6, 2016 at 2:04 PM, Akshay Vadher wrote:

I have posted a stackoverflow question too - 
http://stackoverflow.com/questions/40993515/solr-outer-join-not-join-query.



I may be asking too much, but I want to do a left outer join between two cores 
and get data from A only where B does not have related data.

The following is exactly my equivalent SQL query (for simplicity I have removed 
other conditions):

1. SELECT A.* FROM A AS A
   WHERE A.ID NOT IN (SELECT B.A_ID FROM B AS B WHERE B.STATUS_ID != 1)

I understand that a Solr join is actually a subquery; I need data only from A.

It would be very easy if the NOT were not in the subquery's WHERE condition.

For example:

2. SELECT A.* FROM A AS A
   WHERE A.ID IN (SELECT B.A_ID FROM B AS B WHERE B.STATUS_ID != 1)

I can have q={!join from=aId to=id fromIndex=b}(-statusId:1).

How can I negate here, i.e., write the Solr query for 1?
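For reference, the negated cross-core join discussed in this thread can be assembled as a string like this. The field and core names (aId, id, b, statusId) come from the question and are assumptions about the actual schema; the helper itself is hypothetical.

```java
// Hedged helper that builds the negated cross-core join filter from this
// thread. The leading '-' negates the whole join, and the inner query is
// itself negated (-statusId:1), mirroring the SQL
// "NOT IN (SELECT ... WHERE STATUS_ID != 1)" form.
public class JoinFilterBuilder {

    static String negatedJoin(String from, String to, String fromIndex, String innerQuery) {
        return "-{!join from=" + from + " to=" + to + " fromIndex=" + fromIndex + "}" + innerQuery;
    }

    public static void main(String[] args) {
        // Matches the working filter reported later in this thread.
        System.out.println(negatedJoin("aId", "id", "b", "-statusId:1"));
        // -> -{!join from=aId to=id fromIndex=b}-statusId:1
    }
}
```

Used as an fq parameter, this keeps the main query (e.g. q=*:*) scoring-only while the join exclusion is applied as a filter.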





Regards,

Akshay Vadher

Senior Software Engineer

Synoverge Technologies Pvt. Ltd.

akshay.vad...@synoverge.com | +91 96620 59401

“You know, sometimes all you need is twenty seconds of insane courage. Just 
literally twenty seconds of just embarrassing bravery. And I promise you, 
something great will come of it.”
― Benjamin Mee, We Bought a Zoo











___

CONFIDENTIALITY NOTICE:

The contents of this email message and any attachments are intended solely for 
the addressee(s) and may contain confidential and/or privileged information and 
may be legally protected from disclosure. If you are not the intended recipient 
of this message, or if this message has been addressed to you in error, please 
immediately alert the sender by reply email and then delete this message and 
any attachments. If you are not the intended recipient, you are hereby notified 
that any use, dissemination, copying, or storage of this message or its 
attachments is strictly prohibited




--

Sincerely yours
Mikhail Khludnev














[jira] [Updated] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It should restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisy.Yuan updated SOLR-9830:
-
Environment: Red Hat 4.4.7-3,SolrCloud   (was: Linux version 
2.6.32-358.el6.x86_64,SolrCloud )

> Once IndexWriter is closed due to some RunTimeException like 
> FileSystemException, It should restart the Solr JVM 
> -
>
> Key: SOLR-9830
> URL: https://issues.apache.org/jira/browse/SOLR-9830
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.2
> Environment: Red Hat 4.4.7-3,SolrCloud 
>Reporter: Daisy.Yuan
>
> 1. Collection coll_test has 9 shards, each with two replicas on different 
> Solr instances.
> 2. While updating documents to the collection with SolrJ, inject a 
> handle-exhaustion fault into one Solr instance, e.g. solr1.
> 3. Updates to col_test_shard3_replica1 (the leader) fail with 
> FileSystemException, and the IndexWriter is closed.
> 4. After the fault is cleared, col_test_shard3_replica1 (still the leader) 
> can no longer be updated with documents, and its numDocs stays lower than 
> the standby replica's.
> 5. After the Solr instance restarts, it can index documents again and 
> numDocs is consistent between the two replicas.
>  2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [DWPT][http-nio-21101-exec-20]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit exception updating document | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
> updateDocument | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: all running merges have aborted | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: done finish merges | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
> numDocs=3798 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: now abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DWPT][commitScheduler-46-thread-1]: done abort | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [DW][http-nio-21101-exec-20]: done abort success=true | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 
> finishFullFlush success=false | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
> [IW][http-nio-21101-exec-20]: rollback: 
> infos=_4g7(6.2.0):C59169/23684:delGen=4 _4gq(6.2.0):C67474/11636:delGen=1 
> _4gg(6.2.0):C64067/15664:delGen=2 _4gr(6.2.0):C13131 _4gs(6.2.0):C966 
> _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 _4gv(6.2.0):C2544 | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
> [IW][commitScheduler-46-thread-1]: hit exception during NRT reader | 
> org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
> 2016-12-01 14:13:00,967 | INFO  | http-nio-21101-exec-20 | 
> [col_test_shard3_replica1]  webapp=/solr path=/update 
> params={wt=javabin=2}{add=[55 (1552493084330164224), 245 
> 

[jira] [Updated] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It should restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisy.Yuan updated SOLR-9830:
-
Description: 
1. Collection coll_test has 9 shards, each with two replicas on different Solr 
instances.
2. While updating documents to the collection with SolrJ, inject a 
handle-exhaustion fault into one Solr instance, e.g. solr1.
3. Updates to col_test_shard3_replica1 (the leader) fail with 
FileSystemException, and the IndexWriter is closed.
4. After the fault is cleared, col_test_shard3_replica1 (still the leader) can 
no longer be updated with documents, and its numDocs stays lower than the 
standby replica's.
5. After the Solr instance restarts, it can index documents again and numDocs 
is consistent between the two replicas.

 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: now abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: done abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit exception updating document | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
updateDocument | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: all running merges have aborted | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback: done finish merges | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | 
[DW][http-nio-21101-exec-20]: abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 
numDocs=3798 | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: now abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DWPT][commitScheduler-46-thread-1]: done abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
[DW][http-nio-21101-exec-20]: done abort success=true | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 finishFullFlush 
success=false | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback: infos=_4g7(6.2.0):C59169/23684:delGen=4 
_4gq(6.2.0):C67474/11636:delGen=1 _4gg(6.2.0):C64067/15664:delGen=2 
_4gr(6.2.0):C13131 _4gs(6.2.0):C966 _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 
_4gv(6.2.0):C2544 | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | 
[IW][commitScheduler-46-thread-1]: hit exception during NRT reader | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,967 | INFO  | http-nio-21101-exec-20 | 
[col_test_shard3_replica1]  webapp=/solr path=/update 
params={wt=javabin=2}{add=[55 (1552493084330164224), 245 
(1552493084330164225), 285 (1552493084331212800), 325 
(1552493084331212801), 445 (1552493084331212802), 465 
(1552493084331212803), 645 (1552493084331212804), 945 
(1552493084331212805), 1005 (1552493084331212806), 1195 
(1552493084331212807), ... (74 adds)]} 0 43 | 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:187)


at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2143)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:695)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:471)
at 

[jira] [Issue Comment Deleted] (SOLR-4957) Audit format/plugin/markup problems in solr ref guide related to Confluence 5.x upgrade

2016-12-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-4957:
-
Comment: was deleted

(was: http://www.allingerie.com - Sexy lingerie black red White Lace open 
crotch bra G string sleepwear women underbust nightwear teddy lingerie set 
S-XXL Visit the image link )

> Audit format/plugin/markup problems in solr ref guide related to Confluence 
> 5.x upgrade
> ---
>
> Key: SOLR-4957
> URL: https://issues.apache.org/jira/browse/SOLR-4957
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> The Solr Ref guide donated by lucidworks is now live on the ASF's CWIKI 
> instance of Confluence -- but the CWIKI is in the process of being upgraded 
> to confluence 5.x (INFRA-6406)
> We need to audit the ref guide for markup/plugin/formatting problems that 
> need to be fixed, but we should avoid making any major changes to try and 
> address any problems like this until the Confluence 5.x upgrade is completed, 
> since that process will involve the pages being "converted" to the newer wiki 
> syntax at least twice, and may change the way some plugins work.
> We'll use this issue as a place for people to track any formatting/plugin 
> problems they see when browsing the wiki -- please include the URL of the 
> specific page(s) where problems are noticed, using relative anchors into 
> individual page sections if possible, and a description of the problem seen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9830) Once IndexWriter is closed due to some RunTimeException like FileSystemException, It should restart the Solr JVM

2016-12-06 Thread Daisy.Yuan (JIRA)
Daisy.Yuan created SOLR-9830:


 Summary: Once IndexWriter is closed due to some RunTimeException 
like FileSystemException, It should restart the Solr JVM 
 Key: SOLR-9830
 URL: https://issues.apache.org/jira/browse/SOLR-9830
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: update
Affects Versions: 6.2
 Environment: Linux version 2.6.32-358.el6.x86_64,SolrCloud 
Reporter: Daisy.Yuan



1. Collection test has 9 shards, each with two replicas on different Solr 
instances.
2. While updating documents to the collection with SolrJ, inject a 
handle-exhaustion fault into one Solr instance, e.g. solr1.
3. The update is interrupted by the fault.
 2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: now abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[DWPT][http-nio-21101-exec-20]: done abort | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,932 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit exception updating document | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: hit tragic FileSystemException inside 
updateDocument | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: rollback | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,933 | INFO  | http-nio-21101-exec-20 | 
[IW][http-nio-21101-exec-20]: all running merges have aborted | 
org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | [IW][http-nio-21101-exec-20]: rollback: done finish merges | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,934 | INFO  | http-nio-21101-exec-20 | [DW][http-nio-21101-exec-20]: abort | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,939 | INFO  | commitScheduler-46-thread-1 | [DWPT][commitScheduler-46-thread-1]: flush postings as segment _4h9 numDocs=3798 | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | [DWPT][commitScheduler-46-thread-1]: now abort | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | [DWPT][commitScheduler-46-thread-1]: done abort | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | [DW][http-nio-21101-exec-20]: done abort success=true | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | [DW][commitScheduler-46-thread-1]: commitScheduler-46-thread-1 finishFullFlush success=false | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | http-nio-21101-exec-20 | [IW][http-nio-21101-exec-20]: rollback: infos=_4g7(6.2.0):C59169/23684:delGen=4 _4gq(6.2.0):C67474/11636:delGen=1 _4gg(6.2.0):C64067/15664:delGen=2 _4gr(6.2.0):C13131 _4gs(6.2.0):C966 _4gt(6.2.0):C4543 _4gu(6.2.0):C6960 _4gv(6.2.0):C2544 | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,940 | INFO  | commitScheduler-46-thread-1 | [IW][commitScheduler-46-thread-1]: hit exception during NRT reader | org.apache.solr.update.LoggingInfoStream.message(LoggingInfoStream.java:34)
2016-12-01 14:13:00,967 | INFO  | http-nio-21101-exec-20 | [col_test_shard3_replica1]  webapp=/solr path=/update params={wt=javabin=2}{add=[55 (1552493084330164224), 245 (1552493084330164225), 285 (1552493084331212800), 325 (1552493084331212801), 445 (1552493084331212802), 465 (1552493084331212803), 645 (1552493084331212804), 945 (1552493084331212805), 1005 (1552493084331212806), 1195 (1552493084331212807), ... (74 adds)]} 0 43 | org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:187)
2016-12-01 14:13:00,969 | ERROR | http-nio-21101-exec-20 | org.apache.solr.common.SolrException: ERROR adding document SolrInputDocument(fields: [id=10015, name=张三1001, features=test1001, price=1011.01, _version_=1552493084337504263])
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-7583) Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD leaf block?

2016-12-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725419#comment-15725419
 ] 

Uwe Schindler commented on LUCENE-7583:
---

Do we call flush in some places? I'm sure you checked this, but maybe we 
missed a spot.

> Can we improve OutputStreamIndexOutput's byte buffering when writing each BKD 
> leaf block?
> -
>
> Key: LUCENE-7583
> URL: https://issues.apache.org/jira/browse/LUCENE-7583
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7583-hardcode-writeVInt.patch, LUCENE-7583.patch
>
>
> When BKD writes its leaf blocks, it's essentially a lot of tiny writes (vint, 
> int, short, etc.), and I've seen deep thread stacks through our IndexOutput 
> impl ({{OutputStreamIndexOutput}}) when pulling hot threads while BKD is 
> writing.
> So I tried a small change, to have BKDWriter do its own buffering, by first 
> writing each leaf block into a {{RAMOutputStream}}, and then dumping that (in 
> 1 KB byte[] chunks) to the actual IndexOutput.
> This gives a non-trivial reduction (~6%) in the total time for BKD writing + 
> merging time on the 20M NYC taxis nightly benchmark (2 times each):
> Trunk, sparse:
>   - total: 64.691 sec
>   - total: 64.702 sec
> Patch, sparse:
>   - total: 60.820 sec
>   - total: 60.965 sec
> Trunk dense:
>   - total: 62.730 sec
>   - total: 62.383 sec
> Patch dense:
>   - total: 58.805 sec
>   - total: 58.742 sec
> The results seem to be consistent and reproducible.  I'm using Java 1.8.0_101 
> on a fast SSD on Ubuntu 16.04.
> It's sort of weird and annoying that this helps so much, because 
> {{OutputStreamIndexOutput}} already uses java's {{BufferedOutputStream}} 
> (default 8 KB buffer) to buffer writes.
> [~thetaphi] suggested maybe hotspot is failing to inline/optimize the 
> {{writeByte}} / the call stack just has too many layers.
> We could commit this patch (it's trivial) but it'd be nice to understand and 
> fix why buffering writes is somehow costly so any other Lucene codec 
> components that write lots of little things can be improved too.
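The buffering trick described above can be sketched roughly as follows. This is an illustrative approximation, not the actual LUCENE-7583 patch: the class name `ChunkedLeafBlockWriter` is hypothetical, and plain `java.io` streams stand in for Lucene's `RAMOutputStream` and `IndexOutput`. Only `writeVInt` follows a real Lucene convention (7 data bits per byte, high bit as a continuation flag).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Sketch of the approach described above: accumulate many tiny writes
 * (vints, ints, shorts) in an in-memory scratch buffer, then copy the
 * whole leaf block to the real output in 1 KB chunks.
 * Hypothetical class; not part of Lucene.
 */
public class ChunkedLeafBlockWriter {
    private static final int CHUNK_SIZE = 1024;
    private final ByteArrayOutputStream scratch = new ByteArrayOutputStream();

    /** Variable-length int: 7 bits per byte, high bit set on all but the last byte. */
    public void writeVInt(int value) {
        while ((value & ~0x7F) != 0) {
            scratch.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        scratch.write(value);
    }

    /** Dump the buffered block to the underlying output in CHUNK_SIZE pieces. */
    public void flushTo(OutputStream out) throws IOException {
        byte[] block = scratch.toByteArray();
        for (int off = 0; off < block.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, block.length - off);
            out.write(block, off, len);
        }
        scratch.reset();  // ready for the next leaf block
    }
}
```

The point of the chunked copy is that the hot per-value path touches only a byte array, and the layered stream machinery is crossed once per kilobyte rather than once per byte.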






[jira] [Commented] (SOLR-4957) Audit format/plugin/markup problems in solr ref guide related to Confluence 5.x upgrade

2016-12-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725348#comment-15725348
 ] 

leve şipal commented on SOLR-4957:
--


> Audit format/plugin/markup problems in solr ref guide related to Confluence 
> 5.x upgrade
> ---
>
> Key: SOLR-4957
> URL: https://issues.apache.org/jira/browse/SOLR-4957
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> The Solr Ref guide donated by lucidworks is now live on the ASF's CWIKI 
> instance of Confluence -- but the CWIKI is in the process of being upgraded 
> to confluence 5.x (INFRA-6406)
> We need to audit the ref guide for markup/plugin/formatting problems that 
> need to be fixed, but we should avoid making any major changes to try and 
> address any problems like this until the Confluence 5.x upgrade is completed, 
> since that process will involve the pages being "converted" to the newer wiki 
> syntax at least twice, and may change the way some plugins work.
> We'll use this issue as a place for people to track any formatting/plugin 
> problems they see when browsing the wiki -- please include the URL of the 
> specific page(s) where problems are noticed, using relative anchors into 
> individual page sections if possible, and a description of the problem seen.






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 2353 - Still Unstable!

2016-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2353/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<0>
at __randomizedtesting.SeedInfo.seed([9C90E5718DAF6E7B:D4E591C58B9C41EE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11764 lines...]
   [junit4] Suite: 
