[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1069 - Still Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1069/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testBuffering

Error Message:
expected:<6> but was:<10>

Stack Trace:
java.lang.AssertionError: expected:<6> but was:<10>
	at __randomizedtesting.SeedInfo.seed([CDD53B8C5A210D86:D03B95A7FB78ACAD]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11507 lines...]
   [junit4] Suite: org.apache.solr.search.TestRecovery
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1208 - Still Unstable

2017-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1208/

8 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard2
	at __randomizedtesting.SeedInfo.seed([448FF0611105B7A1:967FBC824FAA1193]:0)
	at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
	at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:306)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Solr Ref Guide, Highlighting

2017-01-09 Thread David Smiley
Unfortunately, the Solr Ref Guide is currently editable only by committers.  In the
near future it's going to move to a different platform that will allow you
to contribute via pull request; that will be very nice.  In the meantime,
your feedback is much appreciated.

~ David

On Mon, Jan 9, 2017 at 6:21 PM Timothy Rodriguez (BLOOMBERG/ 120 PARK) <
trodrigue...@bloomberg.net> wrote:

> +1, I'll be happy to offer assistance with edits or some of the sections
> if needed. We're glad to see this out there.
>
> From: dev@lucene.apache.org At: 01/09/17 18:03:32
> To: Timothy Rodriguez (BLOOMBERG/ 120 PARK), dev@lucene.apache.org
> Subject: Re:Solr Ref Guide, Highlighting
>
> Solr 6.4 is the first release to introduce the UnifiedHighlighter as a new
> highlighter option.  I want to get it documented reasonably well in the
> Solr Ref Guide.  The Highlighters section is here: Highlighting (let's
> see if this formatted email expands to the URL when it lands on the list)
>
> Unless anyone objects, I'd like to rename the "Standard Highlighter" to
> "Original Highlighter" in the ref guide.  The original Highlighter has no
> actual name qualification, as it was indeed Lucene's original Highlighter;
> "Standard Highlighter" as a name exists only within the Solr Reference
> Guide.  In our code it's used by "DefaultSolrHighlighter", which is really
> a combo of the original Highlighter and FastVectorHighlighter.  DSH ought
> to be refactored, perhaps... but I digress.
>
> For those that haven't read CHANGES.txt yet, there is a new "hl.method"
> parameter which can be used to pick your highlighter.  Here I purposely
> chose a possible value of "original" to choose the original Highlighter
> (not "standard").
>
> I haven't started documenting yet but I plan to refactor the highlighter
> docs a bit.  The intro page will better discuss the highlighter options and
> also how to configure both term vectors and offsets in postings.  Then the
> highlighter implementation specific pages will document the parameters and
> any configuration specific to them.  I'm a bit skeptical we need a page
> dedicated to the PostingsHighlighter, as the UnifiedHighlighter is a
> derivative of it, supporting all its options and more.  In that sense,
> maybe people are fine with it only being in the ref guide as a paragraph or
> two on the UH page describing how to activate it.  I suppose it's
> effectively deprecated.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
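To make the `hl.method` parameter discussed above concrete, here is a small sketch of building a Solr select request that picks a highlighter implementation. The collection name ("techproducts"), field names, and host are illustrative assumptions, not taken from the thread; only the `hl.method` parameter and its "original"/"unified" values come from the discussion.

```java
// Sketch only: "techproducts" and the "name" field are assumed example values.
// hl.method chooses the highlighter implementation per request (Solr 6.4+).
public class HlMethodExample {
    static String buildUrl(String method) {
        return "http://localhost:8983/solr/techproducts/select"
                + "?q=apple&hl=true&hl.fl=name"
                + "&hl.method=" + method;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("original")); // the original (formerly "Standard") Highlighter
        System.out.println(buildUrl("unified"));  // the new UnifiedHighlighter
    }
}
```

The same switch can be flipped per request without changing solrconfig.xml, which is what makes it convenient for comparing highlighter output.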


[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-09 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813849#comment-15813849
 ] 

Koji Sekiguchi commented on SOLR-9918:
--

I think this is ready.

> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
>Assignee: Koji Sekiguchi
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so..
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="processor">skipexisting</str>
> {noformat}
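The two behaviors described in the issue can be sketched as a single predicate: pass the update down the chain unless it is a plain insert for an existing document, or an atomic update for a missing one. This is an illustrative sketch, not the SOLR-9918 patch; the document lookup is stubbed out and flag values are chosen for demonstration.

```java
import java.util.Set;

// Hypothetical sketch of the skip logic described in SOLR-9918.
// A real processor would look the id up via the update log / index;
// here the "index" is just a set of known ids.
public class SkipSketch {
    static final Set<String> index = Set.of("doc1"); // documents already indexed
    static final boolean skipInsertIfExists = true;  // illustrative flag values
    static final boolean skipUpdateIfMissing = true;

    static boolean docExists(String id) { return index.contains(id); }

    // true = pass the command down the processor chain, false = quietly skip it
    static boolean shouldProcess(String id, boolean isAtomicUpdate) {
        if (!isAtomicUpdate && skipInsertIfExists && docExists(id)) {
            return false; // "insert if not exists": don't churn the index
        }
        if (isAtomicUpdate && skipUpdateIfMissing && !docExists(id)) {
            return false; // atomic update to a missing doc: don't create a partial doc
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(shouldProcess("doc1", false)); // insert skipped: doc exists
        System.out.println(shouldProcess("doc2", false)); // insert proceeds: new doc
        System.out.println(shouldProcess("doc2", true));  // update skipped: doc missing
    }
}
```

Skipping (rather than throwing) is the key design choice: it keeps a whole batch of adds from failing because a few documents were already present.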



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Shalin Shekhar Mangar
Congratulations and welcome Đạt!

On Mon, Jan 9, 2017 at 9:27 PM, Joel Bernstein  wrote:
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
> (instructions here).
>
> The ASF dev page also has lots of useful links.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/



-- 
Regards,
Shalin Shekhar Mangar.




[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3768 - Still Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3768/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
	at __randomizedtesting.SeedInfo.seed([66CC40C8C86C653B:EE987F12669008C3]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
	at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
	at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
	at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813726#comment-15813726
 ] 

Cao Manh Dat commented on SOLR-9941:


Here is the reason why Hoss's test fails:
- When the test adds updates, they are written to tlog-
- After the core is restarted, it replays the updates from tlog-
- At the same time, the test adds another DEQ; because the ulog is in buffering 
mode, DUP writes this DEQ to another file (tlog-0001) and does not apply 
it to the IW.
- After LogReplay finishes, we call a commit, which writes a commit update at 
the end of tlog-0001.

So this DEQ update will never be replayed (because it belongs to tlog-0001, not 
tlog- ), and it has never been applied to the IW, so that DEQ update is lost.
Even if we restart the core, hoping that it will replay tlog-0001, it never will, 
because we wrote a commit at the end of tlog-0001.

So I think this failure belongs to another issue.

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.






[jira] [Updated] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-09 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9950:
-
Attachment: policeman-jenkins-master-windows-6347-failed-tests.log.gz

In case it's relevant, attaching the output from the two failing tests from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347/consoleText].

> TestRecovery.testBuffering() failure
> 
>
> Key: SOLR-9950
> URL: https://issues.apache.org/jira/browse/SOLR-9950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: policeman-jenkins-master-windows-6347-failed-tests.log.gz
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
> reproduces 100% for me on Linux:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
> was:<10>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
>  val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
> docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
> timezone=America/Rainy_River
>[junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
> (64-bit)/cpus=3,threads=1,free=213046664,total=411041792
> {noformat}
> Another test failure that on the same run doesn't reproduce for me, but these 
> two tests were running on the same JVM, and so may have somehow influenced 
> each other:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
> -Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
> -Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
> between replicas
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
>[junit4]>  at 
> org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
> {noformat}






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18739 - Still Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18739/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
	at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
	at org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:195)
	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:383)
	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
	at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
	at org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
	at java.lang.Thread.run(Thread.java:745)

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
	at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
	at org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:195)
	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:383)
	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
	at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
	at org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:279)
	at java.lang.Thread.run(Thread.java:745)

	at __randomizedtesting.SeedInfo.seed([72735B1736BB6944]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertNull(Assert.java:551)
	at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11056 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_72735B1736BB6944-001/init-core-data-001
   [junit4]   2> 224631 INFO  

[jira] [Comment Edited] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813195#comment-15813195
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9941 at 1/10/17 2:37 AM:
-

{quote}
One question i have is: if the only code paths that call 
recoverFromLog(boolean) are "startup" paths that pass true why do we need the 
optional argument? why not just refactor the method to always use the new logic?
{quote}
My thought there was that if someone later wanted to reuse the recoverFromLog() 
method on a live node, rather than during the startup of a node/core (for 
whatever reason that I cannot foresee right now), they would not end up 
clearing their deletes lists in the process. Though, given the current use of 
the method, I am also open to eliminating that extra parameter if you 
suggest.
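A minimal sketch of the optional flag being discussed, assuming illustrative class, field, and parameter names rather than the actual Solr signatures:

```java
import java.util.ArrayList;
import java.util.List;

public class RecoverFromLogSketch {
    // stand-ins for the delete lists populated from RecentUpdates
    final List<String> deleteByQueries = new ArrayList<>();
    final List<String> oldDeletes = new ArrayList<>();

    // Hypothetical signature: the flag would be true on the startup paths,
    // and stay false if the method were ever reused on a live node.
    void recoverFromLog(boolean clearDeleteLists) {
        if (clearDeleteLists) {
            // don't pre-apply DBQs to every replayed add
            deleteByQueries.clear();
            oldDeletes.clear();
        }
        // ... replay uncommitted TransactionLog entries here ...
    }
}
```

If the startup paths are the only callers, the flag collapses to always-true, which is the refactoring being suggested.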


was (Author: ichattopadhyaya):
{quote}
One question i have is: if the only code paths that call 
recoverFromLog(boolean) are "startup" paths that pass true why do we need the 
optional argument? why not just refactor the method to always use the new logic?
{quote}
My thought there was that if someone later wanted to reuse the doLogReplay() 
method on a live node, rather than during the startup of a node/core (for 
whatever reason that I cannot foresee right now), they would not end up 
clearing their deletes lists in the process. Though, given the current use of 
the method, I am also open to eliminating that extra parameter if you suggest.

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called, a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low-level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 90 of 
> those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
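The "91 executions per DBQ" arithmetic can be checked with a small, hypothetical simulation (illustrative code only, not Solr internals; positive entries stand for adds, negatives for DBQs):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DbqReplaySketch {
    // Counts how often each DBQ runs if every replayed add re-applies all
    // "newer" DBQs (the pre-populated deleteByQueries list) and each DBQ is
    // also executed once during normal tlog replay.
    static Map<Long, Integer> dbqExecutionCounts(List<Long> tlog) {
        List<Long> dbqVersions = new ArrayList<>();   // like deleteByQueries
        Map<Long, Integer> execs = new HashMap<>();
        for (long v : tlog) {
            if (v < 0) { dbqVersions.add(-v); execs.put(-v, 0); }
        }
        for (long v : tlog) {
            if (v > 0) {
                // per-add: find every DBQ "newer" than the add, run them all
                for (long dbq : dbqVersions) {
                    if (dbq > v) execs.merge(dbq, 1, Integer::sum);
                }
            } else {
                execs.merge(-v, 1, Integer::sum); // normal replay of the DBQ
            }
        }
        return execs;
    }

    public static void main(String[] args) {
        List<Long> tlog = new ArrayList<>();
        for (long v = 1; v <= 90; v++) tlog.add(v);    // 90 addDocs
        for (long v = 91; v <= 95; v++) tlog.add(-v);  // 5 trailing DBQs
        System.out.println(dbqExecutionCounts(tlog).get(95L)); // prints 91
    }
}
```

Each of the five DBQs comes out at 91 executions: once per add plus once in normal replay.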



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9950) TestRecovery.testBuffering() failure

2017-01-09 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-9950:


 Summary: TestRecovery.testBuffering() failure
 Key: SOLR-9950
 URL: https://issues.apache.org/jira/browse/SOLR-9950
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347], 
reproduces 100% for me on Linux:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
-Dtests.method=testBuffering -Dtests.seed=416C60950450F681 -Dtests.slow=true 
-Dtests.locale=no -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.10s J1 | TestRecovery.testBuffering <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<6> but 
was:<10>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
   [junit4]>at 
org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:284)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{_version_=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 val_i=PostingsFormat(name=Direct), id=PostingsFormat(name=Direct)}, 
docValues:{}, maxPointsInLeafNode=1974, maxMBSortInHeap=7.099504359147245, 
sim=RandomSimilarity(queryNorm=false): {}, locale=no, 
timezone=America/Rainy_River
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_112 
(64-bit)/cpus=3,threads=1,free=213046664,total=411041792
{noformat}

Another test failure from the same run, which doesn't reproduce for me; but 
these two tests were running in the same JVM, and so may have somehow 
influenced each other:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RecoveryZkTest 
-Dtests.method=test -Dtests.seed=416C60950450F681 -Dtests.slow=true 
-Dtests.locale=da -Dtests.timezone=EAT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 12.2s J1 | RecoveryZkTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Mismatch in counts 
between replicas
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
   [junit4]>at 
org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
   [junit4]>at 
org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
{noformat}






[JENKINS] Lucene-Solr-Tests-master - Build # 1607 - Unstable

2017-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1607/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.morphlines.solr.SolrMorphlineZkAliasTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.morphlines.solr.SolrMorphlineZkAliasTest: 1) Thread[id=118, 
name=OverseerHdfsCoreFailoverThread-97256223478382597-127.0.0.1:59554_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.morphlines.solr.SolrMorphlineZkAliasTest: 
   1) Thread[id=118, 
name=OverseerHdfsCoreFailoverThread-97256223478382597-127.0.0.1:59554_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([8B07CAFEDDDEE0B1]:0)




Build Log:
[...truncated 23468 lines...]
   [junit4] Suite: org.apache.solr.morphlines.solr.SolrMorphlineZkAliasTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/contrib/solr-morphlines-core/test/J1/temp/solr.morphlines.solr.SolrMorphlineZkAliasTest_8B07CAFEDDDEE0B1-001/init-core-data-001
   [junit4]   2> 0INFO  
(SUITE-SolrMorphlineZkAliasTest-seed#[8B07CAFEDDDEE0B1]-worker) [] 
o.e.j.u.log Logging initialized @3009ms
   [junit4]   2> 22   INFO  
(SUITE-SolrMorphlineZkAliasTest-seed#[8B07CAFEDDDEE0B1]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 372  INFO  
(SUITE-SolrMorphlineZkAliasTest-seed#[8B07CAFEDDDEE0B1]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/contrib/solr-morphlines-core/test/J1/temp/solr.morphlines.solr.SolrMorphlineZkAliasTest_8B07CAFEDDDEE0B1-001/tempDir-001
   [junit4]   2> 380  INFO  
(SUITE-SolrMorphlineZkAliasTest-seed#[8B07CAFEDDDEE0B1]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 412  INFO  (Thread-1) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 412  INFO  (Thread-1) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 
environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 
environment:host.name=lucene1-us-west.apache.org
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 
environment:java.version=1.8.0_102
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 
environment:java.vendor=Oracle Corporation
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 
environment:java.home=/usr/local/asfpackages/java/jdk1.8.0_102/jre
   [junit4]   2> 436  INFO  (Thread-1) [] o.a.z.s.ZooKeeperServer Server 

Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Erick Erickson
Hey Dat:

When I got my invitation to become a committer my first reaction (it
was "live" at Lucene Revolution) was "This is a joke, right?". Of
course I said "yes", but I understand being surprised!

On Mon, Jan 9, 2017 at 7:52 PM, Đạt Cao Mạnh  wrote:
> Thanks to everyone! I still can not believe this is true.
>
> Brief intro:
> I started out with Lucene when I was a student at university, by writing a
> Vietnamese Analyzer for Lucene. But I didn't realize how great open
> source / Apache projects are.
>
> After college, I joined Viettel and worked on building a search engine for
> Vietnamese. Then I found a bug in Lucene (LUCENE-6558) and decided to
> contribute back to the community. By contributing that first patch to Lucene
> (LUCENE-6558), I realized how wonderful it is to work with open source and
> how nice the people at Lucene/Solr are. Especially Joel, who helped me a lot
> to complete SOLR-9252.
>
> After that experience I joined Lucidworks, and everything is great here. Since
> then I've worked on SolrCloud issues and have been helped a lot by Shalin.
>
> P/S : I still can not believe this is true.
>
> On Tue, Jan 10, 2017 at 7:30 AM Yonik Seeley  wrote:
>>
>> Congrats Dat!
>>
>> -Yonik
>>
>>
>> On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein 
>> wrote:
>> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
>> > PMC's invitation to become a committer.
>> >
>> > Dat, it's tradition that you introduce yourself with a brief bio.
>> >
>> > Your account has been added to the “lucene" LDAP group, so you
>> > now have commit privileges. Please test this by adding yourself to the
>> > committers section of the Who We Are page on the website:
>> >  (instructions here
>> > ).
>> >
>> > The ASF dev page also has lots of useful links:
>> > .
>> >
>> >
>> > Joel Bernstein
>> > http://joelsolr.blogspot.com/
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>




Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Đạt Cao Mạnh
Thanks to everyone! I still can not believe this is true.

Brief intro:
I started out with Lucene when I was a student at university, by writing a
Vietnamese Analyzer for Lucene. But I didn't realize how great open
source / Apache projects are.

After college, I joined Viettel and worked on building a search engine for
Vietnamese. Then I found a bug in Lucene (LUCENE-6558) and decided to
contribute back to the community. By contributing that first patch to Lucene
(LUCENE-6558), I realized how wonderful it is to work with open source and
how nice the people at Lucene/Solr are. Especially Joel, who helped me a lot
to complete SOLR-9252.

After that experience I joined Lucidworks, and everything is great here. Since
then I've worked on SolrCloud issues and have been helped a lot by Shalin.

P/S : I still can not believe this is true.

On Tue, Jan 10, 2017 at 7:30 AM Yonik Seeley  wrote:

> Congrats Dat!
>
> -Yonik
>
>
> On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein 
> wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the “lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6347 - Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6347/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Mismatch in counts between replicas

Stack Trace:
java.lang.AssertionError: Mismatch in counts between replicas
at 
__randomizedtesting.SeedInfo.seed([416C60950450F681:C9385F4FAAAC9B79]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.search.TestRecovery.testBuffering

Error Message:
expected:<6> but was:<10>

Stack Trace:
java.lang.AssertionError: expected:<6> but was:<10>
at 
__randomizedtesting.SeedInfo.seed([416C60950450F681:5C82CEBEA50957AA]:0)
at 

Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Koji Sekiguchi

Welcome and congrats Dat!

Koji

On 2017/01/10 0:57, Joel Bernstein wrote:

I'm pleased to announce that Cao Manh Dat has accepted the Lucene
PMC's invitation to become a committer.

Dat, it's tradition that you introduce yourself with a brief bio.

Your account has been added to the “lucene" LDAP group, so you
now have commit privileges. Please test this by adding yourself to the
committers section of the Who We Are page on the website:
> (instructions here
>).

The ASF dev page also has lots of useful links: .


Joel Bernstein
http://joelsolr.blogspot.com/







Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Mark Miller
Welcome!

On Mon, Jan 9, 2017 at 4:30 PM Yonik Seeley  wrote:

> Congrats Dat!
>
> -Yonik
>
>
> On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein 
> wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the “lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller


[jira] [Resolved] (SOLR-6566) Document query timeAllowed during term iterations

2017-01-09 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6566.

   Resolution: Fixed
Fix Version/s: 6.4

> Document query timeAllowed during term iterations
> -
>
> Key: SOLR-6566
> URL: https://issues.apache.org/jira/browse/SOLR-6566
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 6.4
>
>
> Need to document query timeout during TermsEnumeration (SOLR-5986).
> A query can now be made to time out during requests that involve 
> TermsEnumeration, as opposed to only doc collection, i.e. during search as 
> well as MLT handler usage.






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Yonik Seeley
Congrats Dat!

-Yonik


On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein  wrote:
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the “lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links:
> .
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/




[JENKINS] Lucene-Solr-Tests-6.x - Build # 666 - Still Unstable

2017-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/666/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([9BE488B3F0EA0049:22655E6CDC0004C3]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:821)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:14=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:814)
... 40 more




Build Log:
[...truncated 11418 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-09 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9644:
---
Fix Version/s: 6.4
   master (7.0)

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Comment Edited] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813309#comment-15813309
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9941 at 1/10/17 12:14 AM:
--

{quote}
Seems to me that updates arriving during the log replay (in TestRecovery) are 
being silently dropped.
{quote}
One possible explanation came to mind: During log replay, the state of the core 
is REPLAYING. Hence, perhaps, incoming updates are not applied (and dropped) 
until the recovery has finished and state is back to ACTIVE?
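A toy sketch of that hypothesis (illustrative names and state handling, not actual Solr code):

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayStateSketch {
    enum CoreState { REPLAYING, ACTIVE }

    CoreState state = CoreState.REPLAYING;
    final List<String> applied = new ArrayList<>();

    // Hypothetical: an incoming update is silently dropped unless the core
    // has finished recovery and is back to ACTIVE.
    boolean applyUpdate(String update) {
        if (state != CoreState.ACTIVE) {
            return false; // dropped while the log replay is still running
        }
        applied.add(update);
        return true;
    }
}
```

Under this model, any update sent while state is REPLAYING never reaches the index, which would explain the counts observed in TestRecovery.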


was (Author: ichattopadhyaya):
{quote}
Seems to me that updates arriving during the log replay (in TestRecovery) are 
being silently dropped.
{quote}
One possible explanation came to mind: During log replay, the state of the core 
is REPLAYING. Hence, perhaps, incoming updates are not applied until the 
recovery has finished and state is back to ACTIVE?

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, causing deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages:
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called, a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
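The 90-adds/5-DBQs arithmetic above can be sketched as a back-of-envelope model (hypothetical counting only, not Solr code; the class and method names are made up for illustration):

```java
/** Back-of-envelope model of the redundant DBQ executions described above. */
public class DbqReplayCost {
    /** Total DBQ executions when replaying `adds` addDocs followed by `dbqs` DBQs. */
    public static long totalDbqExecutions(long adds, long dbqs) {
        // Each add pre-applies every "newer" DBQ once (adds * dbqs executions),
        // and each DBQ is then replayed once more from the tlog itself.
        return adds * dbqs + dbqs;
    }

    public static void main(String[] args) {
        long total = totalDbqExecutions(90, 5);
        // 455 total executions, i.e. 91 per DBQ instead of 1
        System.out.println(total + " total executions, " + (total / 5) + " per DBQ");
    }
}
```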



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813309#comment-15813309
 ] 

Ishan Chattopadhyaya commented on SOLR-9941:


{quote}
Seems to me that updates arriving during the log replay (in TestRecovery) are 
being silently dropped.
{quote}
One possible explanation came to mind: During log replay, the state of the core 
is REPLAYING. Hence, perhaps, incoming updates are not applied until the 
recovery has finished and the state is back to ACTIVE?







[jira] [Commented] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-09 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813306#comment-15813306
 ] 

Timothy M. Rodriguez commented on LUCENE-7620:
--

Me too!

> UnifiedHighlighter: add target character width BreakIterator wrapper
> 
>
> Key: LUCENE-7620
> URL: https://issues.apache.org/jira/browse/LUCENE-7620
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch
>
>
> The original Highlighter includes a {{SimpleFragmenter}} that delineates 
> fragments (aka Passages) by a character width.  The default is 100 characters.
> It would be great to support something similar for the UnifiedHighlighter.  
> It's useful in its own right and of course it helps users transition to the 
> UH.  I'd like to do it as a wrapper to another BreakIterator -- perhaps a 
> sentence one.  In this way you get back Passages that are a number of 
> sentences so they will look nice instead of breaking mid-way through a 
> sentence.  And you get some control by specifying a target number of 
> characters.  This BreakIterator wouldn't be a general purpose 
> java.text.BreakIterator since it would assume it's called in a manner exactly 
> as the UnifiedHighlighter uses it.  It would probably be compatible with the 
> PostingsHighlighter too.
> I don't propose doing this by default; besides, it's easy enough to pick your 
> BreakIterator config.
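The wrapping idea described above can be sketched in miniature (a hypothetical illustration, not the attached LUCENE_7620 patch): wrap a sentence `BreakIterator` and pick the boundary closest to a target character count, so passages end on sentence boundaries near the desired width.

```java
import java.text.BreakIterator;
import java.util.Locale;

/** Hypothetical sketch: choose the sentence boundary nearest a target length. */
public class LengthGoalSketch {
    public static int closestBoundary(String text, int targetLen) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.ROOT);
        sentences.setText(text);
        int best = sentences.first();
        // Walk sentence boundaries, keeping whichever lies closest to targetLen.
        for (int b = sentences.next(); b != BreakIterator.DONE; b = sentences.next()) {
            if (Math.abs(b - targetLen) < Math.abs(best - targetLen)) {
                best = b;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String text = "One sentence here. Another follows it. And a third one ends.";
        // The chosen break snaps to a whole-sentence boundary near the 20-char goal.
        System.out.println(closestBoundary(text, 20));
    }
}
```

The real wrapper would additionally honor the UnifiedHighlighter's calling pattern rather than behave as a general-purpose java.text.BreakIterator, as the description notes.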






[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813256#comment-15813256
 ] 

Ishan Chattopadhyaya commented on SOLR-9941:


{quote}
I wanted to beef up Ishan's testLogReplayWithReorderedDBQ to prove that if 
another (probably ordered) DBQ arrived during log replay it would correctly be 
applied – even if some affected docs hadn't been added yet as part of replay. 
(ie: prove that "RecentUpdates" was still being used during replay)
{quote}

Seems to me that updates arriving *during* the log replay (in TestRecovery) are 
being silently dropped. There's definitely something fishy about it, but I 
think that is another issue altogether, since even adds during the log replay 
are not persisting after the replay. Here's [0] a modified version of your test 
that applies the DBQ *after* recovery has finished. Of course, it's not nearly 
as effective in proving anything, and it also doesn't test the premise you were 
after: {{prove that "RecentUpdates" was still being used during replay}}.

[0] - https://paste.fedoraproject.org/524485/05608148







Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Erick Erickson
Welcome Dat!

On Mon, Jan 9, 2017 at 3:20 PM, Varun Thacker  wrote:
> Congratulations Dat!
>
> On Mon, Jan 9, 2017 at 12:09 PM, Mikhail Khludnev  wrote:
>>
>> Welcome, Dat!
>>
>> On Mon, Jan 9, 2017 at 6:57 PM, Joel Bernstein  wrote:
>>>
>>> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
>>> PMC's invitation to become a committer.
>>>
>>> Dat, it's tradition that you introduce yourself with a brief bio.
>>>
>>> Your account has been added to the "lucene" LDAP group, so you
>>> now have commit privileges. Please test this by adding yourself to the
>>> committers section of the Who We Are page on the website:
>>>  (instructions here
>>> ).
>>>
>>> The ASF dev page also has lots of useful links:
>>> .
>>>
>>>
>>> Joel Bernstein
>>> http://joelsolr.blogspot.com/
>>
>>
>>
>>
>> --
>> Sincerely yours
>> Mikhail Khludnev
>
>




[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+147) - Build # 18738 - Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18738/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:195)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:475)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:918)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:841)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1188)  at 
org.apache.solr.core.TestCoreDiscovery.testTooManyTransientCores(TestCoreDiscovery.java:211)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)  at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:538)  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.base/java.lang.Thread.run(Thread.java:844)  

[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813201#comment-15813201
 ] 

Mike Drob commented on SOLR-9934:
-

Thanks for giving a few examples of when the difference matters. The only other 
consideration I can think of would be if the new method is potentially more 
performant by virtue of using a deeper (more direct?) method. Not having 
measured using JSON vs. XML handlers, I don't know which one could make the whole 
test suite faster.

Another difference that I noticed is that we don't check for success on 
{{clearIndex}} like we did on {{assertU}}, but maybe this doesn't matter.

> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out of order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests 
> get this behavior automatically.
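The version-sentinel mechanism described above can be sketched generically (hypothetical names and interfaces; the real logic lives inside DirectUpdateHandler2 and is only illustrated here):

```java
/** Hypothetical sketch of a version-sentinel check enabling a test-only full wipe. */
public class SentinelDeleteSketch {
    /** The special value tests pass to bypass version constraints (per the issue). */
    public static final long BYPASS_VERSION = -Long.MAX_VALUE;

    /** Minimal stand-in for the two deletion paths. */
    public interface Index {
        void deleteAll();                       // low-level IndexWriter.deleteAll()-style wipe
        void versionedDelete(String q, long v); // normal version-checked DBQ path
    }

    public static void delete(Index index, String query, long version) {
        if (version == BYPASS_VERSION) {
            // A test asking for a full wipe skips version constraints entirely.
            index.deleteAll();
        } else {
            index.versionedDelete(query, version);
        }
    }
}
```

Hoisting the sentinel call into the base clearIndex() would give every test this fast path without the copy/pasted overrides.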






[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813195#comment-15813195
 ] 

Ishan Chattopadhyaya commented on SOLR-9941:


{quote}
One question i have is: if the only code paths that call 
recoverFromLog(boolean) are "startup" paths that pass true why do we need the 
optional argument? why not just refactor the method to always use the new logic?
{quote}
My thought there was that if someone wanted to reuse the doLogReplay() method 
later for use during a live node, and not during the startup of a node/core 
(for whatever reason that I cannot foresee right now), they should not end up 
clearing their deletes lists in the process. Though, given the current use of 
the method, I am also open to eliminating that extra parameter if you suggest.







Re:Solr Ref Guide, Highlighting

2017-01-09 Thread Timothy Rodriguez (BLOOMBERG/ 120 PARK)
+1, I'll be happy to offer assistance with edits or some of the sections if 
needed.  We're glad to see this out there.




[jira] [Resolved] (SOLR-9937) StandardDirectoryFactory::move never uses atomic implementation

2017-01-09 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved SOLR-9937.
-
Resolution: Duplicate

> StandardDirectoryFactory::move never uses atomic implementation
> ---
>
> Key: SOLR-9937
> URL: https://issues.apache.org/jira/browse/SOLR-9937
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-9937.patch
>
>
> {noformat}
>   Path path1 = ((FSDirectory) 
> baseFromDir).getDirectory().toAbsolutePath();
>   Path path2 = ((FSDirectory) 
> baseFromDir).getDirectory().toAbsolutePath();
>   
>   try {
> Files.move(path1.resolve(fileName), path2.resolve(fileName), 
> StandardCopyOption.ATOMIC_MOVE);
>   } catch (AtomicMoveNotSupportedException e) {
> Files.move(path1.resolve(fileName), path2.resolve(fileName));
>   }
> {noformat}
> Because {{path1 == path2}} this code never does anything and move always 
> defaults to the less efficient implementation in DirectoryFactory.
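The fix presumably amounts to resolving the second path from the destination directory rather than the source again (a hedged sketch of the intent; the actual patch is attached to the issue):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Hypothetical sketch: atomic rename with a plain-move fallback. */
public class AtomicMoveSketch {
    public static void move(Path fromDir, Path toDir, String fileName) throws IOException {
        Path src = fromDir.resolve(fileName);
        Path dst = toDir.resolve(fileName);   // resolved from the destination dir, not the source again
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(src, dst);             // fall back to a non-atomic move
        }
    }
}
```

With path1 == path2 as quoted above, the atomic branch moves a file onto itself, so the factory always falls through to the slower DirectoryFactory implementation.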






Solr Ref Guide, Highlighting

2017-01-09 Thread David Smiley
Solr 6.4 is the first release to introduce the UnifiedHighlighter as a new
highlighter option.  I want to get it documented reasonably well in the
Solr Ref Guide.  The Highlighters section is here: Highlighting
   (let's see
if this formatted email expands to the URL when it lands on the list)

Unless anyone objects, I'd like to rename the "Standard Highlighter" as
"Original Highlighter" in the ref guide.  The original Highlighter has no
actual name qualifications as it was indeed Lucene's original Highlighter.
 "Standard Highlighter" as a name purely exists as-such within the Solr
Reference Guide only.  In our code it's used by "DefaultSolrHighlighter"
which is really a combo of the original Highlighter and
FastVectorHighlighter.   DSH ought to be refactored perhaps... but I
digress.

For those that haven't read CHANGES.txt yet, there is a new "hl.method"
parameter which can be used to pick your highlighter.  Here I purposely
chose a possible value of "original" to choose the original Highlighter
(not "standard").

I haven't started documenting yet but I plan to refactor the highlighter
docs a bit.  The intro page will better discuss the highlighter options and
also how to configure both term vectors and offsets in postings.  Then the
highlighter implementation specific pages will document the parameters and
any configuration specific to them.  I'm a bit skeptical we need a page
dedicated to the PostingsHighlighter as the UnifiedHighlighter is a
derivative of it, supporting all its options and more.  In that sense,
maybe people are fine with it only being in the ref guide as a paragraph or
two on the UH page describing how to activate it.  I suppose it's
effectively deprecated.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-9876) Reuse CountSlotArrAcc internal array for same level subFacets

2017-01-09 Thread Rustam Hashimov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813124#comment-15813124
 ] 

Rustam Hashimov commented on SOLR-9876:
---

Feedback to improve the code would be much appreciated!

> Reuse CountSlotArrAcc internal array for same level subFacets
> -
>
> Key: SOLR-9876
> URL: https://issues.apache.org/jira/browse/SOLR-9876
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: master (7.0)
>Reporter: Rustam Hashimov
>Priority: Minor
> Fix For: master (7.0)
>
>
> All facet processors are processed sequentially. We can reuse the 
> CountSlotArrAcc internal array across same-level facet processors instead of 
> reallocating a new array for each.






[jira] [Commented] (SOLR-9868) RangeFacet : Use DocValues for accs and docSet collection instead of RangeQuery

2017-01-09 Thread Rustam Hashimov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813119#comment-15813119
 ] 

Rustam Hashimov commented on SOLR-9868:
---

Feedback to improve the code and close the gaps for contribution would be much 
appreciated!

> RangeFacet : Use DocValues for accs and docSet collection instead of 
> RangeQuery
> ---
>
> Key: SOLR-9868
> URL: https://issues.apache.org/jira/browse/SOLR-9868
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: master (7.0)
>Reporter: Rustam Hashimov
> Fix For: master (7.0)
>
>
> RangeFacet initiates a range query for each range bucket to get the docSet. 
> The docSet is later used for accs collection.
> For singleValued numeric fields, we can use docValues to find the matching 
> slot for each doc and collect accumulators while iterating over the base 
> docSet. If there is a subFacet, the docSet per range bucket can be collected 
> from the base docSet as well. 
> Gains:
> - One iteration over the base docSet vs. one query over the base docSet per 
> range bucket
> - Memory savings if there is no subFacet, since a per-bucket docSet is not needed
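The single-pass idea can be illustrated with simple slot arithmetic (a hypothetical sketch, not the proposed patch): each doc's bucket is computed directly from its docValue, so one iteration over the base docSet replaces a range query per bucket.

```java
/** Hypothetical sketch: map a numeric docValue straight to its range-facet slot. */
public class RangeSlotSketch {
    /** Bucket index for value, or -1 if outside [start, start + gap*numBuckets). */
    public static int slot(double value, double start, double gap, int numBuckets) {
        if (value < start) return -1;
        int s = (int) ((value - start) / gap);
        return s < numBuckets ? s : -1;
    }

    public static void main(String[] args) {
        // One pass over the "base docSet": bump a counter per matching slot.
        double[] docValues = {5, 12, 19, 25, 47};
        int[] counts = new int[4];             // buckets [0,10) [10,20) [20,30) [30,40)
        for (double v : docValues) {
            int s = slot(v, 0, 10, 4);
            if (s >= 0) counts[s]++;
        }
        System.out.println(java.util.Arrays.toString(counts)); // [1, 2, 1, 0]
    }
}
```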






[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813106#comment-15813106
 ] 

Anshum Gupta commented on SOLR-9644:


[~emaijala] I cherry-picked and made some changes to the tests instead of 
changing the underlying class and diverging them from each other on master and 
branch_6x. Can you test this out when you get a chance?

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813099#comment-15813099
 ] 

ASF subversion and git services commented on SOLR-9644:
---

Commit dcb836500a8d5f8dd0d59264ad0061e5a2926c20 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dcb8365 ]

SOLR-9644: Fixed SimpleMLTQParser and CloudMLTQParser to handle boosts properly 
and CloudMLTQParser to only extract actual values from IndexableField type 
fields to the filtered document.


> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 608 - Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/608/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([F4BBD27FF6AE0C83:9C04E75526341E6F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Closed] (SOLR-9461) DELETENODE, REPLACENODE should pass down the 'async' param to subcommands

2017-01-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-9461.
---
   Resolution: Fixed
Fix Version/s: 6.2

From the comments, it seems this has been done since 6.2 - please reopen if I 
am incorrect on that.

> DELETENODE, REPLACENODE should pass down the 'async' param to subcommands 
> --
>
> Key: SOLR-9461
> URL: https://issues.apache.org/jira/browse/SOLR-9461
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.2
>
>
> The {{async}} param is used to make async calls to core admin






[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813055#comment-15813055
 ] 

David Smiley commented on SOLR-7495:


FieldCache "insanity" is when the same field is uninverted into memory multiple 
ways for different types (i.e. as a number and also as a string).  The 
FieldCache is today also known as UninvertingReader.  It's obviously something 
to be avoided, and it signified a possible Solr usage error.  I recall it used 
to be easier to trigger than it is nowadays.

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-7495.patch, SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field:
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> ull:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> 

[jira] [Commented] (SOLR-9856) Collect metrics for shard replication and tlog replay on replicas

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812973#comment-15812973
 ] 

ASF subversion and git services commented on SOLR-9856:
---

Commit af2ac8376d1a1e4123d55f101bf9d519d45332e5 in lucene-solr's branch 
refs/heads/branch_6x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af2ac83 ]

SOLR-9856 Collect metrics for shard replication and tlog replay on replicas.


> Collect metrics for shard replication and tlog replay on replicas
> -
>
> Key: SOLR-9856
> URL: https://issues.apache.org/jira/browse/SOLR-9856
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9856.patch
>
>
> Using API from SOLR-4735 add metrics for tracking outgoing replication from 
> leader to shard replicas, and for tracking transaction log processing on 
> replicas.






[jira] [Commented] (SOLR-9902) StandardDirectoryFactory should use Files API for it's move implementation.

2017-01-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812950#comment-15812950
 ] 

Mark Miller commented on SOLR-9902:
---

I think it probably makes sense to throw an exception. For index integrity we 
really need a move rather than creating a new file unless the directory factory 
is ephemeral - so we don't want to easily hide move not working when using a 
local fs, or make it a normal path where we try multiple ways to move a file 
(beyond attempting an atomic move first). If neither an atomic nor a standard 
move works, something should be very wrong.
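The attempt-atomic-first strategy described above can be sketched with the standard `java.nio.file.Files` API. This is a minimal illustration, not Solr's actual StandardDirectoryFactory code; the class and file names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveSketch {
    /**
     * Try an atomic rename first; if the filesystem can't do it,
     * fall back to one standard Files.move. Any other failure
     * propagates rather than silently degrading to a copy.
     */
    static void move(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(src, dst);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("movesketch");
        Path src = dir.resolve("segments.tmp");
        Files.write(src, "data".getBytes());
        move(src, dir.resolve("segments"));
        // after a successful move the source is gone and the destination exists
        System.out.println(!Files.exists(src) && Files.exists(dir.resolve("segments")));
    }
}
```

Note that unlike the legacy `java.io.File.renameTo`, `Files.move` reports failure by throwing `IOException`, which matches the throw-instead-of-fallback behavior discussed here.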

> StandardDirectoryFactory should use Files API for it's move implementation.
> ---
>
> Key: SOLR-9902
> URL: https://issues.apache.org/jira/browse/SOLR-9902
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9902.patch
>
>
> It's done in a platform independent way as opposed to the old File API.






[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812939#comment-15812939
 ] 

Anshum Gupta commented on SOLR-9644:


[~emaijala] I just noticed that the patches for master and branch_6x are 
different, but they shouldn't be - there isn't any real difference between the 
two branches.
Also, the tests seem different.




[jira] [Commented] (SOLR-5170) Spatial multi-value distance sort via DocValues

2017-01-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812936#comment-15812936
 ] 

David Smiley commented on SOLR-5170:


The fastest is very likely "LatLonDocValuesField", currently hiding out in 
Lucene sandbox.  There are some really clever tricks it does.

Interested in adding a Solr adapter for it?

> Spatial multi-value distance sort via DocValues
> ---
>
> Key: SOLR-5170
> URL: https://issues.apache.org/jira/browse/SOLR-5170
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch.txt
>
>
> The attached patch implements spatial multi-value distance sorting.  In other 
> words, a document can have more than one point per field, and using a 
> provided function query, it will return the distance to the closest point.  
> The data goes into binary DocValues, and as-such it's pretty friendly to 
> realtime search requirements, and it only uses 8 bytes per point.
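To make the 8-bytes-per-point idea concrete, here is a hedged, dependency-free sketch: each lat/lon pair is packed into a single long (two scaled 32-bit ints), and the sort value for a document is the haversine distance to the closest decoded point. All names are illustrative; the actual patch stores the points in binary DocValues inside Lucene:

```java
public class ClosestPointSketch {
    // pack one point into 8 bytes: two 32-bit ints scaled from degrees
    static long encode(double lat, double lon) {
        int latBits = (int) (lat / 90.0 * Integer.MAX_VALUE);
        int lonBits = (int) (lon / 180.0 * Integer.MAX_VALUE);
        return ((long) latBits << 32) | (lonBits & 0xFFFFFFFFL);
    }

    static double decodeLat(long bits) { return ((int) (bits >>> 32)) * 90.0 / Integer.MAX_VALUE; }

    static double decodeLon(long bits) { return ((int) bits) * 180.0 / Integer.MAX_VALUE; }

    // great-circle distance in km (haversine formula)
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * 6371.0 * Math.asin(Math.sqrt(a));
    }

    // the sort key described above: distance to the closest of a doc's points
    static double closestKm(long[] points, double lat, double lon) {
        double best = Double.POSITIVE_INFINITY;
        for (long p : points) {
            best = Math.min(best, haversineKm(decodeLat(p), decodeLon(p), lat, lon));
        }
        return best;
    }

    public static void main(String[] args) {
        // one document with two points (16 bytes total): near NYC and near LA
        long[] doc = { encode(40.71, -74.00), encode(34.05, -118.24) };
        // a query point close to the first stored point yields a small distance
        System.out.println(closestKm(doc, 40.70, -74.01) < 5.0);
    }
}
```

The quantization error of this packing is on the order of 1e-7 degrees, which is why a fixed 8-byte-per-point encoding can still support accurate distance sorting.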






[jira] [Commented] (SOLR-9902) StandardDirectoryFactory should use Files API for it's move implementation.

2017-01-09 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812925#comment-15812925
 ] 

Mike Drob commented on SOLR-9902:
-

One more clarification I'd be interested in here...

If {{Files.move}} fails for whatever reason, would it make sense to fall back 
to the {{super.move}} implementation or is throwing the exception sufficient 
for a best effort attempt here?




[JENKINS] Lucene-Solr-Tests-6.x - Build # 665 - Unstable

2017-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/665/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:47895/solr/awhollynewcollection_0: 
Expected mime type application/octet-stream but got text/html.
Error 510
HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:5},code=510}
Powered by Jetty:// 9.3.14.v20161028

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:47895/solr/awhollynewcollection_0: Expected 
mime type application/octet-stream but got text/html. 


Error 510 


HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:

{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:5},code=510}
Powered by Jetty:// 9.3.14.v20161028



at 
__randomizedtesting.SeedInfo.seed([DF4DDB78FE40A1DB:9738AFCCF8738E4E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1198)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1198)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1198)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1198)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1198)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 

[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812891#comment-15812891
 ] 

ASF subversion and git services commented on SOLR-9644:
---

Commit 2b4e3dd941a7a88274f2a86f18ea57a9d95e4364 in lucene-solr's branch 
refs/heads/master from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2b4e3dd ]

SOLR-9644: Fixed SimpleMLTQParser and CloudMLTQParser to handle boosts properly 
and CloudMLTQParser to only extract actual values from IndexableField type 
fields to the filtered document.





[jira] [Comment Edited] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812865#comment-15812865
 ] 

Hoss Man edited comment on SOLR-9934 at 1/9/17 9:23 PM:


There might be *some* places where delQ could/should be replaced with 
clearIndex based on the *intent* of the call, but it shouldn't really be 
causing any _correctness_ issues.

* if a test is doing a delQ to simulate an external user doing a delQ then 
that's a valid and correct usage.
* if a test is doing a delQ to "reset" test state to emulate a completely 
pristine solr collection, that's where clearIndex is (now) a better choice -- 
*but it's not more correct*

For 99% of all tests the diff is academic, but places where there *is* a diff 
are when tests muck with version numbers synthetically (ie: the original reason 
for this special syntax), have very specific assumptions about low level 
internal term stats (ie: the new updatable doc values tests, or perhaps some 
luke-esque tests) etc...

if you think there are tests that would be improved by switching from delQ to 
clearIndex in their test scaffolding (ie: in Before/After methods, or when 
resetting some state) then sure -- go ahead and open a new issue for those.  But 
tests that do "normal" user requests, and do "normal" delq(matchalldocs) as 
part of that are just fine and certainly don't need to be changed.


*EDIT:* after a few more minutes thought, added some more clarification about 
the correctness question above, and this followup comment ...

Personally: I don't know that it's worth the effort to go looking for places to 
make this change.  My main concern was simply that if/when people write *new* 
tests, that _may_ involve dependencies/assumptions on having a pristine index 
in each test method, having clearIndex work the way it does now is good, and 
will automatically save people headaches like the ones Ishan and I had recently.



was (Author: hossman):
There might be *some* places where delQ could/should be replaced with 
clearIndex based on the *intent* of the call, but it shouldn't really be 
causing any _correctness_ issues.

* if a test is doing a delQ to simulate an external user doing a delQ then 
that's a valid and correct usage.
* if a test is doing a delQ to "reset" test state to emulate a completely 
pristine solr collection, that's where clearIndex is (now) a better choice.

For 99% of all tests the diff is academic, but places where there *is* a diff 
are when tests muck with version numbers synthetically (ie: the original reason 
for this special syntax), have very specific assumptions about low level 
internal term stats (ie: the new updatable doc values tests, or perhaps some 
luke-esque tests) etc...

if you think there are tests that would be improved by switching from delQ to 
clearIndex in their test scaffolding (ie: in Before/After methods, or when 
resetting some state) then sure -- go ahead and open a new issue for those.  But 
tests that do "normal" user requests, and do "normal" delq(matchalldocs) as 
part of that are just fine and certainly don't need to be changed.


> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out of order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}} for use by tests to override these 
> version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests 
> get this behavior automatically.






[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-09 Thread Scott Stults (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812880#comment-15812880
 ] 

Scott Stults commented on SOLR-7495:


[~rcmuir] added that line at the same time as the Insanity wrapper itself as 
part of LUCENE-5666, but I'll take a crack at an explanation. There are only a 
couple of cases outlined in Insanity where we need to wrap the field, 
essentially returning null instead of the docValues. When the collector returns 
null, the stored values of the field are used instead of docValues. Since 
stored values are slower than docValues, we only want to wrap the particular 
field type that's problematic. 


[jira] [Commented] (SOLR-5170) Spatial multi-value distance sort via DocValues

2017-01-09 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812871#comment-15812871
 ] 

Jeff Wartes commented on SOLR-5170:
---

It's coming up on two years, and I'm aware there have been some significant 
changes to areas like docvalues and geospatial since the last update to this 
issue. 

What's the state of the world now? 
If you have entities with multiple locations, and you want to filter and sort, 
is this patch still the highest-performance option available? I'm more willing 
to give up on the real-time-friendliness these days, if that changes the answer.

> Spatial multi-value distance sort via DocValues
> ---
>
> Key: SOLR-5170
> URL: https://issues.apache.org/jira/browse/SOLR-5170
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch.txt
>
>
> The attached patch implements spatial multi-value distance sorting.  In other 
> words, a document can have more than one point per field, and using a 
> provided function query, it will return the distance to the closest point.  
> The data goes into binary DocValues, and as-such it's pretty friendly to 
> realtime search requirements, and it only uses 8 bytes per point.
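The sort value described in the summary above, i.e. the distance from the query point to a document's closest point, can be sketched independently of Solr. This is a hedged illustration using the standard haversine formula, not the patch's actual DocValues encoding or Lucene's distance functions:

```java
// Illustration only: per-document multi-point "distance to closest point",
// as the function query in this patch computes it conceptually.
public class ClosestPointDistance {
    /** Great-circle distance in kilometers via the haversine formula. */
    public static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius, km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    /** points is a flat [lat0, lon0, lat1, lon1, ...] array of a doc's points. */
    public static double closestKm(double qLat, double qLon, double[] points) {
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < points.length; i += 2) {
            best = Math.min(best, haversineKm(qLat, qLon, points[i], points[i + 1]));
        }
        return best;
    }
}
```

Sorting documents by this per-document minimum gives the multi-value distance sort behavior the patch provides.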



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812865#comment-15812865
 ] 

Hoss Man commented on SOLR-9934:


There might be *some* places where delQ could/should be replaced with 
clearIndex based on the *intent* of the call, but it shouldn't really be 
causing any _correctness_ issues.

* if a test is doing a delQ to simulate an external user doing a delQ then 
that's a valid and correct usage.
* if a test is doing a delQ to "reset" test state to emulate a completely 
pristine solr collection, that's where clearIndex is (now) a better choice.

For 99% of all tests the diff is academic, but places where there *is* a diff 
are when tests muck with version numbers synthetically (ie: the original reason 
for this special syntax), have very specific assumptions about low level 
internal term stats (ie: the new updatable doc values tests, or perhaps some 
luke-esque tests) etc...

if you think there are tests that would be improved by switching from delQ to 
clearIndex in their test scaffolding (ie: in Before/After methods, or when 
resetting some state) then sure -- go ahead and open a new issue for those.  But 
tests that do "normal" user requests, and do "normal" delQ(matchalldocs) as 
part of that are just fine and certainly don't need to be changed.
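The distinction above can be illustrated with a small, self-contained model. This is a toy sketch, not Solr's actual DirectUpdateHandler2 code: a normal delete is subject to a version-constraint check, while the sentinel version -Long.MAX_VALUE bypasses the check and clears everything, which is what clearIndex relies on.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the version-constrained delete vs. the sentinel-triggered
// low-level delete-all described in this issue (illustration only).
public class VersionedIndexModel {
    private final Map<String, Long> docVersions = new HashMap<>();

    public void add(String id, long version) {
        docVersions.put(id, version);
    }

    /** Delete-all with an optional version constraint, mimicking the sentinel check. */
    public void deleteAll(long requestVersion) {
        if (requestVersion == -Long.MAX_VALUE) {
            // Sentinel used by tests: skip version checks entirely,
            // analogous to a low-level IndexWriter.deleteAll().
            docVersions.clear();
        } else {
            // Normal path: a doc with a newer version than the request
            // survives, to tolerate out-of-order updates.
            docVersions.values().removeIf(v -> v <= requestVersion);
        }
    }

    public int size() {
        return docVersions.size();
    }
}
```

With this model, a versioned delete leaves a "newer" document behind, while the sentinel always yields an empty index, which is the pristine state a test scaffold wants.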


> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out of order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests 
> get this behavior automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1068 - Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1068/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.(SolrCore.java:864)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:841)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:559)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1014)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:841)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:559)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:195)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:475)  
at org.apache.solr.core.SolrCore.(SolrCore.java:918)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:841)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:559)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.MetricsDirectoryFactory.get(MetricsDirectoryFactory.java:195)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:97)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:739)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:924)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:841)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:914)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:559)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 

[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812848#comment-15812848
 ] 

Mike Drob commented on SOLR-9934:
-

[~hossman] - is it worth converting all of the other invocations of 
{{assertU(delQ("*:*"))}} into calls to {{clearIndex()}}? Based on your 
description, it sounds like there might be a correctness bug lurking, but I'm 
not sure if it's actual or theoretical.

I can create a new issue or upload a patch to this JIRA if you think it's 
worthwhile.

> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out of order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests 
> get this behavior automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Varun Thacker
Congratulations Dat!

On Mon, Jan 9, 2017 at 12:09 PM, Mikhail Khludnev  wrote:

> Welcome, Dat!
>
> On Mon, Jan 9, 2017 at 6:57 PM, Joel Bernstein  wrote:
>
>> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
>> PMC's invitation to become a committer.
>>
>> Dat, it's tradition that you introduce yourself with a brief bio.
>>
>> Your account has been added to the "lucene" LDAP group, so you
>> now have commit privileges. Please test this by adding yourself to the
>> committers section of the Who We Are page on the website:
>>  (instructions here
>> ).
>>
>> The ASF dev page also has lots of useful links: <
>> http://www.apache.org/dev/>.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
>


[jira] [Commented] (SOLR-9939) Ping handler logs each request twice

2017-01-09 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812733#comment-15812733
 ] 

Mikhail Khludnev commented on SOLR-9939:


well, yes. But proper testing is challenging.  

> Ping handler logs each request twice
> 
>
> Key: SOLR-9939
> URL: https://issues.apache.org/jira/browse/SOLR-9939
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-9939.patch, SOLR-9939.patch
>
>
> Requests to the ping handler are being logged twice.  The first line has 
> "hits" and the second one doesn't, but other than that they have the same 
> info.
> These lines are from a 5.3.2-SNAPSHOT version.  In the IRC channel, 
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8396) Add support for PointFields in Solr

2017-01-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-8396:

Attachment: SOLR-8396.patch

Uploading a patch updated to current master. I think it’s mostly done, so this 
is a good time to review if you are interested in the feature. I plan to commit 
to master soon and let it bake there some time before moving to branch 6_x. 
There are no big issues with compatibility so I think it should be fine to 
backport at some point. The last changes are not in the branch, I’m trying to 
avoid an avalanche of “commit emails” and possibly updates to Jiras due to the 
recent merge, so please review the patch. I’m leaving some tasks for followup 
Jiras that can be fixed/discussed separately:

* LukeRequestHandler doesn’t populate docFreq for PointFields
* Implement DatePointField
* Implement support for MV DocValues in PointFields
* Add method toInternalByteRef to FieldType and possibly deprecate toInternal()
* Add support for PointFields in FacetModule (JSON Facets)
* Add PointFields as pField in example schemas
* Add support for facet method “fc” with PointFields (only “FCS” is currently 
supported for field faceting)
* Add support for grouping with PointFields
* Add support for pivot faceting with PointFields
* Add support for ExpandComponent with PointFields
* Add support for CollapseQParser with PointFields


bq. ...SOLR-9786 should cause the query parser to automatically delegate to 
FieldType.getSetQuery() for queries on more than one point (
Great. I had added a {{getSetQuery}} method in the PointField class; I removed it 
and I’m now using super’s (implemented in the different Point FieldType 
classes). Also added validation in {{TestSolrQueryParser.java}}.

bq. The first time we went through this transition, "int" was renamed to "pint" 
in the example schema, and then a new "int" was created to use trie (numeric)….
+1. But in any case, since I’m leaving the changes to the example 
{{schema.xml}} out of this patch, this can be further discussed in followup 
Jira if anyone has concerns with the approach.

Not sure if the “solr.tests.preferPointFields” changes I did are implemented in 
the correct way; I’ll review that before committing. Feel free to comment on 
that too.


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Mikhail Khludnev
Welcome, Dat!

On Mon, Jan 9, 2017 at 6:57 PM, Joel Bernstein  wrote:

> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
>> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links: <
> http://www.apache.org/dev/>.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>



-- 
Sincerely yours
Mikhail Khludnev


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 671 - Still Failing

2017-01-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/671/

No tests ran.

Build Log:
[...truncated 41983 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (32.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.5 MB in 0.03 sec (1162.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.0 MB in 0.06 sec (1104.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.9 MB in 0.07 sec (1073.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (32.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.1 MB in 0.50 sec (80.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 140.5 MB in 1.48 sec (94.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 150.1 MB in 0.38 sec (398.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=28539). Happy searching!
   [smoker] 

Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread David Smiley
Congrats and welcome Dat!

On Mon, Jan 9, 2017 at 10:57 AM Joel Bernstein  wrote:

> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
>> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links: <
> http://www.apache.org/dev/>.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-9856) Collect metrics for shard replication and tlog replay on replicas

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812697#comment-15812697
 ] 

ASF subversion and git services commented on SOLR-9856:
---

Commit b8383db06ee194b9195cd95f058dc820cb70baf8 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b8383db ]

SOLR-9856 Collect metrics for shard replication and tlog replay on replicas.


> Collect metrics for shard replication and tlog replay on replicas
> -
>
> Key: SOLR-9856
> URL: https://issues.apache.org/jira/browse/SOLR-9856
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9856.patch
>
>
> Using API from SOLR-4735 add metrics for tracking outgoing replication from 
> leader to shard replicas, and for tracking transaction log processing on 
> replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Anshum Gupta
Congratulations and welcome Dat!

On Mon, Jan 9, 2017 at 11:39 AM Shai Erera  wrote:

> Welcome!
>
> On Mon, Jan 9, 2017, 21:37 Michael McCandless 
> wrote:
>
> Welcome!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein 
> wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the "lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Assigned] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-9949:
--

Assignee: Mikhail Khludnev

> Non-serializable AlreadyClosedException returned by MBeanServer
> ---
>
> Key: SOLR-9949
> URL: https://issues.apache.org/jira/browse/SOLR-9949
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JMX
>Affects Versions: 4.8
>Reporter: Oliver Bates
>Assignee: Mikhail Khludnev
>Priority: Minor
> Fix For: 5.3.1
>
> Attachments: SOLR-9949.diff
>
>
>  Solr JMX monitoring agent is throwing InvalidClassException when trying to 
> deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.
> Stack trace:
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
> java.io.InvalidClassException: 
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> The serialVersionUID value computed by java at runtime changed when a new 
> constructor was added with a 'cause' field.
> AlreadyClosedExceptions can be thrown by the MBean server if a remote 
> instance is trying to access stats on a recently deleted core for instance. 
> In this case, the exception is serialized/deserialized by the MBean handler 
> which can cause InvalidClassExceptions if the monitoring service is using a 
> different version of lucene. Since Lucene doesn't want to implement 
> Serializable, these exceptions should not be propagated up to the MBeanServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Oliver Bates (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812677#comment-15812677
 ] 

Oliver Bates commented on SOLR-9949:


It could be, but this can be operationally challenging if one monitoring 
service covers several clusters. It's difficult to ensure that those clusters 
all run the same version. Granted this is not likely to be that common, but if 
AlreadyClosedException doesn't implement Serializable (which I understand is 
something Lucene intentionally avoids), then it seems like it shouldn't be 
allowed to propagate up to the MBeanServer anyway (from a 'good practice' 
perspective). If those exceptions are going away though, then this whole point 
is moot :)
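The serialVersionUID drift discussed here is a plain Java serialization mechanism, and it can be shown without Lucene. This is not Lucene's code (AlreadyClosedException deliberately has no pinned stream ID); it is only a minimal sketch of why the computed ID changed when a constructor was added, and how an explicit declaration pins it:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Hypothetical exception class for illustration; not part of Lucene or Solr.
public class PinnedException extends RuntimeException {
    // Explicit ID: without this line the JVM derives one from the class
    // shape, so adding a constructor later changes the computed value and
    // old/new JVMs reject each other's streams with InvalidClassException.
    private static final long serialVersionUID = 1L;

    public PinnedException(String msg) {
        super(msg);
    }

    /** Serialize and deserialize, as an RMI/JMX boundary effectively does. */
    public static PinnedException roundTrip(PinnedException e) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(e);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (PinnedException) ois.readObject();
            }
        } catch (Exception ex) {
            throw new IllegalStateException(ex);
        }
    }
}
```

Since Lucene avoids committing to Serializable contracts, the alternative this issue proposes, catching such exceptions before they reach the MBeanServer, sidesteps the problem entirely.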

> Non-serializable AlreadyClosedException returned by MBeanServer
> ---
>
> Key: SOLR-9949
> URL: https://issues.apache.org/jira/browse/SOLR-9949
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JMX
>Affects Versions: 4.8
>Reporter: Oliver Bates
>Priority: Minor
> Fix For: 5.3.1
>
> Attachments: SOLR-9949.diff
>
>
>  Solr JMX monitoring agent is throwing InvalidClassException when trying to 
> deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.
> Stack trace:
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
> java.io.InvalidClassException: 
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> The serialVersionUID value computed by java at runtime changed when a new 
> constructor was added with a 'cause' field.
> AlreadyClosedExceptions can be thrown by the MBean server if a remote 
> instance is trying to access stats on a recently deleted core for instance. 
> In this case, the exception is serialized/deserialized by the MBean handler 
> which can cause InvalidClassExceptions if the monitoring service is using a 
> different version of lucene. Since Lucene doesn't want to implement 
> Serializable, these exceptions should not be propagated up to the MBeanServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812651#comment-15812651
 ] 

Mikhail Khludnev commented on SOLR-9949:


# I wonder if it can be fixed with aligning lucene jar versions between jvms 
# I hope AlreadyClosedException is gone after SOLR-9330 

> Non-serializable AlreadyClosedException returned by MBeanServer
> ---
>
> Key: SOLR-9949
> URL: https://issues.apache.org/jira/browse/SOLR-9949
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JMX
>Affects Versions: 4.8
>Reporter: Oliver Bates
>Priority: Minor
> Fix For: 5.3.1
>
> Attachments: SOLR-9949.diff
>
>
>  Solr JMX monitoring agent is throwing InvalidClassException when trying to 
> deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.
> Stack trace:
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
> java.io.InvalidClassException: 
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> The serialVersionUID value computed by java at runtime changed when a new 
> constructor was added with a 'cause' field.
> AlreadyClosedExceptions can be thrown by the MBean server if a remote 
> instance is trying to access stats on a recently deleted core for instance. 
> In this case, the exception is serialized/deserialized by the MBean handler 
> which can cause InvalidClassExceptions if the monitoring service is using a 
> different version of lucene. Since Lucene doesn't want to implement 
> Serializable, these exceptions should not be propagated up to the MBeanServer.






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Shai Erera
Welcome!

On Mon, Jan 9, 2017, 21:37 Michael McCandless 
wrote:

> Welcome!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein 
> wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the "lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Michael McCandless
Welcome!

Mike McCandless

http://blog.mikemccandless.com


On Mon, Jan 9, 2017 at 10:57 AM, Joel Bernstein  wrote:
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links:
> .
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/




Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Tomás Fernández Löbbe
Welcome Dat!

On Mon, Jan 9, 2017 at 4:28 PM, Dawid Weiss  wrote:

> Welcome Dat!
>
> On Mon, Jan 9, 2017 at 7:21 PM, Mike Drob  wrote:
> > Congratulations!
> >
> >
> > On Monday, January 9, 2017, Uwe Schindler  wrote:
> >>
> >> Welcome Dat!
> >>
> >>
> >>
> >> Uwe
> >>
> >>
> >>
> >> -
> >>
> >> Uwe Schindler
> >>
> >> Achterdiek 19, D-28357 Bremen
> >>
> >> http://www.thetaphi.de
> >>
> >> eMail: u...@thetaphi.de
> >>
> >>
> >>
> >> From: Joel Bernstein [mailto:joels...@gmail.com]
> >> Sent: Monday, January 9, 2017 4:57 PM
> >> To: lucene dev 
> >> Subject: Welcome Cao Manh Dat as a Lucene/Solr committer
> >>
> >>
> >>
> >> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> >> PMC's invitation to become a committer.
> >>
> >> Dat, it's tradition that you introduce yourself with a brief bio.
> >>
> >> Your account has been added to the "lucene" LDAP group, so you
> >> now have commit privileges. Please test this by adding yourself to the
> >> committers section of the Who We Are page on the website:
> >>  (instructions here
> >> ).
> >>
> >> The ASF dev page also has lots of useful links:
> >> .
> >>
> >>
> >>
> >>
> >>
> >> Joel Bernstein
> >>
> >> http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Oliver Bates (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Bates updated SOLR-9949:
---
Description: 
 Solr JMX monitoring agent is throwing InvalidClassException when trying to 
deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.

Stack trace:

org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"
java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
java.io.InvalidClassException: 
org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"

The serialVersionUID value computed by java at runtime changed when a new 
constructor was added with a 'cause' field.

AlreadyClosedExceptions can be thrown by the MBean server if a remote instance 
is trying to access stats on a recently deleted core for instance. In this 
case, the exception is serialized/deserialized by the MBean handler which can 
cause InvalidClassExceptions if the monitoring service is using a different 
version of lucene. Since Lucene doesn't want to implement Serializable, these 
exceptions should not be propagated up to the MBeanServer.

  was:
Solr JMX monitoring agent is throwing InvalidClassException when trying to 
deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.

Stack trace:

org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"
java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
java.io.InvalidClassException: 
org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"

The serialVersionUID value computed by java at runtime changed when a new 
constructor was added with a 'cause' field.

AlreadyClosedExceptions can be thrown by the MBean server if a remote instance 
is trying to access stats on a recently deleted core for instance. In this 
case, the exception is serialized/deserialized by the MBean handler which can 
cause InvalidClassExceptions if the monitoring service is using a different 
version of lucene. Since Lucene doesn't want to implement Serializable, these 
exceptions should be propagated up to the MBeanServer.


> Non-serializable AlreadyClosedException returned by MBeanServer
> ---
>
> Key: SOLR-9949
> URL: https://issues.apache.org/jira/browse/SOLR-9949
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JMX
>Affects Versions: 4.8
>Reporter: Oliver Bates
>Priority: Minor
> Fix For: 5.3.1
>
> Attachments: SOLR-9949.diff
>
>
>  Solr JMX monitoring agent is throwing InvalidClassException when trying to 
> deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.
> Stack trace:
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
> java.io.InvalidClassException: 
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> The serialVersionUID value computed by java at runtime changed when a new 
> constructor was added with a 'cause' field.
> AlreadyClosedExceptions can be thrown by the MBean server if a remote 
> instance is trying to access stats on a recently deleted core for instance. 
> In this case, the exception is serialized/deserialized by the MBean handler 
> which can cause InvalidClassExceptions if the monitoring service is using a 
> different version of lucene. Since Lucene doesn't want to implement 
> Serializable, these exceptions should not be propagated up to the MBeanServer.






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Dawid Weiss
Welcome Dat!

On Mon, Jan 9, 2017 at 7:21 PM, Mike Drob  wrote:
> Congratulations!
>
>
> On Monday, January 9, 2017, Uwe Schindler  wrote:
>>
>> Welcome Dat!
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> From: Joel Bernstein [mailto:joels...@gmail.com]
>> Sent: Monday, January 9, 2017 4:57 PM
>> To: lucene dev 
>> Subject: Welcome Cao Manh Dat as a Lucene/Solr committer
>>
>>
>>
>> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
>> PMC's invitation to become a committer.
>>
>> Dat, it's tradition that you introduce yourself with a brief bio.
>>
>> Your account has been added to the "lucene" LDAP group, so you
>> now have commit privileges. Please test this by adding yourself to the
>> committers section of the Who We Are page on the website:
>>  (instructions here
>> ).
>>
>> The ASF dev page also has lots of useful links:
>> .
>>
>>
>>
>>
>>
>> Joel Bernstein
>>
>> http://joelsolr.blogspot.com/




[jira] [Updated] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Oliver Bates (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Bates updated SOLR-9949:
---
Attachment: SOLR-9949.diff

> Non-serializable AlreadyClosedException returned by MBeanServer
> ---
>
> Key: SOLR-9949
> URL: https://issues.apache.org/jira/browse/SOLR-9949
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JMX
>Affects Versions: 4.8
>Reporter: Oliver Bates
>Priority: Minor
> Fix For: 5.3.1
>
> Attachments: SOLR-9949.diff
>
>
> Solr JMX monitoring agent is throwing InvalidClassException when trying to 
> deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.
> Stack trace:
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
> java.io.InvalidClassException: 
> org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
> stream classdesc serialVersionUID = 5608155941692732578, local class 
> serialVersionUID = -1978883495828278874"
> The serialVersionUID value computed by java at runtime changed when a new 
> constructor was added with a 'cause' field.
> AlreadyClosedExceptions can be thrown by the MBean server if a remote 
> instance is trying to access stats on a recently deleted core for instance. 
> In this case, the exception is serialized/deserialized by the MBean handler 
> which can cause InvalidClassExceptions if the monitoring service is using a 
> different version of lucene. Since Lucene doesn't want to implement 
> Serializable, these exceptions should be propagated up to the MBeanServer.






[jira] [Created] (SOLR-9949) Non-serializable AlreadyClosedException returned by MBeanServer

2017-01-09 Thread Oliver Bates (JIRA)
Oliver Bates created SOLR-9949:
--

 Summary: Non-serializable AlreadyClosedException returned by 
MBeanServer
 Key: SOLR-9949
 URL: https://issues.apache.org/jira/browse/SOLR-9949
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: JMX
Affects Versions: 4.8
Reporter: Oliver Bates
Priority: Minor
 Fix For: 5.3.1


Solr JMX monitoring agent is throwing InvalidClassException when trying to 
deserialize AlreadyClosedException thrown by Solr during JMX stat fetching.

Stack trace:

org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"
java.rmi.UnmarshalException: Error unmarshaling return; nested exception is: 
java.io.InvalidClassException: 
org.apache.lucene.store.AlreadyClosedException; local class incompatible: 
stream classdesc serialVersionUID = 5608155941692732578, local class 
serialVersionUID = -1978883495828278874"

The serialVersionUID value computed by java at runtime changed when a new 
constructor was added with a 'cause' field.

AlreadyClosedExceptions can be thrown by the MBean server if a remote instance 
is trying to access stats on a recently deleted core for instance. In this 
case, the exception is serialized/deserialized by the MBean handler which can 
cause InvalidClassExceptions if the monitoring service is using a different 
version of lucene. Since Lucene doesn't want to implement Serializable, these 
exceptions should be propagated up to the MBeanServer.
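The UID mismatch described above can be reproduced in miniature: without an explicit serialVersionUID, the JVM derives one from the class shape, so adding a constructor silently changes it. A minimal sketch (hypothetical class names, not Lucene's actual code) of pinning the UID so later constructor additions stay stream-compatible:

```java
import java.io.ObjectStreamClass;

public class SerialUidDemo {
    // Hypothetical exception class. Declaring serialVersionUID explicitly
    // pins the stream identity: adding a (String, Throwable) constructor
    // later no longer changes the UID computed at runtime, so a remote JMX
    // client holding an older jar can still deserialize instances.
    static class PinnedException extends RuntimeException {
        private static final long serialVersionUID = 1L;

        PinnedException(String msg) { super(msg); }
        PinnedException(String msg, Throwable cause) { super(msg, cause); }
    }

    public static void main(String[] args) {
        long uid = ObjectStreamClass.lookup(PinnedException.class)
                                    .getSerialVersionUID();
        System.out.println(uid); // the declared value, not a computed hash
    }
}
```

The attached patch takes a different route (keeping the exception away from the MBeanServer entirely); the sketch only illustrates why the UID changed.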






[GitHub] lucene-solr pull request #136: Branch 6 3

2017-01-09 Thread nagakrishna
GitHub user nagakrishna opened a pull request:

https://github.com/apache/lucene-solr/pull/136

Branch 6 3



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene-solr branch_6_3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #136


commit e55b6f49913cb962cc40b3578951a23283317b29
Author: Noble Paul 
Date:   2016-09-16T11:38:55Z

shallowMap() should behave like a map. testcase added

commit 8352ff21cd3a21db5174b6e7af4b00fd2d373d5b
Author: Alan Woodward 
Date:   2016-09-16T12:33:07Z

SOLR-9507: Correctly set MDC values for CoreContainer threads

commit f728a646f388733cfb57f8d4d9a0d9217f42fd38
Author: Varun Thacker 
Date:   2016-09-16T13:17:06Z

SOLR-9522: Improve error handling in ZKPropertiesWriter

commit 380800261009fd04df8ffb73f030846b6d0d5bf9
Author: Mike McCandless 
Date:   2016-09-16T13:54:17Z

make test less evil: don't use random codec, even for the last IndexWriter

commit e8eadedb85c577ec2aed84d0281d45774f75bdc9
Author: Varun Thacker 
Date:   2016-09-16T13:41:59Z

SOLR-9451: Make clusterstatus command logging less verbose

commit 68d9d97510c8c46992cca06c0874cbe0169cdd22
Author: Noble Paul 
Date:   2016-09-17T07:32:09Z

SOLR-9523: Refactor CoreAdminOperation into smaller classes

commit 924e2da5e3e32e3703a471cfea6a8ab5b4d7c6c3
Author: Noble Paul 
Date:   2016-09-17T07:32:32Z

Merge remote-tracking branch 'origin/branch_6x' into branch_6x

commit 1a3bacfc0f55fba0a00fbc03eb49cd19f68167f2
Author: Noble Paul 
Date:   2016-09-19T12:15:17Z

SOLR-9502: ResponseWriters should natively support MapSerializable

commit f96017d9e10c665e7ab6b9161f2af08efc491946
Author: Alan Woodward 
Date:   2016-09-19T14:29:14Z

SOLR-9512: CloudSolrClient tries other replicas if a cached leader is down

commit b67a062f9db6372cf654a4366233e953c89f2722
Author: Uwe Schindler 
Date:   2016-09-19T22:01:45Z

LUCENE-7292: Fix build to use "--release 8" instead of "-release 8" on Java 
9 (this changed with recent EA build b135)

commit 09d399791a37681b5233248841bae738b799d8e1
Author: Jan Høydahl 
Date:   2016-09-20T08:56:25Z

SOLR-8080: bin/solr start script now exits with informative message if 
using wrong Java version

(cherry picked from commit 4574cb8)

commit 74bf88f8fe50b59e666f9387ca65ec26f821089d
Author: Jan Høydahl 
Date:   2016-09-20T09:22:53Z

SOLR-9475: Add install script support for CentOS and better distro 
detection under Docker

(cherry picked from commit a1bbc99)

commit a4293ce7c4e849b171430a34f36b830a84927a93
Author: Alan Woodward 
Date:   2016-09-20T13:33:38Z

Revert "SOLR-9512: CloudSolrClient tries other replicas if a cached leader 
is down"

This reverts commit f96017d9e10c665e7ab6b9161f2af08efc491946.

commit aeb1a173c7cf7f83b2ef2d45aa1b431580238edd
Author: Shalin Shekhar Mangar 
Date:   2016-09-20T20:52:37Z

Synchronizing CHANGES.txt with fixes released in 6.2.1

commit df1a2180a158f02091fe971c04d879bff610a5c0
Author: Shalin Shekhar Mangar 
Date:   2016-09-21T02:56:46Z

Add 6.2.1 back compat test indexes

commit 7a1e6efa9678f9cdfb3f59f61fba6e60e725f3a7
Author: Noble Paul 
Date:   2016-09-21T05:59:53Z

SOLR-9524: SolrIndexSearcher.getIndexFingerprint uses dubious 
synchronization

commit fdbeee974b443ce4a34cbf71ed5c97a70db9d7fb
Author: Noble Paul 
Date:   2016-09-21T06:05:11Z

Merge remote-tracking branch 'origin/branch_6x' into branch_6x

commit bae66f7cca8cff796d142eb19585d8e79fae34f8
Author: Alan Woodward 
Date:   2016-09-21T09:57:50Z

SOLR-9305, SOLR-9390: Don't use directToLeaders updates in partition tests 
(see SOLR-9512)

commit 7d05a081a97d849f0bfb1f510e97927f7d8a7954
Author: Christine Poerschke 
Date:   2016-09-21T10:50:35Z

SOLR-9538: Relocate (BinaryResponse|JSON|Smile)Writer tests to 
org.apache.solr.response which is the package of the classes they test. (Jonny 
Marks via Christine Poerschke)

commit 56f269734d01900d63bb38b65c144e502f263fbc
Author: Dawid Weiss 
Date:   2016-09-21T14:15:40Z

LUCENE-7455: slf4j uses MIT license not BSD-LIKE

commit 8502995e3b1ce66db49be26b23a3fa3c169345a8
Author: Noble Paul 
Date:   2016-09-21T18:25:59Z

SOLR-9446: Leader failure after creating a freshly replicated index can 
send nodes into recovery even if index was not changed

commit 

[jira] [Updated] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9941:
---
Attachment: SOLR-9941.hoss-test-experiment.patch

I wanted to beef up Ishan's testLogReplayWithReorderedDBQ to prove that if 
another (probably ordered) DBQ arrived _during_ log replay it would correctly 
be applied -- even if some affected docs hadn't been added yet as part of 
replay (i.e., prove that "RecentUpdates" was still being used during replay).

But for some reason my modified test fails even on master. I suspect either 
I'm misunderstanding something about how recovery works, or about the way 
TestRecovery works (or I just have a silly bug/typo somewhere).

In any case, I've attached only my test idea in 
SOLR-9941.hoss-test-experiment.patch if anyone wants to take a look.

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, causing deletes to be 
> redundantly & excessively applied -- at a minimum it produces really 
> confusing log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
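The 90-adds/5-DBQs arithmetic above can be sketched as a toy count (a simulation of the described behavior, not Solr code):

```java
public class ReplayCount {
    // Per the description: during replay every add pre-applies each "newer"
    // DBQ once (via getDBQNewer), and each DBQ is also replayed once from
    // the tlog itself.
    static int executionsPerDbq(int adds) {
        return adds + 1; // once per add, plus the normal tlog replay
    }

    static int totalDbqExecutions(int adds, int dbqs) {
        return dbqs * executionsPerDbq(adds);
    }

    public static void main(String[] args) {
        System.out.println(executionsPerDbq(90));      // 91, as in the report
        System.out.println(totalDbqExecutions(90, 5)); // 455 delete executions total
    }
}
```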






[jira] [Commented] (SOLR-9893) EasyMock/Mockito no longer works with Java 9 b148+

2017-01-09 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812465#comment-15812465
 ] 

Julian Hyde commented on SOLR-9893:
---

[~thetaphi], Thanks for replying. I agree with your strategy. I've disabled our 
offending tests using Assume, and we can still claim that Avatica works on 
JDK9, albeit with less coverage.

I am concerned that the Mockito/Cglib community seems to think that JDK9 
support means adding support for new JDK9 features, whereas we just want the 
same old functionality to run on a JDK9 runtime. (We can't use JDK9 features 
until we drop support for JDK1.7 and JDK1.8.) I'll weigh in on 
https://github.com/cglib/cglib/issues/93; until then I guess we'll have to 
be patient.

> EasyMock/Mockito no longer works with Java 9 b148+
> --
>
> Key: SOLR-9893
> URL: https://issues.apache.org/jira/browse/SOLR-9893
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
>Priority: Blocker
>
> EasyMock no longer works with the latest Java 9, because it uses cglib 
> behind the scenes, which tries to access a protected method inside the 
> runtime using setAccessible. This is no longer allowed by Java 9.
> Actually this is really stupid. Instead of forcefully making the protected 
> defineClass method available to the outside, it is much more correct to just 
> subclass ClassLoader (like the Lucene expressions module does).
> I tried updating to easymock/mockito, but all that does not work, approx 25 
> tests fail. The only way is to disable all Mocking tests in Java 9. The 
> underlying issue in cglib is still not solved, master's code is here: 
> https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
> As we use an old stone-aged version of mockito (1.x), a fix is not expected 
> to happen, although cglib might fix this!
> What should we do? This stupid issue prevents us from testing Java 9 with 
> Solr completely! 
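The alternative Uwe describes (subclassing ClassLoader rather than reflectively opening the protected defineClass) can be sketched as follows; this is an assumed simplification, not the actual code in the Lucene expressions module:

```java
public class DefineClassDemo {
    // A ClassLoader subclass may call the protected defineClass directly,
    // so no setAccessible call is needed and the code keeps working under
    // the Java 9 module system.
    static final class BytecodeLoader extends ClassLoader {
        BytecodeLoader(ClassLoader parent) { super(parent); }

        Class<?> define(String name, byte[] bytecode) {
            return defineClass(name, bytecode, 0, bytecode.length);
        }
    }

    public static void main(String[] args) throws Exception {
        BytecodeLoader loader =
                new BytecodeLoader(ClassLoader.getSystemClassLoader());
        // Normal parent delegation is unaffected; define(...) would be fed
        // the bytecode that a library like cglib generates at runtime.
        System.out.println(loader.loadClass("java.lang.String") == String.class);
    }
}
```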






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Mike Drob
Congratulations!

On Monday, January 9, 2017, Uwe Schindler  wrote:

> Welcome Dat!
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de 
>
>
>
> *From:* Joel Bernstein [mailto:joels...@gmail.com
> ]
> *Sent:* Monday, January 9, 2017 4:57 PM
> *To:* lucene dev  >
> *Subject:* Welcome Cao Manh Dat as a Lucene/Solr committer
>
>
>
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links: <
> http://www.apache.org/dev/>.
>
>
>
>
>
> Joel Bernstein
>
> http://joelsolr.blogspot.com/
>


[jira] [Commented] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812443#comment-15812443
 ] 

Hoss Man commented on SOLR-9941:


The patch looks pretty good to me -- but I'm not a tlog expert.

One question I have is: if the only code paths that call 
{{recoverFromLog(boolean)}} are "startup" paths that pass {{true}}, why do we 
need the optional argument? Why not just refactor the method to always use the 
new logic?

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, causing deletes to be 
> redundantly & excessively applied -- at a minimum it produces really 
> confusing log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.






RE: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Uwe Schindler
Welcome Dat!

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Joel Bernstein [mailto:joels...@gmail.com] 
Sent: Monday, January 9, 2017 4:57 PM
To: lucene dev 
Subject: Welcome Cao Manh Dat as a Lucene/Solr committer

 

I'm pleased to announce that Cao Manh Dat has accepted the Lucene
PMC's invitation to become a committer.

Dat, it's tradition that you introduce yourself with a brief bio.

Your account has been added to the "lucene" LDAP group, so you
now have commit privileges. Please test this by adding yourself to the
committers section of the Who We Are page on the website:
<  
http://lucene.apache.org/whoweare.html> (instructions here
<  
https://lucene.apache.org/site-instructions.html>).

The ASF dev page also has lots of useful links: <  
http://www.apache.org/dev/>.

 

 

Joel Bernstein

http://joelsolr.blogspot.com/



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3767 - Still Unstable!

2017-01-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3767/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([8A0AC4DB45F62808:25EFB01EB0A45F0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812422#comment-15812422
 ] 

ASF subversion and git services commented on SOLR-9934:
---

Commit 24038af7fab16aabb1365f05e9fe49d4fb1540e7 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=24038af ]

SOLR-9934: SolrTestCase.clearIndex has been improved to take advantage of low 
level test specific logic that clears the index metadata more completely than a 
normal *:* DBQ can due to update versioning

(cherry picked from commit 1d7379b680062eca766f0410e3db7ff9e9b34cb0)


> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out-of-order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low-level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low-level deletion, so tests 
> get this behavior automatically.
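The magic-version trick described above can be illustrated with a small, self-contained sketch. This is not Solr's actual code: the `Index` class and `WIPE_VERSION` handling are stand-ins for DUH2's version-constraint check and the low-level {{IndexWriter.deleteAll()}} path.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Sentinel version that signals "bypass version checks and wipe everything",
    // mirroring the version=-Long.MAX_VALUE convention the issue describes.
    static final long WIPE_VERSION = -Long.MAX_VALUE;

    static class Index {
        List<Long> docVersions = new ArrayList<>();

        void deleteByQuery(long requestVersion) {
            if (requestVersion == WIPE_VERSION) {
                // Test-only path: unconditional wipe (analogous to IndexWriter.deleteAll()).
                docVersions.clear();
                return;
            }
            // Normal path: a DBQ only removes docs with an older version,
            // so out-of-order updates are not lost.
            docVersions.removeIf(v -> v < requestVersion);
        }
    }

    public static void main(String[] args) {
        Index idx = new Index();
        idx.docVersions.add(5L);
        idx.docVersions.add(100L);

        idx.deleteByQuery(50L);          // version-constrained: only the doc with v=5 goes
        System.out.println(idx.docVersions.size()); // 1

        idx.deleteByQuery(WIPE_VERSION); // wipe: everything goes, regardless of version
        System.out.println(idx.docVersions.size()); // 0
    }
}
```

This shows why a plain *:* DBQ can leave state behind that only the unconditional wipe clears.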



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-9934.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out-of-order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low-level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low-level deletion, so tests 
> get this behavior automatically.






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Noble Paul
Congrats Dat, and welcome!

On Tue, Jan 10, 2017 at 3:15 AM, Alexandre Rafalovitch 
wrote:

> Congratulations Dat.
>
> It was awesome to collaborate with you before this, I am sure it will
> be even better after.
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 9 January 2017 at 10:57, Joel Bernstein  wrote:
> > I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Dat, it's tradition that you introduce yourself with a brief bio.
> >
> > Your account has been added to the "lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links:
> > .
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
-
Noble Paul


[jira] [Commented] (SOLR-9453) NullPointerException on PeerSync recovery

2017-01-09 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812359#comment-15812359
 ] 

Michael Braun commented on SOLR-9453:
-

We have not seen this error in some time running 6.2.1 - not sure what state we 
were in when this occurred.

> NullPointerException on PeerSync recovery
> -
>
> Key: SOLR-9453
> URL: https://issues.apache.org/jira/browse/SOLR-9453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
>Assignee: Shalin Shekhar Mangar
>
> Just updated to 6.2.0 (previously using 6.1.0) and we restarted the cluster a 
> few times - for one replica trying to sync on a shard, we got this on a 
> bootup and it's seemingly stuck. Cluster has 96 shards, 2 replicas per shard. 
> Shard 51 is where this issue occurred for us. It looks like the replica 
> eventually recovers, but we probably shouldn't see a NullPointerException.
> {code}
> java.lang.NullPointerException
>   at org.apache.solr.update.PeerSync.handleUpdates(PeerSync.java:605)
>   at org.apache.solr.update.PeerSync.handleResponse(PeerSync.java:344)
>   at org.apache.solr.update.PeerSync.sync(PeerSync.java:257)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processSync(RealTimeGetComponent.java:658)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processGetVersions(RealTimeGetComponent.java:623)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.process(RealTimeGetComponent.java:117)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Before it in the log, pasting some relevant lines with full IPs redacted:
> {code}ERROR - 2016-08-29 15:10:28.940; org.apache.solr.common.SolrException; 
> Error while trying to recover. 
> core=ourcollection_shard51_replica2:org.apache.solr.common.SolrException: No 
> registered leader was found after waiting for 4000ms , collection: 
> ourcollection slice: shard51
>  

[jira] [Resolved] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-9906.
--
   Resolution: Fixed
Fix Version/s: 6.4

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable, and we have seen failures in the past. 
> While tinkering with different ways to write a better test, I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs. replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist 
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 
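The proposed existence check could be sketched as follows. The directory layout and file handling here are illustrative assumptions (a temp directory stands in for a core's data directory); only the {{replication.properties}} filename comes from the issue.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class Main {
    // true if the core appears to have recovered via full replication:
    // replication recovery writes replication.properties, PeerSync does not.
    static boolean recoveredViaReplication(Path coreDataDir) {
        return Files.exists(coreDataDir.resolve("replication.properties"));
    }

    public static void main(String[] args) throws Exception {
        Path dataDir = Files.createTempDirectory("core"); // stand-in for a core's data dir
        System.out.println(recoveredViaReplication(dataDir)); // false: PeerSync (or no recovery)

        // A replication recovery would create this file; simulate that here.
        Files.createFile(dataDir.resolve("replication.properties"));
        System.out.println(recoveredViaReplication(dataDir)); // true: replication happened
    }
}
```

A file-existence check like this is binary and timing-independent, which is what makes it more reliable than counting replication-handler requests.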






[jira] [Assigned] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-9906:


Assignee: Noble Paul

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable, and we have seen failures in the past. 
> While tinkering with different ways to write a better test, I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs. replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist 
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 






[jira] [Commented] (SOLR-9453) NullPointerException on PeerSync recovery

2017-01-09 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812322#comment-15812322
 ] 

Pushkar Raste commented on SOLR-9453:
-

Try switching to 6.3

> NullPointerException on PeerSync recovery
> -
>
> Key: SOLR-9453
> URL: https://issues.apache.org/jira/browse/SOLR-9453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
>Assignee: Shalin Shekhar Mangar
>
> Just updated to 6.2.0 (previously using 6.1.0) and we restarted the cluster a 
> few times - for one replica trying to sync on a shard, we got this on a 
> bootup and it's seemingly stuck. Cluster has 96 shards, 2 replicas per shard. 
> Shard 51 is where this issue occurred for us. It looks like the replica 
> eventually recovers, but we probably shouldn't see a NullPointerException.
> {code}
> java.lang.NullPointerException
>   at org.apache.solr.update.PeerSync.handleUpdates(PeerSync.java:605)
>   at org.apache.solr.update.PeerSync.handleResponse(PeerSync.java:344)
>   at org.apache.solr.update.PeerSync.sync(PeerSync.java:257)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processSync(RealTimeGetComponent.java:658)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processGetVersions(RealTimeGetComponent.java:623)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.process(RealTimeGetComponent.java:117)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Before it in the log, pasting some relevant lines with full IPs redacted:
> {code}ERROR - 2016-08-29 15:10:28.940; org.apache.solr.common.SolrException; 
> Error while trying to recover. 
> core=ourcollection_shard51_replica2:org.apache.solr.common.SolrException: No 
> registered leader was found after waiting for 4000ms , collection: 
> ourcollection slice: shard51
> at 
> 

[jira] [Commented] (SOLR-9000) New Admin UI hardcodes /solr context and fails when it changes

2017-01-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812246#comment-15812246
 ] 

Jan Høydahl commented on SOLR-9000:
---

That is not true. You can start as many Solrs as you like on the same host by 
specifying different ports: {{solr start -p }}

> New Admin UI hardcodes /solr context and fails when it changes
> --
>
> Key: SOLR-9000
> URL: https://issues.apache.org/jira/browse/SOLR-9000
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Attachments: solr-wrong-urls-screenshot.png
>
>
> If the solr context is changed from */solr* to any other value (e.g. 
> */solr6_0/instance/solr1*), the new Admin UI does not work, as it still tries 
> to load resources from the */solr* prefix:
> The context is changed by editing server/contexts/solr-jetty-context.xml:
>  bq.  default="/solr6_0/instance/solr1"/>
> and by changing redirect in the server/etc/jetty.xml
> {quote}
> 
>   ^/$
>   /solr6_0/instance/solr1/
>  
> {quote}
> This affects the new Admin UI, as well as the links between the two UIs.
> The old Admin UI seems to work with the changed context, once it is manually 
> loaded.






[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812238#comment-15812238
 ] 

ASF subversion and git services commented on SOLR-9934:
---

Commit 1d7379b680062eca766f0410e3db7ff9e9b34cb0 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d7379b ]

SOLR-9934: SolrTestCase.clearIndex has been improved to take advantage of low 
level test specific logic that clears the index metadata more completely than a 
normal *:* DBQ can due to update versioning


> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out-of-order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}}) for use by tests to override these 
> version constraints and do a low-level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low-level deletion, so tests 
> get this behavior automatically.






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Alexandre Rafalovitch
Congratulations Dat.

It was awesome to collaborate with you before this, I am sure it will
be even better after.

Regards,
   Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 9 January 2017 at 10:57, Joel Bernstein  wrote:
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
>
> Dat, it's tradition that you introduce yourself with a brief bio.
>
> Your account has been added to the "lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
>
> The ASF dev page also has lots of useful links:
> .
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/




[jira] [Commented] (SOLR-9453) NullPointerException on PeerSync recovery

2017-01-09 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812164#comment-15812164
 ] 

Pushkar Raste commented on SOLR-9453:
-

Looks like NPE is coming from a log statement 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.2.0/solr/core/src/java/org/apache/solr/update/PeerSync.java#L605
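The exact expression at PeerSync.java:605 is not quoted in this thread, so the following is only a generic illustration of the failure mode: a log statement that dereferences a possibly-null value throws the NPE itself, before the logger ever runs. The field and message names are made up.

```java
import java.util.logging.Logger;

public class Main {
    private static final Logger log = Logger.getLogger("PeerSyncSketch");

    public static void main(String[] args) {
        Object[] updates = null; // e.g. a response field that can legitimately be null

        // Unsafe: "requested updates: " + updates.length would throw a
        // NullPointerException while building the message, aborting the sync.

        // Safe: resolve the null case before the message is formatted.
        int count = (updates == null) ? 0 : updates.length;
        log.info("requested updates: " + count);
        System.out.println("logged count=" + count);
    }
}
```

Guarding the dereference (or logging with String.valueOf) keeps a purely diagnostic statement from failing the recovery path.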

> NullPointerException on PeerSync recovery
> -
>
> Key: SOLR-9453
> URL: https://issues.apache.org/jira/browse/SOLR-9453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
>Assignee: Shalin Shekhar Mangar
>
> Just updated to 6.2.0 (previously using 6.1.0) and we restarted the cluster a 
> few times - for one replica trying to sync on a shard, we got this on a 
> bootup and it's seemingly stuck. Cluster has 96 shards, 2 replicas per shard. 
> Shard 51 is where this issue occurred for us. It looks like the replica 
> eventually recovers, but we probably shouldn't see a NullPointerException.
> {code}
> java.lang.NullPointerException
>   at org.apache.solr.update.PeerSync.handleUpdates(PeerSync.java:605)
>   at org.apache.solr.update.PeerSync.handleResponse(PeerSync.java:344)
>   at org.apache.solr.update.PeerSync.sync(PeerSync.java:257)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processSync(RealTimeGetComponent.java:658)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.processGetVersions(RealTimeGetComponent.java:623)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.process(RealTimeGetComponent.java:117)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Before it in the log , pasting some relevant lines with full IPs redacted:
> {code}ERROR - 2016-08-29 15:10:28.940; org.apache.solr.common.SolrException; 
> Error while trying to recover. 
> core=ourcollection_shard51_replica2:org.apache.solr.common.SolrException: No 
> registered leader was found after 

[jira] [Commented] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-09 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812165#comment-15812165
 ] 

Ishan Chattopadhyaya commented on SOLR-9941:


Fyi, [~shalinmangar], [~caomanhdat] please review.

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, which causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called, a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> This means that if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is that 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
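The multiplication described above can be sketched with a toy simulation (plain Java, not the actual Solr classes; versions are assigned so every DBQ is "newer" than every add, matching the 90-adds/5-DBQs example):

```java
public class DbqReplaySim {
    /**
     * Counts how many times each DBQ would be executed during a simulated
     * tlog replay of numAdds adds followed by numDbqs DBQs, mirroring the
     * pre-apply-on-every-add behavior described in the issue.
     */
    static int[] simulate(int numAdds, int numDbqs) {
        int[] execCount = new int[numDbqs];
        // Versions: adds get 1..numAdds, DBQs get numAdds+1..numAdds+numDbqs
        for (int addVersion = 1; addVersion <= numAdds; addVersion++) {
            for (int q = 0; q < numDbqs; q++) {
                long dbqVersion = numAdds + q + 1;
                // every DBQ in the tlog has a newer version than every add,
                // so it is treated as "reordered" and pre-applied once per add
                if (dbqVersion > addVersion) execCount[q]++;
            }
        }
        // normal replay then executes each DBQ once more, from the tlog itself
        for (int q = 0; q < numDbqs; q++) execCount[q]++;
        return execCount;
    }

    public static void main(String[] args) {
        for (int c : DbqReplaySim.simulate(90, 5)) {
            System.out.println("DBQ executed " + c + " times"); // 91 each
        }
    }
}
```

Each DBQ is pre-applied once for each of the 90 adds, then once more during normal replay: 91 executions, as described.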



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Alan Woodward
Congratulations!

Alan Woodward
www.flax.co.uk


> On 9 Jan 2017, at 15:57, Joel Bernstein  wrote:
> 
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
> 
> Dat, it's tradition that you introduce yourself with a brief bio.
> 
> Your account has been added to the “lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  > (instructions here
>  >).
> 
> The ASF dev page also has lots of useful links:  >.
> 
> 
> Joel Bernstein
> http://joelsolr.blogspot.com/ 



[jira] [Updated] (SOLR-9856) Collect metrics for shard replication and tlog replay on replicas

2017-01-09 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9856:

Attachment: SOLR-9856.patch

I think this is ready. Brief summary of changes:

* added metrics for transaction log processing: gauges that report the current 
state, the number of buffered operations, the processing of buffered ops, and 
the number and size of replicated logs; plus meters for processing these logs.

* added metrics for {{PeerSync}}: a timer for actual sync operations, and 
counters for errors and skipped syncs (when a sync was requested but the shard 
was already in sync).
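As a rough illustration of the metric types involved, here is a hand-rolled sketch of gauge vs. counter vs. timer semantics (plain Java; this is not the actual Solr/metrics API, just the concepts):

```java
import java.util.function.Supplier;

public class MetricTypes {
    // Gauge: reports a current value on demand, e.g. number of buffered tlog ops
    static class Gauge {
        final Supplier<Long> source;
        Gauge(Supplier<Long> s) { source = s; }
        long value() { return source.get(); }
    }
    // Counter: a running total, e.g. PeerSync errors or skipped syncs
    static class Counter {
        private long n;
        void inc() { n++; }
        long count() { return n; }
    }
    // Timer: records the duration of each operation, e.g. one sync attempt
    static class Timer {
        private long totalNanos, events;
        void update(long nanos) { totalNanos += nanos; events++; }
        double meanNanos() { return events == 0 ? 0 : (double) totalNanos / events; }
    }

    public static void main(String[] args) {
        long[] buffered = {3};                     // stand-in for buffered-op count
        Gauge g = new Gauge(() -> buffered[0]);    // sampled on demand
        Counter skipped = new Counter();
        Timer sync = new Timer();
        sync.update(1_000_000);
        sync.update(3_000_000);
        skipped.inc();
        System.out.println(g.value() + " " + skipped.count() + " " + sync.meanNanos());
    }
}
```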

> Collect metrics for shard replication and tlog replay on replicas
> -
>
> Key: SOLR-9856
> URL: https://issues.apache.org/jira/browse/SOLR-9856
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9856.patch
>
>
> Using API from SOLR-4735 add metrics for tracking outgoing replication from 
> leader to shard replicas, and for tracking transaction log processing on 
> replicas.






[jira] [Commented] (SOLR-9000) New Admin UI hardcodes /solr context and fails when it changes

2017-01-09 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812132#comment-15812132
 ] 

Timo Hund commented on SOLR-9000:
-

Hi all, this would mean that multiple Solr installations would always 
require one hostname per Solr instance, because separating the instances by 
path is not possible. Why not make the paths relative, as proposed in SOLR-9584, 
to allow both?

> New Admin UI hardcodes /solr context and fails when it changes
> --
>
> Key: SOLR-9000
> URL: https://issues.apache.org/jira/browse/SOLR-9000
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Attachments: solr-wrong-urls-screenshot.png
>
>
> If the solr context is changed from */solr* to any other value (e.g. 
> */solr6_0/instance/solr1*), the new Admin UI does not work as it still tries 
> to load resources from */solr* prefix:
> The context is changed by editing server/contexts/solr-jetty-context.xml:
>  bq.  default="/solr6_0/instance/solr1"/>
> and by changing redirect in the server/etc/jetty.xml
> {quote}
> 
>   ^/$
>   /solr6_0/instance/solr1/
>  
> {quote}
> This affects New Admin UI, as well as both links between the UIs.
> The old Admin UI seems to work with the changed context, once it is manually 
> loaded.
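For reference, the redirect change described above typically looks like the following in server/etc/jetty.xml (a reconstructed sketch based on the quoted fragment; the RewriteRegexRule class and the ^/$ regex match the quote, and the replacement path is this reporter's example, so adjust it to your deployment):

```xml
<!-- jetty.xml (sketch): rewrite "/" to the custom Solr context -->
<Call name="addRule">
  <Arg>
    <New class="org.eclipse.jetty.rewrite.handler.RewriteRegexRule">
      <Set name="regex">^/$</Set>
      <Set name="replacement">/solr6_0/instance/solr1/</Set>
    </New>
  </Arg>
</Call>
```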






Re: Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Steve Rowe
Welcome and congrats Dat!

--
Steve
www.lucidworks.com

> On Jan 9, 2017, at 10:57 AM, Joel Bernstein  wrote:
> 
> I'm pleased to announce that Cao Manh Dat has accepted the Lucene
> PMC's invitation to become a committer.
> 
> Dat, it's tradition that you introduce yourself with a brief bio.
> 
> Your account has been added to the “lucene" LDAP group, so you
> now have commit privileges. Please test this by adding yourself to the
> committers section of the Who We Are page on the website:
>  (instructions here
> ).
> 
> The ASF dev page also has lots of useful links: .
> 
> 
> Joel Bernstein
> http://joelsolr.blogspot.com/





[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-09 Thread Timo Hund (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812121#comment-15812121
 ] 

Timo Hund commented on SOLR-9584:
-

Hi all, I would also vote for this patch, also regarding the comments on 
SOLR-9000. There are valid comments that the handling of the routing 
should happen outside Solr, but to me this is an argument that the URLs 
should be relative, because that allows any outer application to do the routing. 
Otherwise you force Solr to always be installed at hostname:port/solr/ and 
it could not be hostname:port//solr/ 

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






Welcome Cao Manh Dat as a Lucene/Solr committer

2017-01-09 Thread Joel Bernstein
I'm pleased to announce that Cao Manh Dat has accepted the Lucene
PMC's invitation to become a committer.

Dat, it's tradition that you introduce yourself with a brief bio.

Your account has been added to the “lucene" LDAP group, so you
now have commit privileges. Please test this by adding yourself to the
committers section of the Who We Are page on the website:
 (instructions here
).

The ASF dev page also has lots of useful links: .


Joel Bernstein
http://joelsolr.blogspot.com/


[jira] [Commented] (SOLR-9930) Incomplete documentation for analysis-extra

2017-01-09 Thread Jakob Kylberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812066#comment-15812066
 ] 

Jakob Kylberg commented on SOLR-9930:
-

What I did was to add the following field type to the schema.xml: 

{{}}

I did not make any changes to the solrconfig.

To get it to work I added the lucene-libs/lucene-analyzers-icu-X.Y.jar and 
lib/icu4j-X.Y.jar together with the solr-analysis-extra.X.Y.jar to my 
collection's libs directory. What I'm trying to help clarify is that 
solr-analysis-extra.X.Y.jar is needed together with the ICU jars, which are 
already mentioned as dependencies in the readme. To make that clearer I updated 
the pull request.

I'm using Solr 6.3.0. I've also tried this in Solr 6.2.0.
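For context, the field type in question would be of roughly this shape (an illustrative sketch, since the snippet above did not survive in the email; the type name and the exact analyzer chain here are hypothetical, but both factories ship in analysis-extras and require the jars listed above):

```xml
<!-- schema.xml (sketch): an ICU-based field type of the kind that needs
     solr-analysis-extra plus the lucene-analyzers-icu and icu4j jars -->
<fieldType name="text_icu" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.ICUTokenizerFactory"/>
    <filter class="solr.ICUFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```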

> Incomplete documentation for analysis-extra
> ---
>
> Key: SOLR-9930
> URL: https://issues.apache.org/jira/browse/SOLR-9930
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jakob Kylberg
>Priority: Minor
>  Labels: documentation
>
> The documentation regarding which dependencies have to be added in order 
> to activate e.g. the ICU analyzer is incomplete. This leads to unnecessary 
> trouble for the user when they have to find the missing dependencies 
> themselves, especially since the error message in the logs and Solr GUI 
> doesn't give a clear hint on what's missing.
> I've created a pull request with updated documentation.






[jira] [Commented] (LUCENE-7624) Consider moving TermsQuery to core

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812059#comment-15812059
 ] 

ASF subversion and git services commented on LUCENE-7624:
-

Commit 17cd0f00cc1a7bce647eedfe56c860a02aa22654 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17cd0f0 ]

LUCENE-7624: Remove deprecated TermsQuery


> Consider moving TermsQuery to core
> --
>
> Key: LUCENE-7624
> URL: https://issues.apache.org/jira/browse/LUCENE-7624
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7624.patch
>
>
> TermsQuery currently sits in the queries module, but it's used in both 
> spatial-extras and in facets, and is currently the only reason that the 
> facets module has a dependency on queries.  I think it's a generally useful 
> query, and would fit in perfectly well in core.
> This would also allow us to explore rewriting BooleanQuery to TermsQuery to 
> avoid the max-clauses limit






[jira] [Resolved] (LUCENE-7624) Consider moving TermsQuery to core

2017-01-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7624.
---
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

Thanks all

> Consider moving TermsQuery to core
> --
>
> Key: LUCENE-7624
> URL: https://issues.apache.org/jira/browse/LUCENE-7624
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7624.patch
>
>
> TermsQuery currently sits in the queries module, but it's used in both 
> spatial-extras and in facets, and is currently the only reason that the 
> facets module has a dependency on queries.  I think it's a generally useful 
> query, and would fit in perfectly well in core.
> This would also allow us to explore rewriting BooleanQuery to TermsQuery to 
> avoid the max-clauses limit






[jira] [Commented] (LUCENE-7624) Consider moving TermsQuery to core

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812054#comment-15812054
 ] 

ASF subversion and git services commented on LUCENE-7624:
-

Commit 22940f5c49297b606d710c6775309d67ff064f2f in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=22940f5 ]

LUCENE-7624: Move TermsQuery into core as TermInSetQuery


> Consider moving TermsQuery to core
> --
>
> Key: LUCENE-7624
> URL: https://issues.apache.org/jira/browse/LUCENE-7624
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7624.patch
>
>
> TermsQuery currently sits in the queries module, but it's used in both 
> spatial-extras and in facets, and is currently the only reason that the 
> facets module has a dependency on queries.  I think it's a generally useful 
> query, and would fit in perfectly well in core.
> This would also allow us to explore rewriting BooleanQuery to TermsQuery to 
> avoid the max-clauses limit






[jira] [Commented] (LUCENE-7624) Consider moving TermsQuery to core

2017-01-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812053#comment-15812053
 ] 

ASF subversion and git services commented on LUCENE-7624:
-

Commit 8511f9e6991679f71e7a82c6ef9cf1b774d090aa in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8511f9e ]

LUCENE-7624: Move TermsQuery into core as TermInSetQuery


> Consider moving TermsQuery to core
> --
>
> Key: LUCENE-7624
> URL: https://issues.apache.org/jira/browse/LUCENE-7624
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7624.patch
>
>
> TermsQuery currently sits in the queries module, but it's used in both 
> spatial-extras and in facets, and is currently the only reason that the 
> facets module has a dependency on queries.  I think it's a generally useful 
> query, and would fit in perfectly well in core.
> This would also allow us to explore rewriting BooleanQuery to TermsQuery to 
> avoid the max-clauses limit





