[jira] [Commented] (SOLR-8292) TransactionLog.next() does not honor contract and return null for EOF

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803900#comment-15803900
 ] 

Cao Manh Dat commented on SOLR-8292:


Hi Erick, I'm not very familiar with the CDCR code, but I will give it a try 
today. Do we have any test that reproduces this error?

> TransactionLog.next() does not honor contract and return null for EOF
> -
>
> Key: SOLR-8292
> URL: https://issues.apache.org/jira/browse/SOLR-8292
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8292.patch
>
>
> This came to light in CDCR testing, which stresses this code a lot, there's a 
> stack trace showing this line (641 trunk) throwing an EOF exception:
> o = codec.readVal(fis);
> At first I thought to just wrap the read of fis in a try/catch and return 
> null, but looking at the code a bit more I'm not so sure: that seems like it 
> would mask what looks at first glance like a bug in the logic.
> A few lines earlier (633-4) there are these lines:
> // shouldn't currently happen - header and first record are currently 
> written at the same time
> if (fis.position() >= fos.size()) {
> Why are we comparing the input file position against the size of the output 
> file? Maybe because the 'i' key is right next to the 'o' key? The comment 
> hints that it's checking for the ability to read the first record in the 
> input stream along with the header. And perhaps there's a different issue 
> here, because the expectation clearly is that the first record should be 
> there if the header is.
> So what's the right thing to do? Wrap in a try/catch and return null for EOF? 
> Change the test? Do both?
> I can take care of either, but wanted a clue whether the comparison of fis to 
> fos is intended.
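To make the "catch EOF and return null" option concrete, here is a minimal, 
self-contained sketch. The class and record format are hypothetical (a plain 
DataInputStream of ints standing in for codec.readVal(fis)); this is not the 
actual TransactionLog code, just the shape of the proposed contract fix.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Simplified stand-in for TransactionLog.next(): instead of letting an
// EOFException escape to the caller, catch it and return null, honoring
// the "null means end of log" contract.
public class LogReaderSketch {
    private final DataInputStream in;

    LogReaderSketch(byte[] log) {
        this.in = new DataInputStream(new ByteArrayInputStream(log));
    }

    /** Returns the next record, or null at EOF. */
    Integer next() throws IOException {
        try {
            return in.readInt();  // stand-in for o = codec.readVal(fis)
        } catch (EOFException eof) {
            return null;          // honor the contract instead of throwing
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] log = {0, 0, 0, 1, 0, 0, 0, 2};  // two int "records"
        LogReaderSketch r = new LogReaderSketch(log);
        System.out.println(r.next()); // 1
        System.out.println(r.next()); // 2
        System.out.println(r.next()); // null -> EOF
    }
}
```

Whether a catch like this is the right fix still depends on whether the 
fis/fos comparison above is intentional.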



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803895#comment-15803895
 ] 

Cao Manh Dat commented on SOLR-9922:


Hi Mark, I used {{beast.sh}} to run {{ChaosMonkeyNothingIsSafeTest}} and 
{{ChaosMonkeySafeLeaderTest}}, and both tests passed successfully!

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering logs to the current tlog and do not apply 
> those updates to the index. Then we rely on log replay to apply those 
> updates to the index. But at the same time there are other updates that are 
> also written to the current tlog and applied to the index.
> For example, during peersync, if new updates come to a replica we will end 
> up with this tlog:
> tlog : old1, new1, new2, old2, new3, old3
> The old updates belong to peersync, and these updates are applied to the index.
> The new updates belong to buffering updates, and these updates are not 
> applied to the index.
> But writing all the updates to the same current tlog makes the code base 
> very complex. We should write buffering updates to another tlog file.
> Doing this will make our code base simpler. It also makes replica recovery 
> for SOLR-9835 easier, because after peersync succeeds we can copy the new 
> updates from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
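The copy step described above is just an ordered append. A toy sketch, with 
lists standing in for tlog files (names illustrative, not Solr's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// After peersync succeeds, the buffered updates from the temporary tlog
// are appended, in order, to the main tlog.
public class TlogMergeSketch {
    static List<String> afterPeersync(List<String> tlog, List<String> bufferTlog) {
        List<String> merged = new ArrayList<>(tlog);
        merged.addAll(bufferTlog);  // buffered updates keep their order
        return merged;
    }

    public static void main(String[] args) {
        System.out.println(afterPeersync(
                List.of("old1", "old2", "old3"),
                List.of("new1", "new2", "new3")));
        // [old1, old2, old3, new1, new2, new3]
    }
}
```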






[JENKINS] Lucene-Solr-Tests-6.x - Build # 653 - Unstable

2017-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/653/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestCloudPseudoReturnFields

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestCloudPseudoReturnFields: 1) Thread[id=479, 
name=OverseerHdfsCoreFailoverThread-97234799762276361-127.0.0.1:59761_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestCloudPseudoReturnFields: 
   1) Thread[id=479, 
name=OverseerHdfsCoreFailoverThread-97234799762276361-127.0.0.1:59761_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([2B2F7A5AE2C9BD7B]:0)




Build Log:
[...truncated 10819 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudPseudoReturnFields
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.cloud.TestCloudPseudoReturnFields_2B2F7A5AE2C9BD7B-001/init-core-data-001
   [junit4]   2> 59043 INFO  
(SUITE-TestCloudPseudoReturnFields-seed#[2B2F7A5AE2C9BD7B]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 59045 INFO  
(SUITE-TestCloudPseudoReturnFields-seed#[2B2F7A5AE2C9BD7B]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.cloud.TestCloudPseudoReturnFields_2B2F7A5AE2C9BD7B-001/tempDir-001
   [junit4]   2> 59045 INFO  
(SUITE-TestCloudPseudoReturnFields-seed#[2B2F7A5AE2C9BD7B]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 59045 INFO  (Thread-76) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 59045 INFO  (Thread-76) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 59145 INFO  
(SUITE-TestCloudPseudoReturnFields-seed#[2B2F7A5AE2C9BD7B]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:39065
   [junit4]   2> 59168 INFO  (jetty-launcher-57-thread-2) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 59187 INFO  (jetty-launcher-57-thread-1) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 59188 INFO  (jetty-launcher-57-thread-3) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 59191 INFO  (jetty-launcher-57-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@7d732eae{/solr,null,AVAILABLE}
   [junit4]   2> 59195 INFO  (jetty-launcher-57-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6bfd87b4{/solr,null,AVAILABLE}
   [junit4]   2> 59197 INFO  (jetty-launcher-57-thread-2) [] 
o.e.j.s.AbstractConnector Started ServerConnector@458244ee{SSL,[ssl, 
http/1.1]}{127.0.0.1:55216}
   [junit4]   2> 59197 INFO  (jetty-launcher-57-thread-2) [] o.e.j.s.Server 
Started @64242ms
   [junit4]   2> 59197 INFO  (jetty-launcher-57-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=55216}
   [junit4]   2> 59198 ERROR (jetty-launcher-57-thread-2) [] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 59198 INFO  (jetty-launcher-57-thread-2) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.4.0
   [junit4]   2> 59198 INFO  (jetty-launcher-57-thread-2) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 59198 INFO  (jetty-launcher-57-thread-2) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 59198 INFO  (jetty-launcher-57-thread-2) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-01-06T06:48:19.259Z
   [junit4]   2> 59200 INFO  (jetty-launcher-57-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@215915cb{/solr,null,AVAILABLE}
   [junit4]   2> 59202 INFO  (jetty-launcher-57-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@abe3ded{SSL,[ssl, 
http/1.1]}{127.0.0.1:34690}
   [junit4]   2> 59202 INFO  (jetty-launcher-57-thread-1) [] o.e.j.s.Server 
Started @64247ms
   [junit4]   2> 59203 INFO  (jetty-launcher-57-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_112) - Build # 671 - Still Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/671/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.SampleTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.SampleTest_671D2FCDEEC67307-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.SampleTest_671D2FCDEEC67307-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.SampleTest_671D2FCDEEC67307-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.SampleTest_671D2FCDEEC67307-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([671D2FCDEEC67307]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([671D2FCDEEC67307:9E50BC62D2B33E8D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:284)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 

[jira] [Updated] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9935:
---
Attachment: SOLR_9935_UH_fragsize.patch

Here's a patch.  The default fragsize chosen is 70 as that is the same used 
when the regex fragmenter (of the original Highlighter) is used in Solr.  These 
are both similar in that you typically want to shoot for a passage about a 
sentence in length.

Note the regex fragmenter has a "slop" feature that is 60% of the fragsize... 
this is not (yet) supported by the UH's LengthGoalBreakIterator.

When LUCENE-7620 lands (this weekend?), I plan to commit this immediately after.

> When hl.method=unified add support for hl.fragsize param
> 
>
> Key: SOLR-9935
> URL: https://issues.apache.org/jira/browse/SOLR-9935
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_9935_UH_fragsize.patch
>
>
> In LUCENE-7620 the UnifiedHighlighter is getting a BreakIterator that allows 
> it to support the equivalent of Solr's {{hl.fragsize}}.  So let's support 
> this on the Solr side.






[jira] [Created] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-05 Thread David Smiley (JIRA)
David Smiley created SOLR-9935:
--

 Summary: When hl.method=unified add support for hl.fragsize param
 Key: SOLR-9935
 URL: https://issues.apache.org/jira/browse/SOLR-9935
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley


In LUCENE-7620 the UnifiedHighlighter is getting a BreakIterator that allows it 
to support the equivalent of Solr's {{hl.fragsize}}.  So let's support this on 
the Solr side.






[jira] [Updated] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7620:
-
Attachment: LUCENE_7620_UH_LengthGoalBreakIterator.patch

Here's a patch.  I'm calling it {{LengthGoalBreakIterator}}.  In time, perhaps 
we might add some tweaks like a "slop" akin to the LuceneRegexFragmenter (in 
Solr). 

[~jim.ferenczi] I thought you might want to take a peek.  I figure this can get 
into 6.4; I'll commit it this weekend.

> UnifiedHighlighter: add target character width BreakIterator wrapper
> 
>
> Key: LUCENE-7620
> URL: https://issues.apache.org/jira/browse/LUCENE-7620
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE_7620_UH_LengthGoalBreakIterator.patch
>
>
> The original Highlighter includes a {{SimpleFragmenter}} that delineates 
> fragments (aka Passages) by a character width.  The default is 100 characters.
> It would be great to support something similar for the UnifiedHighlighter.  
> It's useful in its own right and of course it helps users transition to the 
> UH.  I'd like to do it as a wrapper to another BreakIterator -- perhaps a 
> sentence one.  In this way you get back Passages that are a number of 
> sentences so they will look nice instead of breaking mid-way through a 
> sentence.  And you get some control by specifying a target number of 
> characters.  This BreakIterator wouldn't be a general purpose 
> java.text.BreakIterator since it would assume it's called in a manner exactly 
> as the UnifiedHighlighter uses it.  It would probably be compatible with the 
> PostingsHighlighter too.
> I don't propose doing this by default; besides, it's easy enough to pick your 
> BreakIterator config.
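As a rough illustration of the wrapping idea (this is not the actual 
LengthGoalBreakIterator from the patch, just the concept), one can extend a 
fragment sentence by sentence until a target character count is reached, using 
the JDK's sentence BreakIterator:

```java
import java.text.BreakIterator;
import java.util.Locale;

// Sketch: find a fragment end that is at least lengthGoal characters from
// the start, but always lands on a sentence boundary (or the end of text),
// so passages read as whole sentences.
public class LengthGoalSketch {
    static int fragmentEnd(String text, int lengthGoal) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.ROOT);
        sentences.setText(text);
        int end = sentences.following(0);        // end of first sentence
        while (end != BreakIterator.DONE && end < lengthGoal) {
            int next = sentences.next();         // extend by one sentence
            if (next == BreakIterator.DONE) break;
            end = next;
        }
        return end == BreakIterator.DONE ? text.length() : end;
    }

    public static void main(String[] args) {
        String text = "One. Two two. Three three three.";
        // prints a fragment of at least 10 chars, ending on a sentence boundary
        System.out.println(text.substring(0, fragmentEnd(text, 10)));
    }
}
```

The real implementation wraps an arbitrary underlying BreakIterator rather 
than hard-coding the sentence instance.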






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2601 - Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2601/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.metrics.SolrMetricManagerTest.testClearMetrics

Error Message:
expected:<80> but was:<81>

Stack Trace:
java.lang.AssertionError: expected:<80> but was:<81>
at 
__randomizedtesting.SeedInfo.seed([69FC2910FF09178E:2D5EAEAF876EEC7E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.metrics.SolrMetricManagerTest.testClearMetrics(SolrMetricManagerTest.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12635 lines...]
   [junit4] Suite: org.apache.solr.metrics.SolrMetricManagerTest
   [junit4]   2> 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6337 - Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6337/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([8F0C9F648EFB0711:758A0BE20076AE9]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803485#comment-15803485
 ] 

Cao Manh Dat edited comment on SOLR-9922 at 1/6/17 4:16 AM:


In the current code, FLAG_GAP is used in RecoveryStrategy: we first check 
whether the lastOperation has FLAG_GAP; if so, we are sure that the buffering 
updates were not applied (because the node failed during buffering), so we 
skip peersync and go directly to the replication process.

In my patch, I detect this event by checking whether any old buffer log 
exists. So I'm worried about the case where the lastOperation has FLAG_GAP 
when users restart the whole cluster with the new code. Instead of going to 
the replication process, the new code will go to peerSync.
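The two decision rules being compared boil down to the following (illustrative 
stand-ins, not the real RecoveryStrategy code):

```java
// Sketch of the recovery decision: "must we skip peersync and replicate?"
public class RecoveryDecisionSketch {
    // old behavior: FLAG_GAP on the last tlog operation means buffered
    // updates were never applied, so go straight to replication
    static boolean mustReplicateOld(boolean lastOpHasFlagGap) {
        return lastOpHasFlagGap;
    }

    // new behavior (per the patch): the same condition is detected by the
    // presence of a leftover buffer tlog from a previous run
    static boolean mustReplicateNew(boolean oldBufferLogExists) {
        return oldBufferLogExists;
    }

    public static void main(String[] args) {
        // the upgrade worry described above: a node written by the old code
        // has FLAG_GAP set but no separate buffer-log file, so the new check
        // would choose peersync where the old check chose replication
        System.out.println(mustReplicateOld(true));  // true  -> replication
        System.out.println(mustReplicateNew(false)); // false -> peersync
    }
}
```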


was (Author: caomanhdat):
In current code, FLAG_GAP is used in RecoveryStrategy, we first check 
lastOperation have FLAG_GAP, if yes we are sure that buffering updates is not 
applied ( because the node failed during buffering ) so we skip peersync and go 
directly to replication process.

In my patch, I detect this event by checking that any old buffer log exist. So 
I'm worry about the case when the lastOperation have FLAG_GAP when users 
restart the whole cluster with new code. That the reason why I said that "all 
nodes should be in ACTIVE state".







[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803494#comment-15803494
 ] 

Mark Miller commented on SOLR-9928:
---

I'm not even sure the wrapping is needed; the impls should properly unwrap 
now. But perhaps the intent was to hide it. In either case we should be 
consistent.

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
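For reference, the delegate-vs-super distinction at issue looks like this in 
miniature (class names are illustrative, not Solr's actual factories):

```java
// A wrapping factory must forward calls to the wrapped instance (the
// delegate), not to super; calling super silently skips the wrapped
// implementation's behavior.
public class DelegationSketch {
    static class BaseFactory {
        String rename(String from, String to) { return "base-rename"; }
    }

    static class HdfsFactory extends BaseFactory {
        @Override String rename(String from, String to) { return "hdfs-rename"; }
    }

    static class MetricsWrapper extends BaseFactory {
        private final BaseFactory delegate;
        MetricsWrapper(BaseFactory delegate) { this.delegate = delegate; }

        @Override String rename(String from, String to) {
            // correct: forward to the delegate
            return delegate.rename(from, to);
            // buggy version: super.rename(from, to) would run the base
            // implementation even when wrapping an HdfsFactory
        }
    }

    public static void main(String[] args) {
        MetricsWrapper w = new MetricsWrapper(new HdfsFactory());
        System.out.println(w.rename("a", "b")); // hdfs-rename
    }
}
```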






[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803485#comment-15803485
 ] 

Cao Manh Dat edited comment on SOLR-9922 at 1/6/17 4:13 AM:


In the current code, FLAG_GAP is used in RecoveryStrategy: we first check 
whether lastOperation has FLAG_GAP; if it does, we are sure the buffering 
updates were not applied (because the node failed during buffering), so we 
skip peersync and go directly to the replication process.

In my patch, I detect this event by checking whether any old buffer log 
exists. So I'm worried about the case where lastOperation has FLAG_GAP when 
users restart the whole cluster with the new code. That's the reason why I 
said that "all nodes should be in the ACTIVE state".
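The detection described above can be sketched as a file-existence check. This assumes buffered updates go to a separate file; the name "buffer.tlog" and the helper are hypothetical, not the patch's actual code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferLogCheck {
    // Instead of a FLAG_GAP marker in the last tlog entry, infer
    // "buffering never finished" from a leftover buffer tlog on disk.
    static boolean bufferedUpdatesNotApplied(Path tlogDir) {
        // A surviving buffer tlog means the node died while buffering,
        // so peersync should be skipped in favor of full replication.
        return Files.exists(tlogDir.resolve("buffer.tlog"));
    }
}
```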



> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering logs to the current tlog and do not apply those 
> updates to the index. Then we rely on log replay to apply them to the index. 
> But at the same time some updates are also written to the current tlog and 
> applied to the index. 
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this keeps our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3






[jira] [Commented] (SOLR-8292) TransactionLog.next() does not honor contract and return null for EOF

2017-01-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803488#comment-15803488
 ] 

Erick Erickson commented on SOLR-8292:
--

[~caomanhdat] You've also been in the tlog code significantly recently; do you 
have any opinion on whether this (and SOLR-4116) are valid any longer?

> TransactionLog.next() does not honor contract and return null for EOF
> -
>
> Key: SOLR-8292
> URL: https://issues.apache.org/jira/browse/SOLR-8292
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8292.patch
>
>
> This came to light in CDCR testing, which stresses this code a lot; there's a 
> stack trace showing this line (641 trunk) throwing an EOF exception:
> o = codec.readVal(fis);
> At first I thought to just wrap reading fis in a try/catch and return null, 
> but looking at the code a bit more I'm not so sure, that seems like it'd mask 
> what looks at first glance like a bug in the logic.
> A few lines earlier (633-4) there are these lines:
> // shouldn't currently happen - header and first record are currently written 
> at the same time
> if (fis.position() >= fos.size()) {
> Why are we comparing the input file position against the size of the 
> output file? Maybe because the 'i' key is right next to the 'o' key? The 
> comment hints that it's checking for the ability to read the first record in 
> input stream along with the header. And perhaps there's a different issue 
> here because the expectation clearly is that the first record should be there 
> if the header is.
> So what's the right thing to do? Wrap in a try/catch and return null for EOF? 
> Change the test? Do both?
> I can take care of either, but wanted a clue whether the comparison of fis to 
> fos is intended.
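The try/catch option discussed above can be sketched as follows. This is an illustration of the pattern only: `DataInputStream.readInt` stands in for `codec.readVal(fis)`, and `LogNextSketch` is not Solr's actual codec or TransactionLog code.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class LogNextSketch {
    // Honor the "return null at EOF" contract by catching EOFException
    // around the read instead of letting it propagate.
    static Object next(DataInputStream in) throws IOException {
        try {
            return in.readInt();      // stand-in for: o = codec.readVal(fis)
        } catch (EOFException e) {
            return null;              // contract: null signals end of log
        }
    }
}
```

As the comment thread notes, this would paper over the fis/fos position check a few lines earlier, so it should only accompany, not replace, a fix to that logic.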






[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803485#comment-15803485
 ] 

Cao Manh Dat commented on SOLR-9922:


In the current code, FLAG_GAP is used in RecoveryStrategy: we first check 
whether lastOperation has FLAG_GAP; if it does, we are sure the buffering 
updates were not applied (because the node failed during buffering), so we 
skip peersync and go directly to the replication process.

In my patch, I detect this event by checking whether any old buffer log exists.







[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803476#comment-15803476
 ] 

Mark Miller edited comment on SOLR-9922 at 1/6/17 4:05 AM:
---

Hmm, from memory, FLAG_GAP is what indicates the updates are buffered and not 
real, right? My first worry would be that even if a node did not need to 
replay those buffered updates, if you can't tell they are buffered they could 
be incorrectly used in peersync and realtime get and such. Or is FLAG_GAP the 
solution Yonik used to avoid peersync after a failed recovery attempt even 
after restart?









[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803477#comment-15803477
 ] 

Cao Manh Dat commented on SOLR-9922:


That sounds like a great test.
BTW: do you mean {{ChaosMonkeyNothingIsSafeTest}} and 
{{ChaosMonkeySafeLeaderTest}}?







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803476#comment-15803476
 ] 

Mark Miller commented on SOLR-9922:
---

Hmm, from memory, FLAG_GAP is what indicates the updates are buffered and not 
real, right? My first worry would be that even if a node did not need to 
replay those buffered updates, if you can't tell they are buffered they could 
be incorrectly used in peersync and realtime get and such. Or is FLAG_GAP the 
solution Yonik used to avoid peersync after a failed recovery attempt even 
after restart?







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803457#comment-15803457
 ] 

Cao Manh Dat commented on SOLR-9922:


Ok, so I'm talking about FLAG_GAP, which is only set when a node is 
recovering. So I'm not sure whether the patch will mess up the recovery 
logic, because I removed FLAG_GAP in the patch.

About {{UpdateLog.recoverFromLog()}}, I think it will still do its job 
perfectly if some updates had not been committed during shutdown.







[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803449#comment-15803449
 ] 

Mark Miller edited comment on SOLR-9922 at 1/6/17 3:56 AM:
---

It's a good idea to loop the two chaosmonkey tests a lot to confirm changes in 
this area. Sometimes new failures have crept in (in which case we want to raise 
a JIRA issue), but even so you can gauge whether the failure rate goes up 
before and after. Yonik has a script for this, but I usually just set up a 
local Jenkins job. You can also use the ant beast stuff, or I have a beasting 
script here: https://gist.github.com/markrmiller/dbdb792216dc98b018ad

For a frame of reference, many of the out-of-sync failures these tests catch 
will fail one run in 30, one in 50, or one in 100.
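The bookkeeping behind such a beasting loop is simple: run the same test many times and compare failure rates before and after a change. In this sketch, `runOnce` is a placeholder for launching a real test run (e.g. via ant); it is injected here so the loop itself can be exercised.

```java
import java.util.function.IntPredicate;

public class BeastSketch {
    // Returns the number of failures in n runs; runOnce.test(i) is true on pass.
    static int beast(int n, IntPredicate runOnce) {
        int failures = 0;
        for (int i = 0; i < n; i++) {
            if (!runOnce.test(i)) {
                failures++;
            }
        }
        return failures;
    }
}
```

A failure rate of "one in 50" would show up here as roughly 2 failures out of 100 runs, which is why a large run count is needed to see whether a change moved the rate at all.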









[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803449#comment-15803449
 ] 

Mark Miller commented on SOLR-9922:
---

It's a good idea to loop the two chaosmonkey tests a lot to confirm changes in 
this area. Sometimes new failures have crept in (in which case we want to raise 
a JIRA issue), but even so you can gauge whether the failure rate goes up 
before and after. Yonik has a script for this, but I usually just set up a 
local Jenkins job. You can also use the ant beast stuff, or I have a beasting 
script here: https://gist.github.com/markrmiller/dbdb792216dc98b018ad







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803441#comment-15803441
 ] 

Mark Miller commented on SOLR-9922:
---

I guess I don't understand "But I think when user update the Solr, their node 
must be in active state".

Or really, the whole statement "But I think when user update the Solr, their 
node must be in active state, so the the new version won't have to read old 
tlog."

The way I read it, you are saying "when you update Solr, the new version won't 
have to read the old tlog". So I'm mentioning where that could easily happen. 
But you have answered that there is no problem in UpdateLog.recoverFromLog. I 
have not looked at the patch, so I assume I don't understand the quoted 
sentence fully.







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803433#comment-15803433
 ] 

Cao Manh Dat commented on SOLR-9922:


That's right. But I think the current patch doesn't have any problem with 
{{UpdateLog.recoverFromLog()}}.







[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803427#comment-15803427
 ] 

Mike Drob commented on SOLR-9928:
-

Also, we can safely remove MetricsDirectory.getBaseDir since all 
implementations bubble up to DirectoryFactory anyway.







[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803421#comment-15803421
 ] 

Mike Drob commented on SOLR-9928:
-

We very possibly could need to unwrap for move, and haven't caught it because 
that only gets called in a very specific case to move specific files during a 
replication. I'll try to write up a test to hit this path. Note that we do 
already unwrap for remove, which makes me think you're on the right track.







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803405#comment-15803405
 ] 

Mark Miller commented on SOLR-9922:
---

bq. so the the new version won't have to read old tlog.

I suppose it depends? For example, our shutdown script used to kill Solr really 
fast on shutdown - like after 5 seconds. It's still pretty quick, maybe 30 
seconds? But with lots of cores and data, a shutdown can easily take longer 
than that. Some users also index during shutdown. So it's not too hard to 
imagine a shutdown, an upgrade, and a requirement to replay old tlogs.







[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803401#comment-15803401
 ] 

Mark Miller commented on SOLR-9922:
---

Sounds fine to me. If a node dies during buffering, that buffer should be 
discardable.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffered updates to the current tlog and do not apply 
> them to the index; we rely on log replay to apply them later. But at the 
> same time some other updates are also written to the current tlog and 
> applied to the index.
> For example, during peersync, if new updates arrive at a replica we will end 
> up with this tlog:
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will make our code base simpler. It also makes replica recovery 
> for SOLR-9835 easier, because after peersync succeeds we can copy the new 
> updates from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
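The copy step in the quoted description can be sketched as follows. Plain lists stand in for the actual tlog files here; this is an illustration of the intended merge, not the real TransactionLog API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed recovery step: after peersync succeeds, append the
// buffered updates from the temporary tlog onto the end of the current tlog,
// so old1..old3 keep their positions and new1..new3 follow them.
public class TlogMergeSketch {
    static List<String> mergeBufferedTlog(List<String> currentTlog, List<String> bufferTlog) {
        List<String> merged = new ArrayList<>(currentTlog); // old updates stay first
        merged.addAll(bufferTlog);                          // buffered updates follow
        return merged;
    }

    public static void main(String[] args) {
        List<String> tlog = List.of("old1", "old2", "old3");
        List<String> buffer = List.of("new1", "new2", "new3");
        // prints [old1, old2, old3, new1, new2, new3]
        System.out.println(mergeBufferedTlog(tlog, buffer));
    }
}
```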






[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803396#comment-15803396
 ] 

Cao Manh Dat edited comment on SOLR-9922 at 1/6/17 3:28 AM:


Updated patch for this issue.
- Fix a bug where ensureLog deletes the replaying buffer tlog.
- Remove unnecessary flags from {{TransactionLog.write(AddUpdateCommand cmd, int 
flags)}}, {{TransactionLog.writeDelete()}} and 
{{TransactionLog.writeDeleteByQuery()}}.
- Change {{int UpdateLog.startingOperation}} to {{boolean 
UpdateLog.existOldBufferLog}}, which indicates that the old buffer tlog has not 
been applied.
- Enable {{TestRecoveryHdfs.testDropBuffered()}}.

[~markrmil...@gmail.com]: I think the patch is pretty solid now. Could you 
review it?


was (Author: caomanhdat):
Updated patch for this issue.
- Fix a bug where ensureLog deletes the replaying buffer tlog.
- Remove unnecessary flags from {{TransactionLog.write(AddUpdateCommand cmd, int 
flags)}}, {{TransactionLog.writeDelete()}} and 
{{TransactionLog.writeDeleteByQuery()}}.
- Change {{int UpdateLog.startingOperation}} to {{boolean 
UpdateLog.existOldBufferLog}}, which indicates that the old buffer tlog has not 
been applied.
- Enable {{TestRecoveryHdfs.testDropBuffered()}}.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>






[jira] [Updated] (SOLR-9922) Write buffering updates to another tlog

2017-01-05 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9922:
---
Attachment: SOLR-9922.patch

Updated patch for this issue.
- Fix a bug where ensureLog deletes the replaying buffer tlog.
- Remove unnecessary flags from {{TransactionLog.write(AddUpdateCommand cmd, int 
flags)}}, {{TransactionLog.writeDelete()}} and 
{{TransactionLog.writeDeleteByQuery()}}.
- Change {{int UpdateLog.startingOperation}} to {{boolean 
UpdateLog.existOldBufferLog}}, which indicates that the old buffer tlog has not 
been applied.
- Enable {{TestRecoveryHdfs.testDropBuffered()}}.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>






[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-05 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803344#comment-15803344
 ] 

Koji Sekiguchi commented on SOLR-9918:
--

Thank you for your additional explanation. I agree with you that the Confluence 
page is the best place to put that kind of guideline note. I just wanted to 
see such information in the ticket, rather than the javadoc, because I think it 
helps committers understand the requirements and importance of this proposal.

As for SignatureUpdateProcessor, I thought it skipped adding the doc if the 
signature was the same, but when I looked into the patch on SOLR-799, I noticed 
that it always updates the existing document even if the doc has the same 
signature.

> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so..
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor 
> class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}
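The two behaviors in the quoted description can be sketched as follows. This is a hedged illustration, not the Solr processor itself: docExists() stands in for the real-time index lookup, and the flag names skipInsertIfExists / skipUpdateIfMissing are assumptions mirroring the factory's options.

```java
import java.util.Set;

// Sketch of the "insert if not exists" and "update where id = .." semantics
// described above. A Set of ids stands in for the index; the real processor
// performs an actual document lookup before deciding.
public class SkipExistingSketch {
    final boolean skipInsertIfExists;
    final boolean skipUpdateIfMissing;
    final Set<String> index; // ids currently present in the index (stand-in)

    SkipExistingSketch(boolean skipInsert, boolean skipUpdate, Set<String> index) {
        this.skipInsertIfExists = skipInsert;
        this.skipUpdateIfMissing = skipUpdate;
        this.index = index;
    }

    boolean docExists(String id) { return index.contains(id); }

    // Returns true if the update should be forwarded down the processor chain.
    boolean shouldProcess(String id, boolean isAtomicUpdate) {
        if (!isAtomicUpdate && skipInsertIfExists && docExists(id)) {
            return false; // duplicate insert: quietly skip, no exception
        }
        if (isAtomicUpdate && skipUpdateIfMissing && !docExists(id)) {
            return false; // update of a missing doc: quietly skip
        }
        return true;
    }

    public static void main(String[] args) {
        SkipExistingSketch p = new SkipExistingSketch(true, true, Set.of("doc1"));
        System.out.println(p.shouldProcess("doc1", false)); // prints false
        System.out.println(p.shouldProcess("doc2", true));  // prints false
    }
}
```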






[jira] [Resolved] (SOLR-9931) hll omits value in distributed mode when no values in field

2017-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-9931.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> hll omits value in distributed mode when no values in field
> ---
>
> Key: SOLR-9931
> URL: https://issues.apache.org/jira/browse/SOLR-9931
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9931.patch, SOLR-9931.patch
>
>
> Given a non-empty bucket, but an hll of a field with no values in that 
> bucket's domain:
> - In non-distributed mode, hll returns 0
> - In distributed mode, the key+value is omitted entirely
> We should make these consistent. In this case, what makes the most sense is 
> to return 0 for both.
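The intent of the fix can be sketched as defaulting a missing per-shard contribution to 0 during the distributed merge, so the bucket always carries a value. This is a simplification that sums plain counts; real hll merging unions HyperLogLog sketches, and the method names here are illustrative.

```java
import java.util.List;
import java.util.Map;

// Sketch: when merging per-shard facet responses, a bucket key absent from
// every shard response should still yield 0, matching non-distributed mode.
public class HllMergeSketch {
    static long mergeHll(List<Map<String, Long>> shardResponses, String bucketKey) {
        long total = 0;
        for (Map<String, Long> shard : shardResponses) {
            // A missing key means the shard had no values for the field in
            // this bucket; getOrDefault supplies the 0 default.
            total += shard.getOrDefault(bucketKey, 0L);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> shard1 = Map.of(); // no values for "bucketA"
        Map<String, Long> shard2 = Map.of();
        System.out.println(mergeHll(List.of(shard1, shard2), "bucketA")); // prints 0
    }
}
```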






[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803218#comment-15803218
 ] 

ASF subversion and git services commented on SOLR-8530:
---

Commit 7ae9ca85d9d920db353d3d080b0cb36567e206b2 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7ae9ca8 ]

SOLR-8530: Add support for aggregate HAVING comparisons without single quotes


> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-8530.patch
>
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.
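The first option above, filtering tuples one at a time without buffering the whole stream, can be sketched like this. A plain Predicate stands in for the Lucene query run against a per-tuple MemoryIndex, so the sketch stays self-contained; all names are illustrative.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of a HavingStream: pull tuples from the underlying stream, test each
// one against the filter as it is read, and emit only the matches.
public class HavingStreamSketch implements Iterator<Map<String, Object>> {
    private final Iterator<Map<String, Object>> source;
    private final Predicate<Map<String, Object>> filter;
    private Map<String, Object> next;

    HavingStreamSketch(Iterator<Map<String, Object>> source,
                       Predicate<Map<String, Object>> filter) {
        this.source = source;
        this.filter = filter;
        advance();
    }

    private void advance() {
        next = null;
        while (source.hasNext()) {          // one tuple at a time, no buffering
            Map<String, Object> t = source.next();
            if (filter.test(t)) { next = t; break; }
        }
    }

    public boolean hasNext() { return next != null; }

    public Map<String, Object> next() {
        Map<String, Object> result = next;
        advance();
        return result;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> tuples = List.of(
            Map.of("customerId", "a", "sum(cost)", 700.0),
            Map.of("customerId", "b", "sum(cost)", 300.0));
        // Equivalent of q="sum(cost):[500 TO *]"
        HavingStreamSketch having = new HavingStreamSketch(
            tuples.iterator(), t -> (double) t.get("sum(cost)") >= 500);
        while (having.hasNext()) {
            System.out.println(having.next().get("customerId")); // prints a
        }
    }
}
```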






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803208#comment-15803208
 ] 

Mark Miller commented on SOLR-9928:
---

Mostly I guess I'm looking at move.

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
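The bug class described above can be sketched with a minimal wrapper: a delegating factory that calls super instead of the wrapped delegate silently bypasses the delegate's overrides. Class and method names here are illustrative, not the actual Solr sources.

```java
// Sketch of delegate-vs-super in a wrapping factory. renameBuggy() shows the
// reported bug (calling super), rename() shows the fix (calling the delegate).
public class DelegateSketch {
    static class BaseFactory {
        String rename(String from, String to) { return "base:" + from + "->" + to; }
    }

    static class CustomFactory extends BaseFactory {
        @Override String rename(String from, String to) { return "custom:" + from + "->" + to; }
    }

    static class MetricsFactory extends BaseFactory {
        final BaseFactory delegate;
        MetricsFactory(BaseFactory delegate) { this.delegate = delegate; }

        String renameBuggy(String from, String to) {
            return super.rename(from, to);     // bug: skips the delegate entirely
        }

        @Override String rename(String from, String to) {
            return delegate.rename(from, to);  // fix: forward to the wrapped factory
        }
    }

    public static void main(String[] args) {
        MetricsFactory mf = new MetricsFactory(new CustomFactory());
        System.out.println(mf.renameBuggy("a", "b")); // prints base:a->b (wrong target)
        System.out.println(mf.rename("a", "b"));      // prints custom:a->b (delegate honored)
    }
}
```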






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803205#comment-15803205
 ] 

Mark Miller commented on SOLR-9928:
---

Why do we unwrap in some methods that can get overridden but not others?

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[jira] [Assigned] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-05 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-9644:
--

Assignee: Anshum Gupta

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>Assignee: Anshum Gupta
>  Labels: patch
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803191#comment-15803191
 ] 

ASF subversion and git services commented on SOLR-8530:
---

Commit b32cd82318f5c8817a8383e1be7534c772e6fa13 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b32cd82 ]

SOLR-8530: Add support for aggregate HAVING comparisons without single quotes


> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-8530.patch
>
>






[jira] [Commented] (SOLR-9644) MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts properly

2017-01-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803192#comment-15803192
 ] 

Anshum Gupta commented on SOLR-9644:


Thanks [~emaijala]. I'll take a look.

> MoreLikeThis parsers SimpleMLTQParser and CloudMLTQParser don't handle boosts 
> properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>  Labels: patch
> Attachments: SOLR-9644-branch_6x.patch, SOLR-9644-master.patch
>
>
> It seems SimpleMLTQParser and CloudMLTQParser should be able to handle boost 
> parameters, but it's not working properly. I'll make a pull request to add 
> tests and fix both.






[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803185#comment-15803185
 ] 

Dennis Gove edited comment on SOLR-9916 at 1/6/17 1:45 AM:
---

Of course. sum is a metric.


was (Author: dpgove):
Oh jeez. Of course. sum is a metric.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.
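Evaluating an expression like (a+b) per tuple in a select-style projection can be sketched as follows. The evaluator interface is hypothetical; in the real proposal this logic would live in the Streaming API's SelectStream.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Sketch of arithmetic in a select projection: compute a derived field from
// each tuple's existing fields and add it to the output tuple.
public class ArithmeticSelectSketch {
    static Map<String, Object> select(Map<String, Object> tuple, String outField,
                                      ToDoubleFunction<Map<String, Object>> op) {
        Map<String, Object> out = new HashMap<>(tuple);
        out.put(outField, op.applyAsDouble(tuple)); // evaluate the expression per tuple
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> tuple = Map.of("a", 2.0, "b", 3.0);
        // select (a+b) from x
        Map<String, Object> result = select(tuple, "a+b",
            t -> (double) t.get("a") + (double) t.get("b"));
        System.out.println(result.get("a+b")); // prints 5.0
    }
}
```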






[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803185#comment-15803185
 ] 

Dennis Gove commented on SOLR-9916:
---

Oh jeez. Of course. sum is a metric.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>






[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803181#comment-15803181
 ] 

Joel Bernstein edited comment on SOLR-9916 at 1/6/17 1:44 AM:
--

Here is the full expression:

{code}
having(rollup(over=a_f, 
              sum(a_i), 
              search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc")), 
       eq(sum(a_i), 9))

{code}

So the "sum(a_i)" is the field in the tuples produced by the rollup.


was (Author: joel.bernstein):
Here is the full expression:

{code}
having(rollup(over=a_f, 
  sum(a_i), 
  search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc")), 
   eq(sum(a_i), 9)))

{code}

So the having is 

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>






[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803181#comment-15803181
 ] 

Joel Bernstein edited comment on SOLR-9916 at 1/6/17 1:43 AM:
--

Here is the full expression:

{code}
having(rollup(over=a_f, 
  sum(a_i), 
  search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc")), 
   eq(sum(a_i), 9)))

{code}

So the having is 


was (Author: joel.bernstein):
Here is the full expression:

{code}
having(rollup(over=a_f, 
 sum(a_i), 
 search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc")), 
  eq(sum(a_i), 9)))");

{code}

So the having is 

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>






[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803181#comment-15803181
 ] 

Joel Bernstein edited comment on SOLR-9916 at 1/6/17 1:43 AM:
--

Here is the full expression:

{code}
having(rollup(over=a_f, 
 sum(a_i), 
 search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc")), 
  eq(sum(a_i), 9)))");

{code}

So the having is 


was (Author: joel.bernstein):
Here is the full expression:

{code}
having(rollup(over=a_f, 
 sum(a_i), 
 search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f 
asc")), 
 eq(sum(a_i), 9)))");

{code}

So the having is 

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>






[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803181#comment-15803181
 ] 

Joel Bernstein commented on SOLR-9916:
--

Here is the full expression:

{code}
having(rollup(over=a_f, 
 sum(a_i), 
 search(collection1 q=*:*, fl="id,a_s,a_i,a_f", sort="a_f 
asc")), 
 eq(sum(a_i), 9)))");

{code}

So the having is 

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1059 - Still Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1059/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([784EE71CA405F489:10F1D236749FE665]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:304)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803159#comment-15803159
 ] 

Dennis Gove commented on SOLR-9916:
---

Sounds good.

What is {code}sum(a_i){code}? Is that calculating the sum over a multivalued 
field? (if so...didn't know we were supporting multivalued fields, really cool)

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803133#comment-15803133
 ] 

Joel Bernstein edited comment on SOLR-9916 at 1/6/17 1:31 AM:
--

Looks really good to me. Having the ability to nest the different types of 
operations with conditional logic in the select stream is really powerful.

I'm just about to commit a small change so that LeafOperations can accept a 
metric identifier without single quotes. Currently you have to do the following 
or the parser will parse the metric and not know how to use it as a value 
operand.

{code}
having(expr, eq('sum(a_i)', 9))
{code}

After this small commit it will support:

{code}
having(expr, eq(sum(a_i), 9))
{code}

This will just be relevant for Solr 6.4 which is coming in a few days.

The work you're doing on this ticket will supersede this change but it's nice 
to have for 6.4.





was (Author: joel.bernstein):
Looks really good to me.

I'm just about to commit a small change so that LeafOperations can accept a 
metric identifier without single quotes. Currently you have to do the following 
or the parser will parse the metric and not know how to use it as a value 
operand.

{code}
having(expr, eq('sum(a_i)', 9))
{code}

After this small commit it will support:

{code}
having(expr, eq(sum(a_i), 9))
{code}

This will just be relevant for Solr 6.4 which is coming in a few days.

The work you're doing on this ticket will supersede this change but it's nice 
to have for 6.4.




> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803133#comment-15803133
 ] 

Joel Bernstein commented on SOLR-9916:
--

Looks really good to me.

I'm just about to commit a small change so that LeafOperations can accept a 
metric identifier without single quotes. Currently you have to do the following 
or the parser will parse the metric and not know how to use it as a value 
operand.

{code}
having(expr, eq('sum(a_i)', 9))
{code}

After this small commit it will support:

{code}
having(expr, eq(sum(a_i), 9))
{code}

This will just be relevant for Solr 6.4 which is coming in a few days.

The work you're doing on this ticket will supersede this change but it's nice 
to have for 6.4.




> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803113#comment-15803113
 ] 

Dennis Gove commented on SOLR-9916:
---

Looking at the current state of Operations, the following class structure exists

{code}
StreamOperation
  ConcatOperation
  BooleanOperation
AndOperation
LeafOperation
  EqualsOperation
  GreaterThanEqualToOperation
  GreaterThanOperation
  LessThanEqualToOperation
  LessThanOperation
  NotOperation
  OrOperation
  ReduceOperation
DistinctOperation
GroupOperation
  ReplaceOperation (and associated hidden ReplaceWithFieldOperation, 
ReplaceWithValueOperation)
{code}

I'd like to enhance this slightly to the following

{code}
StreamOperation
  BooleanOperation
AndOperation
LeafOperation
  EqualsOperation
  GreaterThanEqualToOperation
  GreaterThanOperation
  LessThanEqualToOperation
  LessThanOperation
  NotOperation
  OrOperation
  ComparisonOperation
IfOperation
  ModificationOperation
AbsoluteValueOperation
AdditionOperation
ConcatOperation
DivisionOperation
ModuloOperation
MultiplicationOperation
ReplaceOperation (and associated hidden ReplaceWithFieldOperation, 
ReplaceWithValueOperation)
SubtractionOperation
  ReduceOperation
DistinctOperation
GroupOperation
{code}

This will allow us to support arbitrarily complex operations in the Select 
stream. It accomplishes this in 3 ways.

h3. Comparison Operation

First, add an if/then/else concept with the ComparisonOperation. Embedded 
operations will be supported, either Modification or Comparison.
The full supported structure is
{code}
if(boolean, field | modification | comparison, field | modification | 
comparison)
{code}

For example,
{code}
if(boolean(...), fieldA, fieldB)
  ex: if(gt(a,b), a, b) // if a > b then a else b
 
if(boolean(...), modification(...), modification)
  ex: if(gt(a,b), sub(a,b), sub(b,a)) // if a > b then a - b else b - a

if(boolean(...), comparison(...), comparison(...))
  ex: if(gt(a,b), if(or(c,d), a, b), if(and(c,d), a, b)) // if a > b then (if c 
or d then a else b) else (if c and d then a else b)
{code}

h3. ModificationOperations with Embedded Operations

Second, enhance ModificationOperations to support embedded operations, either 
Modification or Comparison.

The full supported structure is
{code}
modification(field | modification | comparison [, field | modification | 
comparison])
{code}

For example,
{code}
modification(fieldA [,fieldB])
  ex: add(a,b) // a + b

modification(fieldA [,modification(...)]) // order doesn't matter
  ex: add(a, div(b,c)) // a + (b/c)
  add(div(b,c), a) // (b/c) + a

modification(fieldA [,comparison(...)]) // order doesn't matter
  ex: add(a, if(gt(b,c),b,c)) // if b > c then a + b else a + c
  add(if(gt(b,c),b,c), a)  // if b > c then a + b else a + c
{code}

h3. BooleanOperations with Embedded Operations

Third, enhance BooleanOperations to support embedded operations, either 
Modification or Comparison. Each would support the following constructs

The full supported structure is
{code}
boolean(field | modification | comparison [, field | modification | comparison])
{code}

{code}
boolean(fieldA [,fieldB])
  ex: gt(a,b)

boolean(fieldA [,modification(...)]) // order doesn't matter
  ex: gt(a, add(b,c)) // is a > (b + c)
  gt(add(b,c), a) // is (b + c) > a

boolean(fieldA [,comparison(...)]) // order doesn't matter
  ex: gt(a, if(gt(b,c),b,c)) // if b > c then is a > b else is a > c
  gt(if(gt(b,c),b,c), a) // if b > c then is b > a else is c > a
{code}
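The nesting rules above amount to a small expression tree in which any operand 
may itself be another operation. Here is a minimal, self-contained Java sketch 
of that evaluation model; all class and method names are illustrative 
assumptions for this email, not Solr's actual StreamOperation API, and booleans 
are modelled as 1.0/0.0 for brevity:

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative model only: an Op evaluates a tuple to a value, and any
// operand of an operation may itself be an Op, giving arbitrary nesting.
public class NestedOpsSketch {
    interface Op extends Function<Map<String, Double>, Double> {}

    // field reference: looks the value up in the tuple
    static Op field(String name) { return t -> t.get(name); }

    // "modification" operations
    static Op add(Op a, Op b) { return t -> a.apply(t) + b.apply(t); }
    static Op sub(Op a, Op b) { return t -> a.apply(t) - b.apply(t); }

    // "boolean" operation, modelled as returning 1.0 (true) or 0.0 (false)
    static Op gt(Op a, Op b) { return t -> a.apply(t) > b.apply(t) ? 1.0 : 0.0; }

    // "comparison" operation: if(boolean, then, else)
    static Op iff(Op cond, Op then, Op els) {
        return t -> cond.apply(t) != 0.0 ? then.apply(t) : els.apply(t);
    }

    public static void main(String[] args) {
        Map<String, Double> tuple = Map.of("a", 5.0, "b", 2.0, "c", 3.0);
        // if(gt(a,b), sub(a,b), sub(b,a)) -> a > b, so a - b
        Op expr = iff(gt(field("a"), field("b")),
                      sub(field("a"), field("b")),
                      sub(field("b"), field("a")));
        System.out.println(expr.apply(tuple)); // 3.0
        // gt(a, add(b,c)) -> is 5 > (2 + 3)? -> false
        System.out.println(gt(field("a"), add(field("b"), field("c"))).apply(tuple)); // 0.0
    }
}
```

The real operations would of course work over tuples of typed values rather 
than doubles, but the recursive operand evaluation is the essential shape.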



[~joel.bernstein], I'm interested in your thoughts on this.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802935#comment-15802935
 ] 

Dennis Gove edited comment on SOLR-9916 at 1/6/17 12:01 AM:


I'm going to start implementing these as Operations. I'll be sure to support 
the cases of operations within operations like
{code}
plus(div(a,replace(b,null,0)),c)
{code}


was (Author: dpgove):
I'm going to start implementing these as Operations.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802935#comment-15802935
 ] 

Dennis Gove edited comment on SOLR-9916 at 1/5/17 11:59 PM:


I'm going to start implementing these as Operations.


was (Author: dpgove):
I'mm going to start implementing these as Operations.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802934#comment-15802934
 ] 

Dennis Gove edited comment on SOLR-9916 at 1/5/17 11:59 PM:


I think this is a good idea. 

Select already supports an "as" concept, so something like this would already 
be possible
{code}
select(plus(a,b) as outfield, ...)
{code}


was (Author: dpgove):
I think this is a good idea. 

Select already supports an "as" concept, so something like this would already 
be possible
{code}
select(add(a,b) as outfield, ...)
{code}

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802935#comment-15802935
 ] 

Dennis Gove commented on SOLR-9916:
---

I'm going to start implementing these as Operations.

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802934#comment-15802934
 ] 

Dennis Gove commented on SOLR-9916:
---

I think this is a good idea. 

Select already supports an "as" concept, so something like this would already 
be possible
{code}
select(add(a,b) as outfield, ...)
{code}

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-05 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9934:
---
Attachment: SOLR-9934.patch


Some background...

Once upon a time, if you gave Solr a DBQ of {{\*\:\*}}, it would optimize 
that internally into throwing away the current Index/IndexWriter and opening a 
brand new one from scratch.  Around the time of SOLR-3559 this had to be 
changed (git SHA 0f808c6bcdfcb11ce1398fe3c79c9b28c851aa1c) to account for the 
possibility that updates could arrive out of order, and all DBQs (even 
{{\*\:\*}}) needed their versions checked against doc versions in the index.

At the time of this change, special code was added to DUH2 so that some tests 
could still force the old behavior -- notably in cases where tests created 
synthetic versions and generally broke the tlog...

{noformat}
  // since we make up fake versions in these tests, we can get messed up by a 
DBQ with a real version
  // since Solr can think following updates were reordered.
{noformat}


Recently, as part of the work Ishan and I have been doing in SOLR-5944, we 
realized another issue with the current behavior: even if the test code is 
well behaved as far as versions/tlog go, and even though {{clearIndex}} is 
being called in {{\@Before}} methods, the low level field metadata in the 
IndexWriter (ex: what fields have docvalues) survives, causing inconsistent 
behavior between test methods (depending on the order of the test methods).



In my opinion, the behavior of {{SolrTestCaseJ4.clearIndex()}} should be to do 
the lowest possible level of "clear the index" (not just a {{\*:\*}} DBQ) so 
that low level IndexWriter metadata doesn't survive, and people writing unit 
tests aren't surprised by stuff like this in the future.

The attached patch refactors all the various copy/pasted versions of 
{{clearIndex()}} that take advantage of this low level delete up into 
{{SolrTestCaseJ4.clearIndex()}} and adds javadocs explaining how it differs 
from doing your own {{\*:\*}} DBQ.


> SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
> -
>
> Key: SOLR-9934
> URL: https://issues.apache.org/jira/browse/SOLR-9934
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9934.patch
>
>
> Normal deleteByQuery commands are subject to version constraint checks due to 
> the possibility of out of order updates, but DUH2 has special support 
> (triggered by {{version=-Long.MAX_VALUE}} for use by tests to override these 
> version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
> handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage 
> of this (using copy/pasted impls), but given the intended purpose/usage of 
> {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
> {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests 
> get this behavior automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called

2017-01-05 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9934:
--

 Summary: SolrTestCase.clearIndex should ensure 
IndexWriter.deleteAll is called
 Key: SOLR-9934
 URL: https://issues.apache.org/jira/browse/SOLR-9934
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man



Normal deleteByQuery commands are subject to version constraint checks due to 
the possibility of out of order updates, but DUH2 has special support 
(triggered by {{version=-Long.MAX_VALUE}} for use by tests to override these 
version constraints and do a low level {{IndexWriter.deleteAll()}} call.  A 
handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage of 
this (using copy/pasted impls), but given the intended purpose/usage of 
{{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in 
{{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests get 
this behavior automatically.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2599 - Still Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2599/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:<[https://127.0.0.1:35609/tp/te/collection1]> but was:<[https://127.0.0.1:43995/tp/te/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader 
expected:<[https://127.0.0.1:35609/tp/te/collection1]> but 
was:<[https://127.0.0.1:43995/tp/te/collection1]>
at 
__randomizedtesting.SeedInfo.seed([69CD2FE8AF38A2FC:E199103201C4CF04]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[jira] [Commented] (LUCENE-7621) Per-document minShouldMatch

2017-01-05 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802710#comment-15802710
 ] 

Paul Elschot commented on LUCENE-7621:
--

Starting from the number of indexed terms in a doc, when more than one of any 
synonym occurs, such extra occurrences would have to be ignored for counting 
the number of present clauses.
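As a hedged illustration of that counting rule (names invented; this is not 
Lucene code): treat each query clause as a set of synonyms, and count a clause 
as present at most once per document no matter how many of its synonyms occur:

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch: extra occurrences of any synonym within a clause are
// ignored, so the "present clause" count never exceeds the clause count.
public class ClauseCounter {
    static int presentClauses(List<Set<String>> clauses, Set<String> docTerms) {
        int present = 0;
        for (Set<String> synonyms : clauses) {
            // one hit per clause, regardless of how many synonyms match
            for (String s : synonyms) {
                if (docTerms.contains(s)) { present++; break; }
            }
        }
        return present;
    }

    public static void main(String[] args) {
        List<Set<String>> clauses = List.of(
            Set.of("quick", "fast"), // synonym clause
            Set.of("fox"),
            Set.of("lazy"));
        Set<String> doc = Set.of("quick", "fast", "fox"); // both synonyms occur
        System.out.println(presentClauses(clauses, doc)); // 2, not 3
    }
}
```

The result (2 of 3 clauses present) is what would then be compared against the 
per-document minShouldMatch value.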

> Per-document minShouldMatch
> ---
>
> Key: LUCENE-7621
> URL: https://issues.apache.org/jira/browse/LUCENE-7621
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Adrien Grand
>Priority: Minor
>
> I have seen similar requirements a couple times but could not find any 
> related issue so I am opening one now. The idea would be to allow passing a 
> {{LongValuesSource}} rather than an integer as the {{minShouldMatch}} 
> parameter of {{BooleanQuery}} so that the number of required clauses can 
> depend on the document that is being matched. In terms of implementation, it 
> looks like it would be straightforward as we would just have to update the 
> value of {{minShouldMatch}} in {{MinShouldMatchSumScorer.setDocAndFreq}} and 
> things would still be efficient, ie. we would still use advance on the costly 
> clauses.
> This kind of feature would allow to run queries that must match eg. 80% of 
> the terms that a document contains (by indexing the number of terms in a 
> separate field). It would also make it possible for Luwak or ES' percolator 
> to index boolean queries that have a value of {{minShouldMatch}} greater than 
> 1 more efficiently.
> I do not have any plans to work on it soon but I am curious how much interest 
> this feature would drive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3758 - Still Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3758/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics

Error Message:
minorMerge: 3 expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: minorMerge: 3 expected:<4> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([C56A6804578F0C98:9BA55B89701F7A3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics(SolrIndexMetricsTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack 

[jira] [Commented] (SOLR-9931) hll omits value in distributed mode when no values in field

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802689#comment-15802689
 ] 

ASF subversion and git services commented on SOLR-9931:
---

Commit dd06a0b9041eb42dd308a51e6337bbbe4b3057fc in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dd06a0b ]

SOLR-9931: return 0 for hll on field with no values in bucket


> hll omits value in distributed mode when no values in field
> ---
>
> Key: SOLR-9931
> URL: https://issues.apache.org/jira/browse/SOLR-9931
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-9931.patch, SOLR-9931.patch
>
>
> Given a non-empty bucket, but hll of a field with no values for that bucket 
> domain
> - In non-distributed mode, hll returns 0
> - In distributed mode, the key+value is omitted entirely
> We should make these consistent.
> In this case, what makes the most sense is to return 0 for both.






[jira] [Commented] (LUCENE-7621) Per-document minShouldMatch

2017-01-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802664#comment-15802664
 ] 

Adrien Grand commented on LUCENE-7621:
--

I think so, it should work with any query. Is there something that makes you 
think synonym queries would be more complicated?

> Per-document minShouldMatch
> ---
>
> Key: LUCENE-7621
> URL: https://issues.apache.org/jira/browse/LUCENE-7621
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Adrien Grand
>Priority: Minor
>
> I have seen similar requirements a couple times but could not find any 
> related issue so I am opening one now. The idea would be to allow passing a 
> {{LongValuesSource}} rather than an integer as the {{minShouldMatch}} 
> parameter of {{BooleanQuery}} so that the number of required clauses can 
> depend on the document that is being matched. In terms of implementation, it 
> looks like it would be straightforward as we would just have to update the 
> value of {{minShouldMatch}} in {{MinShouldMatchSumScorer.setDocAndFreq}} and 
> things would still be efficient, ie. we would still use advance on the costly 
> clauses.
> This kind of feature would allow to run queries that must match eg. 80% of 
> the terms that a document contains (by indexing the number of terms in a 
> separate field). It would also make it possible for Luwak or ES' percolator 
> to index boolean queries that have a value of {{minShouldMatch}} greater than 
> 1 more efficiently.
> I do not have any plans to work on it soon but I am curious how much interest 
> this feature would drive.
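The proposal above can be illustrated with a small sketch (Python, hypothetical names -- the real change would live inside Lucene's MinShouldMatchSumScorer, where the current integer minShouldMatch would be replaced by a per-document value read from a LongValuesSource). A document with N indexed terms would be required to match, say, ceil(0.8 * N) of the optional clauses:

```python
import math

def matches(matched_clauses, num_terms_in_doc, ratio=0.8):
    """Per-document minShouldMatch: the doc matches only if at least
    ceil(ratio * num_terms_in_doc) optional clauses matched it.
    A fixed integer minShouldMatch is the special case where the
    required count does not depend on the document."""
    min_should_match = math.ceil(ratio * num_terms_in_doc)
    return matched_clauses >= min_should_match

# A doc with 5 indexed terms needs ceil(0.8 * 5) = 4 matching clauses.
```

The number of terms per document would come from a separate indexed field, as the issue description suggests.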






[jira] [Commented] (SOLR-9931) hll omits value in distributed mode when no values in field

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802660#comment-15802660
 ] 

ASF subversion and git services commented on SOLR-9931:
---

Commit a810fb3234ec461e23c76533fbfcc523d4c46faa in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a810fb3 ]

SOLR-9931: return 0 for hll on field with no values in bucket


> hll omits value in distributed mode when no values in field
> ---
>
> Key: SOLR-9931
> URL: https://issues.apache.org/jira/browse/SOLR-9931
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-9931.patch, SOLR-9931.patch
>
>
> Given a non-empty bucket, but hll of a field with no values for that bucket 
> domain
> - In non-distributed mode, hll returns 0
> - In distributed mode, the key+value is omitted entirely
> We should make these consistent.
> In this case, what makes the most sense is to return 0 for both.






[jira] [Commented] (LUCENE-7621) Per-document minShouldMatch

2017-01-05 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802649#comment-15802649
 ] 

Paul Elschot commented on LUCENE-7621:
--

Could this also work when the clauses are SynonymQueries?

> Per-document minShouldMatch
> ---
>
> Key: LUCENE-7621
> URL: https://issues.apache.org/jira/browse/LUCENE-7621
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Adrien Grand
>Priority: Minor
>
> I have seen similar requirements a couple times but could not find any 
> related issue so I am opening one now. The idea would be to allow passing a 
> {{LongValuesSource}} rather than an integer as the {{minShouldMatch}} 
> parameter of {{BooleanQuery}} so that the number of required clauses can 
> depend on the document that is being matched. In terms of implementation, it 
> looks like it would be straightforward as we would just have to update the 
> value of {{minShouldMatch}} in {{MinShouldMatchSumScorer.setDocAndFreq}} and 
> things would still be efficient, ie. we would still use advance on the costly 
> clauses.
> This kind of feature would allow to run queries that must match eg. 80% of 
> the terms that a document contains (by indexing the number of terms in a 
> separate field). It would also make it possible for Luwak or ES' percolator 
> to index boolean queries that have a value of {{minShouldMatch}} greater than 
> 1 more efficiently.
> I do not have any plans to work on it soon but I am curious how much interest 
> this feature would drive.






[jira] [Updated] (LUCENE-7613) Update Surround query language

2017-01-05 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7613:
-
Attachment: LUCENE-7613.patch

Patch of 5 Jan 2017

This includes:
- the previous patch for using DisjunctionMaxQuery over fields,
- using (Span)SynonymQuery for truncations and prefixes, i.e. groups of terms,
- the patch of LUCENE-7615 for SpanSynonymQuery,
- further improvements in the surround query code, mostly:
-- removal of SimpleTerm implementing Comparable, deprecated since 2011,
-- moving all creation of primitive queries (i.e. rewrite results) into 
BasicQueryFactory,
-- using BytesRef for visiting index terms,
-- a test for TooManyBasicQueries.


> Update Surround query language
> --
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613.patch, LUCENE-7613.patch
>
>







[jira] [Updated] (LUCENE-7613) Update Surround query language

2017-01-05 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7613:
-
Lucene Fields: New,Patch Available  (was: New)
  Summary: Update Surround query language  (was: Make Surround use 
DisjunctionMaxQuery for multiple fields)

> Update Surround query language
> --
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613.patch
>
>







[jira] [Commented] (SOLR-8292) TransactionLog.next() does not honor contract and return null for EOF

2017-01-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802475#comment-15802475
 ] 

Erick Erickson commented on SOLR-8292:
--

[~md...@cloudera.com][~rendel][~markrmil...@gmail.com] I originally assigned 
this one to myself to not lose track of it but haven't done anything else with 
it. Is it reasonable to close this? Perhaps SOLR-7478 or similar has fixed 
this? There's been a lot of hardening in the last year.

WDYT about closing SOLR-4116 too?

> TransactionLog.next() does not honor contract and return null for EOF
> -
>
> Key: SOLR-8292
> URL: https://issues.apache.org/jira/browse/SOLR-8292
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8292.patch
>
>
> This came to light in CDCR testing, which stresses this code a lot, there's a 
> stack trace showing this line (641 trunk) throwing an EOF exception:
> o = codec.readVal(fis);
> At first I thought to just wrap reading fis in a try/catch and return null, 
> but looking at the code a bit more I'm not so sure, that seems like it'd mask 
> what looks at first glance like a bug in the logic.
> A few lines earlier (633-4) there are these lines:
> // shouldn't currently happen - header and first record are currently written 
> at the same time
> if (fis.position() >= fos.size()) {
> Why are we comparing the input file position against the size of the 
> output file? Maybe because the 'i' key is right next to the 'o' key? The 
> comment hints that it's checking for the ability to read the first record in 
> input stream along with the header. And perhaps there's a different issue 
> here because the expectation clearly is that the first record should be there 
> if the header is.
> So what's the right thing to do? Wrap in a try/catch and return null for EOF? 
> Change the test? Do both?
> I can take care of either, but wanted a clue whether the comparison of fis to 
> fos is intended.
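The "honor the contract" option Erick describes can be sketched as follows (Python stand-ins for the Java code; read_val and next_record are hypothetical names mirroring codec.readVal and TransactionLog.next): catch EOF while reading a record and return null instead of letting the exception escape.

```python
class EOFException(Exception):
    """Stand-in for java.io.EOFException."""

def read_val(stream):
    """Stand-in for codec.readVal(fis): raises EOFException at end of data."""
    if not stream:
        raise EOFException()
    return stream.pop(0)

def next_record(stream):
    """Sketch of TransactionLog.next() honoring its contract:
    return None (null) at EOF rather than propagating the exception."""
    try:
        return read_val(stream)
    except EOFException:
        return None
```

As the issue notes, this alone might mask the suspicious fis-position-vs-fos-size comparison, so it is only half of the question.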






[jira] [Updated] (LUCENE-7614) Allow single prefix "phrase*" in complexphrase queryparser

2017-01-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7614:
-
Attachment: LUCENE-7614.patch

Thanks, [~mikemccand]. Attaching [^LUCENE-7614.patch] with the suggestion 
applied. I suppose we can tackle those points later, then.

> Allow single prefix "phrase*" in complexphrase queryparser 
> ---
>
> Key: LUCENE-7614
> URL: https://issues.apache.org/jira/browse/LUCENE-7614
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Mikhail Khludnev
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7614.patch, LUCENE-7614.patch
>
>
> {quote}
> From  Otmar Caduff 
> Subject   ComplexPhraseQueryParser with wildcards
> Date  Tue, 20 Dec 2016 13:55:42 GMT
> Hi,
> I have an index with a single document with a field "field" and textual
> content "johnny peters" and I am using
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser to
> parse the query:
>field: (john* peter)
> When searching with this query, I am getting the document as expected.
> However with this query:
>field: ("john*" "peter")
> I am getting the following exception:
> Exception in thread "main" java.lang.IllegalArgumentException: Unknown
> query type "org.apache.lucene.search.PrefixQuery" found in phrase query
> string "john*"
> at
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:268)
> {quote}






[jira] [Resolved] (SOLR-9839) Use 'useFactory' in tests instead of setting manually

2017-01-05 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved SOLR-9839.
-
Resolution: Won't Fix

For future reference, there are ~175 invocations of System.clearProperty across 
the various Solr tests that we might be able to get rid of and rely completely 
on the restore rule. I'm not in favor of that solution, since it makes it very 
non-intuitive to understand what is going on, but it is worth considering if 
somebody ever attempts to do other clean up in the tests.

> Use 'useFactory' in tests instead of setting manually
> -
>
> Key: SOLR-9839
> URL: https://issues.apache.org/jira/browse/SOLR-9839
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mike Drob
>Priority: Minor
> Attachments: SOLR-9839.patch
>
>
> We have several tests that will explicitly set a directory factory via 
> SysProp, some of which forget to unset it.
> We should use {{useFactory}} so that we can benefit from the call to 
> {{resetFactory}} in {{SolrTestCaseJ4.teardownTestCases}}.






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_112) - Build # 670 - Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/670/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection

Error Message:
Error from server at https://127.0.0.1:60015/solr: Could not fully create 
collection: solrj_test

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:60015/solr: Could not fully create collection: 
solrj_test
at 
__randomizedtesting.SeedInfo.seed([B57335562B8DDB9F:B2A6341D0C19BE92]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection(CollectionsAPISolrJTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9869) MiniSolrCloudCluster does not always remove jettys from running list after stopping them

2017-01-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802419#comment-15802419
 ] 

Mike Drob commented on SOLR-9869:
-

[~romseygeek] - any additional thoughts on this?

> MiniSolrCloudCluster does not always remove jettys from running list after 
> stopping them
> 
>
> Key: SOLR-9869
> URL: https://issues.apache.org/jira/browse/SOLR-9869
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mike Drob
> Attachments: SOLR-9869.patch
>
>
> MiniSolrCloudCluster has two {{stopJettySolrRunner}} methods that behave 
> differently.
> The {{int}} version calls {{jettys.remove(index);}} to remove the now stopped 
> jetty from the list of running jettys.
> The version that takes a {{JettySolrRunner}}, however, does not modify the 
> running list.
> This can cause calls to {{getReplicaJetty}} to fail after a call to {{stop}} 
> because we will try to get the base url of a stopped jetty.
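The inconsistency reduces to this sketch (Python, hypothetical names standing in for MiniSolrCloudCluster): both ways of stopping a runner must remove it from the running list, otherwise getReplicaJetty can hand back a stopped instance.

```python
class MiniCluster:
    """Sketch of a cluster tracking its running jettys."""

    def __init__(self, jettys):
        self.jettys = list(jettys)

    def stop_by_index(self, index):
        """Matches the int overload: stops AND removes from the list."""
        return self.jettys.pop(index)

    def stop_runner(self, jetty):
        """Fixed version of the JettySolrRunner overload: also removes
        the stopped jetty so it cannot be handed out again."""
        self.jettys.remove(jetty)
        return jetty
```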






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_112) - Build # 2598 - Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2598/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.PeerSyncTest.test

Error Message:
.response[1][id][0]:3!=2

Stack Trace:
junit.framework.AssertionFailedError: .response[1][id][0]:3!=2
at 
__randomizedtesting.SeedInfo.seed([E848954CF82E6F14:601CAA9656D202EC]:0)
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:920)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:939)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryAndCompare(BaseDistributedSearchTestCase.java:657)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryAndCompare(BaseDistributedSearchTestCase.java:648)
at org.apache.solr.update.PeerSyncTest.test(PeerSyncTest.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-9931) hll omits value in distributed mode when no values in field

2017-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9931:
---
Attachment: SOLR-9931.patch

OK, here's a patch that returns 0 for both distrib and non-distrib hll for a 
non-empty bucket with no values in the field.  Basically, at the shard level, 
it returns 0 for that case, and the distributed merger checks for a number (as 
opposed to just checking for the serialized HLL bytes).
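That merge rule can be sketched as (Python illustration, hypothetical names; the real code lives in the JSON Facet module, and plain sets stand in for serialized HLL sketches): a shard emits the number 0 for a bucket with no values, and the merger treats a number as an empty contribution while unioning everything else.

```python
def shard_result(values):
    """Shard-level hll: return 0 when the field has no values in the
    bucket; otherwise a sketch (a set here, HLL bytes in reality)."""
    if not values:
        return 0
    return set(values)

def merge(shard_results):
    """Distributed merger: a number means zero cardinality from that
    shard; anything else is a sketch to union in. Returns 0 when every
    shard returned 0, matching the non-distributed behavior."""
    merged = set()
    for r in shard_results:
        if isinstance(r, int):
            continue  # shard had no values; contributes nothing
        merged |= r
    return len(merged)
```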

> hll omits value in distributed mode when no values in field
> ---
>
> Key: SOLR-9931
> URL: https://issues.apache.org/jira/browse/SOLR-9931
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-9931.patch, SOLR-9931.patch
>
>
> Given a non-empty bucket, but hll of a field with no values for that bucket 
> domain
> - In non-distributed mode, hll returns 0
> - In distributed mode, the key+value is omitted entirely
> We should make these consistent.
> In this case, what makes the most sense is to return 0 for both.






[jira] [Commented] (SOLR-9911) Add a way to filter metrics by prefix in the MetricsHandler API

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802348#comment-15802348
 ] 

ASF subversion and git services commented on SOLR-9911:
---

Commit 6a5895c0ec99c31da05fdf3948a0e18c84ffcb4e in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a5895c ]

SOLR-9911: Remove http group from example in change log

(cherry picked from commit 2cffa2e)


> Add a way to filter metrics by prefix in the MetricsHandler API
> ---
>
> Key: SOLR-9911
> URL: https://issues.apache.org/jira/browse/SOLR-9911
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9911.patch
>
>
> It would be nice to have a way to filter metrics by prefix in addition to the 
> group and type filters already available.






[jira] [Commented] (SOLR-9911) Add a way to filter metrics by prefix in the MetricsHandler API

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802345#comment-15802345
 ] 

ASF subversion and git services commented on SOLR-9911:
---

Commit 2cffa2e3e716e3ca3e9e3099f6c12ad157005e4c in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2cffa2e ]

SOLR-9911: Remove http group from example in change log


> Add a way to filter metrics by prefix in the MetricsHandler API
> ---
>
> Key: SOLR-9911
> URL: https://issues.apache.org/jira/browse/SOLR-9911
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9911.patch
>
>
> It would be nice to have a way to filter metrics by prefix in addition to the 
> group and type filters already available.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 651 - Unstable

2017-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/651/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor177.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor177.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([4CAD4E9DDD1317B5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9933) support span queries in SolrCoreParser

2017-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9933:
-

 Summary: support span queries in SolrCoreParser
 Key: SOLR-9933
 URL: https://issues.apache.org/jira/browse/SOLR-9933
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


starting point: Daniel Collins's [pull 
request|https://github.com/bloomberg/lucene-solr/pull/188]

next step: test case(s) to demonstrate what is currently not supported







[jira] [Updated] (SOLR-9932) add TestSolrCoreParser class

2017-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9932:
--
Attachment: SOLR-9932.patch

> add TestSolrCoreParser class
> 
>
> Key: SOLR-9932
> URL: https://issues.apache.org/jira/browse/SOLR-9932
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9932.patch
>
>
> The new TestSolrCoreParser class directly instantiates a SolrCoreParser and 
> initialises it with custom query builders, the tests then check that custom 
> xml parses correctly _and_ that the resulting Query object is an instance of 
> the correct class.
> In comparison, the existing TestXmlQParserPlugin test indirectly instantiates 
> and initialises a SolrCoreParser via the solrconfig-testxmlparser.xml and 
> then executes the custom xml queries.






[jira] [Created] (SOLR-9932) add TestSolrCoreParser class

2017-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9932:
-

 Summary: add TestSolrCoreParser class
 Key: SOLR-9932
 URL: https://issues.apache.org/jira/browse/SOLR-9932
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


The new TestSolrCoreParser class directly instantiates a SolrCoreParser and 
initialises it with custom query builders, the tests then check that custom xml 
parses correctly _and_ that the resulting Query object is an instance of the 
correct class.

In comparison, the existing TestXmlQParserPlugin test indirectly instantiates 
and initialises a SolrCoreParser via the solrconfig-testxmlparser.xml and then 
executes the custom xml queries.






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2017-01-05 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802076#comment-15802076
 ] 

Ben Manes commented on SOLR-8241:
-

I think the tests all passed last I checked with this new SolrCache, but I 
don't think we had made it the default yet so that might be a premature 
statement. If you want to upgrade only the 1.x usage, that would be a safe 
change to extract from this patch (a minor API tweak). If anything the later 
versions also have fewer bugs.

I'd love to see this patch land.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> proposal.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
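The "frequency sketch" mentioned above can be illustrated with a toy count-min sketch. This is only a sketch of the idea behind TinyLFU's popularity estimator, not Caffeine's actual implementation (which uses compact 4-bit counters with periodic aging):

```java
import java.util.Random;

// Toy count-min sketch: compactly estimates how often a key was seen,
// at the cost of possible over-counting from hash collisions.
public class FrequencySketch {
    private final int[][] counts;
    private final int[] seeds;
    private final int width;

    FrequencySketch(int depth, int width) {
        this.counts = new int[depth][width];
        this.seeds = new Random(42).ints(depth).toArray();
        this.width = width;
    }

    private int index(int row, Object key) {
        int h = key.hashCode() ^ seeds[row];
        return Math.floorMod(h, width);
    }

    void increment(Object key) {
        for (int row = 0; row < counts.length; row++) {
            counts[row][index(row, key)]++;
        }
    }

    // Estimated frequency: taking the minimum across rows bounds over-counting.
    int estimate(Object key) {
        int min = Integer.MAX_VALUE;
        for (int row = 0; row < counts.length; row++) {
            min = Math.min(min, counts[row][index(row, key)]);
        }
        return min;
    }

    public static void main(String[] args) {
        FrequencySketch sketch = new FrequencySketch(4, 256);
        for (int i = 0; i < 100; i++) sketch.increment("hot-key");
        sketch.increment("cold-key");
        // TinyLFU-style admission: keep the candidate only if it is at least
        // as popular as the entry it would evict.
        System.out.println(sketch.estimate("hot-key") >= sketch.estimate("cold-key")); // true
    }
}
```

The eviction policy then needs only this small array of counters instead of ghost entries (as in ARC) to remember history, which is what makes the approach attractive for large caches.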






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802048#comment-15802048
 ] 

Andrzej Bialecki  commented on SOLR-9928:
-

Well spotted! Thank you for the patch - committed to master and branch_6x.

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
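The fix is the standard wrapper/decorator rule: a wrapping factory must forward to the object it wraps, not to its own superclass. A minimal, self-contained sketch of the pattern (class and method names here are simplified stand-ins for illustration, not Solr's actual DirectoryFactory API):

```java
// Toy illustration of the delegate-vs-super bug: a wrapper that calls
// super bypasses the wrapped implementation entirely.
public class DelegateVsSuper {
    static class BaseFactory {
        String renameWithOverwrite(String from, String to) {
            return "base rename: " + from + " -> " + to;
        }
    }

    static class HdfsLikeFactory extends BaseFactory {
        @Override
        String renameWithOverwrite(String from, String to) {
            return "hdfs rename: " + from + " -> " + to;
        }
    }

    // The metrics wrapper holds a delegate; calling super.renameWithOverwrite
    // here would use BaseFactory's behavior and silently skip HdfsLikeFactory.
    static class MetricsFactory extends BaseFactory {
        private final BaseFactory delegate;
        MetricsFactory(BaseFactory delegate) { this.delegate = delegate; }

        @Override
        String renameWithOverwrite(String from, String to) {
            return delegate.renameWithOverwrite(from, to); // correct: forward
        }
    }

    public static void main(String[] args) {
        BaseFactory wrapped = new MetricsFactory(new HdfsLikeFactory());
        // The wrapped factory's behavior survives the wrapping:
        System.out.println(wrapped.renameWithOverwrite("a", "b")); // prints "hdfs rename: a -> b"
    }
}
```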






[jira] [Resolved] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-9928.
-
Resolution: Fixed

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802047#comment-15802047
 ] 

ASF subversion and git services commented on SOLR-9928:
---

Commit 60da846b14f4d7904db2b4ee74b4cea247c6c572 in lucene-solr's branch 
refs/heads/branch_6x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=60da846 ]

SOLR-9928: MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super 
(Mike Drob via ab)


> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2017-01-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802010#comment-15802010
 ] 

Shawn Heisey commented on SOLR-8241:


This issue was filed by the author of Caffeine (Ben Manes) and does include 
upgrading the caffeine dependency already present.  I haven't checked yet, but 
presumably all the Solr tests still pass with the upgrade.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> proposal.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Resolved] (SOLR-9877) Use instrumented http client

2017-01-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9877.
-
Resolution: Fixed

I added a null check for original request on branch_6x and the target on master 
in the commit.

> Use instrumented http client
> 
>
> Key: SOLR-9877
> URL: https://issues.apache.org/jira/browse/SOLR-9877
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9877.patch, SOLR-9877.patch, 
> SOLR-9877_branch_6x.patch, SOLR_9877_branch_6x_hostport_fix.patch, 
> solr-http-metrics.png
>
>
> Use instrumented equivalents of PooledHttpClientConnectionManager and others 
> from metrics-httpclient library.
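The "instrumented equivalents" approach comes down to wrapping the real connection manager and recording metrics as calls pass through it. A self-contained sketch of that delegation idea (the interface and names are simplified stand-ins, not the actual HttpClient or Dropwizard Metrics APIs):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an instrumented delegate: count events, then forward to the
// real pool so behavior is unchanged.
public class InstrumentedDelegate {
    interface ConnectionManager {
        String lease();
        void release(String conn);
    }

    static class PoolingManager implements ConnectionManager {
        private int next = 0;
        public String lease() { return "conn-" + (next++); }
        public void release(String conn) { /* return connection to the pool */ }
    }

    static class InstrumentedManager implements ConnectionManager {
        private final ConnectionManager delegate;
        final AtomicLong leased = new AtomicLong();
        final AtomicLong released = new AtomicLong();

        InstrumentedManager(ConnectionManager delegate) { this.delegate = delegate; }

        public String lease() {
            leased.incrementAndGet();   // record the metric...
            return delegate.lease();    // ...then forward to the real pool
        }
        public void release(String conn) {
            released.incrementAndGet();
            delegate.release(conn);
        }
    }

    public static void main(String[] args) {
        InstrumentedManager mgr = new InstrumentedManager(new PoolingManager());
        String c = mgr.lease();
        mgr.release(c);
        System.out.println(mgr.leased.get() + "/" + mgr.released.get()); // prints 1/1
    }
}
```

Because the wrapper implements the same interface, it can be dropped in wherever the un-instrumented manager was used, which is why the change is low-risk.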






Re: -ea assertions mismatch error when running JUnit test

2017-01-05 Thread Chris Hostetter
: I am trying to switch from using the SolrJ Embedded Server to the
: SolrJettyTestBase in my Junit tests. However, I am getting the following
: error:
: 
: java.lang.Exception: Assertions mismatch: -ea was not specified but
: -Dtests.asserts=true
...

: I reached out to the Solr User list, and they said that I need to either
: add "-ea" to the java commandline or remove the system property. The
: problem is that I didn’t specify the property and I’m not sure how to tell

... "tests.asserts" actually defaults to "true" in the LuceneTestCase base 
class used by all Solr tests (that error message isn't the greatest, because 
it implies -D... was explicitly used, but it didn't have to be)

The simplest way to avoid this error is to ensure your tests are run with 
"-ea" ... but then you only ever test your code with assertions enabled 
(in the lucene/solr code base, we randomly decide as part of the test 
runner if/when to use assertions -- hence the extra property as a sanity 
check)

I'm not very familiar with maven, but from some quick googling it 
sounds like the "normal" way to run tests in maven is the 
"maven-surefire-plugin" and it has "enableAssertions" enabled by 
default? ... perhaps something about your pom is overriding that?  (i see 
"maven-failsafe-plugin" in your pom -- is that what's running your test? 
does it have an option to enable jvm assertions?)

If that approach doesn't work, the other option is to go the other 
direction and ignore the jvm assertions by forcing maven to set 
tests.asserts=false when invoking the test JVM.  Again, i don't know much 
about maven, but some random googling suggests you can configure the 
surefire plugin (assuming that's what you're using) to pass custom system 
properties to the JVMs it creates...

https://stackoverflow.com/questions/824019/maven-2-1-0-not-passing-on-system-properties-to-java-virtual-machine
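For reference, the surefire configuration described above would look roughly like this (a hypothetical pom.xml fragment; the same parameters also exist on maven-failsafe-plugin):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- run the forked test JVMs with -ea, matching tests.asserts=true -->
    <enableAssertions>true</enableAssertions>
  </configuration>
</plugin>
```

Going the other direction instead, you would leave assertions off and add a systemPropertyVariables entry setting tests.asserts to false inside the same configuration block.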



-Hoss
http://www.lucidworks.com/


[jira] [Commented] (SOLR-9877) Use instrumented http client

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801982#comment-15801982
 ] 

ASF subversion and git services commented on SOLR-9877:
---

Commit 3eab1b4839e30d5a82923afeff1bc19bf8e6b25f in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3eab1b4 ]

SOLR-9877: Add a null check for target


> Use instrumented http client
> 
>
> Key: SOLR-9877
> URL: https://issues.apache.org/jira/browse/SOLR-9877
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9877.patch, SOLR-9877.patch, 
> SOLR-9877_branch_6x.patch, SOLR_9877_branch_6x_hostport_fix.patch, 
> solr-http-metrics.png
>
>
> Use instrumented equivalents of PooledHttpClientConnectionManager and others 
> from metrics-httpclient library.






[jira] [Commented] (SOLR-9877) Use instrumented http client

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801978#comment-15801978
 ] 

ASF subversion and git services commented on SOLR-9877:
---

Commit fd2c8cb125c1955940bd33f19ee06b4230f38a36 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fd2c8cb ]

SOLR-9877: Unwrap the EntityEnclosingRequestWrapper to get the right URI which 
has host/port information


> Use instrumented http client
> 
>
> Key: SOLR-9877
> URL: https://issues.apache.org/jira/browse/SOLR-9877
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9877.patch, SOLR-9877.patch, 
> SOLR-9877_branch_6x.patch, SOLR_9877_branch_6x_hostport_fix.patch, 
> solr-http-metrics.png
>
>
> Use instrumented equivalents of PooledHttpClientConnectionManager and others 
> from metrics-httpclient library.






[jira] [Updated] (SOLR-9877) Use instrumented http client

2017-01-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9877:

Attachment: SOLR_9877_branch_6x_hostport_fix.patch

Patch which unwraps the EntityEnclosingRequestWrapper to get the right URI that 
has host/port information.

> Use instrumented http client
> 
>
> Key: SOLR-9877
> URL: https://issues.apache.org/jira/browse/SOLR-9877
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9877.patch, SOLR-9877.patch, 
> SOLR-9877_branch_6x.patch, SOLR_9877_branch_6x_hostport_fix.patch, 
> solr-http-metrics.png
>
>
> Use instrumented equivalents of PooledHttpClientConnectionManager and others 
> from metrics-httpclient library.






[jira] [Resolved] (SOLR-9923) Remove solr.http metric group and merge its metrics to solr.node group

2017-01-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9923.
-
Resolution: Fixed

> Remove solr.http metric group and merge its metrics to solr.node group
> --
>
> Key: SOLR-9923
> URL: https://issues.apache.org/jira/browse/SOLR-9923
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR_9923_master.patch
>
>
> The components in the http metric group such as UpdateShardHandler and 
> HttpShardHandler have both httpclient and thread pool metrics and it is 
> awkward to see both in the "http" group. I propose to eliminate the http 
> group and move its metrics into the node group.






Re: Installing PyLucene

2017-01-05 Thread Andi Vajda

> On Jan 5, 2017, at 08:57, marco turchi  wrote:
> 
> Done! all tests passed!

Excellent !

> 
> thanks a lot!
> Marco
> 
>> On Thu, Jan 5, 2017 at 5:21 PM, Andi Vajda  wrote:
>> 
>> 
>>> On Jan 5, 2017, at 07:27, marco turchi  wrote:
>>> 
>>> Perfect!!!
>>> 
>>> For now, I keep the version as it is. I'll try later to install jcc with
>>> --shared flag, because I'm not sure if the patch for the setuptools
>>> requires root access.
>> 
>> Your JCC install is fine. It's PyLucene that needs to be rebuilt by adding
>> a --shared arg to its jcc invocation command line in its Makefile. No
>> setuptools patching necessary.
>> 
>> Andi..
>> 
>>> 
>>> Thanks a lot for your help!
>>> Marco
>>> 
 On Thu, Jan 5, 2017 at 2:27 AM, Andi Vajda  wrote:
 
 
> On Jan 4, 2017, at 13:51, marco turchi  wrote:
> 
> No I didn't.
> 
> I have run the codes in sample and they work. For my project the
> functionalities in the samples are enough. If necessary I can recompile
 jcc
> with --shared. What do you suggest?
 
 If you don't use --shared then the jcc that is linked into PyLucene is
>> not
 running shared mode and the test failure you're seeing is due to that.
 
 It's easy enough to rebuild PyLucene with --shared.
 Up to you !
 
 Andi..
 
> 
> Best
> Marco
> 
> 
> On 04 Jan 2017 19:42, "Andi Vajda"  wrote:
> 
> 
>> On Jan 4, 2017, at 04:24, marco turchi 
>> wrote:
>> 
>> Dear Andi and Thomas,
>> following your advice I have removed the Windows error.
>> 
>> I still have this
>> 
>> ERROR: testThroughLayerException (__main__.PythonExceptionTestCase)
>> 
>> To answer Andi, I have printed the config.SHARED just before the error
 and
>> the output is true, in my opinion, showing that the shared mode is
 enabled
>> when running tests. Is this that you were mentioning in your email?
> 
> When you built PyLucene did you include --shared on the jcc invocation
> command line ?
> 
> Andi..
> 
>> 
>> Thanks a lot for your help!
>> Marco
>> 
>> 
>> On Wed, Jan 4, 2017 at 10:59 AM, Petrus Hyvönen <
 petrus.hyvo...@gmail.com>
>> wrote:
>> 
>>> Dear Thomas,
>>> 
>>> I would be very interested in a python 3 port of JCC. I am not a very
>>> skilled developer, looked at starting a development based on the old
>>> python-3 version but it's beyond my current skills.
>>> 
>>> I would be happy to help and test and review the JCC patches, I think
> your
>>> patches would be a valuable contribution to JCC.
>>> 
>>> With Best Regards
>>> /Petrus
>>> 
>>> 
>>> On Wed, Jan 4, 2017 at 9:13 AM, Thomas Koch 
>> wrote:
>>> 
> NameError: global name 'WindowsError' is not defined
 
 Note that PyLucene currently lacks official Python3 support!
 We've done a port of PyLucene 3.6 (!) to support Python3 and offered
 the
 patches needed to JCC and PyLucene for use/review on the list - but
>>> didn't
 get any feedback so far.
 cf. https://www.mail-archive.com/pylucene-dev@lucene.apache.
 org/msg02167.html 
 
 Regards,
 Thomas
 
>>> --
>>> _
>>> Petrus Hyvönen, Uppsala, Sweden
>>> Mobile Phone/SMS:+46 73 803 19 00
>>> 
 
 
>> 
>> 



Re: Installing PyLucene

2017-01-05 Thread marco turchi
Done! all tests passed!

thanks a lot!
Marco

On Thu, Jan 5, 2017 at 5:21 PM, Andi Vajda  wrote:

>
> > On Jan 5, 2017, at 07:27, marco turchi  wrote:
> >
> > Perfect!!!
> >
> > For now, I keep the version as it is. I'll try later to install jcc with
> > --shared flag, because I'm not sure if the patch for the setuptools
> > requires root access.
>
> Your JCC install is fine. It's PyLucene that needs to be rebuilt by adding
> a --shared arg to its jcc invocation command line in its Makefile. No
> setuptools patching necessary.
>
> Andi..
>
> >
> > Thanks a lot for your help!
> > Marco
> >
> >> On Thu, Jan 5, 2017 at 2:27 AM, Andi Vajda  wrote:
> >>
> >>
> >>> On Jan 4, 2017, at 13:51, marco turchi  wrote:
> >>>
> >>> No I didn't.
> >>>
> >>> I have run the codes in sample and they work. For my project the
> >>> functionalities in the samples are enough. If necessary I can recompile
> >> jcc
> >>> with --shared. What do you suggest?
> >>
> >> If you don't use --shared then the jcc that is linked into PyLucene is
> not
> >> running shared mode and the test failure you're seeing is due to that.
> >>
> >> It's easy enough to rebuild PyLucene with --shared.
> >> Up to you !
> >>
> >> Andi..
> >>
> >>>
> >>> Best
> >>> Marco
> >>>
> >>>
> >>> On 04 Jan 2017 19:42, "Andi Vajda"  wrote:
> >>>
> >>>
>  On Jan 4, 2017, at 04:24, marco turchi 
> wrote:
> 
>  Dear Andi and Thomas,
>  following your advice I have removed the Windows error.
> 
>  I still have this
> 
>  ERROR: testThroughLayerException (__main__.PythonExceptionTestCase)
> 
>  To answer Andi, I have printed the config.SHARED just before the error
> >> and
>  the output is true, in my opinion, showing that the shared mode is
> >> enabled
>  when running tests. Is this that you were mentioning in your email?
> >>>
> >>> When you built PyLucene did you include --shared on the jcc invocation
> >>> command line ?
> >>>
> >>> Andi..
> >>>
> 
>  Thanks a lot for your help!
>  Marco
> 
> 
>  On Wed, Jan 4, 2017 at 10:59 AM, Petrus Hyvönen <
> >> petrus.hyvo...@gmail.com>
>  wrote:
> 
> > Dear Thomas,
> >
> > I would be very interested in a python 3 port of JCC. I am not a very
> > skilled developer, looked at starting a development based on the old
> > python-3 version but it's beyond my current skills.
> >
> > I would be happy to help and test and review the JCC patches, I think
> >>> your
> > patches would be a valuable contribution to JCC.
> >
> > With Best Regards
> > /Petrus
> >
> >
> > On Wed, Jan 4, 2017 at 9:13 AM, Thomas Koch 
> wrote:
> >
> >>> NameError: global name 'WindowsError' is not defined
> >>
> >> Note that PyLucene currently lacks official Python3 support!
> >> We've done a port of PyLucene 3.6 (!) to support Python3 and offered
> >> the
> >> patches needed to JCC and PyLucene for use/review on the list - but
> > didn't
> >> get any feedback so far.
> >> cf. https://www.mail-archive.com/pylucene-dev@lucene.apache.
> >> org/msg02167.html  >> pylucene-dev@lucene.apache.org/msg02167.html>
> >>
> >> Regards,
> >> Thomas
> >>
> > --
> > _
> > Petrus Hyvönen, Uppsala, Sweden
> > Mobile Phone/SMS:+46 73 803 19 00
> >
> >>
> >>
>
>


[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801834#comment-15801834
 ] 

ASF subversion and git services commented on SOLR-9928:
---

Commit e5264c48955165ac5c5b1aabba4748378d3f6fa9 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e5264c4 ]

SOLR-9928: MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super 
(Mike Drob via ab)


> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2017-01-05 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801792#comment-15801792
 ] 

Timothy M. Rodriguez commented on SOLR-8241:


+1 for this issue. Solr currently ships caffeine-1.0.1 in its distribution, 
which can cause conflicts if you create any extensions that intend to use the 
new library.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> proposal.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
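The frequency sketch mentioned above can be illustrated with a minimal count-min-style estimator. This is a sketch of the general technique only — Caffeine's actual implementation uses 4-bit counters with periodic aging ("reset"), and the class and method names here are illustrative:

```java
import java.util.Random;

// Minimal 4-row count-min sketch, the kind of compact frequency
// estimator that TinyLFU builds its admission decisions on.
public class FrequencySketch {
    private static final int ROWS = 4;
    private final int[][] table;
    private final long[] seeds;
    private final int mask;

    public FrequencySketch(int width) { // width must be a power of two
        table = new int[ROWS][width];
        mask = width - 1;
        seeds = new long[ROWS];
        Random r = new Random(42);
        for (int i = 0; i < ROWS; i++) seeds[i] = r.nextLong() | 1L;
    }

    private int index(int row, Object key) {
        long h = key.hashCode() * seeds[row];
        return (int) (h >>> 32) & mask;
    }

    public void increment(Object key) {
        for (int i = 0; i < ROWS; i++) table[i][index(i, key)]++;
    }

    // Estimated frequency: minimum over all rows; may overcount on hash
    // collisions but never undercounts.
    public int estimate(Object key) {
        int min = Integer.MAX_VALUE;
        for (int i = 0; i < ROWS; i++) min = Math.min(min, table[i][index(i, key)]);
        return min;
    }

    public static void main(String[] args) {
        FrequencySketch sketch = new FrequencySketch(1024);
        for (int i = 0; i < 100; i++) sketch.increment("hot");
        sketch.increment("cold");
        System.out.println(sketch.estimate("hot") >= 100);  // never undercounts
    }
}
```

On admission, W-TinyLFU compares the estimated frequency of the candidate entry against that of the eviction victim and keeps the more popular one.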



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Installing PyLucene

2017-01-05 Thread Andi Vajda

> On Jan 5, 2017, at 07:27, marco turchi  wrote:
> 
> Perfect!!!
> 
> For now, I keep the version as it is. I'll try later to install jcc with
> --shared flag, because I'm not sure if the patch for the setuptools
> requires root access.

Your JCC install is fine. It's PyLucene that needs to be rebuilt by adding a 
--shared arg to its jcc invocation command line in its Makefile. No setuptools 
patching necessary.

Andi..

> 
> Thanks a lot for your help!
> Marco
> 
>> On Thu, Jan 5, 2017 at 2:27 AM, Andi Vajda  wrote:
>> 
>> 
>>> On Jan 4, 2017, at 13:51, marco turchi  wrote:
>>> 
>>> No I didn't.
>>> 
>>> I have run the codes in sample and they work. For my project the
>>> functionalities in the samples are enough. If necessary I can recompile
>> jcc
>>> with --shared. What do you suggest?
>> 
>> If you don't use --shared then the jcc that is linked into PyLucene is not
>> running shared mode and the test failure you're seeing is due to that.
>> 
>> It's easy enough to rebuild PyLucene with --shared.
>> Up to you !
>> 
>> Andi..
>> 
>>> 
>>> Best
>>> Marco
>>> 
>>> 
>>> Il 04 Gen 2017 19:42, "Andi Vajda"  ha scritto:
>>> 
>>> 
 On Jan 4, 2017, at 04:24, marco turchi  wrote:
 
 Dear Andi and Thomas,
 following your advice I have removed the Windows error.
 
 I still have this
 
 ERROR: testThroughLayerException (__main__.PythonExceptionTestCase)
 
 To answer Andi, I have printed the config.SHARED just before the error
>> and
 the output is true, in my opinion, showing that the shared mode is
>> enabled
 when running tests. Is this that you were mentioning in your email?
>>> 
>>> When you built PyLucene did you include --shared on the jcc invocation
>>> command line ?
>>> 
>>> Andi..
>>> 
 
 Thanks a lot for your help!
 Marco
 
 
 On Wed, Jan 4, 2017 at 10:59 AM, Petrus Hyvönen <
>> petrus.hyvo...@gmail.com>
 wrote:
 
> Dear Thomas,
> 
> I would be very interested in a python 3 port of JCC. I am not a very
> skilled developer, looked at starting a development based on the old
> python-3 version but it's beyond my current skills.
> 
> I would be happy to help and test and review the JCC patches, I think
>>> your
> patches would be a valuable contribution to JCC.
> 
> With Best Regards
> /Petrus
> 
> 
> On Wed, Jan 4, 2017 at 9:13 AM, Thomas Koch  wrote:
> 
>>> NameError: global name 'WindowsError' is not defined
>> 
>> Note that PyLucene currently lacks official Python3 support!
>> We've done a port of PyLucene 3.6 (!) to support Python3 and offered
>> the
>> patches needed to JCC and PyLucene for use/review on the list - but
> didn't
>> get any feedback so far.
>> cf. https://www.mail-archive.com/pylucene-dev@lucene.apache.
>> org/msg02167.html
>> 
>> Regards,
>> Thomas
>> 
> --
> _
> Petrus Hyvönen, Uppsala, Sweden
> Mobile Phone/SMS:+46 73 803 19 00
> 
>> 
>> 



[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-05 Thread Eugene Tskhovrebov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801757#comment-15801757
 ] 

Eugene Tskhovrebov commented on SOLR-7495:
--

AFAIS you can wrap any field into InsanityWrapper. What is the idea behind such 
a strict check?
bq. if (sf != null && !sf.hasDocValues() && !sf.multiValued() && 
sf.getType().getNumericType() != null) {
What about other field types (e.g., Numeric DocValued or Dates)?

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-7495.patch, SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> 

[jira] [Created] (LUCENE-7621) Per-document minShouldMatch

2017-01-05 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7621:


 Summary: Per-document minShouldMatch
 Key: LUCENE-7621
 URL: https://issues.apache.org/jira/browse/LUCENE-7621
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Adrien Grand
Priority: Minor


I have seen similar requirements a couple times but could not find any related 
issue so I am opening one now. The idea would be to allow passing a 
{{LongValuesSource}} rather than an integer as the {{minShouldMatch}} parameter 
of {{BooleanQuery}} so that the number of required clauses can depend on the 
document that is being matched. In terms of implementation, it looks like it 
would be straightforward as we would just have to update the value of 
{{minShouldMatch}} in {{MinShouldMatchSumScorer.setDocAndFreq}} and things 
would still be efficient, i.e. we would still use advance() on the costly clauses.

This kind of feature would make it possible to run queries that must match e.g. 80% 
of the terms that a document contains (by indexing the number of terms in a separate 
field). It would also make it possible for Luwak or ES' percolator to index boolean 
queries that have a {{minShouldMatch}} value greater than 1 more efficiently.
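The 80%-of-terms use case boils down to deriving the required clause count from a per-document value rather than a constant. A minimal sketch of that arithmetic, with illustrative names (not the Lucene LongValuesSource/BooleanQuery API):

```java
// Per-document minShouldMatch: the required clause count is computed
// from a per-document value (here, an indexed term count) and a ratio.
public class PerDocMinShouldMatch {
    // e.g. "match at least 80% of the terms this document contains"
    static long requiredClauses(long docTermCount, double ratio) {
        return (long) Math.ceil(docTermCount * ratio);
    }

    static boolean matches(int matchedClauses, long docTermCount, double ratio) {
        return matchedClauses >= requiredClauses(docTermCount, ratio);
    }

    public static void main(String[] args) {
        // doc with 10 terms: 8 matched clauses meets ceil(10 * 0.8) = 8
        System.out.println(matches(8, 10, 0.8));  // true
        System.out.println(matches(7, 10, 0.8));  // false
    }
}
```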

I do not have any plans to work on it soon but I am curious how much interest 
this feature would drive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9916) Add arithmetic operations to the SelectStream

2017-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801745#comment-15801745
 ] 

Joel Bernstein commented on SOLR-9916:
--

[~dpgove], I'm curious about your thoughts on this ticket. Do you think this is 
the right approach?

> Add arithmetic operations to the SelectStream
> -
>
> Key: SOLR-9916
> URL: https://issues.apache.org/jira/browse/SOLR-9916
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> One of the things that will be needed as the SQL implementation matures is 
> the ability to do arithmetic operations. For example:
> select (a+b) from x;
> select sum(a)+sum(b) from x;
> We will need to support arithmetic operations within the Streaming API to 
> support these types of operations.
> It looks like adding arithmetic operations to the SelectStream is the best 
> place to add this functionality.
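A projection that computes an arithmetic expression over a streaming tuple might look like the following sketch. The tuple is modeled as a plain Map and the names are illustrative; the real Streaming API types (Tuple, SelectStream) differ:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Sketch of a SelectStream-style arithmetic projection over one tuple.
public class ArithmeticSelect {
    // builds an evaluator for "field1 + field2"
    static ToDoubleFunction<Map<String, Double>> add(String f1, String f2) {
        return tuple -> tuple.get(f1) + tuple.get(f2);
    }

    public static void main(String[] args) {
        Map<String, Double> tuple = new HashMap<>();
        tuple.put("a", 3.0);
        tuple.put("b", 4.0);
        // select (a+b) from x  ->  7.0 for this tuple
        System.out.println(add("a", "b").applyAsDouble(tuple));
    }
}
```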



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-05 Thread David Smiley (JIRA)
David Smiley created LUCENE-7620:


 Summary: UnifiedHighlighter: add target character width 
BreakIterator wrapper
 Key: LUCENE-7620
 URL: https://issues.apache.org/jira/browse/LUCENE-7620
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/highlighter
Reporter: David Smiley
Assignee: David Smiley


The original Highlighter includes a {{SimpleFragmenter}} that delineates 
fragments (aka Passages) by a character width.  The default is 100 characters.

It would be great to support something similar for the UnifiedHighlighter.  
It's useful in its own right and of course it helps users transition to the UH. 
 I'd like to do it as a wrapper to another BreakIterator -- perhaps a sentence 
one.  In this way you get back Passages that are a number of sentences so they 
will look nice instead of breaking mid-way through a sentence.  And you get 
some control by specifying a target number of characters.  This BreakIterator 
wouldn't be a general purpose java.text.BreakIterator since it would assume 
it's called in a manner exactly as the UnifiedHighlighter uses it.  It would 
probably be compatible with the PostingsHighlighter too.
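The core loop of such a wrapper could look like this sketch: take boundaries from a sentence {{java.text.BreakIterator}} and only emit a break once the passage reaches the target character width. This is an illustration of the idea, not the actual Lucene implementation:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Accumulates sentence boundaries until a passage reaches targetChars,
// so passages end on sentence boundaries but approximate a target width.
public class TargetWidthBreaks {
    static List<Integer> breaks(String text, int targetChars) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.ROOT);
        sentences.setText(text);
        List<Integer> result = new ArrayList<>();
        int passageStart = 0;
        for (int b = sentences.next(); b != BreakIterator.DONE; b = sentences.next()) {
            // close the passage only at a sentence boundary that is far
            // enough from the passage start, or at end of text
            if (b - passageStart >= targetChars || b == text.length()) {
                result.add(b);
                passageStart = b;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String text = "One short sentence. Another short one. A third sentence here. Done.";
        // with a 40-char target, each passage spans multiple sentences
        System.out.println(breaks(text, 40));
    }
}
```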

I don't propose doing this by default; besides, it's easy enough to pick your 
BreakIterator config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+147) - Build # 18704 - Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18704/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSF: 6

Stack Trace:
java.lang.AssertionError: Unexpected number of elements in the group for 
intGSF: 6
at 
__randomizedtesting.SeedInfo.seed([B7FCF0E58CDDF5B3:2C479EBDC185C7ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:376)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[snapshot_metadata, index.20170105231113117, index.20170105231109071, 
replication.properties, index.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [snapshot_metadata, 

[jira] [Updated] (SOLR-9503) NPE in Replica Placement Rules when using Overseer Role with other rules

2017-01-05 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-9503:
---
Attachment: SOLR-9503.patch

I went through the tests and found that if I added another rule to the existing 
test for the overseer-role, it would fail as expected with the previous code. 
That test now passes with the fix, so I've updated my patch with that test 
change.

> NPE in Replica Placement Rules when using Overseer Role with other rules
> 
>
> Key: SOLR-9503
> URL: https://issues.apache.org/jira/browse/SOLR-9503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Rules, SolrCloud
>Affects Versions: 6.2, master (7.0)
>Reporter: Tim Owen
>Assignee: Noble Paul
> Attachments: SOLR-9503.patch, SOLR-9503.patch
>
>
> The overseer role introduced in SOLR-9251 works well if there's only a single 
> Rule for replica placement e.g. {code}rule=role:!overseer{code} but when 
> combined with another rule, e.g. 
> {code}rule=role:!overseer=host:*,shard:*,replica:<2{code} it can result 
> in a NullPointerException (in Rule.tryAssignNodeToShard)
> This happens because the code builds up a nodeVsTags map, but it only has 
> entries for nodes that have values for *all* tags used among the rules. This 
> means not enough information is available to other rules when they are being 
> checked during replica assignment. In the example rules above, if we have a 
> cluster of 12 nodes and only 3 are given the Overseer role, the others do not 
> have any entry in the nodeVsTags map because they only have the host tag 
> value and not the role tag value.
> Looking at the code in ReplicaAssigner.getTagsForNodes, it is explicitly only 
> keeping entries that fulfil the constraint of having values for all tags used 
> in the rules. Possibly this constraint was suitable when rules were 
> originally introduced, but the Role tag (used for Overseers) is unlikely to 
> be present for all nodes in the cluster, and similarly for sysprop tags which 
> may or may not be set for a node.
> My patch removes this constraint, so the nodeVsTags map contains everything 
> known about all nodes, even if they have no value for a given tag. This 
> allows the rule combination above to work, and doesn't appear to cause any 
> problems with the code paths that use the nodeVsTags map. They handle null 
> values quite well, and the tests pass.
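The shape of the fix can be sketched as follows: keep an entry for every node even when it lacks values for some rule tags, which means rule checks must tolerate null tag values. Names here are illustrative, not the actual ReplicaAssigner API:

```java
import java.util.HashMap;
import java.util.Map;

// Keep every node's partial tag map instead of dropping nodes that
// lack values for some tags used in the rules.
public class NodeTags {
    static Map<String, Map<String, String>> buildNodeVsTags() {
        Map<String, Map<String, String>> nodeVsTags = new HashMap<>();

        Map<String, String> overseerNode = new HashMap<>();
        overseerNode.put("host", "10.0.0.1");
        overseerNode.put("role", "overseer");
        nodeVsTags.put("node1", overseerNode);

        Map<String, String> plainNode = new HashMap<>();
        plainNode.put("host", "10.0.0.2");  // no "role" tag at all
        nodeVsTags.put("node2", plainNode);

        return nodeVsTags;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> tags = buildNodeVsTags();
        // The old code filtered node2 out for missing "role"; keeping it
        // means rule checks see a null role value and must handle it.
        System.out.println(tags.containsKey("node2"));              // true
        System.out.println(tags.get("node2").get("role") == null);  // true
    }
}
```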



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1058 - Still Unstable!

2017-01-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1058/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics

Error Message:
minorMerge: 3 expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: minorMerge: 3 expected:<4> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([27DF473435018129:EB0F7A88F58F7A12]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics(SolrIndexMetricsTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11282 lines...]
   [junit4] Suite: org.apache.solr.update.SolrIndexMetricsTest
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-9931) hll omits value in distributed mode when no values in field

2017-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9931:
---
Attachment: SOLR-9931.patch

Here's a simple test that currently fails.

> hll omits value in distributed mode when no values in field
> ---
>
> Key: SOLR-9931
> URL: https://issues.apache.org/jira/browse/SOLR-9931
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-9931.patch
>
>
> Given a non-empty bucket, but hll of a field with no values for that bucket 
> domain
> - In non-distributed mode, hll returns 0
> - In distributed mode, the key+value is omitted entirely
> We should make these consistent.
> In this case, what makes the most sense is to return 0 for both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


