[jira] [Updated] (SOLR-10256) Parentheses in SpellCheckCollator

2017-03-08 Thread Abhishek Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Kumar Singh updated SOLR-10256:

Description: 
SpellCheckCollator adds parentheses ( *'('* and *')'* ) around tokens that 
have a space between them.  
This should be configurable: if *_WordBreakSpellCheckComponent_* is being 
used, a query like *applejuice* will be broken down into *apple juice*, and 
such suggestions are surrounded with parentheses by the current 
*SpellCheckCollator*. 
When surrounded by parentheses, the tokens represent the same position, which 
is not required. 

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/spelling/SpellCheckCollator.java#L227
  

A solution would be to add a flag that can disable this parenthesization of 
spell check suggestions.
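As a rough illustration of the request, a minimal sketch in plain Java (not Solr's actual API; the {{parenthesize}} flag and all names here are hypothetical):

```java
public class CollationSketch {
    // Hypothetical flag: SpellCheckCollator has no such option today.
    public static String applyCorrection(String correction, boolean parenthesize) {
        boolean multiWord = correction.contains(" ");
        // Current behavior wraps multi-word corrections unconditionally;
        // the flag would make that wrapping optional.
        if (multiWord && parenthesize) {
            return "(" + correction + ")";
        }
        return correction;
    }

    public static void main(String[] args) {
        System.out.println(applyCorrection("apple juice", true));  // prints (apple juice)
        System.out.println(applyCorrection("apple juice", false)); // prints apple juice
    }
}
```

With the flag off, a word-break suggestion like *apple juice* would pass through unparenthesized.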

  was:
SpellCheckCollator adds parentheses ( *'('* and *')'* ) around tokens which 
have space between them.  
This should be configurable, because if *_WordBreakSpellCheckComponent_* is 
being used, queries like : *applejuice* will be broken down to *apple juice*.
And when surrounded by brackets, they represent the same position, which is not 
required. 

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/spelling/SpellCheckCollator.java#L227
  

A solution to this will be to have a flag, which can help disable this 
parnthesisation of spell check suggestion.


> Parentheses in SpellCheckCollator
> -
>
> Key: SOLR-10256
> URL: https://issues.apache.org/jira/browse/SOLR-10256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>
> SpellCheckCollator adds parentheses ( *'('* and *')'* ) around tokens that 
> have a space between them.  
> This should be configurable: if *_WordBreakSpellCheckComponent_* is being 
> used, a query like *applejuice* will be broken down into *apple juice*, and 
> such suggestions are surrounded with parentheses by the current 
> *SpellCheckCollator*. 
> When surrounded by parentheses, the tokens represent the same position, 
> which is not required. 
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/spelling/SpellCheckCollator.java#L227
>   
> A solution would be to add a flag that can disable this parenthesization of 
> spell check suggestions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org







[jira] [Created] (SOLR-10256) Parentheses in SpellCheckCollator

2017-03-08 Thread Abhishek Kumar Singh (JIRA)
Abhishek Kumar Singh created SOLR-10256:
---

 Summary: Parentheses in SpellCheckCollator
 Key: SOLR-10256
 URL: https://issues.apache.org/jira/browse/SOLR-10256
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spellchecker
Reporter: Abhishek Kumar Singh


SpellCheckCollator adds parentheses ( '(' and ')' ) around tokens that have a 
space between them.  
This should be configurable: if WordBreakSpellCheckComponent is being used, a 
query like applejuice will be broken down into apple juice.
When surrounded by parentheses, the tokens represent the same position, which 
is not required. 

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/spelling/SpellCheckCollator.java#L227
  

A solution would be to add a flag that can disable this parenthesization of 
spell check suggestions.






[jira] [Commented] (SOLR-10076) Hiding keystore and truststore passwords from /admin/info/* outputs

2017-03-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902592#comment-15902592
 ] 

Mark Miller commented on SOLR-10076:


So I think we want to make sure the search for 'password' is case-insensitive, 
due to properties like javax.net.ssl.trustStorePassword. Could use a test for 
that too.

We should probably move RedactionUtils.java to org.apache.solr.util.

Greg did something similar in the Cloudera Search lucene-solr repo as a 
temporary hack, but used '--REDACTED--'. I think that is clearer than the ** 
redaction string. 

Given the effect this could have on tools/scripts that read the output, I 
think it's not a huge deal if we changed it, but I don't see a strong reason 
to do it, and that should usually favour back compat, even if we would guess 
those affected might be very few. We can do it by default in 7; anyone looking 
for this in 6.5 and beyond will know they need it, that it didn't exist in 6.4 
and earlier, and can turn it on. Seems like the least friction.
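A minimal sketch of the case-insensitive matching being suggested (plain Java; the method name is illustrative and the '--REDACTED--' token is the one from Greg's hack, not Solr's current RedactionUtils API):

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class RedactionSketch {
    // Token from the Cloudera hack mentioned above; Solr currently uses "**".
    public static final String REDACT_STRING = "--REDACTED--";

    // Redact any property whose name contains "password", case-insensitively,
    // so javax.net.ssl.trustStorePassword is caught as well.
    public static Map<String, String> redact(Map<String, String> props) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            boolean sensitive =
                e.getKey().toLowerCase(Locale.ROOT).contains("password");
            out.put(e.getKey(), sensitive ? REDACT_STRING : e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("javax.net.ssl.trustStorePassword", "secret");
        props.put("solr.home", "/opt/solr");
        System.out.println(redact(props));
        // prints {javax.net.ssl.trustStorePassword=--REDACTED--, solr.home=/opt/solr}
    }
}
```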

> Hiding keystore and truststore passwords from /admin/info/* outputs
> ---
>
> Key: SOLR-10076
> URL: https://issues.apache.org/jira/browse/SOLR-10076
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10076.patch
>
>
> Keystore and truststore passwords are passed as system properties via 
> command-line parameters.
> As a result, {{/admin/info/properties}} and {{/admin/info/system}} will print 
> out the received password.
> Proposing a solution to automatically redact, before output, the value of any 
> system property containing the word {{password}}, replacing its value with 
> {{**}}.






[JENKINS] Lucene-Solr-6.4-Linux (32bit/jdk1.8.0_121) - Build # 160 - Unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/160/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:399) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:281)
  at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:399)
at 
org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$0(ReplicationHandler.java:281)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([FECC6B382E176A46]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12119 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.4-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_FECC6B382E176A46-001/init-core-data-001
   [junit4]   2> 1080829 INFO  
(SUITE-TestReplicationHandler-seed#[FECC6B382E176A46]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4]   2> 1080832 INFO  

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1181 - Unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1181/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:109)
  at sun.reflect.GeneratedConstructorAccessor185.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:759)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:821)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1072)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:937)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:829)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:949)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:582)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:109)
at sun.reflect.GeneratedConstructorAccessor185.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:759)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:821)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1072)
at org.apache.solr.core.SolrCore.(SolrCore.java:937)
at org.apache.solr.core.SolrCore.(SolrCore.java:829)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:949)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:582)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([3D54B1C1CEB34489]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:301)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 

[jira] [Updated] (SOLR-10255) Large pseudo-stored fields via BinaryDocValuesField

2017-03-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10255:

Attachment: SOLR-10255.patch

Here's a patch that's in-progress with a bunch of nocommits/discussion points.  
It theoretically works but *there are no tests yet* so I doubt it :-).
* I added a "large" flag to FieldType but in hindsight perhaps this belongs on 
TextField because I'm only adding it there?  BTW a ramification of this is that 
you wouldn't be able to set it on the field definition, only the fieldType.  I 
could see this being useful on BinaryField but I don't intend to work on that.
* The BinaryDocValuesField is given a separate name from the base name, 
{{___large_}} prefix.  I didn't have to do this but I want to allow for 
TextField to some day have conventional SortedSetDocValues on 
analyzed/tokenized text.  In Lucene we can't have both types of DocValues for 
the same field name.
* I sorta cheat and we pretend the field is still "stored" but in reality it's 
not... at least it's not "stored" in the Lucene sense.  This is deliberate 
because I want this field to be compatible with various other Solr features 
that don't know anything about this new "large" concept.
* One unfortunate thing here is that the doc-related loading in 
SolrIndexSearcher now has to call {{DocValues.getBinary(getSlowAtomicReader(), 
TextField.LARGE_NAME_PREFIX + largeField)}} and then call 
{{advanceExact(docId)}} for each field in the schema that's marked as large, 
so that we know whether the field even has a large value for this document.  
This is almost always necessary if there are any declared large fields, but it 
may not be a big deal in the scheme of things?  One possible solution is for 
{{TextField.createFields()}} to add a special stored field named perhaps 
{{___largeFields}} and supply the field name as a value.

In a separate issue I'll propose a compressed DocValuesFormat that Solr's 
SchemaCodecFactory will supply for fields starting with "___large_". Or maybe 
I'll have it be an auto-registered internal field type in the schema; we'll see.

BTW this approach is incompatible with multiValued fields since BinaryDocValues 
has this limitation.

_I'd really appreciate peer review, even if it's just a cursory look at the 
patch_
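For illustration, a stand-in sketch in plain Java (no Lucene dependency; only {{LARGE_NAME_PREFIX}} comes from the patch, the rest is hypothetical) of the lookup flow described above: large values live under a prefixed DocValues name, and the reader must check per document whether a large value exists, analogous to {{advanceExact(docId)}}:

```java
import java.util.HashMap;
import java.util.Map;

public class LargeFieldSketch {
    // Prefix from the patch; everything else here is a hypothetical stand-in.
    public static final String LARGE_NAME_PREFIX = "___large_";

    // prefixed field name -> (docId -> bytes), standing in for per-field
    // BinaryDocValues, which are sparse: not every doc has a value.
    public static final Map<String, Map<Integer, byte[]>> largeValues = new HashMap<>();

    // Mirrors the advanceExact(docId) check: does this doc carry a large value?
    public static boolean hasLargeValue(String field, int docId) {
        Map<Integer, byte[]> perDoc = largeValues.get(LARGE_NAME_PREFIX + field);
        return perDoc != null && perDoc.containsKey(docId);
    }

    public static void main(String[] args) {
        Map<Integer, byte[]> content = new HashMap<>();
        content.put(7, "a very large text value...".getBytes());
        largeValues.put(LARGE_NAME_PREFIX + "content", content);

        System.out.println(hasLargeValue("content", 7)); // prints true
        System.out.println(hasLargeValue("content", 8)); // prints false
    }
}
```

The separate prefixed name is what leaves room for ordinary SortedSetDocValues on the same base field name later.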

> Large pseudo-stored fields via BinaryDocValuesField
> ---
>
> Key: SOLR-10255
> URL: https://issues.apache.org/jira/browse/SOLR-10255
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-10255.patch
>
>
> (sub-issue of SOLR-10117)  This is a proposal for a better way for Solr to 
> handle "large" text fields.  Large docs that are in Lucene StoredFields slow 
> requests that don't involve access to such fields.  This is fundamental to 
> the fact that StoredFields are row-stored.  Worse, the Solr documentCache 
> will wind up holding onto massive Strings.  While the latter could be tackled 
> on its own somehow, as it's the most serious issue, it nevertheless seems 
> wrong that such large fields are in row-stored storage to begin with.  After 
> all, relational DBs seem to have figured this out and put CLOBs/BLOBs in a 
> separate place.  Here, we do similarly by using Lucene's 
> {{BinaryDocValuesField}}.  BDVF isn't well known in the DocValues family, as 
> it's not for typical DocValues purposes like sorting/faceting etc.  The 
> default DocValuesFormat doesn't compress these but we could write one that 
> does.






[jira] [Commented] (SOLR-10117) Big docs and the DocumentCache; umbrella issue

2017-03-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902542#comment-15902542
 ] 

David Smiley commented on SOLR-10117:
-

Spinning off SOLR-10255 for the BinaryDocValues-based approach.  I could have 
used a JIRA sub-task, but I'm not a fan of those when the issue space is a bit 
exploratory.

> Big docs and the DocumentCache; umbrella issue
> --
>
> Key: SOLR-10117
> URL: https://issues.apache.org/jira/browse/SOLR-10117
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10117_large_fields.patch
>
>
> This is an umbrella issue for improved handling of large documents (large 
> stored fields), generally related to the DocumentCache or SolrIndexSearcher's 
> doc() methods.  Highlighting is affected as it's the primary consumer of this 
> data.  "Large" here is multi-megabyte, especially tens or even hundreds of 
> megabytes. We'd like to support such users without forcing them to choose 
> between no DocumentCache (bad performance), or having one but hitting OOM due 
> to massive Strings winding up in there.  I've contemplated this for longer 
> than I'd like to admit, and it's a complicated issue with different concerns 
> to balance.






[jira] [Created] (SOLR-10255) Large pseudo-stored fields via BinaryDocValuesField

2017-03-08 Thread David Smiley (JIRA)
David Smiley created SOLR-10255:
---

 Summary: Large pseudo-stored fields via BinaryDocValuesField
 Key: SOLR-10255
 URL: https://issues.apache.org/jira/browse/SOLR-10255
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley


(sub-issue of SOLR-10117)  This is a proposal for a better way for Solr to 
handle "large" text fields.  Large docs that are in Lucene StoredFields slow 
requests that don't involve access to such fields.  This is fundamental to the 
fact that StoredFields are row-stored.  Worse, the Solr documentCache will wind 
up holding onto massive Strings.  While the latter could be tackled on its own 
somehow, as it's the most serious issue, it nevertheless seems wrong that such 
large fields are in row-stored storage to begin with.  After all, relational 
DBs seem to have figured this out and put CLOBs/BLOBs in a separate place.  
Here, we do similarly by using Lucene's {{BinaryDocValuesField}}.  BDVF isn't 
well known in the DocValues family, as it's not for typical DocValues purposes 
like sorting/faceting etc.  The default DocValuesFormat doesn't compress these 
but we could write one that does.






[jira] [Updated] (SOLR-10117) Big docs and the DocumentCache; umbrella issue

2017-03-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10117:

Issue Type: Improvement  (was: Bug)

> Big docs and the DocumentCache; umbrella issue
> --
>
> Key: SOLR-10117
> URL: https://issues.apache.org/jira/browse/SOLR-10117
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10117_large_fields.patch
>
>
> This is an umbrella issue for improved handling of large documents (large 
> stored fields), generally related to the DocumentCache or SolrIndexSearcher's 
> doc() methods.  Highlighting is affected as it's the primary consumer of this 
> data.  "Large" here is multi-megabyte, especially tens or even hundreds of 
> megabytes. We'd like to support such users without forcing them to choose 
> between no DocumentCache (bad performance), or having one but hitting OOM due 
> to massive Strings winding up in there.  I've contemplated this for longer 
> than I'd like to admit, and it's a complicated issue with different concerns 
> to balance.






[jira] [Commented] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902496#comment-15902496
 ] 

David Smiley commented on LUCENE-7734:
--

I realized this changes the signature of a public class... (I needed to do an 
'ant clean' first).  On 6.x I could add an additional one-liner constructor for 
the existing FieldType-typed argument?

> FieldType copy constructor should accept IndexableFieldType
> ---
>
> Key: LUCENE-7734
> URL: https://issues.apache.org/jira/browse/LUCENE-7734
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7734.patch
>
>
> {{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has 
> a copy-constructor but it demands a {{FieldType}}.  It should accept  
> {{IndexableFieldType}}.






[jira] [Commented] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-08 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902487#comment-15902487
 ] 

Ben Manes commented on SOLR-10205:
--

For writes you might prefer an atomic computation instead of a racy 
get-compute-put. Stampeding writers will cause a storm of removal 
notifications indicating that the value was replaced, which I think results in 
needing to free and acquire slots in the bank more frequently. It would reduce 
I/O costs as well, of course. Caffeine does this by using a lock-free lookup 
that falls back to a computeIfAbsent, so that a hit won't thrash on locks when 
the entry is present.
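A minimal sketch of the two patterns using {{java.util.concurrent.ConcurrentHashMap}} (names are illustrative; the actual BlockCache code differs):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AtomicComputeSketch {
    public static final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    public static int loads = 0; // counts expensive value computations

    // Racy pattern: two threads can both miss, both load, and the second put
    // replaces the first value, firing a "replaced" removal notification.
    public static byte[] racyGet(String key) {
        byte[] v = cache.get(key);
        if (v == null) {
            v = load(key);
            cache.put(key, v);
        }
        return v;
    }

    // Atomic pattern: load runs at most once per key; concurrent callers wait
    // briefly on that key instead of duplicating work and replacing values.
    public static byte[] atomicGet(String key) {
        return cache.computeIfAbsent(key, AtomicComputeSketch::load);
    }

    public static byte[] load(String key) {
        loads++; // stands in for allocating a block and doing I/O
        return key.getBytes();
    }

    public static void main(String[] args) {
        atomicGet("block-1");
        atomicGet("block-1"); // hit: no second load
        System.out.println(loads); // prints 1
    }
}
```

With the atomic form, a concurrent second writer for the same key never replaces an existing entry, so no replacement-eviction churn occurs.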

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5, master (7.0)
>
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.
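The slot-reservation behavior described above can be modeled with a toy sketch (hypothetical names; this is not Solr's actual BlockCache code). When eviction lags behind, no slot is free and the store simply fails:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Toy model of the block "bank": store() must claim a free slot before
// caching. If eviction is amortized (as in ConcurrentLinkedHashMap/Caffeine),
// slots may all be claimed and the store is dropped.
class SlotBank {
    private final AtomicIntegerArray slots; // 0 = free, 1 = in use

    SlotBank(int n) { slots = new AtomicIntegerArray(n); }

    // Returns the claimed slot index, or -1 if the store must fail.
    int tryStore() {
        for (int i = 0; i < slots.length(); i++) {
            if (slots.compareAndSet(i, 0, 1)) return i; // atomic claim
        }
        return -1;
    }

    void release(int i) { slots.set(i, 0); } // called when the map evicts
}

public class BlockCacheSketch {
    public static void main(String[] args) {
        SlotBank bank = new SlotBank(2);
        System.out.println(bank.tryStore()); // 0
        System.out.println(bank.tryStore()); // 1
        System.out.println(bank.tryStore()); // -1: bank full, store fails
        bank.release(0);                     // eviction frees a slot
        System.out.println(bank.tryStore()); // 0 again
    }
}
```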






[jira] [Updated] (SOLR-9601) DIH: Radically simplify Tika example to only show relevant configuration

2017-03-08 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-9601:

Attachment: tika2_20170308.tgz

It is a little hard to generate a readable diff between the original Tika 
example and the one I created. So, for ease of testing, I just created it as a 
separate *tika2* core that can be dropped next to the other DIH cores.

I removed all of the unused gunk, so the remaining files are tiny. I wish I 
could remove the infoStream section, but the default is false and I am not sure 
I should.

I've also added a prototype-oriented demo of wildcards, renamed and simplified 
the text field definition, and did other minor cleanup in what is left.

I am not sure whether I need to worry about docValues here. 

Also, I have commented out the uniqueKey section, but the corresponding *id* 
field definition is missing. It was missing in the original example too, so I am 
not sure it is worth adding to the commented-out section. 

This is a big change (even if with tiny resulting files), so I would appreciate 
people commenting on it before I actually commit it.

> DIH: Radically simplify Tika example to only show relevant configuration
> -
>
> Key: SOLR-9601
> URL: https://issues.apache.org/jira/browse/SOLR-9601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)
>Affects Versions: 6.x, master (7.0)
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>  Labels: examples, usability
> Attachments: tika2_20170308.tgz
>
>
> Solr DIH examples are legacy examples that show how DIH works. However, they 
> include full configurations that may obscure the teaching points. This is no 
> longer needed, as we have three full-blown examples in the configsets. 
> Specifically for Tika, the field type definitions were at some point 
> simplified to need fewer support files in the configuration directory. This, 
> however, means that we now have field definitions that have the same names as 
> other examples, but different definitions. 
> Importantly, Tika does not use most (any?) of those modified definitions; 
> they are there just for completeness. Similarly, the solrconfig.xml includes 
> the extract handler even though we are demonstrating a different path of using 
> Tika. Somebody grepping through the config files may get confused about which 
> configuration aspects contribute to which experience.
> I am planning to significantly simplify the configuration and schema of the 
> Tika example to **only** show the DIH Tika extraction path. It will end up a 
> very short and focused example.






[jira] [Assigned] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10254:
-

Assignee: Joel Bernstein

> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Commented] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902428#comment-15902428
 ] 

Adrien Grand commented on LUCENE-7734:
--

Yeah I think seeing FieldType as the default impl of IndexableFieldType makes 
sense, +1 to the patch.

> FieldType copy constructor should accept IndexableFieldType
> ---
>
> Key: LUCENE-7734
> URL: https://issues.apache.org/jira/browse/LUCENE-7734
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7734.patch
>
>
> {{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has 
> a copy-constructor but it demands a {{FieldType}}.  It should accept  
> {{IndexableFieldType}}.






[jira] [Commented] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902424#comment-15902424
 ] 

David Smiley commented on LUCENE-7734:
--

I thought FieldType was supposed to track IndexableFieldType, to be its 
default impl.  Nevertheless, if one day we want additional state that should be 
copied, we could add an overloaded version.
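A simplified sketch of the change under discussion (stub types, not the actual Lucene classes): typing the copy constructor against the interface lets any IndexableFieldType implementation seed a FieldType.

```java
// Stub interface standing in for Lucene's IndexableFieldType.
interface IndexableFieldType {
    boolean stored();
    boolean tokenized();
}

// Stub concrete class standing in for Lucene's FieldType.
class FieldType implements IndexableFieldType {
    private boolean stored;
    private boolean tokenized;

    FieldType() {}

    // Accepting the interface (rather than demanding the concrete FieldType)
    // means custom IndexableFieldType implementations can be copied too.
    FieldType(IndexableFieldType ref) {
        this.stored = ref.stored();
        this.tokenized = ref.tokenized();
    }

    public boolean stored() { return stored; }
    public boolean tokenized() { return tokenized; }
}

public class CopyCtorSketch {
    public static void main(String[] args) {
        // A custom implementation that is not a FieldType at all.
        IndexableFieldType custom = new IndexableFieldType() {
            public boolean stored() { return true; }
            public boolean tokenized() { return false; }
        };
        // Would not compile if the constructor demanded a FieldType.
        FieldType copy = new FieldType(custom);
        System.out.println(copy.stored() + " " + copy.tokenized()); // prints "true false"
    }
}
```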

> FieldType copy constructor should accept IndexableFieldType
> ---
>
> Key: LUCENE-7734
> URL: https://issues.apache.org/jira/browse/LUCENE-7734
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7734.patch
>
>
> {{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has 
> a copy-constructor but it demands a {{FieldType}}.  It should accept  
> {{IndexableFieldType}}.






[jira] [Commented] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902421#comment-15902421
 ] 

Adrien Grand commented on LUCENE-7734:
--

I'm unsure whether it is an issue or not, but it seems to imply that we could 
not add new properties to {{FieldType}} without adding them to 
{{IndexableFieldType}} as well?

> FieldType copy constructor should accept IndexableFieldType
> ---
>
> Key: LUCENE-7734
> URL: https://issues.apache.org/jira/browse/LUCENE-7734
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7734.patch
>
>
> {{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has 
> a copy-constructor but it demands a {{FieldType}}.  It should accept  
> {{IndexableFieldType}}.






[jira] [Commented] (LUCENE-7716) Reduce specialization in TopFieldCollector

2017-03-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902417#comment-15902417
 ] 

Adrien Grand commented on LUCENE-7716:
--

Woops, I had not noticed. Thanks [~hossman].

> Reduce specialization in TopFieldCollector
> --
>
> Key: LUCENE-7716
> URL: https://issues.apache.org/jira/browse/LUCENE-7716
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7716.patch
>
>
> TopFieldCollector optimizes the single-comparator case. I think we could 
> replace this specialization with a MultiLeafFieldComparator wrapper, 
> similarly to how MultiCollector works. This would have the benefit of 
> replacing code duplication of non-trivial logic with a simple wrapper that 
> delegates calls to its sub comparators.
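The delegation idea can be illustrated with stub types (these are not Lucene's actual LeafFieldComparator API): one wrapper forwards each call to all sub-comparators, so the single- and multi-comparator cases share one code path.

```java
// Stub comparator interface, for illustration only.
interface LeafComparatorStub {
    int compareBottom(int doc);
}

// One wrapper replaces the specialized single-comparator path: the first
// non-tie comparison decides, like a sort on multiple keys.
class MultiLeafComparatorStub implements LeafComparatorStub {
    private final LeafComparatorStub[] delegates;

    MultiLeafComparatorStub(LeafComparatorStub... delegates) {
        this.delegates = delegates;
    }

    public int compareBottom(int doc) {
        for (LeafComparatorStub c : delegates) {
            int cmp = c.compareBottom(doc);
            if (cmp != 0) return cmp; // earlier comparators take precedence
        }
        return 0; // tie on all keys
    }
}

public class MultiComparatorSketch {
    public static void main(String[] args) {
        LeafComparatorStub primary = doc -> 0;         // always ties
        LeafComparatorStub secondary = doc -> doc - 5; // breaks the tie
        LeafComparatorStub multi = new MultiLeafComparatorStub(primary, secondary);
        System.out.println(multi.compareBottom(7)); // prints "2"
    }
}
```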






[jira] [Commented] (LUCENE-7722) Remove BoostedQuery

2017-03-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902416#comment-15902416
 ] 

Adrien Grand commented on LUCENE-7722:
--

bq. Looking closer at BoostingQuery, I think the same effect could be had by 
using a BooleanQuery and wrapping the 'suppressing' subquery with a 
negative-valued BoostQuery? In addition, BoostingQuery has no tests that 
actually run the query...

+1 I think those queries are a bit esoteric, so we should not spend too much 
energy or make the API more complicated just to be sure we keep supporting the 
same functionality. Recommending negative-boosted queries as a replacement 
sounds good to me.

bq. On reader-dependent DoubleValuesSource implementations, I think we need to 
add something like a rewrite() function to make the dependency explicit. 
Otherwise you could have odd interactions with things like the QueryCache.

I'm not sure exactly how you think of that rewrite, but for the record we need 
to make sure to never end up referencing IndexReader or Weight objects from 
Query objects, or it could cause similar leaks to LUCENE-7657.

Since this need for per-reader specialization only exists for queries, I'm 
wondering whether we could make it optional somehow. For instance, maybe we 
could have {{Function 
DoubleValuesSource.fromQuery(Query)}} and add a new constructor 
{{FunctionScoreQuery(Query,Function)}}, which 
would be used by values sources that need per-index-reader specialization, while 
simple (and common) usages of this API that only need reader-independent values 
sources could keep using the current API (which I like for its simplicity)?
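A toy sketch of that optional-specialization shape, using stand-in types (IndexReaderStub, DoubleValuesSourceStub, and the factory form are all hypothetical, not Lucene API):

```java
import java.util.function.Function;

// Stand-in for a segment reader: carries only the state the sketch needs.
class IndexReaderStub {
    final int maxDoc;
    IndexReaderStub(int maxDoc) { this.maxDoc = maxDoc; }
}

// Stand-in for a per-document values source.
interface DoubleValuesSourceStub {
    double value(int doc);
}

public class PerReaderSketch {
    // Simple, reader-independent usage: one source serves every segment.
    static DoubleValuesSourceStub constant(double v) {
        return doc -> v;
    }

    // Optional per-reader specialization: a factory consulted once per
    // segment, so the produced source can depend on reader state.
    static Function<IndexReaderStub, DoubleValuesSourceStub> perReader() {
        return reader -> doc -> (double) reader.maxDoc; // e.g. scale by segment size
    }

    public static void main(String[] args) {
        DoubleValuesSourceStub simple = constant(2.0);
        DoubleValuesSourceStub specialized = perReader().apply(new IndexReaderStub(100));
        System.out.println(simple.value(0));      // prints "2.0"
        System.out.println(specialized.value(0)); // prints "100.0"
    }
}
```

Keeping the factory form separate means the common reader-independent case never has to pay for the extra indirection.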

> Remove BoostedQuery
> ---
>
> Key: LUCENE-7722
> URL: https://issues.apache.org/jira/browse/LUCENE-7722
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> We already  have FunctionScoreQuery, which is more flexible than BoostedQuery 
> as it can combine scores in arbitrary ways and only requests scores on the 
> underlying scorer if they are needed. So let's remove BoostedQuery?






[jira] [Commented] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902384#comment-15902384
 ] 

ASF subversion and git services commented on SOLR-10254:


Commit 03178717f8a9e54e5db61e1ba5f34723269cc2c8 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0317871 ]

SOLR-10254: Fix pre-commit


> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Commented] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902383#comment-15902383
 ] 

ASF subversion and git services commented on SOLR-10254:


Commit f74419eb95f72d05afe5c067c26756995bf3d174 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f74419e ]

SOLR-10254: significantTerms Streaming Expression should work in non-SolrCloud 
mode


> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Commented] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902371#comment-15902371
 ] 

ASF subversion and git services commented on SOLR-10254:


Commit c85aac2a65472d0d80050a703c99844e694c1584 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c85aac2 ]

SOLR-10254: Fix pre-commit


> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Commented] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902370#comment-15902370
 ] 

ASF subversion and git services commented on SOLR-10254:


Commit 682c6a7d5145129e8ae01ff00505ddf5a564d396 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=682c6a7 ]

SOLR-10254: significantTerms Streaming Expression should work in non-SolrCloud 
mode


> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Updated] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10254:
--
Attachment: SOLR-10254.patch

> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10254.patch
>
>







[jira] [Commented] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902339#comment-15902339
 ] 

Joel Bernstein commented on SOLR-10254:
---

Breaking this into a sub-task as it will be the first streaming expression to 
work in non-SolrCloud mode.

> significantTerms Streaming Expression should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10254
> URL: https://issues.apache.org/jira/browse/SOLR-10254
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>







[jira] [Created] (SOLR-10254) significantTerms Streaming Expression should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10254:
-

 Summary: significantTerms Streaming Expression should work in 
non-SolrCloud mode
 Key: SOLR-10254
 URL: https://issues.apache.org/jira/browse/SOLR-10254
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein









[jira] [Created] (SOLR-10253) Make tests that are as expensive as our expensive @Nightlys @Nightly themselves.

2017-03-08 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10253:
--

 Summary: Make tests that are as expensive as our expensive 
@Nightlys @Nightly themselves.
 Key: SOLR-10253
 URL: https://issues.apache.org/jira/browse/SOLR-10253
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller


If we want these tests to run non @Nightly they should be sped up for that case.






[jira] [Commented] (SOLR-10233) Add support for different replica types in Solr

2017-03-08 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902214#comment-15902214
 ] 

Cao Manh Dat commented on SOLR-10233:
-

[~tomasflobbe] Sounds good to me. I'm planning to do more work on the test (it 
can take one or two days) before committing it to master.

> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-10233.patch, SOLR-10233.patch
>
>
> For the majority of cases, SolrCloud's current distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or write to the transaction log. Just replicates from 
> _realtime_ or _append_ replicas. Passive replicas can’t become shard leaders 
> (i.e., if there are only passive replicas in the collection at some point, 
> updates will fail the same as if there were no leader, but queries continue to 
> work), so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtimeReplicas=X=Y=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=” will translate to “realtime=“ for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection could contain 
> its own rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. _passive_ replicas only, or similar rules).
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _append_ or _realtime_ 
> replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr",
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   "leader":"true",
>   **"type": "realtime"**},
> "core_node10":{
>   "core":"gettingstarted_shard1_replica2",
>   "base_url":"http://127.0.0.1:7574/solr",
>   "node_name":"127.0.0.1:7574_solr",
>   "state":"active",
>   **"type": "passive"**}},
>   }},
> 

[jira] [Commented] (SOLR-10249) Allow index fetching to return a detailed result instead of a true/false value

2017-03-08 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902139#comment-15902139
 ] 

Jeff Miller commented on SOLR-10249:


Diffs for my local testing

diff --git a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java 
b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
index 90e515a..2483e69 100644
--- a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
+++ b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
@@ -153,7 +153,7 @@ public class RecoveryStrategy extends Thread implements 
Closeable {
 solrParams.set(ReplicationHandler.MASTER_URL, leaderUrl);
 
 if (isClosed()) return; // we check closed on return
-boolean success = replicationHandler.doFetch(solrParams, false);
+boolean success = replicationHandler.doFetch(solrParams, 
false).getStatus();
 
 if (!success) {
   throw new SolrException(ErrorCode.SERVER_ERROR, "Replication for 
recovery failed.");
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java 
b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
index f706637..a65299a 100644
--- a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
@@ -754,7 +754,7 @@ public class CdcrRequestHandler extends RequestHandlerBase 
implements SolrCoreAw
 // we do not want the raw tlog files from the source
 solrParams.set(ReplicationHandler.TLOG_FILES, false);
 
-success = replicationHandler.doFetch(solrParams, false);
+success = replicationHandler.doFetch(solrParams, false).getStatus();
 
 // this is required because this callable can race with 
HttpSolrCall#destroy
 // which clears the request info.
diff --git a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java 
b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
index b9d9f51..281e660 100644
--- a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
+++ b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
@@ -106,6 +106,8 @@ import static 
org.apache.solr.common.params.CommonParams.JAVABIN;
 import static org.apache.solr.common.params.CommonParams.NAME;
 import static org.apache.solr.handler.ReplicationHandler.*;
 
+import com.google.common.base.Strings;
+
 /**
  *  Provides functionality of downloading changed index files as well as 
config files and a timer for scheduling fetches from the
  * master. 
@@ -161,6 +163,52 @@ public class IndexFetcher {
 
   private Integer soTimeout;
 
+  private static final String INTERRUPT_RESPONSE_MESSAGE = "Interrupted while 
waiting for modify lock";
+
+  public static class IndexFetchResult {
+private final String message;
+private final boolean status;
+private final Throwable exception;
+
+public static final String FAILED_BY_INTERRUPT_MESSAGE = "Fetching index 
failed by interrupt";
+public static final String FAILED_BY_EXCEPTION_MESSAGE = "Fetching index 
failed by exception";
+
+/** pre-defined results */
+public static final IndexFetchResult ALREADY_IN_SYNC = new 
IndexFetchResult("Local index commit is already in sync with peer", true, null);
+public static final IndexFetchResult INDEX_FETCH_FAILURE = new 
IndexFetchResult("Fetching lastest index is failed", false, null);
+public static final IndexFetchResult INDEX_FETCH_SUCCESS = new 
IndexFetchResult("Fetching latest index is successful", true, null);
+public static final IndexFetchResult LOCK_OBTAIN_FAILED = new 
IndexFetchResult("Obtaining SnapPuller lock failed", false, null);
+public static final IndexFetchResult MASTER_VERSION_ZERO = new 
IndexFetchResult("Index in peer is empty and never committed yet", true, null);
+public static final IndexFetchResult NO_INDEX_COMMIT_EXIST = new 
IndexFetchResult("No IndexCommit in local index", false, null);
+public static final IndexFetchResult PEER_INDEX_COMMIT_DELETED = new 
IndexFetchResult("No files to download because IndexCommit in peer was 
deleted", false, null);
+// SFDC: adding a new failure result when replication is aborted because 
of local activity
+public static final IndexFetchResult LOCAL_ACTIVITY_DURING_REPLICATION = 
new IndexFetchResult("Local index modification during replication", false, 
null);
+
+IndexFetchResult(String message, boolean status, Throwable exception) {
+  this.message = message;
+  this.status = status;
+  this.exception = exception;
+}
+
+/*
+ * @return exception thrown if failed by exception or interrupt, otherwise 
null
+ */
+public Throwable getException() {
+  return this.exception;
+}
+
+/*
+ * @return true if index fetch was successful, false otherwise
+ */
+public boolean getStatus() {
+  return this.status;
+}
+
+public String 

[jira] [Created] (SOLR-10252) Example spellcheck config uses _text_ as default field

2017-03-08 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-10252:


 Summary: Example spellcheck config uses _text_ as default field
 Key: SOLR-10252
 URL: https://issues.apache.org/jira/browse/SOLR-10252
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spellchecker
Affects Versions: 6.4.2
Reporter: Cassandra Targett


SOLR-8381 made the {{_text_}} field the default field for spellchecking for the 
basic_configs and data_driven_schema_configs example configsets. This is a 
copyField that gets all its data from every other field in the index.

This field is also of text_general type, which has a default analysis chain 
that includes stopwords and synonyms. If someone has a large synonym list, 
perhaps with a lot of overlapping matches, this would cause spell checking to 
occur on every one of those terms. I recently saw a parsed query that looked 
like this:

{code}"+(((_text_:partn _text_:gesellschaft _text_:teilhab _text_:konkubinat 
_text_:eheahn _text_:eheahn _text_:konkubinatspaar _text_:konkubinatspartn 
_text_:konkubinatsvertrag _text_:lebenspartn _text_:nichteheahn 
_text_:nichteheahn _text_:nichtehe _text_:wild _text_:registriert 
_text_:eingetrag _text_:eingetrag _text_:registriert _text_:vertragspartei 
_text_:kontrahent _text_:partei _text_:vertragspartn)/no_coord) 
((_text_:gemeinschaft _text_:lebensgemeinschaft _text_:gemeinschaft 
_text_:lebensgemeinschaft _text_:lebensgemeinschaft _text_:ehe 
_text_:partnerschaft _text_:partnerschaft _text_:partn 
_text_:partnerschaft)/no_coord) _text_:gleichgeschlecht _text_:paar) 
+_text_:gestorb"
{code}

Since we recommend that users use a lightly analyzed field for spell checking, 
using {{_text_}} and text_general seems a problematic example for us to start 
people out with. The example above is a lot of extra work for little reason.

I'm not sure what a better field is - those two examples are minimal by design, 
and we can't be sure what field they might have in the index to make it work 
out of the box. However, perhaps we can consider a better field type? 






[jira] [Commented] (LUCENE-7580) Spans tree scoring

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902083#comment-15902083
 ] 

ASF GitHub Bot commented on LUCENE-7580:


GitHub user PaulElschot opened a pull request:

https://github.com/apache/lucene-solr/pull/166

LUCENE-7580 of 8 Mar 2017.

Resolves a conflict with recent simplification of NearSpanUnordered.
Includes recent SpanSynonymQuery.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PaulElschot/lucene-solr lucene7580-20170308

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #166






> Spans tree scoring
> --
>
> Key: LUCENE-7580
> URL: https://issues.apache.org/jira/browse/LUCENE-7580
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: 6.x
>
> Attachments: LUCENE-7580.patch, LUCENE-7580.patch, LUCENE-7580.patch, 
> LUCENE-7580.patch
>
>
> Recurse the spans tree to compose a score based on the type of subqueries and 
> what matched






[GitHub] lucene-solr pull request #166: LUCENE-7580 of 8 Mar 2017.

2017-03-08 Thread PaulElschot
GitHub user PaulElschot opened a pull request:

https://github.com/apache/lucene-solr/pull/166

LUCENE-7580 of 8 Mar 2017.

Resolves a conflict with recent simplification of NearSpanUnordered.
Includes recent SpanSynonymQuery.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PaulElschot/lucene-solr lucene7580-20170308

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #166






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (LUCENE-7615) SpanSynonymQuery

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902073#comment-15902073
 ] 

ASF GitHub Bot commented on LUCENE-7615:


GitHub user PaulElschot opened a pull request:

https://github.com/apache/lucene-solr/pull/165

LUCENE-7615 of 8 March 2017.

Adds support for SpanSynonymQuery in xml queryparser.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PaulElschot/lucene-solr lucene7615-20170308

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/165.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #165


commit 676c13c0c70e3f344ad6fb430eb5868270be83aa
Author: Paul Elschot <paul.j.elsc...@gmail.com>
Date:   2017-03-08T22:10:40Z

LUCENE-7615 of 8 March 2017.

Adds support for SpanSynonymQuery in xml queryparser.




> SpanSynonymQuery
> 
>
> Key: LUCENE-7615
> URL: https://issues.apache.org/jira/browse/LUCENE-7615
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7615.patch, LUCENE-7615.patch
>
>
> A SpanQuery that tries to score as SynonymQuery.






[GitHub] lucene-solr pull request #165: LUCENE-7615 of 8 March 2017.

2017-03-08 Thread PaulElschot
GitHub user PaulElschot opened a pull request:

https://github.com/apache/lucene-solr/pull/165

LUCENE-7615 of 8 March 2017.

Adds support for SpanSynonymQuery in xml queryparser.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PaulElschot/lucene-solr lucene7615-20170308

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/165.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #165


commit 676c13c0c70e3f344ad6fb430eb5868270be83aa
Author: Paul Elschot <paul.j.elsc...@gmail.com>
Date:   2017-03-08T22:10:40Z

LUCENE-7615 of 8 March 2017.

Adds support for SpanSynonymQuery in xml queryparser.







[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 746 - Unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/746/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  org.apache.solr.DistributedIntervalFacetingTest.test

Error Message:
Failed to list contents of 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test-files/solr

Stack Trace:
java.io.IOException: Failed to list contents of 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test-files/solr
at 
__randomizedtesting.SeedInfo.seed([C65FD94EE294EBE0:4E0BE6944C688618]:0)
at org.apache.commons.io.FileUtils.doCopyDirectory(FileUtils.java:1426)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1388)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1268)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1237)
at 
org.apache.solr.BaseDistributedSearchTestCase.seedSolrHome(BaseDistributedSearchTestCase.java:1099)
at 
org.apache.solr.BaseDistributedSearchTestCase.createServers(BaseDistributedSearchTestCase.java:340)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1016)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddShard

Error Message:
Error from server at https://127.0.0.1:58060/solr: Failed to create shard

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from 

[jira] [Updated] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7734:
-
Attachment: LUCENE_7734.patch

> FieldType copy constructor should accept IndexableFieldType
> ---
>
> Key: LUCENE-7734
> URL: https://issues.apache.org/jira/browse/LUCENE-7734
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7734.patch
>
>
> {{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has 
> a copy-constructor but it demands a {{FieldType}}.  It should accept  
> {{IndexableFieldType}}.






[jira] [Created] (LUCENE-7734) FieldType copy constructor should accept IndexableFieldType

2017-03-08 Thread David Smiley (JIRA)
David Smiley created LUCENE-7734:


 Summary: FieldType copy constructor should accept 
IndexableFieldType
 Key: LUCENE-7734
 URL: https://issues.apache.org/jira/browse/LUCENE-7734
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Attachments: LUCENE_7734.patch

{{FieldType}} is a concrete implementation of {{IndexableFieldType}}.  It has a 
copy-constructor but it demands a {{FieldType}}.  It should accept  
{{IndexableFieldType}}.






[JENKINS] Lucene-Solr-Tests-master - Build # 1711 - Still Unstable

2017-03-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1711/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap

Error Message:
Document mismatch on target after sync expected:<1000> but was:<0>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<1000> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([98F3BF2A2E9C7120:4F24905D9AC3E967]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap(CdcrBootstrapTest.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12492 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrBootstrapTest
   [junit4]   2> Creating 

[jira] [Updated] (SOLR-10231) Cursor value always different for last page with sorting by a date based function using NOW

2017-03-08 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10231:

Summary: Cursor value always different for last page with sorting by a date 
based function using NOW  (was: Cursor value always different for last page 
with sorting by function)

This isn't a general problem with sorting by function; the problem is specific 
to sorting by a date-based function that involves the {{NOW}} constant.

The problem is that every time this function is computed for a document, the 
value can change -- so when a request asks for everything with a cursor value 
"after" the computed value of the last doc on the previous request, you get 
overlap with some existing documents -- and ultimately the cursor never ends, 
because the "last" doc constantly computes a sort value that comes "after" the 
sort value it computed the "last" time the request was made.

(what's happening is essentially the same as what you would see if, between 
every request for the "next" page of the cursor using "sort=counter+asc", 
someone did an atomic update on the doc to {{inc counter}} ... but in this case 
the counter increase is just happening because time elapses)



The best workaround I can suggest would be to include a fixed value for the 
{{NOW}} param in any request involving sorting by date math -- that way the 
computed sort values will be consistent across all subsequent requests.

(Perhaps the NOW value should also be encoded into the cursor values so this 
happens automatically under the covers? ... not sure if that's a good idea in 
general, would need to think about it more)
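The drift can be sketched outside Solr; the numbers and helper function below are illustrative assumptions, not Solr code:

```python
# Sketch (not Solr code): why a NOW-relative sort value breaks cursors.
# ms(NOW, datefield) is the millisecond difference between "now" and a
# stored date, so the same document's sort value drifts between requests.

DOC_DATE_MS = 1_000_000          # the doc's stored date field (epoch millis)

def sort_value(now_ms):
    """Analogous to ms(NOW, field): NOW minus the stored date."""
    return now_ms - DOC_DATE_MS

# Two cursor pages requested 100 ms apart recompute NOW each time,
# so the "last" doc sorts after its own previous cursor mark:
first_page_value = sort_value(5_000_000)
next_page_value = sort_value(5_000_100)
assert next_page_value > first_page_value   # so the cursor never terminates

# Pinning NOW (passing the same value with every page request) keeps the
# computed sort values stable, so the cursor can converge:
pinned_now = 5_000_000
assert sort_value(pinned_now) == sort_value(pinned_now)
```

Passing a fixed epoch-millis value for {{NOW}} with every page request corresponds to the second case, where the sort values stop drifting.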


> Cursor value always different for last page with sorting by a date based 
> function using NOW
> ---
>
> Key: SOLR-10231
> URL: https://issues.apache.org/jira/browse/SOLR-10231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 4.10.2
>Reporter: Dmitry Kan
>
> Cursor based results fetching is a deal breaker for search performance.
> It works extremely well when paging using sort by field(s).
> Example, that works (Id is unique field in the schema):
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?q=*:*=DocumentId:76581059=AoIGAC5TU1ItNzY1ODEwNTktMQ===DocumentId=UserId+asc%2CId+desc=1
> {code}
> Response:
> {code}
> 
> 
> 0
> 4
> 
> *:*
> DocumentId
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> DocumentId:76581059
> UserId asc,Id desc
> 1
> 
> 
> 
> AoIGAC5TU1ItNzY1ODEwNTktMQ==
> 
> {code}
> nextCursorMark equals cursorMark, so we know this is the last page.
> However, sorting by function behaves differently:
> Query:
> {code}
> http://solr-host:8983/solr/documents/select?rows=1=*:*=DocumentId:76581059=AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE==DocumentId=min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))%20asc,Id%20desc
> {code}
> Response:
> {code}
> 
> 
> 0
> 6
> 
> *:*
> DocumentId
> AoIFQf9yCCAuU1NSLTc2NTgxMDU5LTE=
> DocumentId:76581059
> 
> min(ms(NOW,DynamicDateField_1),ms(NOW,DynamicDateField_12),ms(NOW,DynamicDateField_3),ms(NOW,DynamicDateField_5))
>  asc,Id desc
> 
> 1
> 
> 
> 
> 
> 76581059
> 
> 
> AoIFQf9yFyAuU1NSLTc2NTgxMDU5LTE=
> 
> {code}
> nextCursorMark does not equal cursorMark, which suggests there are more 
> results. That is not true (numFound=1), so the client goes into an infinite 
> loop.






[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901842#comment-15901842
 ] 

Erick Erickson commented on SOLR-10250:
---

I try to encourage patches as they preserve the history in a single place that 
stays here forever. An external repo can always go away.

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2, but we're migrating to the latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask for 
> the version with the "versions" parameter. The response is in an "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Commented] (SOLR-8876) Morphlines fails with "No command builder registered for ..." when using Java 9 due to morphline "importCommands" config option attempting to resolve classname globs

2017-03-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901824#comment-15901824
 ] 

Mark Miller commented on SOLR-8876:
---

I'm actually going to withdraw my veto from Steve's removal of the map-reduce 
contribs, so that would handle this case. Another option is to remove the cell 
contrib and add a more generic interface that morphlines (or whatever other 
logic) can be plugged into externally. I've given it some time, and it doesn't 
seem we are going to address the current issues in the near term though.

> Morphlines fails with "No command builder registered for ..." when using Java 
> 9 due to morphline "importCommands" config option attempting to resolve 
> classname globs
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>Assignee: Hoss Man
>  Labels: Java9
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8876.patch
>
>
> When running Solr in Java 9 and using the morphlines contrib(s), users may 
> encounter vague errors such as...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> This error comes directly from the morphlines code, and relates to the use of 
> wildcards in the {{importCommands}} declaration of the morphlines {{\*.conf}} 
> files used -- for example...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> Using wildcards like {{\*}} and {{\*\*}} in morphline's {{importCommands}} 
> config options does not work in Java 9 due to changes in the underlying JVM 
> classloader.
> This issue is tracked upstream in: 
> https://github.com/kite-sdk/kite/issues/469
> 
> *WORK AROUND*
> The workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration, one for each distinct command used in that 
> {{conf}} file.
> Example:
> {noformat}
> # Old config, does not work in java9
> # importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> # replaced with...
> # using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to 
> classpath scanning limitations
> # so we enumerate every command (builder) we know this config uses below. 
> (see SOLR-8876)
> importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
>   
> "org.apache.solr.morphlines.solr.SanitizeUnknownSolrFieldsBuilder",
>   "org.apache.solr.morphlines.solr.LoadSolrBuilder"]
> {noformat}






[jira] [Resolved] (SOLR-8876) Morphlines fails with "No command builder registered for ..." when using Java 9 due to morphline "importCommands" config option attempting to resolve classname globs

2017-03-08 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-8876.

   Resolution: Workaround
 Assignee: Hoss Man
Fix Version/s: master (7.0)
   6.5

Actually, the "Workaround" resolution seems fitting for this.

> Morphlines fails with "No command builder registered for ..." when using Java 
> 9 due to morphline "importCommands" config option attempting to resolve 
> classname globs
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>Assignee: Hoss Man
>  Labels: Java9
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8876.patch
>
>
> When running Solr in Java 9 and using the morphlines contrib(s), users may 
> encounter vague errors such as...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> This error comes directly from the morphlines code, and relates to the use of 
> wildcards in the {{importCommands}} declaration of the morphlines {{\*.conf}} 
> files used -- for example...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> Using wildcards like {{\*}} and {{\*\*}} in morphline's {{importCommands}} 
> config options does not work in Java 9 due to changes in the underlying JVM 
> classloader.
> This issue is tracked upstream in: 
> https://github.com/kite-sdk/kite/issues/469
> 
> *WORK AROUND*
> The workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration, one for each distinct command used in that 
> {{conf}} file.
> Example:
> {noformat}
> # Old config, does not work in java9
> # importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> # replaced with...
> # using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to 
> classpath scanning limitations
> # so we enumerate every command (builder) we know this config uses below. 
> (see SOLR-8876)
> importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
>   
> "org.apache.solr.morphlines.solr.SanitizeUnknownSolrFieldsBuilder",
>   "org.apache.solr.morphlines.solr.LoadSolrBuilder"]
> {noformat}






[jira] [Resolved] (SOLR-10074) TestConfig appears to be incompatible with custom ant test location properties that should be supported.

2017-03-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10074.

   Resolution: Invalid
Fix Version/s: master (7.0)
   6.5

This was on my end: our .gitignore was ignoring a test lib folder with test 
files in it, even though that directory had been explicitly added, which meant 
my test beasting setup did not copy over that test lib dir.

> TestConfig appears to be incompatible with custom ant test location 
> properties that should be supported.
> 
>
> Key: SOLR-10074
> URL: https://issues.apache.org/jira/browse/SOLR-10074
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>







[jira] [Commented] (SOLR-10213) Copy Fields: remove wiki vs. cwiki overlap (and gap)

2017-03-08 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901736#comment-15901736
 ] 

Christine Poerschke commented on SOLR-10213:


Thanks Erick and Cassandra for your guidance and comments. I have gone and 
added a small paragraph re: the 'are copy fields recursive/cascading' question 
to the cwiki page, and have also deleted comments already addressed and/or 
replied to. 

> Copy Fields: remove wiki vs. cwiki overlap (and gap)
> 
>
> Key: SOLR-10213
> URL: https://issues.apache.org/jira/browse/SOLR-10213
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> We just stumbled across the 'are copy fields recursive/cascading' question 
> again and on https://wiki.apache.org/solr/SchemaXml#Copy_Fields found the 
> answer which is "no" in the shape of the _The copy is done at the stream 
> source level and no copy feeds into another copy._ sentence but 
> https://cwiki.apache.org/confluence/display/solr/Copying+Fields didn't seem 
> to obviously have that answer although there is a _"... can/does copying 
> happen recursively?"_ question hidden in the comments section.
> This ticket here proposes to:
> * fully remove the wiki section content in favour of just a pointer to the 
> Solr Reference guide (cwiki)
> * review if anything on the wiki is missing and should be added to the cwiki
> * maybe: tidy up/remove some of the comments on the cwiki (the ones unrelated 
> to the cwiki page itself)






[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901731#comment-15901731
 ] 

Hoss Man commented on SOLR-10250:
-

bq. Thanks for feedback

Boris: I'm sorry, I wasn't giving you feedback on your patch -- I was just 
triaging/linking the issues, and wanted to make sure people reading them 
realized that just fixing SolrCloudClient is only part of the problem.

bq. Any reason it's been opened for so long? 

Just a question of people having time to dig into it and write up a patch ... 
it's probably not the sort of thing many people rely on, or if they do then 
(like you) they aren't using it with SolrCloud -- or it was easier (for their 
purposes) to work around with an RTG than to dig into Solr to help with a fix.

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Updated] (SOLR-10200) Streaming Expressions should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10200:
--
Attachment: SOLR-10200.patch

> Streaming Expressions should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10200
> URL: https://issues.apache.org/jira/browse/SOLR-10200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10200.patch, SOLR-10200.patch, SOLR-10200.patch, 
> SOLR-10200.patch
>
>
> Currently Streaming Expressions select shards using an internal ZooKeeper 
> client. This ticket will allow stream sources to accept a *shards* parameter 
> so that non-SolrCloud deployments can set the shards manually.
> The shards parameters will be added as http parameters in the following 
> format:
> collectionA.shards=url1,url2,...&collectionB.shards=url1,url2,...
> The /stream handler will then add the shards to the StreamContext so all 
> stream sources can check to see if their collection has the shards set 
> manually.
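The parameter handling described above can be sketched from the client side. This is a hypothetical helper (the collection-qualified `collectionA.shards=...` names are taken from the description; the function itself is not part of SolrJ):

```python
from urllib.parse import urlencode

def build_stream_query(expression, shards_by_collection):
    """Build the query string for a /stream request, supplying shard URLs
    manually (one <collection>.shards parameter per collection) so that a
    non-SolrCloud deployment needs no ZooKeeper lookup."""
    params = {"expr": expression}
    for collection, urls in shards_by_collection.items():
        # e.g. collectionA.shards=http://host1:8983/solr/collectionA,...
        params[f"{collection}.shards"] = ",".join(urls)
    return urlencode(params)

query = build_stream_query(
    'search(collectionA, q="*:*", fl="id", sort="id asc")',
    {"collectionA": ["http://host1:8983/solr/collectionA",
                     "http://host2:8983/solr/collectionA"]},
)
print(query)
```

The /stream handler would then place these shard lists into the StreamContext, as the description explains, for stream sources to consult.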






[jira] [Updated] (SOLR-10200) Streaming Expressions should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10200:
--
Attachment: SOLR-10200.patch

Added a test that exercises the /stream handler and a negative test. Moving on 
to manual testing.

> Streaming Expressions should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10200
> URL: https://issues.apache.org/jira/browse/SOLR-10200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10200.patch, SOLR-10200.patch, SOLR-10200.patch
>
>
> Currently Streaming Expressions select shards using an internal ZooKeeper 
> client. This ticket will allow stream sources to accept a *shards* parameter 
> so that non-SolrCloud deployments can set the shards manually.
> The shards parameters will be added as http parameters in the following 
> format:
> collectionA.shards=url1,url2,...&collectionB.shards=url1,url2,...
> The /stream handler will then add the shards to the StreamContext so all 
> stream sources can check to see if their collection has the shards set 
> manually.






[jira] [Comment Edited] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Boris Naguet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901690#comment-15901690
 ] 

Boris Naguet edited comment on SOLR-10250 at 3/8/17 6:11 PM:
-

Thanks for feedback

OK, I'm currently using 
_.sendUpdatesOnlyToShardLeaders().sendDirectUpdatesToShardLeadersOnly()_ on 
SolrJ (though I'm not sure about it) but there's still the leader election 
possibility...

Any reason it's been opened for so long? 
Not a widely used/recommended usage?

We're totally depending on that in our platform... but of course we can still 
make a "GET after POST" when it's not returned


was (Author: borisnaguet):
Thanks for feedback

OK, I'm currently using 
_.sendUpdatesOnlyToShardLeaders().sendDirectUpdatesToShardLeadersOnly()_ on 
SolrJ (though I'm not sure about it) but there's still the leader election 
possibility...

Any reason it's been opened for so long? 
Not a widely used/recommended usage?

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Boris Naguet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901690#comment-15901690
 ] 

Boris Naguet commented on SOLR-10250:
-

Thanks for feedback

OK, I'm currently using 
_.sendUpdatesOnlyToShardLeaders().sendDirectUpdatesToShardLeadersOnly()_ on 
SolrJ (though I'm not sure about it) but there's still the leader election 
possibility...

Any reason it's been opened for so long? 
Not a widely used/recommended usage?

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Updated] (SOLR-8876) Morphlines fails with "No command builder registered for ..." when using Java 9 due to morphline "importCommands" config option attempting to resolve classname globs

2017-03-08 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8876:
---
Description: 
When running Solr in java9, and using the morphlines contrib(s) users may 
encounter vague errors such as...

{noformat}
No command builder registered for COMMAND_NAME
{noformat}

This error comes directly from the morphlines code, and relates to the use of 
wildcards in the {{importCommands}} declaration of morphlines {{\*.conf}} 
files used -- for example...

{noformat}
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
{noformat}

Using wildcards like {{\*}} and {{\*\*}} in morphline's {{importCommands}} 
config options do not work in java9 due to changes in the underlying JVM 
classloader.

This issue is tracked upstream in: https://github.com/kite-sdk/kite/issues/469



*WORK AROUND*

The workaround is to only use fully qualified command class names in 
{{importCommands}} declaration, one for each distinct command used in that 
{{conf}} file.

Example:

{noformat}
# Old config, does not work in java9
# importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

# replaced with...

# using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to classpath scanning limitations
# so we enumerate every command (builder) we know this config uses below. (see SOLR-8876)
importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
                  "org.apache.solr.morphlines.solr.SanitizeUnknownSolrFieldsBuilder",
                  "org.apache.solr.morphlines.solr.LoadSolrBuilder"]
{noformat}
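The constraint behind this workaround can be expressed as a small sanity check over a config's import list. `check_import_commands` is a hypothetical helper for illustration only, not part of the morphlines/kite API:

```python
def check_import_commands(import_commands):
    """Return the entries that would break under Java 9: anything using a
    glob ('*' or '**') rather than a fully qualified builder class name.
    Hypothetical helper -- not part of the morphlines/kite API."""
    return [cmd for cmd in import_commands if "*" in cmd]

# Old-style config: relies on classpath scanning, fails on Java 9
bad = check_import_commands(["org.kitesdk.**", "org.apache.solr.**"])

# Workaround-style config: one fully qualified class per command used
good = check_import_commands([
    "org.kitesdk.morphline.stdlib.LogDebugBuilder",
    "org.apache.solr.morphlines.solr.LoadSolrBuilder",
])

print(bad)   # the glob entries are flagged
print(good)  # empty list: safe on Java 9
```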

  was:
When running Solr in java9, and using the morphlines contrib(s) users may 
encounter vague errors such as...

{noformat}
No command builder registered for COMMAND_NAME
{noformat}

This error comes directly from the morphlines code, and relates to the use of 
wildcards in the {{importCommands}} declaration of morphlines {{*.conf}} 
files used -- for example...

{noformat}
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
{noformat}

Using wildcards like {{*}} and {{**}} in morphline's {{importCommands}} config 
options do not work in java9 due to changes in the underlying JVM classloader.

This issue is tracked upstream in: https://github.com/kite-sdk/kite/issues/469



*WORK AROUND*

The workaround is to only use fully qualified command class names in 
{{importCommands}} declaration, one for each distinct command used in that 
{{conf}} file.

Example:

{noformat}
# Old config, does not work in java9
# importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

# replaced with...

# using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to classpath scanning limitations
# so we enumerate every command (builder) we know this config uses below. (see SOLR-8876)
importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
                  "org.apache.solr.morphlines.solr.SanitizeUnknownSolrFieldsBuilder",
                  "org.apache.solr.morphlines.solr.LoadSolrBuilder"]
{noformat}


> Morphlines fails with "No command builder registered for ..." when using Java 
> 9 due to morphline "importCommands" config option attempting to resolve 
> classname globs
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>  Labels: Java9
> Attachments: SOLR-8876.patch
>
>
> When running Solr in java9, and using the morphlines contrib(s) users may 
> encounter vague errors such as...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> This error comes directly from the morphlines code, and relates to the use of 
> wildcards in the {{importCommands}} declaration of morphlines {{\*.conf}} 
> files used -- for example...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> Using wildcards like {{\*}} and {{\*\*}} in morphline's {{importCommands}} 
> config options do not work in java9 due to changes in the underlying JVM 
> classloader.
> This issue is tracked upstream in: 
> https://github.com/kite-sdk/kite/issues/469
> 
> *WORK AROUND*
> The workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration, one for each distinct command used in that 
> {{conf}} file.
> Example:
> {noformat}
> # Old config, does not work in java9
> # importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> # replaced with...
> # using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to classpath scanning limitations
> # so we enumerate every command 

[jira] [Commented] (SOLR-10251) reliable TestReplicationHandler.doTestReplicateAfterCoreReload failure -- more (identical) commits then expected

2017-03-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901683#comment-15901683
 ] 

Hoss Man commented on SOLR-10251:
-

A few misc observations...
* this line is comparing the commits on master _now_ to the commits on master 
just prior to a core reload
** so failure has nothing to do with replication
** Looks like a merge is happening before/after reload -- but before test gets 
list of commits?
*** Possible from RandomMergePolicy?
* At this line where this test fails, a non-nightly run won't have indexed a 
single doc -- so this particular failure will only be observable with 
{{-Dtests.nightly=true}} ...
{code}
int docs = TEST_NIGHTLY ? 20 : 0;
{code}
* i don't understand the point of this test at all ... it doesn't compare 
anything between master/slave except after a commit -- so where does the 
"AfterCoreReload" part come into play?
** it's particularly wonky given that half of the asserts comparing 
> master/slave are about having an identical {{numFound=0}} for a {{\*:\*}} search 
against an empty index! (unless nightly)

> reliable TestReplicationHandler.doTestReplicateAfterCoreReload failure -- 
> more (identical) commits then expected
> 
>
> Key: SOLR-10251
> URL: https://issues.apache.org/jira/browse/SOLR-10251
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestReplicateAfterCoreReload -Dtests.seed=6F2AD3669775C0E9 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=ky-KG 
> -Dtests.timezone=Etc/GMT+10 -Dtests.asserts=true 
> -Dtests.file.encoding=ANSI_X3.4-1968
>[junit4] FAILURE 57.2s | 
> TestReplicationHandler.doTestReplicateAfterCoreReload <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> expected:<[{indexVersion=1488994926427,generation=2,filelist=[_7e.cfe, 
> _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _cz.fdt, 
> _cz.fdx, _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, _cz_Lucene50_0.doc, 
> _cz_Lucene50_0.tim, _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, _d0.si, 
> segments_2]}]> but 
> was:<[{indexVersion=1488994926427,generation=2,filelist=[_7e.cfe, _7e.cfs, 
> _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _cz.fdt, _cz.fdx, 
> _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, _cz_Lucene50_0.doc, _cz_Lucene50_0.tim, 
> _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, _d0.si, segments_2]}, 
> {indexVersion=1488994926427,generation=3,filelist=[_7e.cfe, _7e.cfs, _7e.si, 
> _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _d1.cfe, _d1.cfs, _d1.si, 
> segments_3]}]>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6F2AD3669775C0E9:4AFDC856E73DCEEA]:0)
>[junit4]>  at 
> org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1279)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> reformatting the expected vs. actual...
> {noformat}
> expected:
>   <[{indexVersion=1488994926427,
>  generation=2,
>  filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
> _b5.cfs, _b5.si, _cz.fdt, _cz.fdx, _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, 
> _cz_Lucene50_0.doc, _cz_Lucene50_0.tim, _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, 
> _d0.si, 
>segments_2]
>}]> 
> but was:
>   <[{indexVersion=1488994926427,
>  generation=2,
>  filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
> _b5.cfs, _b5.si, _cz.fdt, _cz.fdx, _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, 
> _cz_Lucene50_0.doc, _cz_Lucene50_0.tim, _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, 
> _d0.si, 
>segments_2]
> }, 
> {indexVersion=1488994926427,
>  generation=3,
>  filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
> _b5.cfs, _b5.si, _d1.cfe, _d1.cfs, _d1.si, 
>segments_3]
>}]>
> {noformat}






[jira] [Created] (SOLR-10251) reliable TestReplicationHandler.doTestReplicateAfterCoreReload failure -- more (identical) commits then expected

2017-03-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10251:
---

 Summary: reliable 
TestReplicationHandler.doTestReplicateAfterCoreReload failure -- more 
(identical) commits then expected
 Key: SOLR-10251
 URL: https://issues.apache.org/jira/browse/SOLR-10251
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestReplicationHandler -Dtests.method=doTestReplicateAfterCoreReload 
-Dtests.seed=6F2AD3669775C0E9 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.locale=ky-KG -Dtests.timezone=Etc/GMT+10 -Dtests.asserts=true 
-Dtests.file.encoding=ANSI_X3.4-1968
   [junit4] FAILURE 57.2s | 
TestReplicationHandler.doTestReplicateAfterCoreReload <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 
expected:<[{indexVersion=1488994926427,generation=2,filelist=[_7e.cfe, _7e.cfs, 
_7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _cz.fdt, _cz.fdx, 
_cz.fnm, _cz.nvd, _cz.nvm, _cz.si, _cz_Lucene50_0.doc, _cz_Lucene50_0.tim, 
_cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, _d0.si, segments_2]}]> but 
was:<[{indexVersion=1488994926427,generation=2,filelist=[_7e.cfe, _7e.cfs, 
_7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _cz.fdt, _cz.fdx, 
_cz.fnm, _cz.nvd, _cz.nvm, _cz.si, _cz_Lucene50_0.doc, _cz_Lucene50_0.tim, 
_cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, _d0.si, segments_2]}, 
{indexVersion=1488994926427,generation=3,filelist=[_7e.cfe, _7e.cfs, _7e.si, 
_7g.cfe, _7g.cfs, _7g.si, _b5.cfe, _b5.cfs, _b5.si, _d1.cfe, _d1.cfs, _d1.si, 
segments_3]}]>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6F2AD3669775C0E9:4AFDC856E73DCEEA]:0)
   [junit4]>at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1279)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

reformatting the expected vs. actual...

{noformat}
expected:
  <[{indexVersion=1488994926427,
 generation=2,
 filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
_b5.cfs, _b5.si, _cz.fdt, _cz.fdx, _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, 
_cz_Lucene50_0.doc, _cz_Lucene50_0.tim, _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, 
_d0.si, 
   segments_2]
   }]> 

but was:
  <[{indexVersion=1488994926427,
 generation=2,
 filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
_b5.cfs, _b5.si, _cz.fdt, _cz.fdx, _cz.fnm, _cz.nvd, _cz.nvm, _cz.si, 
_cz_Lucene50_0.doc, _cz_Lucene50_0.tim, _cz_Lucene50_0.tip, _d0.cfe, _d0.cfs, 
_d0.si, 
   segments_2]
}, 
{indexVersion=1488994926427,
 generation=3,
 filelist=[_7e.cfe, _7e.cfs, _7e.si, _7g.cfe, _7g.cfs, _7g.si, _b5.cfe, 
_b5.cfs, _b5.si, _d1.cfe, _d1.cfs, _d1.si, 
   segments_3]
   }]>
{noformat}







[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901639#comment-15901639
 ] 

Hoss Man commented on SOLR-10250:
-

linking to SOLR-8733,

Note that SolrCloudClient aggregating responses from multiple leaders is only 
part of the problem -- if an update is internally forwarded (either due to 
updates being sent to nodes arbitrarily, or because of leader election while in 
transit) then the version# will also be missing.

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Issue Comment Deleted] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Boris Naguet (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Naguet updated SOLR-10250:

Comment: was deleted

(was: Here is the PR:
https://github.com/apache/lucene-solr/pull/164

Do you need it as a patch?)

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Boris Naguet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901634#comment-15901634
 ] 

Boris Naguet commented on SOLR-10250:
-

Here is the PR:
https://github.com/apache/lucene-solr/pull/164

Do you need it as a patch?

> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[jira] [Commented] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901632#comment-15901632
 ] 

ASF GitHub Bot commented on SOLR-10250:
---

GitHub user BorisNaguet opened a pull request:

https://github.com/apache/lucene-solr/pull/164

SOLR-10250: SolrCloudClient doesn't return 'adds' in Response when' 
versions' is requested

SolrCloudClient doesn't return 'adds' in Response when' versions' is 
requested

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/BorisNaguet/lucene-solr 
SOLR-10250-versions-adds

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/164.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #164


commit 99bc8fa1c8ae996426afc9521e7604a1234c
Author: Boris Naguet 
Date:   2017-03-08T17:37:46Z

SOLR-10250: SolrCloudClient doesn't return 'adds' in Response when
'versions' is requested




> SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested
> --
>
> Key: SOLR-10250
> URL: https://issues.apache.org/jira/browse/SOLR-10250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4.1
>Reporter: Boris Naguet
>  Labels: locking, optimistic
>
> Hello,
> On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
> We use optimistic locking, and when we post new documents we directly ask the 
> version with the "versions" parameter. The response is in a "adds" field.
> I can't even find a doc explaining that but it works :)
> With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr 
> response has these "adds" but they're lost by the SolrJ client when 
> aggregating responses from different Shards.
> I have a patch that I'll propose via Github.






[GitHub] lucene-solr pull request #164: SOLR-10250: SolrCloudClient doesn't return 'a...

2017-03-08 Thread BorisNaguet
GitHub user BorisNaguet opened a pull request:

https://github.com/apache/lucene-solr/pull/164

SOLR-10250: SolrCloudClient doesn't return 'adds' in Response when' 
versions' is requested

SolrCloudClient doesn't return 'adds' in Response when' versions' is 
requested

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/BorisNaguet/lucene-solr 
SOLR-10250-versions-adds

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/164.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #164


commit 99bc8fa1c8ae996426afc9521e7604a1234c
Author: Boris Naguet 
Date:   2017-03-08T17:37:46Z

SOLR-10250: SolrCloudClient doesn't return 'adds' in Response when
'versions' is requested




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Created] (SOLR-10250) SolrCloudClient doesn't return 'adds' in Response when 'versions' is requested

2017-03-08 Thread Boris Naguet (JIRA)
Boris Naguet created SOLR-10250:
---

 Summary: SolrCloudClient doesn't return 'adds' in Response when 
'versions' is requested
 Key: SOLR-10250
 URL: https://issues.apache.org/jira/browse/SOLR-10250
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 6.4.1
Reporter: Boris Naguet


Hello,

On our project we run Solr 4.2 but we're migrating to latest SolrCloud.
We use optimistic locking, and when we post new documents we directly ask the 
version with the "versions" parameter. The response is in a "adds" field.
I can't even find a doc explaining that but it works :)

With Solr 5 (we did a few tests some time ago), 6.2 and 6.4.1 the Solr response 
has these "adds" but they're lost by the SolrJ client when aggregating 
responses from different Shards.

I have a patch that I'll propose via Github.
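The usage pattern being described can be sketched as follows. The endpoint path, the `versions` parameter handling, and especially the flat `[id, version, ...]` shape of the `adds` entry are assumptions for illustration here; check the actual response format for your Solr version:

```python
import json
from urllib.parse import urlencode

def build_update_request(docs):
    """Build the path and JSON body for an update that asks Solr to echo
    back the new _version_ of each document via the 'versions' parameter."""
    path = "/solr/mycollection/update?" + urlencode(
        {"versions": "true", "commit": "true"})
    return path, json.dumps(docs)

def versions_from_response(response):
    """Read {id: version} pairs from the 'adds' entry of an update
    response. The flat [id, version, id, version, ...] shape assumed here
    is illustrative only."""
    adds = response.get("adds", [])
    return dict(zip(adds[::2], adds[1::2]))

path, body = build_update_request([{"id": "doc1", "title_s": "hello"}])

# Illustrative response: the bug here is that SolrCloudClient drops this
# 'adds' entry when it aggregates responses from multiple shards.
vers = versions_from_response({"responseHeader": {"status": 0},
                               "adds": ["doc1", 1560123456789012480]})
print(vers)
```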






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Robert Muir
On Wed, Mar 8, 2017 at 12:04 PM, Chris Hostetter
 wrote:
>
> : If you don't like the limit for your specific test: use
> : @SuppressFileSystems annotation to suppress it.
> :
> : But it is really insane for a unit test to use so many index files,
> : and it is important to reproduce such bugs when they do happen.
>
> i'm not disagreeing with the value of HandleLimitFS.
>
> I'm saying that in tests like TestIndexSorting.testRandom3 -- where the
> point is to create 2 distinct indexes and compare some things about them
> -- having a single limit for the entire JVM isn't as useful as if there
> was an easy way to just limit the number of open files per index (or for
> the test to declare "treat these indexes as if they were on distinct
> filesystems").

This isn't how operating systems work though. They don't care about
how many indexes or filesystems you have, it's a file handle limit for 
the process (entire JVM). So this simply reflects that.

2048 is already far too much: on my mac the default limit is only 256.
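The limit being discussed is the ordinary per-process RLIMIT_NOFILE. A quick way to inspect it (a sketch using Python's `resource` module, Unix-only; equivalent to `ulimit -n` in a shell):

```python
import resource

# The OS enforces a single open-file limit per process -- it does not
# distinguish between indexes or filesystems, which is the point above.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```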




[jira] [Comment Edited] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-08 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898867#comment-15898867
 ] 

Amrit Sarkar edited comment on SOLR-10209 at 3/8/17 5:29 PM:
-

Need advice on the following:

We were solving two problems in this:
1. Indefinite retries of the API calls when the server goes down without 
completing the request
2. Don't say the connection is lost if the API is taking more than 10 sec.

(2) is done and good to go; I am working on an elegant progress bar so that it 
can accommodate more than one call at a time.
For (1), we are heading towards greater problems: earlier only the original API 
call was retried, but now the REQUESTSTATUS API is tied to it as well, and two 
APIs are filling the network call list.

There is no way to fix it other than changing the base js file, i.e. app.js. 
This means we would change how the API calls are made on other pages, e.g. 
cloud, core, mbeans, etc. I intend not to change the base js file; suggestions 
on this will be deeply appreciated.


was (Author: sarkaramr...@gmail.com):
Need advice on the following:

We were solving two problems in this:
1. Indefinite retries of the API calls when the server goes down without 
completing the request
2. Don't say the connection is list if the API is taking more than 10 sec.

(2) is done and good to go, I am working on elegant progress bar so that it can 
accommodate more than one call at single time.
For (1), we are heading towards greater problems as earlier the original API 
call was replicated, now in addition REQUESTSTATUS api is clinging on with it 
and now two APIs are filling the network call list.

There is no way to fix it other than we change the base js file i.e. app.js. 
This means we will change how the API calls are made in other pages e.g. cloud, 
core, mbeans etc. I intend not to change the base js file, and suggestions will 
be deeply appreciated on this.

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussion on multiple jiras for requests for Collections apis 
> from UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> Continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.
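The Phase 1 flow quoted above (submit an async request, then poll REQUESTSTATUS until a terminal state, with no retries of the original call on failure) can be sketched as a small polling loop. This is a hypothetical illustration with made-up names (AsyncRequestPoller, Status, awaitCompletion); it is not the actual Admin UI or SolrJ code, which would issue HTTP calls against the Collections API instead of a supplied status source.

```java
import java.util.function.Supplier;

// Hypothetical sketch of the Phase 1 flow: submit an async request, then poll
// the request status until a terminal state. Names are illustrative only.
public class AsyncRequestPoller {
    public enum Status { SUBMITTED, RUNNING, COMPLETED, FAILED, NOTFOUND }

    // Polls the supplied status source until it reports a terminal state,
    // or gives up after maxPolls attempts. Returns the final observed status.
    public static Status awaitCompletion(Supplier<Status> statusSource, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            Status s = statusSource.get();
            if (s == Status.COMPLETED || s == Status.FAILED || s == Status.NOTFOUND) {
                return s; // terminal: report it, no retries, no attempt to drive on
            }
            // non-terminal: keep the progress indicator spinning and poll again
        }
        return Status.RUNNING; // still in flight after maxPolls checks
    }

    public static void main(String[] args) {
        // Simulate a request that completes on the third status check.
        final int[] calls = {0};
        Status out = awaitCompletion(
            () -> ++calls[0] < 3 ? Status.RUNNING : Status.COMPLETED, 10);
        System.out.println(out); // prints COMPLETED
    }
}
```

The key design point from the comment is that a failed or missing status bails out immediately rather than re-submitting the original request, which is what caused the indefinite-retry problem in the first place.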



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Updated] (SOLR-10242) Cores created by Solr RESTORE end up with stale searches after indexing

2017-03-08 Thread John Marquiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Marquiss updated SOLR-10242:
-
Description: 
Index files created by the Solr RESTORE feature are placed in a directory with 
a name like "restore.20170307173236270" instead of the standard "index" 
directory. This seems to break Solr's ability to detect index changes, leading 
to stale searchers on the restored cores.

Detailed information including steps to replicate can be found in this 
solr-user mail thread. [http://markmail.org/message/wsm56jgbh53fx24u]

(The markmail site seems to be down... linking the relevant messages from the 
Apache archive)
[http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3CCO2PR06MB6345317732A4D7C22C00BCCFD2F0%40CO2PR06MB634.namprd06.prod.outlook.com%3E]
[http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3CCO2PR06MB6342202F82CFD4A2F5617AEFD2F0%40CO2PR06MB634.namprd06.prod.outlook.com%3E]

  was:
Index files created by the Solr RESTORE feature are placed in a directory with 
a name like "restore.20170307173236270" instead of the standard "index" 
directory. This seems to break Solr's ability to detect index changes leading 
to stale searchers on the restored cores.

Detailed information including steps to replicate can be found in this 
solr-user mail thread. [http://markmail.org/message/wsm56jgbh53fx24u]


> Cores created by Solr RESTORE end up with stale searches after indexing
> ---
>
> Key: SOLR-10242
> URL: https://issues.apache.org/jira/browse/SOLR-10242
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, search
>Affects Versions: 6.3
> Environment: Behavior observed on both Linux and Windows:
> Linux version 3.10.0-327.36.3.el7.x86_64 
> (mockbu...@x86-037.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red 
> Hat 4.8.5-4) (GCC) ) #1 SMP Thu Oct 20 04:56:07 EDT 2016
> java version "1.8.0_77"
> Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
> Windows 10 Enterprise Version 1607 Build 14393.693
> java version "1.8.0_121"
> Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
>Reporter: John Marquiss
>
> Index files created by the Solr RESTORE feature are placed in a directory 
> with a name like "restore.20170307173236270" instead of the standard "index" 
> directory. This seems to break Solr's ability to detect index changes leading 
> to stale searchers on the restored cores.
> Detailed information including steps to replicate can be found in this 
> solr-user mail thread. [http://markmail.org/message/wsm56jgbh53fx24u]
> (The markmail site seems to be down... linking the relevant messages from the 
> Apache archive)
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3CCO2PR06MB6345317732A4D7C22C00BCCFD2F0%40CO2PR06MB634.namprd06.prod.outlook.com%3E]
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3CCO2PR06MB6342202F82CFD4A2F5617AEFD2F0%40CO2PR06MB634.namprd06.prod.outlook.com%3E]






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Chris Hostetter

: If you don't like the limit for your specific test: use
: @SuppressFileSystems annotation to suppress it.
: 
: But it is really insane for a unit test to use so many index files,
: and it is important to reproduce such bugs when they do happen.

I'm not disagreeing with the value of HandleLimitFS.

I'm saying that in tests like TestIndexSorting.testRandom3 -- where the 
point is to create 2 distinct indexes and compare some things about them 
-- having a single limit for the entire JVM isn't as useful as an easy 
way to just limit the number of open files per index (or for the test to 
declare "treat these indexes as if they were on distinct filesystems").


a knob/hook like this would also be useful in distributed Solr tests, to 
say "we want this simulated solr nodeA to act as if it has its own 
filesystem independent from nodeB's filesystem" -- that way we can still 
have the benefit of sanity checks that code isn't using too many files 
(per *NODE*) and we wouldn't need the sledgehammer of completely 
suppressing HandleLimitFS in tests.

Perhaps, in an ideal world, when tests call 
LuceneTestCase.createTempDir(...) they could (optionally) pass in some 
identifier for what (conceptual/virtual) filesystem they want to use -- so 
the default is to assume all temp dirs created in a test come from the 
same (mock) filesystem using HandleLimitFS with a (shared) max ... but 
tests like TestIndexSorting.testRandom3 could request that the 2 distinct 
indexes live on their own filesystems; and things like Solr's cloud test 
scaffolding could request that each node get its own "virtual" 
filesystem (with its own limit)

?
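The per-virtual-filesystem idea proposed above can be sketched as a counter keyed by a filesystem identifier rather than one JVM-wide counter. This is a hypothetical illustration (class and method names PerFsHandleLimiter, onOpen, onClose are invented); it is not the actual HandleLimitFS or TestRuleTemporaryFilesCleanup code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: enforce an open-file budget per "virtual filesystem"
// id (e.g. one per index or per simulated Solr node) instead of one
// JVM-wide limit. Illustrative only; not the real HandleLimitFS.
public class PerFsHandleLimiter {
    private final int maxOpenPerFs;
    private final Map<String, Integer> openByFs = new HashMap<>();

    public PerFsHandleLimiter(int maxOpenPerFs) {
        this.maxOpenPerFs = maxOpenPerFs;
    }

    // Called when a file is opened on the given virtual filesystem.
    public void onOpen(String fsId) {
        int open = openByFs.getOrDefault(fsId, 0);
        if (open >= maxOpenPerFs) {
            throw new IllegalStateException("Too many open files on " + fsId);
        }
        openByFs.put(fsId, open + 1);
    }

    // Called when a file handle is closed.
    public void onClose(String fsId) {
        openByFs.merge(fsId, -1, Integer::sum);
    }

    public static void main(String[] args) {
        PerFsHandleLimiter limiter = new PerFsHandleLimiter(2);
        // Two indexes on distinct virtual filesystems each get their own budget,
        // so indexB's opens don't eat into indexA's limit.
        limiter.onOpen("indexA"); limiter.onOpen("indexA");
        limiter.onOpen("indexB"); limiter.onOpen("indexB");
        boolean hitLimit = false;
        try { limiter.onOpen("indexA"); } catch (IllegalStateException e) { hitLimit = true; }
        System.out.println(hitLimit); // prints true: indexA exceeded its own budget
    }
}
```

With per-id accounting, a test creating two distinct indexes gets the full budget for each, while still catching a single index (or node) that leaks handles.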

-Hoss
http://www.lucidworks.com/




[jira] [Updated] (SOLR-8876) Morphlines fails with "No command builder registered for ..." when using Java 9 due to morphline "importCommands" config option attempting to resolve classname globs

2017-03-08 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8876:
---
Description: 
When running Solr in java9 and using the morphlines contrib(s), users may 
encounter vague errors such as...

{noformat}
No command builder registered for COMMAND_NAME
{noformat}

This error comes directly from the morphlines code, and relates to the use of 
wildcards in the {{importCommands}} declaration of morphline {{*.conf}} 
files -- for example...

{noformat}
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
{noformat}

Using wildcards like {{*}} and {{**}} in morphline's {{importCommands}} config 
option does not work in java9 due to changes in the underlying JVM classloader.

This issue is tracked upstream in: https://github.com/kite-sdk/kite/issues/469



*WORK AROUND*

The workaround is to only use fully qualified command class names in 
{{importCommands}} declaration, one for each distinct command used in that 
{{conf}} file.

Example:

{noformat}
# Old config, does not work in java9
# importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

# replaced with...

# using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to classpath 
scanning limitations
# so we enumerate every command (builder) we know this config uses below. (see 
SOLR-8876)
importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
  
"org.apache.solr.morphlines.solr.SanitizeUnknownSolrFieldsBuilder",
  "org.apache.solr.morphlines.solr.LoadSolrBuilder"]
{noformat}

  was:
morphline configs we use in our contrib tests  have {{importCommands}} that 
look like this...

{noformat}
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
{noformat}

...but under java9 these tests fail with errors like...

{noformat}
No command builder registered for COMMAND_NAME
{noformat}

...because of how morphlines attempts to locate classes matching those globs -- 
this type of classpath scanning does not work in java9.

workaround is to only use fully qualified command class names in 
{{importCommands}} declaration.  No other (obvious) java9 problems seem to 
exist with solr's use of morphlines (based on current test coverage)

Summary: Morphlines fails with "No command builder registered for ..." 
when using Java 9 due to morphline "importCommands" config option attempting to 
resolve classname globs  (was: Morphlines tests fail with Java 9 due to 
morphline "importCommands" attempting to resolve classname globs in config 
files)

I've committed the workaround to our test configs.

I'm updating the summary & description to target users who may face this 
problem.

We should leave this issue open until the upstream bug is "fixed" in a future 
version and we've upgraded morphlines to use that version.

> Morphlines fails with "No command builder registered for ..." when using Java 
> 9 due to morphline "importCommands" config option attempting to resolve 
> classname globs
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>  Labels: Java9
> Attachments: SOLR-8876.patch
>
>
> When running Solr in java9, and using the morphlines contrib(s) users may 
> encounter vague errors such as...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> This error comes directly from the morphlines code, and relates to the use of 
> wildcards in the {{importCommands}} declaration of morphline {{*.conf}} 
> files used -- for example...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> Using wildcards like {{*}} and {{**}} in morphline's {{importCommands}} 
> config options do not work in java9 due to changes in the underlying JVM 
> classloader.
> This issue is tracked upstream in: 
> https://github.com/kite-sdk/kite/issues/469
> 
> *WORK AROUND*
> The workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration, one for each distinct command used in that 
> {{conf}} file.
> Example:
> {noformat}
> # Old config, does not work in java9
> # importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> # replaced with...
> # using globs (foo.bar.* or foo.bar.**) will not work in Java9 due to 
> classpath scanning limitations
> so we enumerate every command (builder) we know this config uses below. 
> (see SOLR-8876)
> importCommands : ["org.kitesdk.morphline.stdlib.LogDebugBuilder",
>   
> 

[jira] [Commented] (SOLR-8876) Morphlines tests fail with Java 9 due to morphline "importCommands" attempting to resolve classname globs in config files

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901583#comment-15901583
 ] 

ASF subversion and git services commented on SOLR-8876:
---

Commit 4bc0636c1d188def7b221ed5c1235e9b6688471b in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4bc0636 ]

SOLR-8876: change morphline test config files to work around 'importCommands' 
bug when using java9

(cherry picked from commit 8756be05404758155b850748f807245fdaab6a8b)


> Morphlines tests fail with Java 9 due to morphline "importCommands" 
> attempting to resolve classname globs in config files
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>  Labels: Java9
> Attachments: SOLR-8876.patch
>
>
> morphline configs we use in our contrib tests  have {{importCommands}} that 
> look like this...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> ...but under java9 these tests fail with errors like...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> ...because of how morphlines attempts to locate classes matching those globs 
> -- this type of classpath scanning does not work in java9.
> workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration.  No other (obvious) java9 problems seem to 
> exist with solr's use of morphlines (based on current test coverage)






[jira] [Commented] (SOLR-8876) Morphlines tests fail with Java 9 due to morphline "importCommands" attempting to resolve classname globs in config files

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901584#comment-15901584
 ] 

ASF subversion and git services commented on SOLR-8876:
---

Commit 8756be05404758155b850748f807245fdaab6a8b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8756be0 ]

SOLR-8876: change morphline test config files to work around 'importCommands' 
bug when using java9


> Morphlines tests fail with Java 9 due to morphline "importCommands" 
> attempting to resolve classname globs in config files
> -
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Uwe Schindler
>  Labels: Java9
> Attachments: SOLR-8876.patch
>
>
> morphline configs we use in our contrib tests  have {{importCommands}} that 
> look like this...
> {noformat}
> importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
> {noformat}
> ...but under java9 these tests fail with errors like...
> {noformat}
> No command builder registered for COMMAND_NAME
> {noformat}
> ...because of how morphlines attempts to locate classes matching those globs 
> -- this type of classpath scanning does not work in java9.
> workaround is to only use fully qualified command class names in 
> {{importCommands}} declaration.  No other (obvious) java9 problems seem to 
> exist with solr's use of morphlines (based on current test coverage)






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Uwe Schindler
Thanks Robert for reminding me. There was a similar discussion about this 
regarding the Jenkins Mac slave. In fact, that issue was not caused by this 
limit, as the Jenkins slave already started with a 2048 limit.

Please don't raise the hardcoded test limit. It will likely break on Jenkins 
anyway. Better to have a hard, low limit in the test framework than hard-to-
reproduce failures on Jenkins.

Uwe

Am 8. März 2017 17:50:04 MEZ schrieb Robert Muir :
>If you don't like the limit for your specific test: use
>@SuppressFileSystems annotation to suppress it.
>
>But it is really insane for a unit test to use so many index files,
>and it is important to reproduce such bugs when they do happen.
>
>On Wed, Mar 8, 2017 at 11:46 AM, Chris Hostetter
> wrote:
>>
>> The exception is being thrown by
>org.apache.lucene.mockfile.HandleLimitFS,
>> so the OS level utlimit isn't relevant (as long as it's greter then
>2048,
>> hardcoded in TestRuleTemporaryFilesCleanup)
>>
>> With the test creating 2 diff indexes, that means each index index
>gets an
>> effective max open files limit of ~1024 files ... and with
>> RandomSimilarity it might be leaving a lot of small segments on
>"disk" for
>> both of those indexes -- which will have at least 100,000 docs in
>each
>> because this is a nightly run
>>
>> I haven't tested this (my co is currently dirty and i'm in the middle
>of
>> something) but i suspect the seed will reproduce anywhere.
>>
>> See also SOLR-10234 where i recently pointed out similar concerns
>about
>> TestRuleTemporaryFilesCleanup's fixed limit of 2048 for the entire
>JVM,
>> even when the JVM itself is trying to simulate multiple diff indexes
>(or
>> completley distint nodes in the solr cloud test case).
>>
>>
>>
>> : Date: Wed, 8 Mar 2017 11:29:45 -0500
>> : From: Steve Rowe 
>> : Reply-To: dev@lucene.apache.org
>> : To: dev@lucene.apache.org
>> : Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 -
>Still
>> : Unstable
>> :
>> :
>> : > On Mar 8, 2017, at 8:38 AM, Apache Jenkins Server
> wrote:
>> : >
>> : > Build:
>https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/
>> : >
>> : > 2 tests failed.
>> : > FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3
>> : >
>> : > Error Message:
>> : >
>/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>Too many open files
>> : >
>> : > Stack Trace:
>> : > java.nio.file.FileSystemException:
>/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>Too many open files
>> :
>> : I logged in as the jenkins user on lucene1-us-west.apache.org (the
>‘lucene' jenkins slave), and ‘ulimit -aHS’ says (in part):
>> :
>> :open files  (-n) 1048576
>> :
>> : I think this is the maximum value.
>> :
>> : Not sure what can be done here?
>> :
>> : --
>> : Steve
>> : www.lucidworks.com
>> :
>> :
>> :
>-
>> : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> : For additional commands, e-mail: dev-h...@lucene.apache.org
>> :
>> :
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

[jira] [Updated] (SOLR-10200) Streaming Expressions should work in non-SolrCloud mode

2017-03-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10200:
--
Attachment: SOLR-10200.patch

Added a small test case which exercises the new logic for selecting the shards 
for a collection. This test case does not yet exercise the /stream changes 
though.

A test case that exercises the /stream handler is next.

> Streaming Expressions should work in non-SolrCloud mode
> ---
>
> Key: SOLR-10200
> URL: https://issues.apache.org/jira/browse/SOLR-10200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10200.patch, SOLR-10200.patch
>
>
> Currently Streaming Expressions select shards using an internal ZooKeeper 
> client. This ticket will allow stream sources to accept a *shards* parameter 
> so that non-SolrCloud deployments can set the shards manually.
> The shards parameters will be added as http parameters in the following 
> format:
> collectionA.shards=url1,url2,...&collectionB.shards=url1,url2,...
> The /stream handler will then add the shards to the StreamContext so all 
> stream sources can check to see if their collection has the shards set 
> manually.
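The parameter convention described above ("collectionA.shards=url1,url2,...") can be illustrated with a small parser that collects the manually-specified shards per collection. This is a hypothetical sketch (the class name ShardsParamParser and its methods are invented), not the actual /stream handler or StreamContext code from the patch.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: collect manually-specified shard URLs from request
// parameters of the form "<collection>.shards=url1,url2,...". Not Solr's
// actual implementation.
public class ShardsParamParser {
    public static Map<String, List<String>> parse(Map<String, String> params) {
        Map<String, List<String>> shardsByCollection = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().endsWith(".shards")) {
                // strip the ".shards" suffix to recover the collection name
                String collection =
                    e.getKey().substring(0, e.getKey().length() - ".shards".length());
                shardsByCollection.put(collection, Arrays.asList(e.getValue().split(",")));
            }
        }
        return shardsByCollection;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("collectionA.shards", "http://host1/solr/a,http://host2/solr/a");
        params.put("q", "*:*"); // unrelated parameters are ignored
        System.out.println(parse(params).get("collectionA").size()); // prints 2
    }
}
```

A map like this, placed into the stream context, would let each stream source check whether its collection has shards set manually before falling back to ZooKeeper-based discovery.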






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Robert Muir
If you don't like the limit for your specific test: use
@SuppressFileSystems annotation to suppress it.

But it is really insane for a unit test to use so many index files,
and it is important to reproduce such bugs when they do happen.

On Wed, Mar 8, 2017 at 11:46 AM, Chris Hostetter
 wrote:
>
> The exception is being thrown by org.apache.lucene.mockfile.HandleLimitFS,
> so the OS-level ulimit isn't relevant (as long as it's greater than 2048,
> hardcoded in TestRuleTemporaryFilesCleanup)
>
> With the test creating 2 diff indexes, that means each index gets an
> effective max open files limit of ~1024 files ... and with
> RandomSimilarity it might be leaving a lot of small segments on "disk" for
> both of those indexes -- which will have at least 100,000 docs in each
> because this is a nightly run
>
> I haven't tested this (my checkout is currently dirty and I'm in the middle of
> something) but I suspect the seed will reproduce anywhere.
>
> See also SOLR-10234 where I recently pointed out similar concerns about
> TestRuleTemporaryFilesCleanup's fixed limit of 2048 for the entire JVM,
> even when the JVM itself is trying to simulate multiple diff indexes (or
> completely distinct nodes in the solr cloud test case).
>
>
>
> : Date: Wed, 8 Mar 2017 11:29:45 -0500
> : From: Steve Rowe 
> : Reply-To: dev@lucene.apache.org
> : To: dev@lucene.apache.org
> : Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still
> : Unstable
> :
> :
> : > On Mar 8, 2017, at 8:38 AM, Apache Jenkins Server 
>  wrote:
> : >
> : > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/
> : >
> : > 2 tests failed.
> : > FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3
> : >
> : > Error Message:
> : > 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>  Too many open files
> : >
> : > Stack Trace:
> : > java.nio.file.FileSystemException: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>  Too many open files
> :
> : I logged in as the jenkins user on lucene1-us-west.apache.org (the ‘lucene' 
> jenkins slave), and ‘ulimit -aHS’ says (in part):
> :
> :open files  (-n) 1048576
> :
> : I think this is the maximum value.
> :
> : Not sure what can be done here?
> :
> : --
> : Steve
> : www.lucidworks.com
> :
> :
> : -
> : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> : For additional commands, e-mail: dev-h...@lucene.apache.org
> :
> :
>
> -Hoss
> http://www.lucidworks.com/
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org




[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_121) - Build # 769 - Unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/769/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:55967/solr;,   
"node_name":"127.0.0.1:55967_solr",   "state":"active",   
"leader":"true"}, "core_node2":{   
"core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:55962/solr;,   
"node_name":"127.0.0.1:55962_solr",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"http://127.0.0.1:55967/solr;,
  "node_name":"127.0.0.1:55967_solr",
  "state":"active",
  "leader":"true"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"http://127.0.0.1:55962/solr;,
  "node_name":"127.0.0.1:55962_solr",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([932B32B55E95B3CA:C37EAAB607B405D7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Chris Hostetter

The exception is being thrown by org.apache.lucene.mockfile.HandleLimitFS, 
so the OS-level ulimit isn't relevant (as long as it's greater than 2048, 
hardcoded in TestRuleTemporaryFilesCleanup) 

With the test creating 2 diff indexes, that means each index gets an 
effective max open files limit of ~1024 files ... and with 
RandomSimilarity it might be leaving a lot of small segments on "disk" for 
both of those indexes -- which will have at least 100,000 docs in each 
because this is a nightly run

I haven't tested this (my checkout is currently dirty and I'm in the middle of 
something) but I suspect the seed will reproduce anywhere.

See also SOLR-10234 where I recently pointed out similar concerns about 
TestRuleTemporaryFilesCleanup's fixed limit of 2048 for the entire JVM, 
even when the JVM itself is trying to simulate multiple diff indexes (or 
completely distinct nodes in the solr cloud test case).



: Date: Wed, 8 Mar 2017 11:29:45 -0500
: From: Steve Rowe 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still
: Unstable
: 
: 
: > On Mar 8, 2017, at 8:38 AM, Apache Jenkins Server 
 wrote:
: > 
: > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/
: > 
: > 2 tests failed.
: > FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3
: > 
: > Error Message:
: > 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
 Too many open files
: > 
: > Stack Trace:
: > java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
 Too many open files
: 
: I logged in as the jenkins user on lucene1-us-west.apache.org (the ‘lucene' 
jenkins slave), and ‘ulimit -aHS’ says (in part):
: 
:open files  (-n) 1048576
: 
: I think this is the maximum value.
: 
: Not sure what can be done here?
: 
: --
: Steve
: www.lucidworks.com
: 
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10248) Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the ObjectReleaseTracker.

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901526#comment-15901526
 ] 

ASF subversion and git services commented on SOLR-10248:


Commit e35881a63aa9de72cf5c539396266e0d0e676956 in lucene-solr's branch 
refs/heads/master from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e35881a ]

SOLR-10248: Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the 
ObjectReleaseTracker.


> Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the 
> ObjectReleaseTracker.
> 
>
> Key: SOLR-10248
> URL: https://issues.apache.org/jira/browse/SOLR-10248
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> Currently this leads to some code duplication / cruft and having to wait 
> independently for searchers first or second is not really nice (some objects 
> contain, some contained, some dependencies).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Commented] (SOLR-10248) Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the ObjectReleaseTracker.

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901528#comment-15901528
 ] 

ASF subversion and git services commented on SOLR-10248:


Commit 2692d08fd5779386e0c9e579739ab47a0cb2448b in lucene-solr's branch 
refs/heads/branch_6x from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2692d08 ]

SOLR-10248: Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the 
ObjectReleaseTracker.


> Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the 
> ObjectReleaseTracker.
> 
>
> Key: SOLR-10248
> URL: https://issues.apache.org/jira/browse/SOLR-10248
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> Currently this leads to some code duplication / cruft and having to wait 
> independently for searchers first or second is not really nice (some objects 
> contain, some contained, some dependencies).






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Robert Muir
The exception is not from the operating system: it is from the test framework.

https://github.com/apache/lucene-solr/blob/master/lucene/test-framework/src/java/org/apache/lucene/mockfile/HandleLimitFS.java#L48

This is currently limited to 2048. We should not increase it: the idea
is to catch buggy/crazy tests just like this one.

https://github.com/apache/lucene-solr/blob/master/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleTemporaryFilesCleanup.java#L117-L119
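[Editorial note] For illustration, here is a minimal, self-contained sketch of the idea behind that limit — a shared counter enforcing a JVM-wide budget of simulated open file handles. The class and method names are assumptions for this sketch only, not the actual HandleLimitFS internals:

```java
import java.io.UncheckedIOException;
import java.nio.file.FileSystemException;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: a shared counter caps the number of "open" handles, so a
// test that opens too many files fails with "Too many open files" exactly
// like the mock filesystem in the build report above.
class HandleBudget {
    private final int limit;
    private final AtomicInteger open = new AtomicInteger();

    HandleBudget(int limit) {
        this.limit = limit;
    }

    // Called when a file handle is opened; rejects the open once the
    // JVM-wide budget is exhausted.
    void onOpen(String path) {
        if (open.incrementAndGet() > limit) {
            open.decrementAndGet();
            throw new UncheckedIOException(
                new FileSystemException(path, null, "Too many open files"));
        }
    }

    // Called when a handle is closed, returning capacity to the budget.
    void onClose() {
        open.decrementAndGet();
    }

    int openCount() {
        return open.get();
    }
}
```

With a single budget of 2048 shared by the whole JVM, two indexes running in the same test naturally end up with roughly half the budget each, which is the effect Hoss describes above.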


On Wed, Mar 8, 2017 at 11:29 AM, Steve Rowe  wrote:
>
>> On Mar 8, 2017, at 8:38 AM, Apache Jenkins Server 
>>  wrote:
>>
>> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/
>>
>> 2 tests failed.
>> FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3
>>
>> Error Message:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>>  Too many open files
>>
>> Stack Trace:
>> java.nio.file.FileSystemException: 
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>>  Too many open files
>
> I logged in as the jenkins user on lucene1-us-west.apache.org (the ‘lucene' 
> jenkins slave), and ‘ulimit -aHS’ says (in part):
>
>open files  (-n) 1048576
>
> I think this is the maximum value.
>
> Not sure what can be done here?
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Created] (SOLR-10249) Allow index fetching to return a detailed result instead of a true/false value

2017-03-08 Thread Jeff Miller (JIRA)
Jeff Miller created SOLR-10249:
--

 Summary: Allow index fetching to return a detailed result instead 
of a true/false value
 Key: SOLR-10249
 URL: https://issues.apache.org/jira/browse/SOLR-10249
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: replication (java)
Affects Versions: 6.4.1
 Environment: Any
Reporter: Jeff Miller
Priority: Trivial
 Fix For: 6.4


This gives us the ability to see why a replication might have failed and act 
on it if we need to.  We use this enhancement for logging conditions so we can 
quantify what is happening with replication, get success rates, etc.

The idea is to create a public static inner class, IndexFetchResult, in 
IndexFetcher, with strings that hold the statuses that could occur while 
fetching an index.
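[Editorial note] A minimal sketch of what such a result object could look like. The class shape and the example statuses are assumptions based on the description above, not the committed API:

```java
// Hypothetical sketch: IndexFetcher would return one of these instead of a
// bare boolean, so callers can log *why* a fetch succeeded or failed.
class IndexFetchResult {
    private final String message;
    private final boolean successful;

    IndexFetchResult(String message, boolean successful) {
        this.message = message;
        this.successful = successful;
    }

    String getMessage() {
        return message;
    }

    boolean isSuccessful() {
        return successful;
    }

    // Example statuses, mirroring the "strings that hold statuses" idea
    // from the description; the wording here is illustrative only.
    static final IndexFetchResult FETCH_SUCCESS =
        new IndexFetchResult("Fetching latest index was successful", true);
    static final IndexFetchResult MASTER_VERSION_ZERO =
        new IndexFetchResult("Peer has no index yet; nothing to replicate", false);
}
```

A caller could then log `result.getMessage()` on failure instead of only knowing that a boolean came back false.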






Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Steve Rowe

> On Mar 8, 2017, at 8:38 AM, Apache Jenkins Server  
> wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/
> 
> 2 tests failed.
> FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3
> 
> Error Message:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>  Too many open files
> 
> Stack Trace:
> java.nio.file.FileSystemException: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
>  Too many open files

I logged in as the jenkins user on lucene1-us-west.apache.org (the ‘lucene' 
jenkins slave), and ‘ulimit -aHS’ says (in part):

   open files  (-n) 1048576

I think this is the maximum value.

Not sure what can be done here?

--
Steve
www.lucidworks.com





[jira] [Created] (SOLR-10248) Merge SolrTestCaseJ4's SolrIndexSearcher tracking into the ObjectReleaseTracker.

2017-03-08 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10248:
--

 Summary: Merge SolrTestCaseJ4's SolrIndexSearcher tracking into 
the ObjectReleaseTracker.
 Key: SOLR-10248
 URL: https://issues.apache.org/jira/browse/SOLR-10248
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller


Currently this leads to some code duplication / cruft and having to wait 
independently for searchers first or second is not really nice (some objects 
contain, some contained, some dependencies).






[jira] [Created] (SOLR-10247) Support non-numeric metrics

2017-03-08 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-10247:


 Summary: Support non-numeric metrics
 Key: SOLR-10247
 URL: https://issues.apache.org/jira/browse/SOLR-10247
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 
 Fix For: 6.5, master (7.0)


The Metrics API currently supports only numeric values. However, it's also 
useful to report non-numeric values such as version, disk type, component 
state, some system properties, etc.

Codahale {{Gauge}} metric type can be used for this purpose, and convenience 
methods can be added to {{SolrMetricManager}} to make it easier to use.
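[Editorial note] To illustrate the idea, here is a self-contained sketch using a locally defined interface in place of com.codahale.metrics.Gauge (so it compiles on its own): a gauge just computes its current value on demand, and nothing restricts that value to a number, so a String-valued gauge can report e.g. a version. The metric names below are made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: a stand-in for the Codahale Gauge, which is a single-method
// interface whose type parameter is unconstrained.
class GaugeSketch {
    interface Gauge<T> {
        T getValue();
    }

    // Builds a toy registry mixing numeric and non-numeric gauges.
    static Map<String, Gauge<?>> buildRegistry() {
        Map<String, Gauge<?>> registry = new HashMap<>();
        // Numeric gauge: the kind of value the metrics API reports today.
        registry.put("searcher.numDocs", (Gauge<Long>) () -> 100_000L);
        // Non-numeric gauges: the kind of value this issue proposes to allow.
        registry.put("solr.version", (Gauge<String>) () -> "6.5.0");
        registry.put("core.state", (Gauge<String>) () -> "ACTIVE");
        return registry;
    }
}
```

Since the real Codahale `Gauge` is likewise generic, supporting this mostly means convenience registration methods rather than a new metric type.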






[jira] [Commented] (SOLR-10237) Poly-Fields should error if subfield has docValues=true

2017-03-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901444#comment-15901444
 ] 

David Smiley commented on SOLR-10237:
-

bq. Maybe... I'm not totally sold. I think there are valid use cases for 
wanting to modify the returned list before adding it to the Document.

Remember Document is just a wrapper around an ArrayList.  A caller that wanted 
to manipulate the list could simply use a Document instance for a transient 
purpose; even re-using it by calling doc.clear().

bq. Not sure I follow, how would this refactor help?

It's not a necessity.  

> Poly-Fields should error if subfield has docValues=true
> ---
>
> Key: SOLR-10237
> URL: https://issues.apache.org/jira/browse/SOLR-10237
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10237.patch
>
>
> DocValues aren’t really supported in poly-fields at this point, but they 
> don’t complain if the schema definition of the subfield has docValues=true. 
> This leaves the index in an inconsistent state, since the SchemaField has 
> docValues=true but there are no DV for the field.
> Since this breaks compatibility, I think we should just emit a warning unless 
> the subfield is an instance of {{PointType}}. With 
> {{\[Int/Long/Float/Double/Date\]PointType}} subfields, this is particularly 
> important, since they use {{IndexOrDocValuesQuery}}, that would return 
> incorrect results.






[jira] [Resolved] (SOLR-10073) TestCoreDiscovery appears to be incompatible with custom ant test location properties that should be supported.

2017-03-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10073.

   Resolution: Duplicate
Fix Version/s: master (7.0)
   6.5

Looks like this was SOLR-10244

> TestCoreDiscovery appears to be incompatible with custom ant test location 
> properties that should be supported.
> ---
>
> Key: SOLR-10073
> URL: https://issues.apache.org/jira/browse/SOLR-10073
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>







[jira] [Resolved] (SOLR-10244) TestCoreDiscovery fails if you run it as root.

2017-03-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10244.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> TestCoreDiscovery fails if you run it as root.
> --
>
> Key: SOLR-10244
> URL: https://issues.apache.org/jira/browse/SOLR-10244
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>







[jira] [Commented] (SOLR-10244) TestCoreDiscovery fails if you run it as root.

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901432#comment-15901432
 ] 

ASF subversion and git services commented on SOLR-10244:


Commit fd661c667716dcb70d8aa4410b394ecbed819e22 in lucene-solr's branch 
refs/heads/branch_6x from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fd661c6 ]

SOLR-10244: TestCoreDiscovery fails if you run it as root.


> TestCoreDiscovery fails if you run it as root.
> --
>
> Key: SOLR-10244
> URL: https://issues.apache.org/jira/browse/SOLR-10244
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>







[jira] [Commented] (SOLR-10244) TestCoreDiscovery fails if you run it as root.

2017-03-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901428#comment-15901428
 ] 

ASF subversion and git services commented on SOLR-10244:


Commit 6a6e30329843a86de1063a2c8f324eb3f9dbfd91 in lucene-solr's branch 
refs/heads/master from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a6e303 ]

SOLR-10244: TestCoreDiscovery fails if you run it as root.


> TestCoreDiscovery fails if you run it as root.
> --
>
> Key: SOLR-10244
> URL: https://issues.apache.org/jira/browse/SOLR-10244
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>







[JENKINS-EA] Lucene-Solr-6.4-Linux (32bit/jdk-9-ea+159) - Build # 156 - Unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/156/
Java: 32bit/jdk-9-ea+159 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistributedQueueTest.testPeekElements

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([F9FFC156975F6965:4D17B7747663D78]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:178)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12397 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistributedQueueTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6438 - Still unstable!

2017-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6438/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestConfigSetImmutable

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores\core:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores\core

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores\core:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores\core
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009\cores
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\tempDir-009

at __randomizedtesting.SeedInfo.seed([70C4858613A2780C]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11842 lines...]
   [junit4] Suite: org.apache.solr.core.TestConfigSetImmutable
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestConfigSetImmutable_70C4858613A2780C-001\init-core-data-001
   [junit4]   2> 1302062 INFO  
(SUITE-TestConfigSetImmutable-seed#[70C4858613A2780C]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields
   [junit4]   2> 1302070 INFO  
(SUITE-TestConfigSetImmutable-seed#[70C4858613A2780C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1302073 INFO  
(TEST-TestConfigSetImmutable.testAddSchemaFieldsImmutable-seed#[70C4858613A2780C])
 [] o.a.s.SolrTestCaseJ4 ###Starting testAddSchemaFieldsImmutable
   [junit4]   2> 1302962 INFO  
(TEST-TestConfigSetImmutable.testAddSchemaFieldsImmutable-seed#[70C4858613A2780C])
 [] o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 1302962 INFO  

[jira] [Updated] (SOLR-9530) Add an Atomic Update Processor

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9530:
---
Attachment: SOLR-9530.patch

As per the discussion with Noble, I refactored the code to optimise it and 
remove unwanted elements.

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch, 
> SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.






[jira] [Resolved] (SOLR-10245) Error partial update location type

2017-03-08 Thread Silvestre Losada (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Silvestre Losada resolved SOLR-10245.
-
Resolution: Fixed

> Error partial update location type
> --
>
> Key: SOLR-10245
> URL: https://issues.apache.org/jira/browse/SOLR-10245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0
> Environment: Ubuntu 14.04
>Reporter: Silvestre Losada
>
> Hi, I have an issue with partial updates + the Solr location type.
> In my schema I have the following fields
>   multiValued="false"/>
> config is
>  multiValued="false"/>
>  subFieldSuffix="_coordinates"/>
> There is another field called numItems.
> I'm trying to do a partial update on numItems:
> curl http://10.14.0.30:8080/solr/core/update/json -d 
> '[{"Id":"1100543535","numItems":{"set":"8"}}]'
> After the update the field _coordinates has two values,
> and I get the following error:
> 1{"responseHeader":{"status":400,"QTime":3},"error":{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"ERROR:
>  [doc=1100543535] multiple values encountered for non multiValued field 
> Location_0_coordinates: [43.7501, 43.7501]","code":400}}
> I'm not updating that field, and if Solr makes some update internally I expect 
> it to do a set, not an add.






[jira] [Commented] (SOLR-10245) Error partial update location type

2017-03-08 Thread Silvestre Losada (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901364#comment-15901364
 ] 

Silvestre Losada commented on SOLR-10245:
-

Apologies for submitting it; I think it was set to true and I messed it up 
while migrating Solr versions. I was looking in the mailing lists and didn't 
see it. Thank you so much for your help.

> Error partial update location type
> --
>
> Key: SOLR-10245
> URL: https://issues.apache.org/jira/browse/SOLR-10245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0
> Environment: Ubuntu 14.04
>Reporter: Silvestre Losada
>
> Hi, I have an issue with partial updates + the Solr location type.
> In my schema I have the following fields
>   multiValued="false"/>
> config is
>  multiValued="false"/>
>  subFieldSuffix="_coordinates"/>
> There is another field called numItems.
> I'm trying to do a partial update on numItems:
> curl http://10.14.0.30:8080/solr/core/update/json -d 
> '[{"Id":"1100543535","numItems":{"set":"8"}}]'
> After the update the field _coordinates has two values,
> and I get the following error:
> 1{"responseHeader":{"status":400,"QTime":3},"error":{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"ERROR:
>  [doc=1100543535] multiple values encountered for non multiValued field 
> Location_0_coordinates: [43.7501, 43.7501]","code":400}}
> I'm not updating that field, and if Solr makes some update internally I expect 
> it to do a set, not an add.






[jira] [Comment Edited] (SOLR-10245) Error partial update location type

2017-03-08 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901332#comment-15901332
 ] 

Alexandre Rafalovitch edited comment on SOLR-10245 at 3/8/17 2:27 PM:
--

The dynamic field *\*_coordinates* should be *stored=false*. You have it set to 
true. If you change it back to false and reindex, the problem should go away.

The question is why it is true in your setup. The Solr examples have it as 
false, including useDocValuesAsStored="false" as well.

The root cause is that the content of the field is created from the other 
source. On update, when the document is reconstructed, the coordinates field 
gets its own stored value, and then a second copy of it when the parent location 
type splits into the coordinates fields internally.

was (Author: arafalov):
The Dynamic field *_coordinates should be *stored=false*. You have it set to 
true. If you change it back to false and reindex, the problem should go away.

The question is why is it true for your setup. The Solr examples have it as 
false, including useDocValuesAsStored="false" as well.

The root cause is that the content of the field is created from the other 
source, on update, when the document is reconstructed, the coordinates field 
gets its own stored value and then a second copy of it when the parent location 
type splits into the coordinates fields internally.

> Error partial update location type
> --
>
> Key: SOLR-10245
> URL: https://issues.apache.org/jira/browse/SOLR-10245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0
> Environment: Ubuntu 14.04
>Reporter: Silvestre Losada
>
> Hi, I have an issue with partial updates + the Solr location type.
> In my schema I have the following fields
>   multiValued="false"/>
> config is
>  multiValued="false"/>
>  subFieldSuffix="_coordinates"/>
> There is another field called numItems.
> I'm trying to do a partial update on numItems:
> curl http://10.14.0.30:8080/solr/core/update/json -d 
> '[{"Id":"1100543535","numItems":{"set":"8"}}]'
> After the update the field _coordinates has two values,
> And I get the following error
> 1{"responseHeader":{"status":400,"QTime":3},"error":{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"ERROR:
>  [doc=1100543535] multiple values encountered for non multiValued field 
> Location_0_coordinates: [43.7501, 43.7501]","code":400}}
> I'm not updating that field, and if Solr makes some update internally I expect 
> it to do a set, not an add.






[jira] [Commented] (SOLR-10245) Error partial update location type

2017-03-08 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901332#comment-15901332
 ] 

Alexandre Rafalovitch commented on SOLR-10245:
--

The Dynamic field *_coordinates should be *stored=false*. You have it set to 
true. If you change it back to false and reindex, the problem should go away.

The question is why is it true for your setup. The Solr examples have it as 
false, including useDocValuesAsStored="false" as well.

The root cause is that the content of the field is created from the other 
source, on update, when the document is reconstructed, the coordinates field 
gets its own stored value and then a second copy of it when the parent location 
type splits into the coordinates fields internally.
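
The fix described above can be sketched as a schema fragment. The subfield type 
name ({{tdouble}}) is an assumption, not taken from the reporter's schema; the 
key point is {{stored="false"}} on the coordinate subfields that the location 
type generates:

```xml
<!-- Hypothetical sketch: the *_coordinates subfields produced by the
     location type must not be stored, otherwise a partial update
     reconstructs the document with a duplicate stored coordinate. -->
<dynamicField name="*_coordinates" type="tdouble"
              indexed="true" stored="false"
              useDocValuesAsStored="false" multiValued="false"/>
```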

> Error partial update location type
> --
>
> Key: SOLR-10245
> URL: https://issues.apache.org/jira/browse/SOLR-10245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0
> Environment: Ubuntu 14.04
>Reporter: Silvestre Losada
>
> Hi, I have an issue with partial updates + the Solr location type.
> In my schema I have the following fields
>   multiValued="false"/>
> config is
>  multiValued="false"/>
>  subFieldSuffix="_coordinates"/>
> There is another field called numItems.
> I'm trying to do a partial update on numItems:
> curl http://10.14.0.30:8080/solr/core/update/json -d 
> '[{"Id":"1100543535","numItems":{"set":"8"}}]'
> After the update the field _coordinates has two values,
> And I get the following error
> 1{"responseHeader":{"status":400,"QTime":3},"error":{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"ERROR:
>  [doc=1100543535] multiple values encountered for non multiValued field 
> Location_0_coordinates: [43.7501, 43.7501]","code":400}}
> I'm not updating that field, and if Solr makes some update internally I expect 
> it to do a set, not an add.






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 303 - Still Unstable

2017-03-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/303/

2 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
 Too many open files

Stack Trace:
java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexSorting_4609011308FB57E6-001/tempDir-004/_e4_Lucene50_0.tim:
 Too many open files
at 
__randomizedtesting.SeedInfo.seed([4609011308FB57E6:E4D14FC96C097EE0]:0)
at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:197)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:166)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:202)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at 
org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2741)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:749)
at 
org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.&lt;init&gt;(BlockTreeTermsReader.java:153)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:445)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.&lt;init&gt;(PerFieldPostingsFormat.java:292)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372)
at 
org.apache.lucene.index.SegmentCoreReaders.&lt;init&gt;(SegmentCoreReaders.java:112)
at org.apache.lucene.index.SegmentReader.&lt;init&gt;(SegmentReader.java:74)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at 
org.apache.lucene.index.BufferedUpdatesStream$SegmentState.&lt;init&gt;(BufferedUpdatesStream.java:384)
at 
org.apache.lucene.index.BufferedUpdatesStream.openSegmentStates(BufferedUpdatesStream.java:416)
at 
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:261)
at 
org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3464)
at 
org.apache.lucene.index.IndexWriter.applyDeletesAndPurge(IndexWriter.java:4992)
at 
org.apache.lucene.index.DocumentsWriter$ApplyDeletesEvent.process(DocumentsWriter.java:717)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5042)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5033)
at 
org.apache.lucene.index.IndexWriter.deleteDocuments(IndexWriter.java:1509)
at 
org.apache.lucene.index.TestIndexSorting.testRandom3(TestIndexSorting.java:2237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 

[jira] [Resolved] (LUCENE-7695) Unknown query type SynonymQuery in ComplexPhraseQueryParser

2017-03-08 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved LUCENE-7695.
--
Resolution: Fixed

> Unknown query type SynonymQuery in ComplexPhraseQueryParser
> ---
>
> Key: LUCENE-7695
> URL: https://issues.apache.org/jira/browse/LUCENE-7695
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4
>Reporter: Markus Jelsma
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7695.patch, LUCENE-7695.patch, LUCENE-7695.patch, 
> LUCENE-7695.patch, LUCENE-7695.patch
>
>
> We sometimes receive this exception using ComplexPhraseQueryParser via Solr 
> 6.4.0. Some terms do fine, others don't.
> This query:
> {code}
> {!complexphrase}owmskern_title:"vergunning" 
> {code}
> returns results just fine. The next one:
> {code}
> {!complexphrase}owmskern_title:"vergunningen~"
> {code}
> Gives results as well! But this one:
> {code}
> {!complexphrase}owmskern_title:"vergunningen"
> {code}
> Returns the following exception:
> {code}
> IllegalArgumentException: Unknown query type 
> "org.apache.lucene.search.SynonymQuery" found in phrase query string 
> "algemene plaatselijke verordening"
> at 
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:313)
> at 
> org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:265)
> at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:684)
> at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:734)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:241)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1919)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1636)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:611)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:533)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
> {code}






[jira] [Commented] (SOLR-10246) Support grouped faceting for date field type

2017-03-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901307#comment-15901307
 ] 

Mikhail Khludnev commented on SOLR-10246:
-

group.facet is well known for slowness; the recommendation is to use a 
unique(id) aggregation in JSON facets instead. Given that, I can hardly imagine 
this will be developed; however, I'm not sure whether date facets are supported 
in the JSON Facets module.

> Support grouped faceting for date field type
> 
>
> Key: SOLR-10246
> URL: https://issues.apache.org/jira/browse/SOLR-10246
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Vitaly Lavrov
>
> According to documentation "Grouped faceting supports facet.field and 
> facet.range but currently doesn't support date and pivot faceting".
> Are there any plans to support dates?






[jira] [Commented] (LUCENE-7700) Move throughput control and merge aborting out of IndexWriter's core?

2017-03-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901248#comment-15901248
 ] 

Dawid Weiss commented on LUCENE-7700:
-

Ok, that was a trivial regression:
{code}
--- a/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
@@ -177,7 +177,7 @@ public abstract class MergePolicy {
 }

 final void setMergeThread(Thread owner) {
-  assert owner == null;
+  assert this.owner == null;
   this.owner = owner;
 }
   }
{code}
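
The regression is a classic parameter-shadowing bug. A minimal standalone 
illustration (class and method names here are simplified placeholders, not the 
actual Lucene code):

```java
// Simplified illustration of the shadowing bug fixed in the diff above.
public class MergeProgressSketch {
    private Thread owner;

    // Fixed form: `this.owner` refers to the field, so the assert
    // catches a second attempt to set the merge thread.
    void setMergeThread(Thread owner) {
        assert this.owner == null : "merge thread already set";
        this.owner = owner;
    }

    // Buggy form: without `this.`, `owner` resolves to the parameter,
    // so (with -ea) the assert fires on every call with a non-null
    // argument and never actually checks the field.
    void setMergeThreadBuggy(Thread owner) {
        // assert owner == null;  // compares the parameter with null
        this.owner = owner;
    }

    Thread getOwner() {
        return owner;
    }
}
```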

> Move throughput control and merge aborting out of IndexWriter's core?
> -
>
> Key: LUCENE-7700
> URL: https://issues.apache.org/jira/browse/LUCENE-7700
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7700.patch, LUCENE-7700.patch, LUCENE-7700.patch
>
>
> Here is a bit of a background:
> - I wanted to implement a custom merging strategy that would have a custom 
> i/o flow control (global),
> - currently, the CMS is tightly bound with a few classes -- MergeRateLimiter, 
> OneMerge, IndexWriter.
> Looking at the code it seems to me that everything with respect to I/O 
> control could be nicely pulled out into classes that explicitly control the 
> merging process, that is only MergePolicy and MergeScheduler. By default, one 
> could even run without any additional I/O accounting overhead (which is 
> currently in there, even if one doesn't use the CMS's throughput control).
> Such refactoring would also give a chance to nicely move things where they 
> belong -- job aborting into OneMerge (currently in RateLimiter), rate limiter 
> lifecycle bound to OneMerge (MergeScheduler could then use per-merge or 
> global accounting, as it pleases).
> Just a thought and some initial refactorings for discussion.






[jira] [Commented] (LUCENE-7700) Move throughput control and merge aborting out of IndexWriter's core?

2017-03-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901244#comment-15901244
 ] 

Dawid Weiss commented on LUCENE-7700:
-

I screwed up something in the latest patch because I'm getting assertion 
errors, will fix.

> Move throughput control and merge aborting out of IndexWriter's core?
> -
>
> Key: LUCENE-7700
> URL: https://issues.apache.org/jira/browse/LUCENE-7700
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7700.patch, LUCENE-7700.patch, LUCENE-7700.patch
>
>
> Here is a bit of a background:
> - I wanted to implement a custom merging strategy that would have a custom 
> i/o flow control (global),
> - currently, the CMS is tightly bound with a few classes -- MergeRateLimiter, 
> OneMerge, IndexWriter.
> Looking at the code it seems to me that everything with respect to I/O 
> control could be nicely pulled out into classes that explicitly control the 
> merging process, that is only MergePolicy and MergeScheduler. By default, one 
> could even run without any additional I/O accounting overhead (which is 
> currently in there, even if one doesn't use the CMS's throughput control).
> Such refactoring would also give a chance to nicely move things where they 
> belong -- job aborting into OneMerge (currently in RateLimiter), rate limiter 
> lifecycle bound to OneMerge (MergeScheduler could then use per-merge or 
> global accounting, as it pleases).
> Just a thought and some initial refactorings for discussion.






[jira] [Updated] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9838:
---
Attachment: (was: SOLR-9838.patch)

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9838.patch
>
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the {{&lt;field/&gt;}} has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Updated] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9838:
---
Attachment: SOLR-9838.patch

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9838.patch
>
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the {{&lt;field/&gt;}} has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Comment Edited] (LUCENE-7700) Move throughput control and merge aborting out of IndexWriter's core?

2017-03-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901091#comment-15901091
 ] 

Dawid Weiss edited comment on LUCENE-7700 at 3/8/17 11:06 AM:
--

Thanks for comments Mike!

bq. Looks like javadocs for the private MergeRateLimiter.maybePause method are 
stale?

Corrected. I also changed some internal comments concerning waits < 1ms. (these 
are
possible with the new locks API, but we still don't bother) and introduced some 
more informative constants where appropriate.

bq.Why are we creating MergeRateLimiter on init of MergeThread and then again 
in CMS.wrapForMerge? Seems like we could cast the current thread to MergeThread 
and get its already-created instance?

Good catch, corrected.

bq. Why not updateIOThrottle in the main CMS thread, not the merge thread? 
Else, I think we have an added delay, from when a backlog'd merge shows up, to 
when the already running merge threads kick up their IO throttle?

I admit I didn't try to understand all of the CMS's rate-limit logic as it was 
quite complex, so
I don't understand you exactly here. Start of the merge thread seemed like a 
sensible place to run the update IO throttle update, but I moved it back -- 
doesn't seem to hurt anything.

bq. Maybe add a comment to OneMergeProgress.owner and .setMergeThread that it's 
only used for catching misuse?

Done.

bq. Can we rename OneMergeProgress.pauseTimes -> pauseTimesNanos or NS?

Hehe... sure, sure. 

bq. You can just remove the //private final Directory mergeDirectory from IW.

Done.

bq. Hmm it looks like CFS building is still unthrottled?

Already discussed. 

Running tests now.


was (Author: dweiss):
Thanks for comments Mike!

bq. Looks like javadocs for the private MergeRateLimiter.maybePause method are 
stale?

Corrected. I also changed some internal comments concerning waits < 1ms. (these 
are
possible with the new locks API, but we still don't bother) and introduced some 
more informative constants where appropriate.

bq.Why are we creating MergeRateLimiter on init of MergeThread and then again 
in CMS.wrapForMerge? 
Seems like we could cast the current thread to MergeThread and get its 
already-created instance?

Good catch, corrected.

bq. Why not updateIOThrottle in the main CMS thread, not the merge thread? 
Else, I think we have 
an added delay, from when a backlog'd merge shows up, to when the already 
running merge threads kick up their IO throttle?

I admit I didn't try to understand all of the CMS's rate-limit logic as it was 
quite complex, so
I don't understand you exactly here. Start of the merge thread seemed like a 
sensible place to run 
the update IO throttle update, but I moved it back -- doesn't seem to hurt 
anything.

bq. Maybe add a comment to OneMergeProgress.owner and .setMergeThread that it's 
only used for catching misuse?

Done.

bq. Can we rename OneMergeProgress.pauseTimes -> pauseTimesNanos or NS?

Hehe... sure, sure. 

bq. You can just remove the //private final Directory mergeDirectory from IW.

Done.

bq. Hmm it looks like CFS building is still unthrottled?

Already discussed. 

Running tests now.

> Move throughput control and merge aborting out of IndexWriter's core?
> -
>
> Key: LUCENE-7700
> URL: https://issues.apache.org/jira/browse/LUCENE-7700
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7700.patch, LUCENE-7700.patch, LUCENE-7700.patch
>
>
> Here is a bit of a background:
> - I wanted to implement a custom merging strategy that would have a custom 
> i/o flow control (global),
> - currently, the CMS is tightly bound with a few classes -- MergeRateLimiter, 
> OneMerge, IndexWriter.
> Looking at the code it seems to me that everything with respect to I/O 
> control could be nicely pulled out into classes that explicitly control the 
> merging process, that is only MergePolicy and MergeScheduler. By default, one 
> could even run without any additional I/O accounting overhead (which is 
> currently in there, even if one doesn't use the CMS's throughput control).
> Such refactoring would also give a chance to nicely move things where they 
> belong -- job aborting into OneMerge (currently in RateLimiter), rate limiter 
> lifecycle bound to OneMerge (MergeScheduler could then use per-merge or 
> global accounting, as it pleases).
> Just a thought and some initial refactorings for discussion.






[jira] [Updated] (LUCENE-7700) Move throughput control and merge aborting out of IndexWriter's core?

2017-03-08 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-7700:

Attachment: LUCENE-7700.patch

Thanks for comments Mike!

bq. Looks like javadocs for the private MergeRateLimiter.maybePause method are 
stale?

Corrected. I also changed some internal comments concerning waits < 1ms. (these 
are
possible with the new locks API, but we still don't bother) and introduced some 
more informative constants where appropriate.

bq.Why are we creating MergeRateLimiter on init of MergeThread and then again 
in CMS.wrapForMerge? 
Seems like we could cast the current thread to MergeThread and get its 
already-created instance?

Good catch, corrected.

bq. Why not updateIOThrottle in the main CMS thread, not the merge thread? 
Else, I think we have 
an added delay, from when a backlog'd merge shows up, to when the already 
running merge threads kick up their IO throttle?

I admit I didn't try to understand all of the CMS's rate-limit logic as it was 
quite complex, so
I don't understand you exactly here. Start of the merge thread seemed like a 
sensible place to run 
the update IO throttle update, but I moved it back -- doesn't seem to hurt 
anything.

bq. Maybe add a comment to OneMergeProgress.owner and .setMergeThread that it's 
only used for catching misuse?

Done.

bq. Can we rename OneMergeProgress.pauseTimes -> pauseTimesNanos or NS?

Hehe... sure, sure. 

bq. You can just remove the //private final Directory mergeDirectory from IW.

Done.

bq. Hmm it looks like CFS building is still unthrottled?

Already discussed. 

Running tests now.

> Move throughput control and merge aborting out of IndexWriter's core?
> -
>
> Key: LUCENE-7700
> URL: https://issues.apache.org/jira/browse/LUCENE-7700
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7700.patch, LUCENE-7700.patch, LUCENE-7700.patch
>
>
> Here is a bit of a background:
> - I wanted to implement a custom merging strategy that would have a custom 
> i/o flow control (global),
> - currently, the CMS is tightly bound with a few classes -- MergeRateLimiter, 
> OneMerge, IndexWriter.
> Looking at the code it seems to me that everything with respect to I/O 
> control could be nicely pulled out into classes that explicitly control the 
> merging process, that is only MergePolicy and MergeScheduler. By default, one 
> could even run without any additional I/O accounting overhead (which is 
> currently in there, even if one doesn't use the CMS's throughput control).
> Such refactoring would also give a chance to nicely move things where they 
> belong -- job aborting into OneMerge (currently in RateLimiter), rate limiter 
> lifecycle bound to OneMerge (MergeScheduler could then use per-merge or 
> global accounting, as it pleases).
> Just a thought and some initial refactorings for discussion.






[jira] [Comment Edited] (SOLR-10076) Hiding keystore and truststore passwords from /admin/info/* outputs

2017-03-08 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901050#comment-15901050
 ] 

Mano Kovacs edited comment on SOLR-10076 at 3/8/17 10:41 AM:
-

Thank you [~markrmil...@gmail.com] for your comment.

bq. We probably want to push users towards configuring this in a way it's not 
on the command line though, right?
I agree that this is more like a workaround in the current state. It could also 
work as a second layer of protection if passwords being passed in command line. 
I would assume that getting the list of running processes on a server would 
require higher privileges than accessing the admin-ui, which suggests that the 
passwords should not be exposed there.
Also, the {{/admin/info/properties}} API would expose the passwords even if 
they were set differently.

bq. I know our start scripts recently still set some of this ssl stuff via the 
command line, but if that is still the case, we should fix that too.
Is there a jira for that? I would be happy looking into it.


was (Author: manokovacs):
Thank you [~markrmil...@gmail.com] for your comment.

bq. We probably want to push users towards configuring this in a way it's not 
on the command line though, right?
I agree that this is more like a workaround in the current state. It could also 
work as a second layer of protection if passwords being passed in command line. 
I would assume that getting the list of running processes on a server would 
require higher privileges than accessing the admin-ui, which suggests that the 
passwords should not be exposed there.

bq. I know our start scripts recently still set some of this ssl stuff via the 
command line, but if that is still the case, we should fix that too.
Is there a jira for that? I would be happy looking into it.

> Hiding keystore and truststore passwords from /admin/info/* outputs
> ---
>
> Key: SOLR-10076
> URL: https://issues.apache.org/jira/browse/SOLR-10076
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10076.patch
>
>
> Passing keystore and truststore password is done by system properties, via 
> cmd line parameter.
> As result, {{/admin/info/properties}} and {{/admin/info/system}} will print 
> out the received password.
> Proposing solution to automatically redact value of any system property 
> before output, containing the word {{password}}, and replacing its value with 
> {{**}}.
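> 
> The proposed redaction could be sketched roughly as follows (a standalone 
> sketch under the assumptions above, not the attached patch; names are 
> illustrative):
> {code}
> // Sketch: redact any property whose name contains "password".
> import java.util.Map;
> import java.util.TreeMap;
> 
> public class RedactSketch {
>     public static Map<String, String> redact(Map<String, String> props) {
>         Map<String, String> out = new TreeMap<>();
>         for (Map.Entry<String, String> e : props.entrySet()) {
>             boolean secret = e.getKey().toLowerCase().contains("password");
>             out.put(e.getKey(), secret ? "**" : e.getValue());
>         }
>         return out;
>     }
> }
> {code}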





