[JENKINS] Lucene-Solr-BadApples-NightlyTests-8.x - Build # 29 - Still Failing

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/29/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at 
org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at 

[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported
 * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs
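For illustration, a combined invocation could look like the sketch below. Only the {{-url}} flag is confirmed above; the other flag names ({{-format}}, {{-out}}, {{-query}}, {{-fields}}, {{-limit}}) and the field names are assumptions that simply mirror the option names listed here.
{code:java}
# Hedged sketch -- flag names other than -url are assumed from the option list above
bin/solr export -url http://localhost:8983/solr/gettingstarted \
  -format jsonl \
  -out gettingstarted.jsonl \
  -query "*:*" \
  -fields id,name \
  -limit -1
{code}
Passing {{-1}} for the limit would export the whole collection instead of the default 100 docs.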

h2. Importing using {{curl}}

importing json file
{code:java}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
{code}
importing javabin format file
{code:java}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin 
http://localhost:7574/solr/gettingstarted/update?commit=true
{code}

  was:
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported
 * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs

h2. Importing using {{curl}}

importing json file
{code:java}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code:java}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
>  * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin 
> http://localhost:7574/solr/gettingstarted/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-13682:
-

Assignee: Noble Paul

> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
>  * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902778#comment-16902778
 ] 

Noble Paul edited comment on SOLR-13682 at 8/9/19 4:49 AM:
---

bq. Perhaps optimize for the normal case of exporting a collection in the local 
cluster,

This is for the most common use case; the last part of the URL is the collection 
name. You only run everything from the local box when you are experimenting with 
Solr. Ideally, you will be running a cluster with a handful of nodes, and you 
would want to run the export on another machine where Solr is not running.
 bq. Also, consider making the default format jsonl 

OK
bq. and default output stdout 

That would be a bad experience; we are going to emit a few megabytes of data. We 
can add an extra option for that.
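
To make the point above concrete, a hedged sketch (host names are placeholders, not from this issue): run the export from a workstation where Solr is not running, pointed at a remote cluster, and then push the resulting file into another cluster with {{curl}} as already shown in the description.
{code:java}
# Sketch only: export from a machine that is not part of the cluster
bin/solr export -url http://solr-node1.example.com:8983/solr/gettingstarted

# Import the exported file into a different cluster (same pattern as the description)
curl -X POST -d @gettingstarted.json \
  "http://other-cluster.example.com:8983/solr/gettingstarted/update/json/docs?commit=true"
{code}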




was (Author: noble.paul):
bq. Perhaps optimize for the normal case of exporting a collection in the local 
cluster,

This is for the most common use case; the last part of the URL is the collection name.
 bq. Also, consider making the default format jsonl 

OK
bq. and default output stdout 

That would be a bad experience; we are going to emit a few megabytes of data. We 
can add an extra option for that.



> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
>  * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported
 * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs

h2. Importing using {{curl}}

importing json file
{code:java}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code:java}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}

  was:
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported

h2. Importing using {{curl}}

importing json file
{code:java}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code:java}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
>  * {{limit}} : number of docs; default is 100, pass {{-1}} to export all the docs
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903567#comment-16903567
 ] 

ASF subversion and git services commented on SOLR-13682:


Commit acc3e47218c11a26839ad912e600294db2d0fda8 in lucene-solr's branch 
refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=acc3e47 ]

SOLR-13682: refactored and cleaned up


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported

h2. Importing using {{curl}}

importing json file
{code:java}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code:java}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}

  was:
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name (if this starts with "http://" the output will be 
piped to that URL; can be used to pipe docs to another cluster)
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported

h2. Importing using {{curl}}
importing json file
{code}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
> h2. Importing using {{curl}}
> importing json file
> {code:java}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name (if this starts with "http://" the output will be 
piped to that URL; can be used to pipe docs to another cluster)
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported
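
A hedged sketch of the "pipe to another cluster" behaviour described for {{out}}; the {{-out}} flag name and the target host are assumptions, not confirmed by this issue:
{code:java}
# If the out value starts with http:// the exported docs are piped to that URL
bin/solr export -url http://localhost:8983/solr/gettingstarted \
  -out "http://other-host:8983/solr/gettingstarted/update/json/docs?commit=true"
{code}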

h2. Importing using {{curl}}
importing json file
{code}
curl -X POST -d @gettingstarted.json 
http://localhost:18983/solr/copy/update/json/docs?commit=true
{code}
importing javabin format file
{code}
curl -X POST --header "Content-Type: application/javabin" --data-binary 
@gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
{code}

  was:
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}

Additional options:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name (if this starts with "http://" the output will be 
piped to that URL; can be used to pipe docs to another cluster)
 * {{query}} : a custom query; default is {{*:*}}
 * {{fields}} : a comma-separated list of fields to be exported


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name (if this starts with "http://" the output will be 
> piped to that URL; can be used to pipe docs to another cluster)
>  * {{query}} : a custom query; default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
> h2. Importing using {{curl}}
> importing json file
> {code}
> curl -X POST -d @gettingstarted.json 
> http://localhost:18983/solr/copy/update/json/docs?commit=true
> {code}
> importing javabin format file
> {code}
> curl -X POST --header "Content-Type: application/javabin" --data-binary 
> @gettingstarted.javabin http://localhost:7574/solr/mycore/update?commit=true
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-12.0.1) - Build # 389 - Unstable!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/389/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.SystemCollectionCompatTest.testBackCompat

Error Message:
re-indexing warning not found

Stack Trace:
java.lang.AssertionError: re-indexing warning not found
at 
__randomizedtesting.SeedInfo.seed([CE07934BD70ED24F:BEF230E2B7C67B39]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SystemCollectionCompatTest.testBackCompat(SystemCollectionCompatTest.java:206)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 14902 lines...]
   [junit4] Suite: org.apache.solr.cloud.SystemCollectionCompatTest
   [junit4]   2> 3133107 INFO  
(SUITE-SystemCollectionCompatTest-seed#[CE07934BD70ED24F]-worker) [ ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & 

[JENKINS] Lucene-Solr-Tests-master - Build # 3525 - Failure

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3525/

All tests passed

Build Log:
[...truncated 64568 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1362994552
 [ecj-lint] Compiling 48 source files to /tmp/ecj1362994552
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2015:
 The following error occurred while executing this line:

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 175 - Failure

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/175/

7 tests failed.
FAILED:  
org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGens

Error Message:
java.nio.file.FileSystemException: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/lucene/build/core/test/J1/temp/lucene.index.TestDemoParallelLeafReader_626AD41089E8E3CA-001/tempDir-004/segs/blqi1gsgqm1zctd8kqawvsffw_159/_1.si:
 Too many open files

Stack Trace:
java.lang.RuntimeException: java.nio.file.FileSystemException: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/lucene/build/core/test/J1/temp/lucene.index.TestDemoParallelLeafReader_626AD41089E8E3CA-001/tempDir-004/segs/blqi1gsgqm1zctd8kqawvsffw_159/_1.si:
 Too many open files
at 
__randomizedtesting.SeedInfo.seed([626AD41089E8E3CA:7FA8799C96113D16]:0)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader$1.wrap(TestDemoParallelLeafReader.java:204)
at 
org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:62)
at 
org.apache.lucene.index.FilterDirectoryReader.(FilterDirectoryReader.java:91)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader.(TestDemoParallelLeafReader.java:196)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader.doWrapDirectoryReader(TestDemoParallelLeafReader.java:212)
at 
org.apache.lucene.index.FilterDirectoryReader.wrapDirectoryReader(FilterDirectoryReader.java:107)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:112)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:165)
at 
org.apache.lucene.index.ReaderManager.refreshIfNeeded(ReaderManager.java:105)
at 
org.apache.lucene.index.ReaderManager.refreshIfNeeded(ReaderManager.java:36)
at 
org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
at 
org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:225)
at 
org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGens(TestDemoParallelLeafReader.java:986)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[GitHub] [lucene-solr] shalinmangar closed pull request #777: SOLR-11724: Fix for 'Cdcr Bootstrapping does not cause ''index copying'' to follower nodes on Target' BUG

2019-08-08 Thread GitBox
shalinmangar closed pull request #777: SOLR-11724: Fix for 'Cdcr Bootstrapping 
does not cause ''index copying'' to follower nodes on Target' BUG 
URL: https://github.com/apache/lucene-solr/pull/777
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] shalinmangar commented on issue #777: SOLR-11724: Fix for 'Cdcr Bootstrapping does not cause ''index copying'' to follower nodes on Target' BUG

2019-08-08 Thread GitBox
shalinmangar commented on issue #777: SOLR-11724: Fix for 'Cdcr Bootstrapping 
does not cause ''index copying'' to follower nodes on Target' BUG 
URL: https://github.com/apache/lucene-solr/pull/777#issuecomment-519759928
 
 
   This is closed by the commit made in SOLR-13141


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11724:
-
Fix Version/s: (was: 8.0)
   Status: Resolved  (was: Patch Available)

This is fixed by SOLR-13141

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.4, 7.3.1
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index significant documents in to Source, stop indexing and then start 
> CDCR; bootstrapping only copies the index to leader node of shards of the 
> collection, and followers never receive the documents / index until and 
> unless atleast one document is inserted again on source; which propels to 
> target and target collection trigger index replication to followers.
> This behavior needs to be addressed in proper manner, either at target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-13141.
--
Resolution: Fixed

Thanks to everyone who reported and investigated this problem.

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication 
> wasnt working at all.txt, type 2 - only few documents were being 
> replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * a few minutes later I started CDCR on both DC
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}
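>
> For reference, a hedged reconstruction of the setup steps above using the Collections and CDCR APIs (host and port are placeholders; the parameter values come from the list above):
> {code:java}
> # Create the collection on each DC from the copied config set
> curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=customer&numShards=2&maxShardsPerNode=2&replicationFactor=2&collection.configName=_default_cdcr"
>
> # Disable the CDCR buffer, then start CDCR on both DCs
> curl "http://localhost:8983/solr/customer/cdcr?action=DISABLEBUFFER"
> curl "http://localhost:8983/solr/customer/cdcr?action=START"
> {code}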



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903518#comment-16903518
 ] 

ASF subversion and git services commented on SOLR-13141:


Commit f4dc168301cf5d3b582209c1a9420420ff1c3d64 in lucene-solr's branch 
refs/heads/branch_8x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f4dc168 ]

SOLR-13141: CDCR bootstrap does not replicate index to the replicas of target 
cluster.

The leader node on the target cluster will now increment its term after 
bootstrap succeeds so that all replicas of this leader are forced to recover 
and fetch the latest index from the leader.

(cherry picked from commit e59f41b6712b4feb9b810b34108a43281c33e515)
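
As a rough illustration of the mechanism described in the commit message (this is a toy model, not Solr's actual implementation; the class and method names below are invented for the sketch), bumping the leader's term leaves every replica with a lower term, and a lagging term is what marks a replica as needing recovery:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy model of per-replica shard terms -- NOT Solr's real implementation. */
public class ShardTermsSketch {
  private final Map<String, Long> terms = new ConcurrentHashMap<>();

  public void register(String coreName) {
    terms.putIfAbsent(coreName, 0L);
  }

  /** Called after a successful CDCR bootstrap on the target leader. */
  public void incrementLeaderTerm(String leaderCore) {
    terms.merge(leaderCore, 1L, Long::sum);
  }

  /** A replica whose term lags the leader's must recover and fetch the index. */
  public boolean needsRecovery(String replicaCore, String leaderCore) {
    return terms.getOrDefault(replicaCore, 0L) < terms.getOrDefault(leaderCore, 0L);
  }

  public static void main(String[] args) {
    ShardTermsSketch sketch = new ShardTermsSketch();
    sketch.register("leader");
    sketch.register("replica1");
    System.out.println(sketch.needsRecovery("replica1", "leader")); // false -- terms are equal
    sketch.incrementLeaderTerm("leader");                           // bootstrap succeeded
    System.out.println(sketch.needsRecovery("replica1", "leader")); // true -- replica must recover
  }
}
{code}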


> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication 
> wasnt working at all.txt, type 2 - only few documents were being 
> replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * a few minutes later I started CDCR on both DC
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903517#comment-16903517
 ] 

ASF subversion and git services commented on SOLR-13141:


Commit e59f41b6712b4feb9b810b34108a43281c33e515 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e59f41b ]

SOLR-13141: CDCR bootstrap does not replicate index to the replicas of target 
cluster.

The leader node on the target cluster will now increment its term after 
bootstrap succeeds so that all replicas of this leader are forced to recover 
and fetch the latest index from the leader.


> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication 
> wasnt working at all.txt, type 2 - only few documents were being 
> replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * a few minutes later I started CDCR on both DC
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903512#comment-16903512
 ] 

Shalin Shekhar Mangar commented on SOLR-13141:
--

Latest patch that increments the term only if the bootstrap was successful.

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication 
> wasnt working at all.txt, type 2 - only few documents were being 
> replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * a few minutes later I started CDCR on both DC
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13141:
-
Attachment: SOLR-13141.patch

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication 
> wasnt working at all.txt, type 2 - only few documents were being 
> replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * a few minutes later I started CDCR on both DC
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13141:
-
Fix Version/s: 8.3
   master (9.0)

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13141.patch, type 1 - replication wasnt working at 
> all.txt, type 2 - only few documents were being replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of the 
> {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb
>  ** dcb_node_1
>  ** dcb_node_2
> Then in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created collection "customer" on both DC from *_default_cdcr* config set 
> with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled cdcr buffer on collections
>  * I ran CDCR on both DC
> CDCR has started without errors in logs. During indexation I have encountered 
> problem [^type 2 - only few documents were being replicated.txt], restart 
> didn't help (documents has not been synchronized between DC )
> Then:
>  * I stopped CDCR on both DC
>  * I stopped all solr nodes
>  * I restarted zookeepers on both DC
>  * I started all solr nodes one by one
>  * few minutes later I stared CDCR on both DC
>  * CDCR has starded with errors (replication between DC is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that problems appears only in case, when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-13141:


Assignee: Shalin Shekhar Mangar

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux and has been reproduced by independent developers
>Reporter: Krzysztof Watral
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Attachments: SOLR-13141.patch, type 1 - replication wasnt working at 
> all.txt, type 2 - only few documents were being replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of 
> the {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb:
>  ** dcb_node_1
>  ** dcb_node_2
> Then, in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created the collection "customer" on both DCs from the *_default_cdcr* 
> config set with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled the CDCR buffer on the collections
>  * I ran CDCR on both DCs
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DCs
>  * I stopped all Solr nodes
>  * I restarted the ZooKeepers on both DCs
>  * I started all Solr nodes one by one
>  * a few minutes later I started CDCR on both DCs
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problems appear only when the 
> {{replicationFactor}} parameter is higher than one.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13141:
-
Summary: CDCR bootstrap does not replicate index to the replicas of target 
cluster  (was: replicationFactor param cause problems with CDCR)

> CDCR bootstrap does not replicate index to the replicas of target cluster
> -
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is a system-independent problem - it exists on Windows 
> and Linux and has been reproduced by independent developers
>Reporter: Krzysztof Watral
>Priority: Critical
> Attachments: SOLR-13141.patch, type 1 - replication wasnt working at 
> all.txt, type 2 - only few documents were being replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of 
> the {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb:
>  ** dcb_node_1
>  ** dcb_node_2
> Then, in sequence:
>  * I configured CDCR on a copy of the *_default* config set, named 
> *_default_cdcr*
>  * I created the collection "customer" on both DCs from the *_default_cdcr* 
> config set with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled the CDCR buffer on the collections
>  * I ran CDCR on both DCs
> CDCR started without errors in the logs. During indexing I encountered the 
> problem [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs).
> Then:
>  * I stopped CDCR on both DCs
>  * I stopped all Solr nodes
>  * I restarted the ZooKeepers on both DCs
>  * I started all Solr nodes one by one
>  * a few minutes later I started CDCR on both DCs
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problems appear only when the 
> {{replicationFactor}} parameter is higher than one.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8076 - Still Failing!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8076/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestPullReplicaErrorHandling.testPullReplicaDisconnectsFromZooKeeper

Error Message:
Didn't get expected doc count. Expected: 10, Found: 0

Stack Trace:
java.lang.AssertionError: Didn't get expected doc count. Expected: 10, Found: 0
at 
__randomizedtesting.SeedInfo.seed([661FC13AF972758C:7956E1B9DC987AEC]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.TestPullReplicaErrorHandling.assertNumDocs(TestPullReplicaErrorHandling.java:254)
at 
org.apache.solr.cloud.TestPullReplicaErrorHandling.assertNumDocs(TestPullReplicaErrorHandling.java:259)
at 
org.apache.solr.cloud.TestPullReplicaErrorHandling.testPullReplicaDisconnectsFromZooKeeper(TestPullReplicaErrorHandling.java:230)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:

[GitHub] [lucene-solr] MarcusSorealheis edited a comment on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin. [WIP]

2019-08-08 Thread GitBox
MarcusSorealheis edited a comment on issue #805: SOLR-13649 change the default 
behavior of the basic authentication plugin. [WIP]
URL: https://github.com/apache/lucene-solr/pull/805#issuecomment-519736169
 
 
   @janhoy I've appended Work In Progress to this PR's title because I'm facing 
a strange build error. It does not seem related:
   ``` [ecj-lint] --
[ecj-lint] 1. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
[ecj-lint]  import javax.naming.NamingException;
[ecj-lint] 
[ecj-lint] The type javax.naming.NamingException is not accessible
[ecj-lint] --
[ecj-lint] 2. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
[ecj-lint]  public class MockInitialContextFactory implements 
InitialContextFactory {
[ecj-lint]   ^
[ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
[ecj-lint] --
[ecj-lint] 3. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
[ecj-lint]  private final javax.naming.Context context;
[ecj-lint]
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 4. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
[ecj-lint]  context = mock(javax.naming.Context.class);
[ecj-lint]  ^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 5. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
[ecj-lint]  context = mock(javax.naming.Context.class);
[ecj-lint] 
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 6. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
[ecj-lint]  when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
[ecj-lint]   ^^^
[ecj-lint] context cannot be resolved
[ecj-lint] --
[ecj-lint] 7. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
[ecj-lint]  } catch (NamingException e) {
[ecj-lint]   ^^^
[ecj-lint] NamingException cannot be resolved to a type
[ecj-lint] --
[ecj-lint] 8. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
[ecj-lint]  public javax.naming.Context getInitialContext(Hashtable env) {
[ecj-lint] 
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 9. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
[ecj-lint]  return context;
[ecj-lint] ^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 9 problems (9 errors)
   
   BUILD FAILED
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/build.xml:101: 
The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/build.xml:651:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/common-build.xml:479:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2015:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.```
   
   I also saw similar errors on the lucene-dev mailing 

[GitHub] [lucene-solr] MarcusSorealheis commented on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin. [WIP]

2019-08-08 Thread GitBox
MarcusSorealheis commented on issue #805: SOLR-13649 change the default 
behavior of the basic authentication plugin. [WIP]
URL: https://github.com/apache/lucene-solr/pull/805#issuecomment-519736169
 
 
   @janhoy I've appended Work In Progress to this PR's title because I'm facing 
a strange build error. It does not seem related:
   ``` [ecj-lint] --
[ecj-lint] 1. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
[ecj-lint]  import javax.naming.NamingException;
[ecj-lint] 
[ecj-lint] The type javax.naming.NamingException is not accessible
[ecj-lint] --
[ecj-lint] 2. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
[ecj-lint]  public class MockInitialContextFactory implements 
InitialContextFactory {
[ecj-lint]   ^
[ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
[ecj-lint] --
[ecj-lint] 3. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
[ecj-lint]  private final javax.naming.Context context;
[ecj-lint]
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 4. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
[ecj-lint]  context = mock(javax.naming.Context.class);
[ecj-lint]  ^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 5. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
[ecj-lint]  context = mock(javax.naming.Context.class);
[ecj-lint] 
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 6. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
[ecj-lint]  when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
[ecj-lint]   ^^^
[ecj-lint] context cannot be resolved
[ecj-lint] --
[ecj-lint] 7. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
[ecj-lint]  } catch (NamingException e) {
[ecj-lint]   ^^^
[ecj-lint] NamingException cannot be resolved to a type
[ecj-lint] --
[ecj-lint] 8. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
[ecj-lint]  public javax.naming.Context getInitialContext(Hashtable env) {
[ecj-lint] 
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 9. ERROR in 
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
[ecj-lint]  return context;
[ecj-lint] ^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 9 problems (9 errors)
   
   BUILD FAILED
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/build.xml:101: 
The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/build.xml:651:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/common-build.xml:479:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2015:
 The following error occurred while executing this line:
   
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.```
   
   I also saw similar errors on the lucene-dev mailing 

[jira] [Commented] (SOLR-13399) compositeId support for shard splitting

2019-08-08 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903418#comment-16903418
 ] 

Yonik Seeley commented on SOLR-13399:
-

Ah, yep... splitByPrefix definitely should not be defaulting to true!  It ended 
up normally doing nothing (since id_prefix was normally not populated), but 
that changed when the last commit to use the indexed "id" field was added.  
I'll fix the default to be false.
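
For reference, a sketch of the described default change, against the SPLIT_BY_PREFIX check in SplitShardCmd.java; this shows only the direction of the fix, not an actual committed patch:

{noformat}
-    if (message.getBool(CommonAdminParams.SPLIT_BY_PREFIX, true)) {
+    if (message.getBool(CommonAdminParams.SPLIT_BY_PREFIX, false)) {
{noformat}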

> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch, 
> ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout). Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket: 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e. Lucene keeps track of 
> the doc count for each term already.)  Perhaps the implementation could be a 
> flag on the *id* field... something like *indexPrefixes* and poly-fields that 
> would cause the indexing to be automatically done and alleviate having to 
> pass in an additional field during indexing and during the call to 
> *SPLITSHARD*.  This whole part is an optimization though and could be split 
> off into its own issue if desired.
>  
>  
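
For illustration, a hedged SolrJ sketch of how a client might invoke SPLITSHARD with the proposed *splitByPrefix* parameter. The collection and shard names and the ZooKeeper address are placeholders; only the parameter name comes from this issue.

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

// Hedged sketch: call the Collections API SPLITSHARD action and pass the
// proposed splitByPrefix flag as a plain request parameter.
public class SplitByPrefixExample {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "SPLITSHARD");
    params.set("collection", "myCollection");   // placeholder collection
    params.set("shard", "shard1");              // placeholder shard
    params.set("splitByPrefix", "true");        // parameter proposed above

    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      client.request(new GenericSolrRequest(
          SolrRequest.METHOD.POST, "/admin/collections", params));
    }
  }
}
{code}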



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13399) compositeId support for shard splitting

2019-08-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903409#comment-16903409
 ] 

Hoss Man commented on SOLR-13399:
-

I would assume it's related to the (numSubShards) changes in SplitShardCmd?

At first glance, that code path looks like it's specific to SPLIT_BY_PREFIX, 
but apparently your previous commit has it defaulting to "true"? (see 
SplitShardCmd.java L212)
{noformat}
$ git show 19ddcfd282f3b9eccc50da83653674e510229960 -- 
core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java | cat
commit 19ddcfd282f3b9eccc50da83653674e510229960
Author: yonik 
Date:   Tue Aug 6 14:09:54 2019 -0400

SOLR-13399: ability to use id field for compositeId histogram

diff --git 
a/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java 
b/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java
index 4d623be..6c5921e 100644
--- 
a/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java
+++ 
b/solr/core/src/java/org/apache/solr/cloud/api/collections/SplitShardCmd.java
@@ -212,16 +212,14 @@ public class SplitShardCmd implements 
OverseerCollectionMessageHandler.Cmd {
   if (message.getBool(CommonAdminParams.SPLIT_BY_PREFIX, true)) {
 t = timings.sub("getRanges");
 
-log.info("Requesting split ranges from replica " + 
parentShardLeader.getName() + " as part of slice " + slice + " of collection "
-+ collectionName + " on " + parentShardLeader);
-
 ModifiableSolrParams params = new ModifiableSolrParams();
 params.set(CoreAdminParams.ACTION, 
CoreAdminParams.CoreAdminAction.SPLIT.toString());
 params.set(CoreAdminParams.GET_RANGES, "true");
 params.set(CommonAdminParams.SPLIT_METHOD, splitMethod.toLower());
 params.set(CoreAdminParams.CORE, parentShardLeader.getStr("core"));
-int numSubShards = message.getInt(NUM_SUB_SHARDS, 
DEFAULT_NUM_SUB_SHARDS);
-params.set(NUM_SUB_SHARDS, Integer.toString(numSubShards));
+// Only 2 is currently supported
+// int numSubShards = message.getInt(NUM_SUB_SHARDS, 
DEFAULT_NUM_SUB_SHARDS);
+// params.set(NUM_SUB_SHARDS, Integer.toString(numSubShards));
 
 {
   final ShardRequestTracker shardRequestTracker = 
ocmh.asyncRequestTracker(asyncId);
@@ -236,7 +234,7 @@ public class SplitShardCmd implements 
OverseerCollectionMessageHandler.Cmd {
 NamedList shardRsp = (NamedList)successes.getVal(0);
 String splits = (String)shardRsp.get(CoreAdminParams.RANGES);
 if (splits != null) {
-  log.info("Resulting split range to be used is " + splits);
+  log.info("Resulting split ranges to be used: " + splits + " 
slice=" + slice + " leader=" + parentShardLeader);
   // change the message to use the recommended split ranges
   message = message.plus(CoreAdminParams.RANGES, splits);
 }

{noformat}
 

 (I could be totally off base though -- I don't really understand 90% of what 
this test is doing, and the place where it fails doesn't seem to be trying to 
split into more than 2 subshards, so even if the SplitShardCmd changes I 
pointed out are buggy, I'm not sure why it would cause this particular failure)

 

> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch, 
> ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout). Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket: 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e. Lucene keeps track of 
> the doc 

[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903403#comment-16903403
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit fb93340bf110fbdf98237dc67fda1446f0a6894f in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fb93340 ]

SOLR-13105: More loading changes 2


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Namgyu Kim
+1 for issues@ and builds@
Thanks Jan :)


On Fri, Aug 9, 2019 at 3:02 AM, Jan Høydahl wrote:

> We already have commits@, so let’s go with issues@ and builds@
>
> Jan Høydahl
>
> On Aug 8, 2019, at 18:32, Namgyu Kim wrote:
>
> +1 to Jan's idea :D
> I think it's better to split the mailing list.
>
> But I have a small opinion about the name.
> Which one is better, the singular (build-issue) or the
> plural (builds-issues)?
> Personally, "build-issue" looks better because the names of our mailing
> lists are singular (not java-users but java-user).
>
> What do you think about this?
>
> On Thu, Aug 8, 2019 at 9:42 PM, Jan Høydahl wrote:
>
>> I'll let this email topic run over the weekend to attract more eyeballs.
>> Even if it's not a VOTE thread, feel free to add your +1 or -1's, and if
>> others also seem in favour of this idea then I'll start working on it next
>> week. To sum up what I believe to be the current consensus:
>>
>> A new issues@ list (announce only) for JIRA and Github notifications
>> A new build@ list (announce only) for Jenkins notifications
>>
>> Whether [Created|Resolved] mails for JIRA/PR should also go to Dev list
>> is still an open question. To help decide, here's the expected volume for
>> those (from reporter.apache.org):
>> 357 issues opened in JIRA, past quarter, 270 issues closed in JIRA, past
>> quarter, 155 PRs opened on GitHub, past quarter, 143 PRs closed on GitHub,
>> past quarter
>> …which sums up to about 300/month or 10/day.
>> An alternative to this could be some script that runs once a day and
>> emits ONE email per day with a digest with links to new/closed JIRAs and
>> PRs last 24h.
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> On Aug 8, 2019, at 14:15, Erick Erickson wrote:
>>
>> +1 to Jan’s idea of the bot-originated lists being announce only…..
>>
>> Personally I’ve been able to make some sense out of the messages by
>>
>> 1> switching to the mac mail client (not an option for others, I know).
>> It threads pretty well and for those topics where there are 10 replies I
>> only have to glance at one to see if I’m interested enough to pursue.
>>
>> 2> I have a _lot_ of filters set up.
>>
>> I have to admit that one of the motivations for moving to the mail
>> program on the mac was because gmail’s filters are such a disaster. Or I
>> just totally missed how to configure them. For instance, changing the order
>> of execution was impossible, so when I wanted to make a new filter execute
>> first I had to redefine the entire list…..
>>
>> On Aug 8, 2019, at 5:31 AM, Alexandre Rafalovitch 
>> wrote:
>>
>> I apply the following (gmail) rules, just in case it helps somebody.
>> With this combination, I am able to track human conversations
>> reasonably well.
>>
>> Human conversation:
>> Matches: from:(-g...@apache.org) subject:(-[jira]) list:<
>> dev.lucene.apache.org>
>> Do this: Skip Inbox, Apply label "ML/Lucene-dev"
>>
>> All JIRA issues, regardless of other filters
>> Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
>> Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam
>>
>> New JIRA issues (that I check to see if I want to track/comment before
>> I remove the label)
>> Matches: subject:("[Created]") list:()
>> Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
>> send it to Spam
>>
>> Updates on JIRA issues from me (I already know them)
>> Matches: from:(Alexandre Rafalovitch (JIRA) )
>> Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"
>>
>> All JIRA issues I am involved in or marked to track
>> Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
>> Do this: Skip Inbox, Apply label "Solr-Jiras"
>>
>> Delete JENKINS stuff, as I am currently not contributing
>> Matches: subject:([JENKINS]) list:()
>> Do this: Delete it
>>
>> Git emails that I am not really tracking right now, but do keep
>> Matches: from:(g...@apache.org) list:()
>> Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
>> Never send it to Spam
>>
>> Moderation emails I help with
>> Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
>> Do this: Skip Inbox, Apply label "Solr-Moderate"
>>
>> Matches: list:""
>> Do this: Skip Inbox, Apply label "ML/SolrUsers"
>>
>> Regards,
>>   Alex.
>>
>> On Wed, 7 Aug 2019 at 07:54, David Smiley 
>> wrote:
>>
>>
>> It's a problem.  I am mentoring a colleague who is stressed with the
>> prospect of keeping up with our community because of the volume of email,
>> and so it's a serious barrier to community involvement.  I too have email
>> filters to help me, and it took some time to work out a system.  We could
>> share our filter descriptions for this workflow?  I'm sure I could
>> learn from you all on your approaches, and new collaborators would
>> appreciate this advise.
>>
>> I think automated builds (Jenkins/CI) could warrant its own list.
>> Separate lists would make setting up email filters easier in general.
>>
>> I 

[jira] [Commented] (SOLR-13399) compositeId support for shard splitting

2019-08-08 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903395#comment-16903395
 ] 

Yonik Seeley commented on SOLR-13399:
-

Weird... I don't know how that commit could have caused a failure in 
ShardSplitTest, but I'll investigate.

> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch, 
> ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout). Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket: 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e. Lucene keeps track of 
> the doc count for each term already.)  Perhaps the implementation could be a 
> flag on the *id* field... something like *indexPrefixes* and poly-fields that 
> would cause the indexing to be automatically done and alleviate having to 
> pass in an additional field during indexing and during the call to 
> *SPLITSHARD*.  This whole part is an optimization though and could be split 
> off into its own issue if desired.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Jan Høydahl


> Maybe just JIRA is fine here since it's the master place holding the issue 
> status. 
+1

> we could get the dev list subscribed for a daily summary email.  WDYT?
This is quick to test out and would probably work well.

Or perhaps, as Alexandre does with his filtering, only subscribe to [Created] 
from Jira; there are 4-5 per day, which is not that bad, and then it is easier to 
scan your mailbox for interesting subject lines for issues to “follow” directly?

Jan
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13399) compositeId support for shard splitting

2019-08-08 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13399:

Attachment: ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt
Status: Reopened  (was: Reopened)


git bisect has identified 19ddcfd282f3b9eccc50da83653674e510229960 as the cause 
of recent (reproducible) Jenkins test failures in ShardSplitTest...

https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-NightlyTests-8.x/174/
https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-repro/3507/

(Jenkins found the failures on branch_8x, but I was able to reproduce the same 
exact seed on master, and used that branch for bisecting.  Attaching logs from 
my local master run.)

{noformat}
ant test -Dtestcase=ShardSplitTest -Dtests.method=test 
-Dtests.seed=AE04B5C9BA6E9A4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true  -Dtests.locale=sr-Latn 
-Dtests.timezone=Etc/GMT-11 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
{noformat}

{noformat}
   [junit4] FAILURE  273s J2 | ShardSplitTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Wrong doc count on 
shard1_0. See SOLR-5309 expected:<257> but was:<316>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([AE04B5C9BA6E9A4:82B47486355A845C]:0)
   [junit4]>at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:1002)
   [junit4]>at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:794)
   [junit4]>at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:111)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}


> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch, 
> ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout). Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket: 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e. Lucene keeps track of 
> the doc count for each term already.)  Perhaps the implementation could be a 
> flag on the *id* field... something like *indexPrefixes* and poly-fields that 
> would cause the indexing to be automatically done and alleviate having to 
> pass in an additional field during indexing and during the call to 
> *SPLITSHARD*.  This whole part is an optimization though and could be split 
> off into its own issue if desired.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13684) failOnVersionConflicts param should be added to the DistributedUpdateProcessor whitelist params

2019-08-08 Thread Noble Paul (JIRA)
Noble Paul created SOLR-13684:
-

 Summary: failOnVersionConflicts param should be added to the 
DistributedUpdateProcessor whitelist params
 Key: SOLR-13684
 URL: https://issues.apache.org/jira/browse/SOLR-13684
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul


This param should be passed on to the leader when forwarding 
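
For illustration, a hedged SolrJ sketch of a client sending this param with an update; the point of the issue is that a non-leader node receiving such a request should forward the param to the leader. The collection name, document fields, and ZooKeeper address are placeholders.

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

// Hedged sketch: attach failOnVersionConflicts to an update request so that
// whichever node receives it can pass the param on to the shard leader.
public class FailOnVersionConflictsExample {
  public static void main(String[] args) throws Exception {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");                 // placeholder document
    doc.addField("_version_", 12345L);       // optimistic-concurrency version

    UpdateRequest req = new UpdateRequest();
    req.setParam("failOnVersionConflicts", "false");
    req.add(doc);

    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      req.process(client, "myCollection");   // placeholder collection
    }
  }
}
{code}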



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-08 Thread GitBox
dsmiley commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519654862
 
 
   > Does the indexed data contain Lucene version info about which version 
created the index? If it does, we can use old code to search the old indexed 
data.
   
   It does contain the version of Lucene, but not the Version used in analysis 
components (see class Version.java).  In a sense the SpatialStrategy impls play 
the role of an analysis component (& more).  In order for a user of Lucene (e.g. 
Solr and its users) to be able to use an existing index when they upgrade 
Lucene, a user can configure a specific version for analysis components 
consistent with the version that was used to write the index (it will get set 
via Analyzer.setVersion(...)).  If this is Solr, then this is set via 
luceneMatchVersion; raw Lucene usage requires explicit control here.
   
   There has yet to be a SpatialStrategy that tweaked its operation based on 
the Version; the only change or two that come to mind were backwards 
compatible (e.g. fewer needless terms).  So there's no trace of Version usage 
in spatial-extras as of yet.
   
   I know this is kinda a pain to concern yourself with.  
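
For illustration, a minimal sketch of that explicit control in raw Lucene usage, assuming a StandardAnalyzer and an arbitrary 8.x version constant (Solr does the equivalent via luceneMatchVersion):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

// Hedged sketch: pin an analysis component to the Lucene version that wrote
// the existing index; LUCENE_8_0_0 is only illustrative.
public class PinAnalyzerVersion {
  public static void main(String[] args) {
    Analyzer analyzer = new StandardAnalyzer();
    analyzer.setVersion(Version.LUCENE_8_0_0);
    System.out.println("Analyzer pinned to " + analyzer.getVersion());
  }
}
```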


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread David Smiley
On Thu, Aug 8, 2019 at 2:42 PM Jan Høydahl  wrote:

> ...
> Whether [Created|Resolved] mails for JIRA/PR should also go to Dev list is
> still an open question. To help decide, here's the expected volume for
> those (from reporter.apache.org):
> 357 issues opened in JIRA, past quarter, 270 issues closed in JIRA, past
> quarter, 155 PRs opened on GitHub, past quarter, 143 PRs closed on GitHub,
> past quarter
> …which sums up to about 300/month or 10/day.
> An alternative to this could be some script that runs once a day and emits
> ONE email per day with a digest with links to new/closed JIRAs and PRs last
> 24h.
>
>
The summary script sounds great to me. Maybe just JIRA is fine here since
it's the master place holding the issue status.  JIRA supports creating
saved searches called "Filters" and you can add an email based
"Subscription" to them.  I was just exploring this option.  We could create
a saved filter and document to interested users on how to subscribe
themselves, and/or we could get the dev list subscribed for a daily summary
email.  WDYT?


[jira] [Comment Edited] (LUCENE-8369) Remove the spatial module as it is obsolete

2019-08-08 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903204#comment-16903204
 ] 

Nicholas Knize edited comment on LUCENE-8369 at 8/8/19 6:14 PM:


As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module and maintain package private visibility on dependency 
classes
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; keep core dependency class visibility package 
private and use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to 
expose package private classes to the spatial module?

I introduce the third option here because (a) I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) into 
the spatial module and maintaining proper API visibility, and (b) with the 
move to Java 11 we can introduce the Java Platform Module System to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?

We could also do 2. above for the next Lucene 8.x release, and explore the 
module option as a separate issue for the 9.0 release?


was (Author: nknize):
As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module and maintain package private visibility on dependency 
classes
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; keep core dependency class visibility package 
private and use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to 
expose package private classes to the spatial module?

I introduce the third option here because a. I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) to 
the spatial module while maintaining proper API visibility, and b. with the 
move to Java 11 we can introduce the Java Platform Module system to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8369) Remove the spatial module as it is obsolete

2019-08-08 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903204#comment-16903204
 ] 

Nicholas Knize edited comment on LUCENE-8369 at 8/8/19 6:12 PM:


As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module and maintain package private visibility on dependency 
classes
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; keep core dependency class visibility package 
private and use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to 
expose package private classes to the spatial module?

I introduce the third option here because (a) I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) into 
the spatial module and maintaining proper API visibility, and (b) with the 
move to Java 11 we can introduce the Java Platform Module System to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?


was (Author: nknize):
As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module and maintain package private visibility on dependency 
classes
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; leave core dependency class visibility alone and 
use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to expose package 
private classes to the spatial module?

I introduce the third option here because a. I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) to 
the spatial module while maintaining proper API visibility, and b. with the 
move to Java 11 we can introduce the Java Platform Module system to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8369) Remove the spatial module as it is obsolete

2019-08-08 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903204#comment-16903204
 ] 

Nicholas Knize edited comment on LUCENE-8369 at 8/8/19 6:11 PM:


As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module and maintain package private visibility on dependency 
classes
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; leave core dependency class visibility alone and 
use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to expose package 
private classes to the spatial module?

I introduce the third option here because (a) I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) into 
the spatial module and maintaining proper API visibility, and (b) with the 
move to Java 11 we can introduce the Java Platform Module System to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?


was (Author: nknize):
As a side note: the two classes in the spatial module are no longer used and 
can be removed; leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; leave core dependency class visibility alone and 
use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to expose package 
private classes to the spatial module?

I introduce the third option here because a. I think it might strike a nice 
balance between separating "esoteric" shape features (whatever that means) to 
the spatial module while maintaining proper API visibility, and b. with the 
move to Java 11 we can introduce the Java Platform Module system to achieve 
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System but I seem to 
recall a conversation around this topic a few years back when Java 9 was 
released?

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8369) Remove the spatial module as it is obsolete

2019-08-08 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903204#comment-16903204
 ] 

Nicholas Knize commented on LUCENE-8369:


As a side note: the two classes in the spatial module are no longer used and
can be removed, leaving the spatial module empty.

So it sounds like we're converging on three options then:

1. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to core; 
delete spatial module
2. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module; make dependency classes in core public and label w/ _@lucene.internal_
3. Move {{LatLonShape}}, {{XYShape}}, queries and supporting classes to spatial 
module for Lucene 9 release; leave core dependency class visibility alone and 
use [java modules|https://www.jcp.org/en/jsr/detail?id=376] to expose package 
private classes to the spatial module?

I introduce the third option here because (a) I think it might strike a nice
balance between separating "esoteric" shape features (whatever that means) into
the spatial module and maintaining proper API visibility, and (b) with the
move to Java 11 we can introduce the Java Platform Module System to achieve
proper visibility.

I'll admit I'm no expert when it comes to the Java Module System, but I seem to
recall a conversation around this topic a few years back, when Java 9 was
released.
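
To make option 3 concrete, here is a minimal module-info.java sketch of a
qualified export. The module and package names are placeholders, not Lucene's
actual layout, and note that a qualified export makes a package's public types
readable only by the named module; the dependency classes would still be public
within their package, just not exported to everyone.

{code:java}
// Hypothetical module descriptor for lucene-core; all names are placeholders.
module org.apache.lucene.core {
    // Regular export: readable by every module.
    exports org.apache.lucene.document;

    // Qualified export: the internal geo utilities are readable only by the
    // spatial module, keeping them off the general public API surface.
    exports org.apache.lucene.internal.geo to org.apache.lucene.spatial;
}
{code}

The spatial module would then declare {{requires org.apache.lucene.core;}} in
its own descriptor.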

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Jan Høydahl
We already have commits@, so let’s go with issues@ and builds@

Jan Høydahl

> On 8 Aug 2019, at 18:32, Namgyu Kim wrote:
> 
> +1 to Jan's idea :D
> I think it's better to split the mailing list.
> 
> But I have a little opinion about the name.
> Which one is better, the singular(build-issue) or the plural(builds-issues)?
> Personally, "build-issue" looks better because names of the our mailing list 
> are used as a singular. (not jave-users but java-user)
> 
> What do you think about this?
> 
> On Thu, 8 Aug 2019 at 9:42 PM, Jan Høydahl wrote:
>> I'll let this email topic run over the weekend to attract more eyeballs. 
>> Even if it's not a VOTE thread, feel free to add your +1 or -1's, and if 
>> others also seem in favour of this idea then I'll start working on it next 
>> week. To sum up what I believe to be the current consensus:
>> 
>> A new issues@ list (announce only) for JIRA and Github notifications
>> A new build@ list (announce only) for Jenkins notifications
>> 
>> Whether [Created|Resolved] mails for JIRA/PR should also go to Dev list is 
>> still an open question. To help decide, here's the expected volume for those 
>> (from reporter.apache.org):
>> 357 issues opened in JIRA, past quarter, 270 issues closed in JIRA, past 
>> quarter, 155 PRs opened on GitHub, past quarter, 143 PRs closed on GitHub, 
>> past quarter
>> …which sums up to about 300/month or 10/day.
>> An alternative to this could be some script that runs once a day and emits 
>> ONE email per day with a digest with links to new/closed JIRAs and PRs last 
>> 24h.
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> On 8 Aug 2019, at 14:15, Erick Erickson wrote:
>>> 
>>> +1 to Jan’s idea of the bot-originated lists be announce only…..
>>> 
>>> Personally I’ve been able to make some sense out of the messages by
>>> 
>>> 1> switching to the mac mail client (not an option for others, I know). It 
>>> threads pretty well and for those topics where there are 10 replies I only 
>>> have to glance at one to see if I’m interested enough to pursue.
>>> 
>>> 2> I have a _lot_ of filters set up.
>>> 
>>> I have to admit that one of the motivations for moving to the mail program 
>>> on the mac was because gmail’s filters are such a disaster. Or I just 
>>> totally missed how to configure them. For instance, changing the order of 
>>> execution was impossible, so when I wanted to make a new filter execute 
>>> first I had to redefine the entire list…..
>>> 
 On Aug 8, 2019, at 5:31 AM, Alexandre Rafalovitch  
 wrote:
 
 I apply the following (gmail) rules, just in case it helps somebody.
 With this combination, I am able to track human conversations
 reasonably well.
 
 Human conversation:
 Matches: from:(-g...@apache.org) subject:(-[jira]) 
 list:
 Do this: Skip Inbox, Apply label "ML/Lucene-dev"
 
 All JIRA issues, regardless of other filters
 Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
 Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam
 
 New JIRA issues (that I check to see if I want to track/comment before
 I remove the label)
 Matches: subject:("[Created]") list:()
 Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
 send it to Spam
 
 Updates on JIRA issues from me (I already know them)
 Matches: from:(Alexandre Rafalovitch (JIRA) )
 Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"
 
 All JIRA issues I am involved in or marked to track
 Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
 Do this: Skip Inbox, Apply label "Solr-Jiras"
 
 Delete JENKINS stuff, as I am currently not contributing
 Matches: subject:([JENKINS]) list:()
 Do this: Delete it
 
 Git emails that I am not really tracking right now, but do keep
 Matches: from:(g...@apache.org) list:()
 Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
 Never send it to Spam
 
 Moderation emails I help with
 Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
 Do this: Skip Inbox, Apply label "Solr-Moderate"
 
 Matches: list:""
 Do this: Skip Inbox, Apply label "ML/SolrUsers"
 
 Regards,
   Alex.
 
> On Wed, 7 Aug 2019 at 07:54, David Smiley  
> wrote:
> 
> It's a problem.  I am mentoring a colleague who is stressed with the 
> prospect of keeping up with our community because of the volume of email, 
> and so it's a serious barrier to community involvement.  I too have email 
> filters to help me, and it took some time to work out a system.  We could 
> share our filter descriptions for this workflow?  I'm sure I could 
> learn from you all on your approaches, and new collaborators would 
> appreciate this advice.
> 
> I think automated builds (Jenkins/CI) could warrant its own list.  

[jira] [Commented] (SOLR-13683) SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers

2019-08-08 Thread Niranjan Nanda (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903143#comment-16903143
 ] 

Niranjan Nanda commented on SOLR-13683:
---

[~shalinmangar] About the builder method {{withHttpClient}} - is there a chance 
to enhance it in a future release? If so, any timeline?

Regarding other suggestions,
 * We are planning to use {{CloudHttp2SolrClient}}. I believe this uses Jetty's 
HTTP client APIs instead of Apache's.
 * Our use case is indexing documents into Solr; hence, we do not use the 
{{SolrRequest}} APIs. We use the {{SolrClient.add(collectionName, 
Collection)}} API. Do you have any documentation that shows how 
to use the {{SolrRequest}} API for indexing use cases?

Regardless, I think it's easier to set the custom headers on the 
underlying HTTP client (Apache/Jetty) instead of setting them for every request.
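
For reference, below is a minimal sketch of indexing through the
{{SolrRequest}} API with SolrJ 8.x; {{UpdateRequest}} is the {{SolrRequest}}
used for indexing, and it also lets you attach Basic Auth per request. The URL,
collection name, field names and credentials are placeholders, and this is a
sketch rather than an official example.

{code:java}
import java.util.Collections;

import org.apache.solr.client.solrj.impl.CloudHttp2SolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class IndexWithSolrRequest {
  public static void main(String[] args) throws Exception {
    try (CloudHttp2SolrClient client = new CloudHttp2SolrClient.Builder(
        Collections.singletonList("http://localhost:8983/solr")).build()) {

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      doc.addField("title_s", "hello");

      UpdateRequest req = new UpdateRequest();            // the SolrRequest used for indexing
      req.add(doc);
      req.setBasicAuthCredentials("solr", "SolrRocks");   // per-request auth header
      req.process(client, "gettingstarted");              // send the update to the collection
      client.commit("gettingstarted");
    }
  }
}
{code}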

> SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers
> -
>
> Key: SOLR-13683
> URL: https://issues.apache.org/jira/browse/SOLR-13683
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.1.1
>Reporter: Niranjan Nanda
>Priority: Minor
>
> Currently {{Http2SolrClient}} does not allow configuring custom headers. For 
> example, how to pass Basic Auth headers? It should expose some builder APIs 
> to pass such headers.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8075 - Still Failing!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8075/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2066 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\temp\junit4-J0-20190808_141104_9577461703520743135139.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\temp\junit4-J1-20190808_141104_95612902558224837077922.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 313 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\temp\junit4-J0-20190808_142047_0766714600636623149621.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 6 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\temp\junit4-J1-20190808_142047_0766791261346260703501.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 1090 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\temp\junit4-J1-20190808_142242_3563158561808837747548.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\temp\junit4-J0-20190808_142242_3565021569213567341019.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 246 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\icu\test\temp\junit4-J0-20190808_142631_67617433478504905919538.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\icu\test\temp\junit4-J1-20190808_142631_67613549453212357073064.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 216 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J0-20190808_142654_28013744995092965040879.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J1-20190808_142654_28015211627759356937701.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 155 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[jira] [Commented] (SOLR-13579) Create resource management API

2019-08-08 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903131#comment-16903131
 ] 

Andrzej Bialecki  commented on SOLR-13579:
--

Thanks for the review:
 # Yes, it was a bug - there was a missing conditional that checked whether the 
other pool of the same type already has this component.
 # Definitely, but the API is still in flux - I'll add it once the API is 
somewhat stabilized.
 # Not yet - again, it requires declaring commands and parameters in a separate 
JSON file, which at this point I think is premature while the implementation 
keeps changing.

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, 
> SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, 
> SOLR-13579.patch
>
>
> Resource management framework API supporting the goals outlined in SOLR-13578.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8934) Move Nori DictionaryBuilder tool from src/tools to src/

2019-08-08 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim resolved LUCENE-8934.

   Resolution: Fixed
Fix Version/s: master (9.0)
   8.x

> Move Nori DictionaryBuilder tool from src/tools to src/
> ---
>
> Key: LUCENE-8934
> URL: https://issues.apache.org/jira/browse/LUCENE-8934
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> After LUCENE-8904, the tests for Nori's tools are not run by the normal test 
> target ({{ant test}}).
> As with Kuromoji (before LUCENE-8871), we need to run {{ant test-tools}} 
> to test Nori's tools.
> Like Kuromoji, the tools can be covered by the normal test target once they 
> are moved into Nori's main source tree.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8912) Remove ICU dependency of nori tools/test-tools

2019-08-08 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8912:
---
Fix Version/s: master (9.0)
   8.x

> Remove ICU dependency of nori tools/test-tools
> --
>
> Key: LUCENE-8912
> URL: https://issues.apache.org/jira/browse/LUCENE-8912
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {quote}After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.
> {quote}
> As mentioned in LUCENE-8904, I am proceeding with this work now.
> It is what [~rcmuir] found first (LUCENE-8866), and I am just applying it to Nori.
> Nori doesn't need the ICU library because, like Kuromoji, it uses Normalizer2 
> only for NFKC normalization.
> I think it's OK to remove the library dependency because this can be handled 
> by the JDK.
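
As an illustration of the JDK replacement (a small sketch, not code taken from
the patch itself), NFKC normalization is available via {{java.text.Normalizer}}:

{code:java}
import java.text.Normalizer;

public class NfkcExample {
  public static void main(String[] args) {
    // Half-width katakana and a circled digit, as found in CJK dictionary data.
    String input = "ｱﾊﾟｰﾄ①";
    // The JDK performs NFKC normalization without any ICU dependency.
    String nfkc = Normalizer.normalize(input, Normalizer.Form.NFKC);
    System.out.println(nfkc); // prints the full-width form followed by "1"
  }
}
{code}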



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8912) Remove ICU dependency of nori tools/test-tools

2019-08-08 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim resolved LUCENE-8912.

Resolution: Fixed

> Remove ICU dependency of nori tools/test-tools
> --
>
> Key: LUCENE-8912
> URL: https://issues.apache.org/jira/browse/LUCENE-8912
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {quote}After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.
> {quote}
> As mentioned in LUCENE-8904, I am proceeding with this work now.
> It is what [~rcmuir] found first (LUCENE-8866), and I am just applying it to Nori.
> Nori doesn't need the ICU library because, like Kuromoji, it uses Normalizer2 
> only for NFKC normalization.
> I think it's OK to remove the library dependency because this can be handled 
> by the JDK.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8904) Enhance Nori DictionaryBuilder tool

2019-08-08 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim resolved LUCENE-8904.

Resolution: Fixed

> Enhance Nori DictionaryBuilder tool
> ---
>
> Key: LUCENE-8904
> URL: https://issues.apache.org/jira/browse/LUCENE-8904
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> It is the Nori version of [~sokolov]'s LUCENE-8863.
>  This patch has two changes.
>  1) Improve exception handling
>  2) Enable external dictionary for testing
> Overall, it is the same as LUCENE-8863.
> But there are some differences between Nori and Kuromoji.
> These lead to slight differences in the code.
> 1) CSV field size
> Nori : 12
> Kuromoji : 13
> 2) left context ID == right context ID
> Nori : can be different
> Kuromoji : always same
> 3) Dictionary Type
> Nori : just one type
> Kuromoji : IPADIC, UNIDIC
> After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13579) Create resource management API

2019-08-08 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13579:
-
Attachment: SOLR-13579.patch

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, 
> SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, 
> SOLR-13579.patch
>
>
> Resource management framework API supporting the goals outlined in SOLR-13578.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8904) Enhance Nori DictionaryBuilder tool

2019-08-08 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8904:
---
Fix Version/s: master (9.0)
   8.x

> Enhance Nori DictionaryBuilder tool
> ---
>
> Key: LUCENE-8904
> URL: https://issues.apache.org/jira/browse/LUCENE-8904
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> It is the Nori version of [~sokolov]'s LUCENE-8863.
>  This patch has two changes.
>  1) Improve exception handling
>  2) Enable external dictionary for testing
> Overall, it is the same as LUCENE-8863.
> But there are some differences between Nori and Kuromoji.
> These lead to slight differences in the code.
> 1) CSV field size
> Nori : 12
> Kuromoji : 13
> 2) left context ID == right context ID
> Nori : can be different
> Kuromoji : always same
> 3) Dictionary Type
> Nori : just one type
> Kuromoji : IPADIC, UNIDIC
> After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1415 - Still Failing

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1415/

No tests ran.

Build Log:
[...truncated 24456 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2590 links (2119 relative) to 3409 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 986 - Failure!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/986/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:681)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:695)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3409)
at 
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit(TestIndexingSequenceNumbers.java:228)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space




Build Log:
[...truncated 2115 lines...]
   [junit4] Suite: org.apache.lucene.index.TestIndexingSequenceNumbers
   [junit4]   2> 8 09, 2019 2:56:04 ?? 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> ??: Uncaught exception in thread: 
Thread[Thread-3606,5,TGRP-TestIndexingSequenceNumbers]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([3745A8DCA22B2E5D]:0)
   [junit4]   

Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Namgyu Kim
+1 to Jan's idea :D
I think it's better to split the mailing list.

But I have a little opinion about the name.
Which one is better, the singular(build-issue) or the plural(builds-issues)?
Personally, "build-issue" looks better because names of the our mailing
list are used as a singular. (not jave-users but java-user)

What do you think about this?

On Thu, 8 Aug 2019 at 9:42 PM, Jan Høydahl wrote:

> I'll let this email topic run over the weekend to attract more eyeballs.
> Even if it's not a VOTE thread, feel free to add your +1 or -1's, and if
> others also seem in favour of this idea then I'll start working on it next
> week. To sum up what I believe to be the current consensus:
>
> A new issues@ list (announce only) for JIRA and Github notifications
> A new build@ list (announce only) for Jenkins notifications
>
> Whether [Created|Resolved] mails for JIRA/PR should also go to Dev list is
> still an open question. To help decide, here's the expected volume for
> those (from reporter.apache.org):
> 357 issues opened in JIRA, past quarter, 270 issues closed in JIRA, past
> quarter, 155 PRs opened on GitHub, past quarter, 143 PRs closed on GitHub,
> past quarter
> …which sums up to about 300/month or 10/day.
> An alternative to this could be some script that runs once a day and emits
> ONE email per day with a digest with links to new/closed JIRAs and PRs last
> 24h.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 8 Aug 2019, at 14:15, Erick Erickson wrote:
>
> +1 to Jan’s idea of the bot-originated lists be announce only…..
>
> Personally I’ve been able to make some sense out of the messages by
>
> 1> switching to the mac mail client (not an option for others, I know). It
> threads pretty well and for those topics where there are 10 replies I only
> have to glance at one to see if I’m interested enough to pursue.
>
> 2> I have a _lot_ of filters set up.
>
> I have to admit that one of the motivations for moving to the mail program
> on the mac was because gmail’s filters are such a disaster. Or I just
> totally missed how to configure them. For instance, changing the order of
> execution was impossible, so when I wanted to make a new filter execute
> first I had to redefine the entire list…..
>
> On Aug 8, 2019, at 5:31 AM, Alexandre Rafalovitch 
> wrote:
>
> I apply the following (gmail) rules, just in case it helps somebody.
> With this combination, I am able to track human conversations
> reasonably well.
>
> Human conversation:
> Matches: from:(-g...@apache.org) subject:(-[jira]) list:<
> dev.lucene.apache.org>
> Do this: Skip Inbox, Apply label "ML/Lucene-dev"
>
> All JIRA issues, regardless of other filters
> Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
> Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam
>
> New JIRA issues (that I check to see if I want to track/comment before
> I remove the label)
> Matches: subject:("[Created]") list:()
> Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
> send it to Spam
>
> Updates on JIRA issues from me (I already know them)
> Matches: from:(Alexandre Rafalovitch (JIRA) )
> Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"
>
> All JIRA issues I am involved in or marked to track
> Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
> Do this: Skip Inbox, Apply label "Solr-Jiras"
>
> Delete JENKINS stuff, as I am currently not contributing
> Matches: subject:([JENKINS]) list:()
> Do this: Delete it
>
> Git emails that I am not really tracking right now, but do keep
> Matches: from:(g...@apache.org) list:()
> Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
> Never send it to Spam
>
> Moderation emails I help with
> Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
> Do this: Skip Inbox, Apply label "Solr-Moderate"
>
> Matches: list:""
> Do this: Skip Inbox, Apply label "ML/SolrUsers"
>
> Regards,
>   Alex.
>
> On Wed, 7 Aug 2019 at 07:54, David Smiley 
> wrote:
>
>
> It's a problem.  I am mentoring a colleague who is stressed with the
> prospect of keeping up with our community because of the volume of email,
> and so it's a serious barrier to community involvement.  I too have email
> filters to help me, and it took some time to work out a system.  We could
> share our filter descriptions for this workflow?  I'm sure I could
> learn from you all on your approaches, and new collaborators would
> appreciate this advice.
>
> I think automated builds (Jenkins/CI) could warrant its own list.
> Separate lists would make setting up email filters easier in general.
>
> I like the idea of a list, like dev, but which does not include JIRA
> comments or GH code review comments, and does not include Jenkins/CI  This
> would be a good way for potential contributors to have a light-weight way
> of getting involved.  If they are involved or interested in specific
> issues, they can "watch" / "subscribe" to JIRA/GH 

[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-08 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903103#comment-16903103
 ] 

Andrzej Bialecki  commented on SOLR-9658:
-

Thanks Hoss for the detailed review! I updated the patch to include the changes 
after your review:
 * fixed ConcurrentLRUCache to sweep by idle time first.
 * fixed CleanupThread to not need the additional arg.
 * fixed the constructors to preserve back-compat, and fixed 
TemplateUpdateProcessorFactory to use original defaults (no maxIdleTime).
 * fixed unit tests to not rely on Thread.sleep - this required a few more 
changes in order to expose the eviction listener in a uniform way across cache 
impls, plus adding support for artificially "advancing" the time. The new 
{{CacheListener}} callback allows us to add more instrumentation later, e.g. to 
monitor the sweeps by type and collect pre/post sweep stats, etc.

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch, 
> SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' seconds. The 
> cache configuration can have an extra setting, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 minutes of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?
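
To make the proposal concrete, here is a sketch of how such a setting might
look in {{solrconfig.xml}}. The attribute name is taken from this description
and the cache class is only an example; the exact set of supported caches
depends on the final patch.

{code:xml}
<!-- Sketch only: clear the cache if it has not been accessed for
     10 minutes (600 seconds). Attribute name taken from this issue. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"
             maxIdleTime="600"/>
{code}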



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-08 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: SOLR-9658.patch

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch, 
> SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' seconds. The 
> cache configuration can have an extra setting, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 minutes of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8934) Move Nori DictionaryBuilder tool from src/tools to src/

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903090#comment-16903090
 ] 

ASF subversion and git services commented on LUCENE-8934:
-

Commit 2677ee2955062f91074c759daf953b2ebcd39b6c in lucene-solr's branch 
refs/heads/branch_8x from Namgyu Kim
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2677ee2 ]

LUCENE-8934: promote nori tools to main jar


> Move Nori DictionaryBuilder tool from src/tools to src/
> ---
>
> Key: LUCENE-8934
> URL: https://issues.apache.org/jira/browse/LUCENE-8934
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> After LUCENE-8904, the tests for Nori's tools are not run by the normal test 
> target ({{ant test}}).
> As with Kuromoji (before LUCENE-8871), we need to run {{ant test-tools}} 
> to test Nori's tools.
> Like Kuromoji, the tools can be covered by the normal test target once they 
> are moved into Nori's main source tree.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8912) Remove ICU dependency of nori tools/test-tools

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903080#comment-16903080
 ] 

ASF subversion and git services commented on LUCENE-8912:
-

Commit 2cabbf81524fc3e94e53a7a3f00c7419d484c838 in lucene-solr's branch 
refs/heads/branch_8x from Namgyu Kim
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2cabbf8 ]

LUCENE-8912: remove nori/tools dependency on ICU


> Remove ICU dependency of nori tools/test-tools
> --
>
> Key: LUCENE-8912
> URL: https://issues.apache.org/jira/browse/LUCENE-8912
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {quote}After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.
> {quote}
> As mentioned in LUCENE-8904, I am proceeding with this work now.
> It is what [~rcmuir] found first (LUCENE-8866), and I am just applying it to Nori.
> Nori doesn't need the ICU library because, like Kuromoji, it uses Normalizer2 
> only for NFKC normalization.
> I think it's OK to remove the library dependency because this can be handled 
> by the JDK.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12239) Enabling index sorting causes "segment not sorted with indexSort=null"

2019-08-08 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903060#comment-16903060
 ] 

Christine Poerschke commented on SOLR-12239:


Looks like LUCENE-8505 subsequently changed the {{validateIndexSort}} logic 
slightly from 8.0 onwards, but going by a reading of 
[https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java#L938-L949]
the issue here remains, i.e. the transition from "no sorting" to "some sorting" 
is considered invalid.

I'm curious whether some segments being unsorted could perhaps be accommodated 
somehow?
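
For context, the rejected transition can be reproduced at the Lucene level with
a few lines (a sketch assuming Lucene 8.x, not a Solr configuration): write a
segment without an index sort, then reopen the writer with
{{IndexWriterConfig.setIndexSort}}.

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class IndexSortTransition {
  public static void main(String[] args) throws Exception {
    Directory dir = new ByteBuffersDirectory();

    // First, write a segment with no index sort (the segment's indexSort is null).
    try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new NumericDocValuesField("ts", 1L));
      w.addDocument(doc);
    }

    // Reopening with an index sort makes validateIndexSort reject the unsorted
    // segment with a CorruptIndexException, the symptom shown in this issue.
    IndexWriterConfig sorted = new IndexWriterConfig(new StandardAnalyzer())
        .setIndexSort(new Sort(new SortField("ts", SortField.Type.LONG)));
    new IndexWriter(dir, sorted).close();
  }
}
{code}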

> Enabling index sorting causes "segment not sorted with indexSort=null"
> --
>
> Key: SOLR-12239
> URL: https://issues.apache.org/jira/browse/SOLR-12239
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 7.1
>Reporter: Ishan Chattopadhyaya
>Priority: Major
>
> When index sorting is enabled on an existing collection/index (using 
> SortingMergePolicy), the collection reload causes the following exception:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core 
> [mycoll_shard1_replica_n1]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.solr.core.CoreContainer.lambda$load$14(CoreContainer.java:671)
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [mycoll_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1045)
> at 
> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:642)
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.(SolrCore.java:989)
> at org.apache.solr.core.SolrCore.(SolrCore.java:844)
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1029)
> ... 7 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2076)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2196)
> at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1072)
> at org.apache.solr.core.SolrCore.(SolrCore.java:961)
> ... 9 more
> Caused by: org.apache.lucene.index.CorruptIndexException: segment not sorted 
> with indexSort=null (resource=_0(7.1.0):C1)
> at 
> org.apache.lucene.index.IndexWriter.validateIndexSort(IndexWriter.java:1185)
> at org.apache.lucene.index.IndexWriter.(IndexWriter.java:1108)
> at 
> org.apache.solr.update.SolrIndexWriter.(SolrIndexWriter.java:119)
> at 
> org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:94)
> at 
> org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:257)
> at 
> org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:131)
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2037)
> ... 12 more
> {code}
> This means that the user actually needs to delete the index segments, reload 
> the collection and then re-index. This is bad user experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8904) Enhance Nori DictionaryBuilder tool

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903053#comment-16903053
 ] 

ASF subversion and git services commented on LUCENE-8904:
-

Commit 70854dc1efbdc1d7efdb8ac0421c69d36ea6e31f in lucene-solr's branch 
refs/heads/branch_8x from Namgyu Kim
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=70854dc ]

LUCENE-8904: enhance Nori DictionaryBuilder tool


> Enhance Nori DictionaryBuilder tool
> ---
>
> Key: LUCENE-8904
> URL: https://issues.apache.org/jira/browse/LUCENE-8904
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> It is the Nori version of [~sokolov]'s LUCENE-8863.
>  This patch has two changes.
>  1) Improve exception handling
>  2) Enable external dictionary for testing
> Overall, it is the same as LUCENE-8863.
> But there are some differences between Nori and Kuromoji.
> These lead to slight differences in the code.
> 1) CSV field size
> Nori : 12
> Kuromoji : 13
> 2) left context ID == right context ID
> Nori : can be different
> Kuromoji : always same
> 3) Dictionary Type
> Nori : just one type
> Kuromoji : IPADIC, UNIDIC
> After this job, I'll apply LUCENE-8866 and LUCENE-8871 to Nori.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2019-08-08 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903021#comment-16903021
 ] 

Gus Heck commented on SOLR-12801:
-

[~aninditagupta] this issue is about fixing the unit tests. SOLR-13457 is 
probably a better place to discuss issues with hard coded timeouts.

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903011#comment-16903011
 ] 

ASF subversion and git services commented on SOLR-13682:


Commit 88b3becaa5b79aafe497ff58758958715ba354fa in lucene-solr's branch 
refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=88b3bec ]

SOLR-13682: support for fields, query etc


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}.
> Additional options are:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name (if this starts with "http://" the output will 
> be piped to that URL; can be used to pipe docs to another cluster)
>  * {{query}} : a custom query, default is *:*
>  * {{fields}} : a comma separated list of fields to be exported
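
As a usage illustration only: assuming each option maps to a flag of the same
name, following the {{-url}} example above (the exact flag spelling is an
assumption, not confirmed by this issue), an invocation could look like:

{code}
# Hypothetical invocation; flag names assumed from the option list above.
bin/solr export -url http://localhost:8983/solr/gettingstarted \
    -format javabin -out /tmp/gettingstarted.javabin \
    -query "inStock:true" -fields id,name,price
{code}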



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903004#comment-16903004
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 30826335233d5ad37f51fcbf13f0169a47eb1e7d in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3082633 ]

SOLR-13105: More loading changes


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export -url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.json}}.

Additional options are:
 * {{format}} : {{jsonl}} (default) or {{javabin}}
 * {{out}} : export file name (if this starts with "http://" the output will 
be piped to that URL; can be used to pipe docs to another cluster)
 * {{query}} : a custom query, default is *:*
 * {{fields}} : a comma separated list of fields to be exported

  was:
example
{code:java}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}

Additional options are:
 * format : jsonl or javabin
 * out : export file name (if this starts with "http://" the output will be 
piped to that URL; can be used to pipe docs to another cluster)


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.json}}
> Additional options are:
>  * {{format}} : {{jsonl}} (default) or {{javabin}}
>  * {{out}} : export file name (if this starts with "http://" the output will 
> be piped to that URL; can be used to pipe docs to another cluster)
>  * {{query}} : a custom query, default is {{*:*}}
>  * {{fields}} : a comma-separated list of fields to be exported
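As a rough illustration of what such an export amounts to (this is not the patch's implementation; the URL, collection and output file are placeholders), a SolrJ client can page through the collection with a cursor and write one document per line:

{code:java}
// Hedged sketch (URL, collection and output file are placeholders; not the bin/solr code):
// page through all docs with cursorMark and write one document per line.
import java.io.PrintWriter;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class ExportSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/gettingstarted").build();
         PrintWriter out = new PrintWriter("gettingstarted.json")) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(1000);
      q.setSort("id", SolrQuery.ORDER.asc);            // cursorMark requires a stable sort
      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = client.query(q);
        for (SolrDocument doc : rsp.getResults()) {
          // toString is not strict JSONL; a real exporter would serialize each doc properly
          out.println(doc);
        }
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) break;                // no more results
        cursor = next;
      }
    }
  }
}
{code}

The {{format}}, {{query}} and {{fields}} options above map onto the query and serialization choices in such a loop.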



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902942#comment-16902942
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 05ef5d74214188973c151acdf7cbed0a2e1a577a in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=05ef5d7 ]

SOLR-13105: Add text to loading page 9


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902943#comment-16902943
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit a594a091e98937293758fa91f518b8564712fe50 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a594a09 ]

SOLR-13105: Add analyze docs for data load


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-08-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902935#comment-16902935
 ] 

Lucene/Solr QA commented on SOLR-11724:
---

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || master Compile Tests ||
| +1 | compile | 1m 48s | master passed |
|| || || || Patch Compile Tests ||
| +1 | compile | 3m 49s | the patch passed |
| +1 | javac | 3m 49s | the patch passed |
| +1 | Release audit (RAT) | 3m 49s | the patch passed |
| +1 | Check forbidden APIs | 3m 49s | the patch passed |
| +1 | Validate source patterns | 3m 49s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 55m 46s | core in the patch passed. |
| | | 64m 22s | |

|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11724 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977021/SOLR-11724.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ed137db |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/524/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/524/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.3.1, 7.4, 8.0
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing and 
> then start CDCR, bootstrapping only copies the index to the leader node of each 
> shard of the collection; followers never receive the documents/index until at 
> least one document is inserted again on Source, which propagates to Target and 
> makes the target collection trigger index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Jan Høydahl
I'll let this email topic run over the weekend to attract more eyeballs. Even 
if it's not a VOTE thread, feel free to add your +1 or -1's, and if others also 
seem in favour of this idea then I'll start working on it next week. To sum up 
what I believe to be the current consensus:

- A new issues@ list (announce only) for JIRA and Github notifications
- A new build@ list (announce only) for Jenkins notifications

Whether [Created|Resolved] mails for JIRA/PR should also go to the dev list is 
still an open question. To help decide, here's the expected volume for those 
(from reporter.apache.org), all for the past quarter:
357 issues opened in JIRA, 270 issues closed in JIRA, 155 PRs opened on GitHub, 
143 PRs closed on GitHub
…which sums up to about 300/month, or 10/day.
An alternative would be a script that runs once a day and emits ONE email per 
day with a digest of links to the JIRAs and PRs opened/closed in the last 24h.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On Aug 8, 2019, at 14:15, Erick Erickson  wrote:
> 
> +1 to Jan’s idea of the bot-originated lists be announce only…..
> 
> Personally I’ve been able to make some sense out of the messages by
> 
> 1> switching to the mac mail client (not an option for others, I know). It 
> threads pretty well and for those topics where there are 10 replies I only 
> have to glance at one to see if I’m interested enough to pursue.
> 
> 2> I have a _lot_ of filters set up.
> 
> I have to admit that one of the motivations for moving to the mail program on 
> the mac was because gmail’s filters are such a disaster. Or I just totally 
> missed how to configure them. For instance, changing the order of execution 
> was impossible, so when I wanted to make a new filter execute first I had to 
> redefine the entire list…..
> 
>> On Aug 8, 2019, at 5:31 AM, Alexandre Rafalovitch  wrote:
>> 
>> I apply the following (gmail) rules, just in case it helps somebody.
>> With this combination, I am able to track human conversations
>> reasonably well.
>> 
>> Human conversation:
>> Matches: from:(-g...@apache.org) subject:(-[jira]) 
>> list:
>> Do this: Skip Inbox, Apply label "ML/Lucene-dev"
>> 
>> All JIRA issues, regardless of other filters
>> Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
>> Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam
>> 
>> New JIRA issues (that I check to see if I want to track/comment before
>> I remove the label)
>> Matches: subject:("[Created]") list:()
>> Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
>> send it to Spam
>> 
>> Updates on JIRA issues from me (I already know them)
>> Matches: from:(Alexandre Rafalovitch (JIRA) )
>> Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"
>> 
>> All JIRA issues I am involved in or marked to track
>> Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
>> Do this: Skip Inbox, Apply label "Solr-Jiras"
>> 
>> Delete JENKINS stuff, as I am currently not contributing
>> Matches: subject:([JENKINS]) list:()
>> Do this: Delete it
>> 
>> Git emails that I am not really tracking right now, but do keep
>> Matches: from:(g...@apache.org) list:()
>> Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
>> Never send it to Spam
>> 
>> Moderation emails I help with
>> Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
>> Do this: Skip Inbox, Apply label "Solr-Moderate"
>> 
>> Matches: list:""
>> Do this: Skip Inbox, Apply label "ML/SolrUsers"
>> 
>> Regards,
>>   Alex.
>> 
>> On Wed, 7 Aug 2019 at 07:54, David Smiley  wrote:
>>> 
>>> It's a problem.  I am mentoring a colleague who is stressed with the 
>>> prospect of keeping up with our community because of the volume of email, 
>>> and so it's a serious barrier to community involvement.  I too have email 
>>> filters to help me, and it took some time to work out a system.  We could 
>>> share our filter descriptions for this with workflow?  I'm sure I could 
>>> learn from you all on your approaches, and new collaborators would 
>>> appreciate this advice.
>>> 
>>> I think automated builds (Jenkins/CI) could warrant its own list.  Separate 
>>> lists would make setting up email filters easier in general.
>>> 
>>> I like the idea of a list, like dev, but which does not include JIRA 
>>> comments or GH code review comments, and does not include Jenkins/CI. This 
>>> would be a good way for potential contributors to have a light-weight way 
>>> of getting involved.  If they are involved or interested in specific 
>>> issues, they can "watch" / "subscribe" to JIRA/GH issues and consequently 
>>> they will get direct notifications from those systems.  Then people who 
>>> choose to get more involved, like us, can subscribe to the other list(s).
>>> 
>>> We do have instances where "ASF subversion and git services" can be 
>>> excessive due to feature branches that ought not to generate JIRA posts to 
>>> unrelated issues, and I think we should work to prevent that.

[jira] [Commented] (SOLR-13622) Add FileStream Streaming Expression

2019-08-08 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902933#comment-16902933
 ] 

Jason Gerlowski commented on SOLR-13622:


The test issue should be resolved now; thanks for pointing it out Hoss.

I'll close this in a few days if the test failures on Windows are truly 
resolved.

> Add FileStream Streaming Expression
> ---
>
> Key: SOLR-13622
> URL: https://issues.apache.org/jira/browse/SOLR-13622
> Project: Solr
>  Issue Type: New Feature
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13622.patch, SOLR-13622.patch
>
>
> The FileStream will read files from a local filesystem and stream back each 
> line of the file as a tuple.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13622) Add FileStream Streaming Expression

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902931#comment-16902931
 ] 

ASF subversion and git services commented on SOLR-13622:


Commit 299d92da5cc6315a98ef656a66ab7b285ecb4e3d in lucene-solr's branch 
refs/heads/branch_8x from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=299d92d ]

SOLR-13622: Rename FilesStream -> CatStream

Also fixes a 'cat' OS-dependent bug in StreamExpressionTest.


> Add FileStream Streaming Expression
> ---
>
> Key: SOLR-13622
> URL: https://issues.apache.org/jira/browse/SOLR-13622
> Project: Solr
>  Issue Type: New Feature
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13622.patch, SOLR-13622.patch
>
>
> The FileStream will read files from a local filesystem and stream back each 
> line of the file as a tuple.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13622) Add FileStream Streaming Expression

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902930#comment-16902930
 ] 

ASF subversion and git services commented on SOLR-13622:


Commit 2eb493d1700d59845ac120dcc485556b7e7fb422 in lucene-solr's branch 
refs/heads/master from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2eb493d ]

SOLR-13622: Rename FilesStream -> CatStream

Also fixes a 'cat' OS-dependent bug in StreamExpressionTest.


> Add FileStream Streaming Expression
> ---
>
> Key: SOLR-13622
> URL: https://issues.apache.org/jira/browse/SOLR-13622
> Project: Solr
>  Issue Type: New Feature
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13622.patch, SOLR-13622.patch
>
>
> The FileStream will read files from a local filesystem and stream back each 
> line of the file as a tuple.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Erick Erickson
+1 to Jan’s idea of the bot-originated lists be announce only…..

Personally I’ve been able to make some sense out of the messages by

1> switching to the mac mail client (not an option for others, I know). It 
threads pretty well and for those topics where there are 10 replies I only have 
to glance at one to see if I’m interested enough to pursue.

2> I have a _lot_ of filters set up.

I have to admit that one of the motivations for moving to the mail program on 
the mac was because gmail’s filters are such a disaster. Or I just totally 
missed how to configure them. For instance, changing the order of execution was 
impossible, so when I wanted to make a new filter execute first I had to 
redefine the entire list…..

> On Aug 8, 2019, at 5:31 AM, Alexandre Rafalovitch  wrote:
> 
> I apply the following (gmail) rules, just in case it helps somebody.
> With this combination, I am able to track human conversations
> reasonably well.
> 
> Human conversation:
> Matches: from:(-g...@apache.org) subject:(-[jira]) 
> list:
> Do this: Skip Inbox, Apply label "ML/Lucene-dev"
> 
> All JIRA issues, regardless of other filters
> Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
> Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam
> 
> New JIRA issues (that I check to see if I want to track/comment before
> I remove the label)
> Matches: subject:("[Created]") list:()
> Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
> send it to Spam
> 
> Updates on JIRA issues from me (I already know them)
> Matches: from:(Alexandre Rafalovitch (JIRA) )
> Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"
> 
> All JIRA issues I am involved in or marked to track
> Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
> Do this: Skip Inbox, Apply label "Solr-Jiras"
> 
> Delete JENKINS stuff, as I am currently not contributing
> Matches: subject:([JENKINS]) list:()
> Do this: Delete it
> 
> Git emails that I am not really tracking right now, but do keep
> Matches: from:(g...@apache.org) list:()
> Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
> Never send it to Spam
> 
> Moderation emails I help with
> Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
> Do this: Skip Inbox, Apply label "Solr-Moderate"
> 
> Matches: list:""
> Do this: Skip Inbox, Apply label "ML/SolrUsers"
> 
> Regards,
>Alex.
> 
> On Wed, 7 Aug 2019 at 07:54, David Smiley  wrote:
>> 
>> It's a problem.  I am mentoring a colleague who is stressed with the 
>> prospect of keeping up with our community because of the volume of email, 
>> and so it's a serious barrier to community involvement.  I too have email 
>> filters to help me, and it took some time to work out a system.  We could 
>> share our filter descriptions for this with workflow?  I'm sure I could 
>> learn from you all on your approaches, and new collaborators would 
>> appreciate this advice.
>> 
>> I think automated builds (Jenkins/CI) could warrant its own list.  Separate 
>> lists would make setting up email filters easier in general.
>> 
>> I like the idea of a list, like dev, but which does not include JIRA 
>> comments or GH code review comments, and does not include Jenkins/CI. This 
>> would be a good way for potential contributors to have a light-weight way of 
>> getting involved.  If they are involved or interested in specific issues, 
>> they can "watch" / "subscribe" to JIRA/GH issues and consequently they will 
>> get direct notifications from those systems.  Then people who choose to get 
>> more involved, like us, can subscribe to the other list(s).
>> 
>> We do have instances where "ASF subversion and git services" can be 
>> excessive due to feature branches that ought not to generate JIRA posts to 
>> unrelated issues, and I think we should work to prevent that.
>> 
>> ~ David Smiley
>> Apache Lucene/Solr Search Developer
>> http://www.linkedin.com/in/davidwsmiley
>> 
>> 
>> On Wed, Aug 7, 2019 at 7:01 AM Tomoko Uchida  
>> wrote:
>>> 
>>> Hi
>>> 
>>> +1 for separate list(s) for JIRA/Github updates and Jenkins jobs.
>>> While I myself have no trouble sorting the mails thanks to
>>> Gmail filters, I know a user (an external dev) who unsubscribed from this
>>> list. One reason is the volume of the mail flow :)
>>> 
>>> Tomoko
>>> 
>>> On Wed, Aug 7, 2019 at 8:17, Jan Høydahl  wrote:
 
 Hi
 
 The mail volume on dev@ is fairly high, between 2500 and 3500/month.
 To break down the numbers last month, see 
 https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
 
 Top 10 participants:
 -GitBox: 420 emails
 -ASF subversion and git services (JIRA): 351 emails
 -Apache Jenkins Server: 261 emails
 -Policeman Jenkins Server: 234 emails
 -Munendra S N (JIRA): 134 emails
 -Joel Bernstein (JIRA): 84 emails
 -Tomoko Uchida (JIRA): 77 emails
 -Jan Høydahl (JIRA): 52 emails
 -Andrzej Bialecki (JIRA): 47 emails
 -Adrien 

[GitHub] [lucene-solr] ErickErickson commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-08 Thread GitBox
ErickErickson commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519489983
 
 
   No, IndexUpgraderTool doesn’t really help. It rewrites the index in the 
current format, but does not (and cannot) make it look just as though the index 
had been indexed from scratch. From Robert Muir:
   
   “I think the key issue here is Lucene is an index not a database. Because it 
is a lossy index and does not retain all of the user's data, its not possible 
to safely migrate some things automagically. In the norms case IndexWriter 
needs to re-analyze the text ("re-index") and compute stats to get back the 
value, so it can be re-encoded. The function is y = f(x) and if x is not 
available its not possible, so lucene can't do it.”
   
    So Lucene will work with indexes from version X-1, but not X-2. As of 8.0, it 
will refuse to even open an index that has ever been touched by Solr 6.x or 
earlier, regardless of running IndexUpgraderTool etc.
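For reference, a minimal sketch of what the tool does (the index path is a placeholder): it rewrites every segment in the current format, but it cannot recompute values, such as norms, that would require the original text.

{code:java}
// Minimal sketch (index path is a placeholder): IndexUpgrader rewrites all segments
// in the current format; it does not, and cannot, re-analyze the original documents.
import java.nio.file.Paths;
import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeIndex {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"))) {
      new IndexUpgrader(dir).upgrade();   // same effect as running IndexUpgrader from the CLI
    }
  }
}
{code}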
   
   Best,
   Erick
   
   > On Aug 8, 2019, at 12:55 AM, 鹿の遠音  wrote:
   > 
   > I find that Lucene has a tool called IndexUpgrader. Maybe updating index 
is a better solution for backward compatibility?
   > 
   > —
   > You are receiving this because you are subscribed to this thread.
   > Reply to this email directly, view it on GitHub, or mute the thread.
   > 
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.3) - Build # 985 - Unstable!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/985/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing

Error Message:
Solr11035BandAid failed, counts differ after updates: expected:<199> but 
was:<200>

Stack Trace:
java.lang.AssertionError: Solr11035BandAid failed, counts differ after updates: 
expected:<199> but was:<200>
at 
__randomizedtesting.SeedInfo.seed([F36555BEBBC2FAA6:461CBE69BCA462E2]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.SolrTestCaseJ4.Solr11035BandAid(SolrTestCaseJ4.java:3144)
at 
org.apache.solr.cloud.ReindexCollectionTest.indexDocs(ReindexCollectionTest.java:405)
at 
org.apache.solr.cloud.ReindexCollectionTest.doTestSameTargetReindexing(ReindexCollectionTest.java:166)
at 
org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing(ReindexCollectionTest.java:157)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-13683) SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902887#comment-16902887
 ] 

Shalin Shekhar Mangar commented on SOLR-13683:
--

bq. What is the purpose of this method? 
Http2SolrClient.Builder().withHttpClient(Http2SolrClient httpClient)? Ideally 
this should allow setting Jetty's HttpClient object instead of an instance of 
its own type.

This certainly seems like a mistake. It should accept Jetty's HttpClient 
directly instead of the {{httpClient = builder.http2SolrClient.httpClient;}} 
assignment it does today in the constructor.

bq. Currently Http2SolrClient does not allow configuring custom headers. For 
example, how to pass Basic Auth headers? It should expose some builder APIs to 
pass such headers.

Actually, none of the SolrJ clients allow custom headers directly, but you can 
use Apache HttpClient's request interceptors to add custom headers to all 
requests. If you just want basic auth, you can use the 
SolrRequest.setBasicAuthCredentials() method to set the user and password. The 
credentials will be base64-encoded and passed in the Authorization header 
automatically by Http2SolrClient.
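For example, a minimal sketch (the URL, collection, user and password are placeholders):

{code:java}
// Minimal sketch (URL, collection and credentials are placeholders): per-request basic
// auth with Http2SolrClient; SolrJ adds the base64-encoded Authorization header.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BasicAuthExample {
  public static void main(String[] args) throws Exception {
    try (Http2SolrClient client =
             new Http2SolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
      req.setBasicAuthCredentials("solr", "SolrRocks");   // sets the Authorization header
      QueryResponse rsp = req.process(client);
      System.out.println("found " + rsp.getResults().getNumFound() + " docs");
    }
  }
}
{code}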

> SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers
> -
>
> Key: SOLR-13683
> URL: https://issues.apache.org/jira/browse/SOLR-13683
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.1.1
>Reporter: Niranjan Nanda
>Priority: Minor
>
> Currently {{Http2SolrClient}} does not allow configuring custom headers. For 
> example, how to pass Basic Auth headers? It should expose some builder APIs 
> to pass such headers.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 262 - Unstable!

2019-08-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/262/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testCatTime

Error Message:
took over 10 seconds after collection creation to update aliases

Stack Trace:
java.lang.AssertionError: took over 10 seconds after collection creation to 
update aliases
at 
__randomizedtesting.SeedInfo.seed([1D758F011D6053A0:1A7EC00222930248]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.waitColAndAlias(RoutedAliasUpdateProcessorTest.java:77)
at 
org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testCatTime(DimensionalRoutedAliasUpdateProcessorTest.java:480)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16162 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-13141) replicationFactor param cause problems with CDCR

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902877#comment-16902877
 ] 

Shalin Shekhar Mangar commented on SOLR-13141:
--

Attached patch (originally at SOLR-11724).

A less brittle way than what was proposed for SOLR-11724 would be to update the 
term for the leader once bootstrap finishes. This way the replicas will 
automatically go to recovery. This latest patch implements this idea.

> replicationFactor param cause problems with CDCR
> 
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is system independent problem - exists on windows 
> and linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Priority: Critical
> Attachments: SOLR-13141.patch, type 1 - replication wasnt working at 
> all.txt, type 2 - only few documents were being replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of 
> the {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb:
>  ** dcb_node_1
>  ** dcb_node_2
> Then, in sequence:
>  * I configured CDCR on a copy of the *_default* config set named 
> *_default_cdcr*
>  * I created a collection "customer" on both DCs from the *_default_cdcr* 
> config set with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled the CDCR buffer on the collections
>  * I ran CDCR on both DCs
> CDCR started without errors in the logs. During indexing I encountered the 
> problem in [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs)
> Then:
>  * I stopped CDCR on both DCs
>  * I stopped all Solr nodes
>  * I restarted the ZooKeepers on both DCs
>  * I started all Solr nodes one by one
>  * a few minutes later I started CDCR on both DCs
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13141) replicationFactor param cause problems with CDCR

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13141:
-
Attachment: SOLR-13141.patch

> replicationFactor param cause problems with CDCR
> 
>
> Key: SOLR-13141
> URL: https://issues.apache.org/jira/browse/SOLR-13141
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Affects Versions: 7.5, 7.6
> Environment: This is system independent problem - exists on windows 
> and linux - reproduced by independent developers
>Reporter: Krzysztof Watral
>Priority: Critical
> Attachments: SOLR-13141.patch, type 1 - replication wasnt working at 
> all.txt, type 2 - only few documents were being replicated.txt
>
>
> I have encountered some problems with CDCR that are related to the value of 
> the {{replicationFactor}} param.
> I ran SolrCloud on two datacenters with 2 nodes each:
>  * dca:
>  ** dca_node_1
>  ** dca_node_2
>  * dcb:
>  ** dcb_node_1
>  ** dcb_node_2
> Then, in sequence:
>  * I configured CDCR on a copy of the *_default* config set named 
> *_default_cdcr*
>  * I created a collection "customer" on both DCs from the *_default_cdcr* 
> config set with the following parameters:
>  ** {{numShards}} = 2
>  ** {{maxShardsPerNode}} = 2
>  ** {{replicationFactor}} = 2
>  * I disabled the CDCR buffer on the collections
>  * I ran CDCR on both DCs
> CDCR started without errors in the logs. During indexing I encountered the 
> problem in [^type 2 - only few documents were being replicated.txt]; a restart 
> didn't help (documents were not synchronized between the DCs)
> Then:
>  * I stopped CDCR on both DCs
>  * I stopped all Solr nodes
>  * I restarted the ZooKeepers on both DCs
>  * I started all Solr nodes one by one
>  * a few minutes later I started CDCR on both DCs
>  * CDCR started with errors (replication between the DCs is not working) - 
> [^type 1 - replication wasnt working at all.txt]
> {panel}
> I've also discovered that the problem appears only when the 
> {{replicationFactor}} parameter is higher than one
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902876#comment-16902876
 ] 

Shalin Shekhar Mangar commented on SOLR-11724:
--

I just noticed the linked issue SOLR-13141 so we can use that and let this be 
as it is today.

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.3.1, 7.4, 8.0
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing and 
> then start CDCR, bootstrapping only copies the index to the leader node of each 
> shard of the collection; followers never receive the documents/index until at 
> least one document is inserted again on Source, which propagates to Target and 
> makes the target collection trigger index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902874#comment-16902874
 ] 

Shalin Shekhar Mangar commented on SOLR-13674:
--

The 7x branch has been made protected to reject any commits. See 
https://issues.apache.org/jira/browse/INFRA-18192. This is due to back-compat 
issues that make it almost impossible to release a new minor version from the 
7x branch.

This change can be back-ported to branch_7_7 (and to branch_8_2) in case a new 
bug fix release (7.7.3 or 8.2.1) is required, but neither is planned today.

Would it be okay if you cherry-picked the commit to 7x on your private repos 
instead?

> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (9.0), 8.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current code, 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester, 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those will bring the nodes 
> into the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902868#comment-16902868
 ] 

Shalin Shekhar Mangar commented on SOLR-11724:
--

A less brittle way would be to update the term for the leader once bootstrap 
finishes. This way the replicas will automatically go to recovery. The latest 
patch implements this idea. Tests pass.

What's the best way to fix this issue? This issue was supposed to be fixed in 
7.3 but the committed code didn't actually fix it. Should we reopen this issue 
or create a new one?

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.3.1, 7.4, 8.0
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing and 
> then start CDCR, bootstrapping only copies the index to the leader node of each 
> shard of the collection; followers never receive the documents/index until at 
> least one document is inserted again on Source, which propagates to Target and 
> makes the target collection trigger index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11724:
-
Attachment: SOLR-11724.patch

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.3.1, 7.4, 8.0
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing and 
> then start CDCR, bootstrapping only copies the index to the leader node of each 
> shard of the collection; followers never receive the documents/index until at 
> least one document is inserted again on Source, which propagates to Target and 
> makes the target collection trigger index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread Irena Shaigorodsky (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902864#comment-16902864
 ] 

Irena Shaigorodsky commented on SOLR-13674:
---

[~shalinmangar], will it be possible to merge the change in branch_7x as well? 
This is the version that is currently in use for me.

> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (9.0), 8.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current code, 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester, 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those will bring the nodes 
> into the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 3507 - Unstable

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3507/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/174/consoleText

[repro] Revision: 8dd116a615821c7d9b539316b051f466009b5130

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.method=test 
-Dtests.seed=AE04B5C9BA6E9A4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-Latn -Dtests.timezone=Etc/GMT-11 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
21842999fe559bcbb4aebf7504aee6e8db45b38e
[repro] git fetch
[repro] git checkout 8dd116a615821c7d9b539316b051f466009b5130

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ShardSplitTest
[repro] ant compile-test

[...truncated 3577 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ShardSplitTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=AE04B5C9BA6E9A4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-Latn -Dtests.timezone=Etc/GMT-11 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 283472 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro] git checkout 21842999fe559bcbb4aebf7504aee6e8db45b38e

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13674:
-
Fix Version/s: 8.3
   master (9.0)
  Component/s: AutoScaling

> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (9.0), 8.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current code, 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester, 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those will bring the nodes 
> into the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-13674.
--
Resolution: Fixed

This is merged into master and branch_8x. Thanks [~ishaigor]!

> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (9.0), 8.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current code, 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester, 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those will bring the nodes 
> into the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  






[jira] [Commented] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902837#comment-16902837
 ] 

ASF subversion and git services commented on SOLR-13674:


Commit de522052c8b113a90613055585c864aa7bcdb300 in lucene-solr's branch 
refs/heads/branch_8x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=de52205 ]

SOLR-13674: NodeAddedTrigger does not support configuration of replica type 
hint.

A new replicaType property has been added to NodeAddedTrigger so that new 
replicas of the given type are added when the preferredOp is addreplica. The 
default value of replicaType is `NRT`.

This closes #821.

(cherry picked from commit ed137dbe281cfb314af340673a7b646922a2e7d1)
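
For illustration, a minimal SolrJ sketch of configuring such a trigger, assuming 
the autoscaling set-trigger API and the property spellings documented for 8.3; 
the trigger name, ZooKeeper address, and the choice of PULL are placeholders, 
not part of this commit:

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.V2Request;

public class SetNodeAddedTriggerExample {
  public static void main(String[] args) throws Exception {
    // Placeholder ZooKeeper address for this sketch.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {

      // set-trigger command; replicaType is the property introduced by SOLR-13674,
      // here requesting PULL replicas instead of the NRT default.
      String setTrigger = "{"
          + "\"set-trigger\": {"
          + "\"name\": \"node_added_trigger\","
          + "\"event\": \"nodeAdded\","
          + "\"waitFor\": \"5s\","
          + "\"preferredOperation\": \"ADDREPLICA\","
          + "\"replicaType\": \"PULL\""
          + "}}";

      new V2Request.Builder("/cluster/autoscaling")
          .withMethod(SolrRequest.METHOD.POST)
          .withPayload(setTrigger)
          .build()
          .process(client);
    }
  }
}
{code}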


> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current code 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those nodes brings them into 
> the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  






[jira] [Commented] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902834#comment-16902834
 ] 

ASF subversion and git services commented on SOLR-13674:


Commit ed137dbe281cfb314af340673a7b646922a2e7d1 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ed137db ]

SOLR-13674: NodeAddedTrigger does not support configuration of replica type 
hint.

A new replicaType property has been added to NodeAddedTrigger so that new 
replicas of the given type are added when the preferredOp is addreplica. The 
default value of replicaType is `NRT`.

This closes #821.


> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current code 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those nodes brings them into 
> the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  






[GitHub] [lucene-solr] asfgit closed pull request #821: SOLR-13674: Add replica type property to NodeAddedTrigger

2019-08-08 Thread GitBox
asfgit closed pull request #821: SOLR-13674: Add replica type property to 
NodeAddedTrigger
URL: https://github.com/apache/lucene-solr/pull/821
 
 
   





[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902827#comment-16902827
 ] 

ASF subversion and git services commented on SOLR-13682:


Commit 007809cab7a8407a431d2e988a9c78655ffdbf62 in lucene-solr's branch 
refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=007809c ]

SOLR-13682: command line option to export data to a file


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
>  * format : jsonl or javabin
>  * out : export file name
>  * (if this starts with "http://" the output will be piped to that url. Can 
> be used to pipe docs to another cluster)






Re: Separate dev mailing list for automated mails?

2019-08-08 Thread Alexandre Rafalovitch
I apply the following (gmail) rules, just in case it helps somebody.
With this combination, I am able to track human conversations
reasonably well.

Human conversation:
Matches: from:(-g...@apache.org) subject:(-[jira]) list:
Do this: Skip Inbox, Apply label "ML/Lucene-dev"

All JIRA issues, regardless of other filters
Matches: subject:([jira] {SOLR- LUCENE-}) list:"dev.lucene.apache.org"
Do this: Skip Inbox, Apply label "ML/Lucene-jira", Never send it to Spam

New JIRA issues (that I check to see if I want to track/comment before
I remove the label)
Matches: subject:("[Created]") list:()
Do this: Skip Inbox, Apply label "ML/Lucene-Jira-Interesting", Never
send it to Spam

Updates on JIRA issues from me (I already know them)
Matches: from:(Alexandre Rafalovitch (JIRA) )
Do this: Skip Inbox, Mark as read, Star it, Apply label "Solr-Jiras"

All JIRA issues I am involved in or marked to track
Matches: from:(j...@apache.org) to:(arafa...@gmail.com)
Do this: Skip Inbox, Apply label "Solr-Jiras"

Delete JENKINS stuff, as I am currently not contributing
Matches: subject:([JENKINS]) list:()
Do this: Delete it

Git emails that I am not really tracking right now, but do keep
Matches: from:(g...@apache.org) list:()
Do this: Skip Inbox, Mark as read, Apply label "ML/Lucene-GitBox",
Never send it to Spam

Moderation emails I help with
Matches: subject:(MODERATE for solr-u...@lucene.apache.org)
Do this: Skip Inbox, Apply label "Solr-Moderate"

Matches: list:""
Do this: Skip Inbox, Apply label "ML/SolrUsers"

Regards,
Alex.

On Wed, 7 Aug 2019 at 07:54, David Smiley  wrote:
>
> It's a problem.  I am mentoring a colleague who is stressed with the prospect 
> of keeping up with our community because of the volume of email, and so it's 
> a serious barrier to community involvement.  I too have email filters to help 
> me, and it took some time to work out a system.  We could share our filter 
> descriptions for this workflow?  I'm sure I could learn from you all on 
> your approaches, and new collaborators would appreciate this advice.
>
> I think automated builds (Jenkins/CI) could warrant its own list.  Separate 
> lists would make setting up email filters easier in general.
>
> I like the idea of a list, like dev, but which does not include JIRA comments 
> or GH code review comments, and does not include Jenkins/CI.  This would be a 
> good way for potential contributors to have a light-weight way of getting 
> involved.  If they are involved or interested in specific issues, they can 
> "watch" / "subscribe" to JIRA/GH issues and consequently they will get direct 
> notifications from those systems.  Then people who choose to get more 
> involved, like us, can subscribe to the other list(s).
>
> We do have instances where "ASF subversion and git services" can be excessive 
> due to feature branches that ought not to generate JIRA posts to unrelated 
> issues, and I think we should work to prevent that.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Aug 7, 2019 at 7:01 AM Tomoko Uchida  
> wrote:
>>
>> Hi
>>
>> +1 for separate list(s) for JIRA/GitHub updates and Jenkins jobs.
>> While I myself have no trouble sorting the mails, thanks to
>> Gmail filters, I know a user (external dev) who unsubscribed from this
>> list. One reason is the volume of the mail flow :)
>>
>> Tomoko
>>
>> 2019年8月7日(水) 8:17 Jan Høydahl :
>> >
>> > Hi
>> >
>> > The mail volume on dev@ is fairly high, between 2500 and 3500 per month.
>> > To break down the numbers last month, see 
>> > https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
>> >
>> > Top 10 participants:
>> > -GitBox: 420 emails
>> > -ASF subversion and git services (JIRA): 351 emails
>> > -Apache Jenkins Server: 261 emails
>> > -Policeman Jenkins Server: 234 emails
>> > -Munendra S N (JIRA): 134 emails
>> > -Joel Bernstein (JIRA): 84 emails
>> > -Tomoko Uchida (JIRA): 77 emails
>> > -Jan Høydahl (JIRA): 52 emails
>> > -Andrzej Bialecki (JIRA): 47 emails
>> > -Adrien Grand (JIRA): 46 emails
>> >
>> > I have especially noticed how every single GitHub PR review comment 
>> > triggers its own email instead of one email per review session.
>> > Also, every commit/push triggers an email since a bot adds a comment to 
>> > JIRA for it.
>> >
>> > Personally I think the ratio of notifications vs human emails is a bit too 
>> > high. I fear external devs who just want to follow the project may get 
>> > overwhelmed and unsubscribe.
>> > One suggestion is therefore to add a new list where detailed JIRA comments 
>> > and Github comments / reviews go. All committers should of course 
>> > subscribe!
>> > I saw the ZooKeeper project has a notifications@ list for GitHub comments 
>> > and issues@ for JIRA comments (except the first [Created] email for a JIRA, 
>> > which will also go to dev@).
>> > The Maven project follows the same scheme and they also send Jenkins mails 
>> > to the notifications@ list. The 

[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902818#comment-16902818
 ] 

ASF subversion and git services commented on SOLR-13682:


Commit 08daf37d7226ce3630187fbec7e1c1227029e364 in lucene-solr's branch 
refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=08daf37 ]

SOLR-13682: command line option to export data to a file


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
>  * format : jsonl or javabin
>  * out : export file name
>  * (if this starts with "http://" the output will be piped to that url. Can 
> be used to pipe docs to another cluster)






[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 173 - Failure

2019-08-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/173/

No tests ran.

Build Log:
[...truncated 24989 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2590 links (2119 relative) to 3408 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}

additional options are
 * format : jsonl or javabin
 * out : export file name
 * (if this starts with "http://" the output will be piped to that url. Can be 
used to pipe docs to another cluster)

  was:
example
{code:java}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}

additional options are
 * format : jsonl or javabin
 * out : export file name
 * (if this starts with "http://" the output will be piped to that url. Can be 
used to pipe docs to another cluster). Or 


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
>  * format : jsonl or javabin
>  * out : export file name
>  * (if this starts with "http://" the output will be piped to that url. Can 
> be used to pipe docs to another cluster)






[jira] [Updated] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13682:
--
Description: 
example
{code:java}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}

additional options are
 * format : jsonl or javabin
 * out : export file name
 * (if this starts with "http://" the output will be piped to that url. Can be 
used to pipe docs to another cluster). Or 

  was:
example 
{code}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}

additional options are
* format : jsonl or javabin 
* file :  export file name (if this starts with "http://" the output will be 
piped to that url. Can be used to pipe docs to another cluster)


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code:java}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
>  * format : jsonl or javabin
>  * out : export file name
>  * (if this starts with "http://" the output will be piped to that url. Can 
> be used to pipe docs to another cluster). Or 






[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902778#comment-16902778
 ] 

Noble Paul commented on SOLR-13682:
---

bq. Perhaps optimize for the normal case of exporting a collection in the local 
cluster,

This covers the most common use case; the last part of the URL is the collection name.
 bq. Also, consider making the default format jsonl 

OK
bq. and default output stdout 

That would be a bad experience; we are going to emit a few megabytes of data. We 
can have an extra option to do so.



> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example 
> {code}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
> * format : jsonl or javabin 
> * file :  export file name (if this starts with "http://" the output will be 
> piped to that url. Can be used to pipe docs to another cluster)






[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902777#comment-16902777
 ] 

ASF subversion and git services commented on SOLR-13682:


Commit 696f83df963d296d2cf2b639d2d5d1c5c317edd0 in lucene-solr's branch 
refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=696f83d ]

SOLR-13682: command line option to export data to a file


> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example 
> {code}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
> * format : jsonl or javabin 
> * file :  export file name (if this starts with "http://" the output will be 
> piped to that url. Can be used to pipe docs to another cluster)






Re: Merge multiple sorted indices

2019-08-08 Thread Aravind S (User Intent)
Yes, we do a forceMerge(1). In the thread stack we see that only 4 Lucene
merge threads are spawned, in spite of disabling IO throttling on the merge
scheduler. We want to decrease the time it takes to merge n indices in
general.
The indices are added to the IndexWriter using MMapDirectory.
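
For reference, a minimal sketch of this kind of offline addIndexes plus
forceMerge(1) job using plain Lucene 8.x APIs; the sort field, paths, buffer
size, and thread counts are placeholders and must match the setup the source
indices were built with:

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;

public class OfflineSortedMerge {
  public static void main(String[] args) throws Exception {
    // Must be the same sort the source indices were written with.
    Sort indexSort = new Sort(new SortField("timestamp", SortField.Type.LONG));

    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    cms.setMaxMergesAndThreads(8, 8); // allow more concurrent merges than the default
    cms.disableAutoIOThrottle();      // offline job: let merges use full IO bandwidth

    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
        .setIndexSort(indexSort)
        .setMergeScheduler(cms)
        .setRAMBufferSizeMB(512);

    try (Directory target = new MMapDirectory(Paths.get("/data/merged-index"));
         IndexWriter writer = new IndexWriter(target, iwc)) {

      // Add the pre-built sorted indices; addIndexes copies segments without re-indexing.
      Directory[] sources = new Directory[25];
      for (int i = 0; i < sources.length; i++) {
        sources[i] = new MMapDirectory(Paths.get("/data/source-index-" + i));
      }
      writer.addIndexes(sources);

      // Collapse everything into a single sorted segment; note that the final
      // merge down to one segment runs on a single thread.
      writer.forceMerge(1);
      writer.commit();

      for (Directory d : sources) {
        d.close();
      }
    }
  }
}
{code}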

On Thu, Aug 8, 2019 at 1:09 PM Atri Sharma  wrote:

> Did you try force merging, given that your indices are static?
>
> On Thu, Aug 8, 2019 at 12:22 PM Aravind S (User Intent)
>  wrote:
> >
> > Hi Atri,
> >
> > Thanks for the prompt reply.
> >
> > There are no deletions or updates; we are trying to merge offline-generated
> sorted indices into a single sorted-segment index. This entire process is
> completely offline and the index does no online serving, so it takes no live
> updates such as additions or deletions.
> >
> > Each index is close to 1 GB in size. We are trying to merge 25 such
> indices into a single index using addIndexes on Lucene's IndexWriter with
> ConcurrentMergeScheduler.
> >
> > Is there any recommendation on how to go about merging multiple sorted
> indices into a single index more efficiently?
> >
> > Regards,
> > Aravind S
> >
> > On Thu, Aug 8, 2019, 11:57 AM Atri Sharma  wrote:
> >>
> >> Have you tried a more frequent merging? What are the average sizes of
> >> your segments, and what does your deletion/updates rate look like?
> >>
> >> On Thu, Aug 8, 2019 at 1:39 AM Aravind S (User Intent)
> >>  wrote:
> >> >
> >> > Hi,
> >> >
> >> > We are currently trying to merge sorted indices in an offline
> process. This process is taking a lot of time in merging. We tried using
> ConcurrentMergeScheduler with tiered merge policy.
> >> >
> >> > We see that the maximum number of merge threads is set to 4 in
> ConcurrentMergeScheduler's setDefaultMaxMergesAndThreads method.
> >> >
> >> > Is there a way to decrease the time spent merging these sorted indices?
> Is there any recommendation that could be followed to scale merging with an
> increase in the number of indices to be merged?
> >> >
> >> > Regards,
> >> > Aravind S
> >> >
> >> >
> >>
> >> --
> >> Regards,
> >>
> >> Atri
> >> Apache Concerted
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
> >

[jira] [Comment Edited] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902765#comment-16902765
 ] 

Jan Høydahl edited comment on SOLR-13682 at 8/8/19 7:48 AM:


Perhaps optimize for the normal case of exporting a collection in the local 
cluster, i.e. we could have a {{\-c gettingstarted}} as an alternative to 
{{\--url}}. Also, consider making the default format {{jsonl}} and default 
output stdout (or at least a {{\--stdout}} option), which is what a Unix tool 
would likely look and feel like and encourage e.g.
{noformat}
bin/solr export -c gettingstarted | head -5 | jq -cs '.'
bin/solr export -c gettingstarted | gz > gettingstarted.jsonl.gz
{noformat}


was (Author: janhoy):
Perhaps optimize for the normal case of exporting a collection in the local 
cluster, i.e. we could have a {{-c gettingstarted}} as an alternative to 
{{--url}}. Also, consider making the default format {{jsonl}} and default 
output stdout (or at least a {{--stdout}} option), which is what a Unix tool 
would likely look and feel like and encourage e.g.
{noformat}
bin/solr export -c gettingstarted | head -5 | jq -cs '.'
bin/solr export -c gettingstarted | gz > gettingstarted.jsonl.gz
{noformat}

> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example 
> {code}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
> * format : jsonl or javabin 
> * file :  export file name (if this starts with "http://" the output will be 
> piped to that url. Can be used to pipe docs to another cluster)






[jira] [Commented] (SOLR-13683) SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers

2019-08-08 Thread Niranjan Nanda (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902766#comment-16902766
 ] 

Niranjan Nanda commented on SOLR-13683:
---

What is the purpose of this method? 
{{Http2SolrClient.Builder().withHttpClient(Http2SolrClient httpClient)}}? 
Ideally this should allow setting Jetty's {{HttpClient}} object instead of an 
instance of its own type.

> SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers
> -
>
> Key: SOLR-13683
> URL: https://issues.apache.org/jira/browse/SOLR-13683
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.1.1
>Reporter: Niranjan Nanda
>Priority: Minor
>
> Currently {{Http2SolrClient}} does not allow configuring custom headers. For 
> example, how to pass Basic Auth headers? It should expose some builder APIs 
> to pass such headers.






[jira] [Commented] (SOLR-13682) command line option to export data to a file

2019-08-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902765#comment-16902765
 ] 

Jan Høydahl commented on SOLR-13682:


Perhaps optimize for the normal case of exporting a collection in the local 
cluster, i.e. we could have a {{-c gettingstarted}} as an alternative to 
{{--url}}. Also, consider making the default format {{jsonl}} and default 
output stdout (or at least a {{--stdout}} option), which is what a Unix tool 
would likely look and feel like and encourage e.g.
{noformat}
bin/solr export -c gettingstarted | head -5 | jq -cs '.'
bin/solr export -c gettingstarted | gz > gettingstarted.jsonl.gz
{noformat}

> command line option to export data to a file
> 
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> example 
> {code}
> bin/solr export --url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into 
> a file called {{gettingstarted.javabin}}
> additional options are
> * format : jsonl or javabin 
> * file :  export file name (if this starts with "http://" the output will be 
> piped to that url. Can be used to pipe docs to another cluster)






[jira] [Created] (SOLR-13683) SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP headers

2019-08-08 Thread Niranjan Nanda (JIRA)
Niranjan Nanda created SOLR-13683:
-

 Summary: SolrJ 8.1.1 Http2SolrClient should allow customizing HTTP 
headers
 Key: SOLR-13683
 URL: https://issues.apache.org/jira/browse/SOLR-13683
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 8.1.1
Reporter: Niranjan Nanda


Currently {{Http2SolrClient}} does not allow configuring custom headers. For 
example, how to pass Basic Auth headers? It should expose some builder APIs to 
pass such headers.
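
For reference, a minimal SolrJ sketch of the usual per-request workaround,
assuming SolrRequest#setBasicAuthCredentials is honored by Http2SolrClient; the
URL, credentials, and collection name are placeholders, and this does not add
the general header support requested here:

{code:java}
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class BasicAuthPerRequest {
  public static void main(String[] args) throws Exception {
    try (Http2SolrClient client =
             new Http2SolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("q", "*:*");

      QueryRequest request = new QueryRequest(params);
      // Per-request Basic Auth credentials, set on the request rather than the client.
      request.setBasicAuthCredentials("solr-user", "solr-password");

      QueryResponse response = request.process(client, "gettingstarted");
      System.out.println("numFound=" + response.getResults().getNumFound());
    }
  }
}
{code}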





