[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733898#comment-16733898
 ] 

Shalin Shekhar Mangar commented on SOLR-11126:
--

I found a reproducible failure for the test:
{code}
ant test  -Dtestcase=HealthCheckHandlerTest 
-Dtests.method=testHealthCheckHandlerSolrJ -Dtests.seed=48599B8D10B62191 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-AU 
-Dtests.timezone=Europe/Mariehamn -Dtests.asserts=true 
-Dtests.file.encoding=ANSI_X3.4-1968
{code}

[~sarkaramr...@gmail.com] -- can you please take a look? Also, beast the test a 
few times to ensure it is not flaky.
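
For reference, the SolrJ path that testHealthCheckHandlerSolrJ exercises looks roughly 
like the sketch below. This is only a sketch under assumptions: it uses the 
HealthCheckRequest request class from this patch, a plain HttpSolrClient pointed at a 
single node's base URL (the URL shown is made up), and a "status" field in the response.
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.HealthCheckRequest;
import org.apache.solr.common.util.NamedList;

public class HealthCheckSketch {
  public static void main(String[] args) throws Exception {
    // The health check is a node-level API, so the client targets a node's base URL,
    // not a collection. The URL below is an assumption for illustration.
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Hits /admin/health (v1) on that node; the handler reports whether the node
      // is up and connected to ZooKeeper.
      NamedList<Object> response = client.request(new HealthCheckRequest());
      System.out.println("Node health: " + response.get("status"));
    }
  }
}
{code}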

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733887#comment-16733887
 ] 

Shalin Shekhar Mangar commented on SOLR-11126:
--

Thanks, Amrit. The changes look good to me. I'll commit after running precommit 
and tests.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-11126:


Assignee: Shalin Shekhar Mangar  (was: Anshum Gupta)

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13099) Support a new type of unit 'WEEK ' for DateMathParser

2019-01-03 Thread Haochao Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733862#comment-16733862
 ] 

Haochao Zhuang commented on SOLR-13099:
---

It's a great idea. Thanks for your suggestion.

I'll make the default first day of the week Monday. I will describe my change and 
update the documentation.
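
For illustration, here is a minimal java.time sketch of the Monday-based rounding I 
have in mind. The "NOW/WEEK" syntax is what this issue proposes, not existing 
DateMathParser behavior, and the class below is just a throwaway example.
{code}
import java.time.DayOfWeek;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.TemporalAdjusters;

public class WeekRoundingSketch {
  public static void main(String[] args) {
    ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);
    // Round down to midnight, then back to the most recent Monday.
    // This is the value "NOW/WEEK" would produce under the proposal.
    ZonedDateTime startOfWeek = now
        .truncatedTo(ChronoUnit.DAYS)
        .with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
    System.out.println("NOW      = " + now);
    System.out.println("NOW/WEEK = " + startOfWeek);
  }
}
{code}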

 

> Support a new type of unit 'WEEK ' for DateMathParser
> -
>
> Key: SOLR-13099
> URL: https://issues.apache.org/jira/browse/SOLR-13099
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Haochao Zhuang
>Priority: Major
> Attachments: SOLR-13099.patch
>
>
> For convenience purposes, I think a WEEK unit is necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Quick Questions on Merging

2019-01-03 Thread John Wilson
Excellent. Thanks!

On Thu, Jan 3, 2019 at 2:33 PM Erick Erickson 
wrote:

> 1> A segment is a miniature index that holds part of the total logical
> index, each segment is complete in and of itself.
> All the files with the same prefix comprise a single segment. I.e.
> _0.fdt, _0.fdx, _0.tim... all make up a segment. See:
>
> https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/codecs/lucene70/package-summary.html
> .
> Each extension holds different information about that segment.
>
> 2> No. The segments_N file contains a list of the current segments as
> of some commit point. In the absence of active indexing, segments_N
> will contain all the segments in the index directory. There's a lot of
> nuance here that I'm skipping about how segments come and go based on
> background merging and the like, how an "index searcher" only "sees"
> certain segments until a new searcher is opened and the like, but
> that's kind of extraneous at this point.
>
> 3> Yes, kind of. Don't think of it as "files" though, think of it as
> "segments". IOW, if segments 0, 1, 2, 3 are being merged into segment
> 4, then _0.fdt, _1.fdt, _2.fdt and _3.fdt will be merged into _4.fdt
> and so on for all the different extensions. Once all the merging is
> done and a new searcher is opened, _0.*, _1.*, _2.* and _3.* will be
> deleted.
>
> 4> Pretty much. Again, think of it as segments rather than files
> though. Here's Mike McCandless' excellent blog on the topic:
>
> http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
> .
> TieredMergePolicy (TMP) is the default (third graphic down IIRC).
> Basically, your maxMergeAtOnce being set to 10 means that 10 roughly
> same-sized segments will be merged into a new segment. The idea here
> is that let's say maxMergeAtOnce is 3 ('cause it's easier to enumerate
> than 10). Let's further say you have 3 segments, of sizes (in M) 1, 1,
> 100. It'd be extremely wasteful to rewrite that 100M segment into a
> new segment just to add 2 more M, so TMP waits until there are three
> smaller segments 1, 1, 1, 100 and merges the three similar sized
> segments into one so you wind up with two segments of sizes 3 and 100.
> When there are 3 3M segments, they're merged into a 9M segment and so
> on. Incidentally, the default max segment  size is 5G so at some point
> you'll have segments that won't be merged unless they have a lot of
> deleted docs.
>
> I'm skipping a _lot_ here about how "like sized" segments are chosen.
>
> All that said, by and large you should simply ignore this unless
> you're trying to troubleshoot some kind of performance issue...
>
> Best,
> Erick
>
> On Thu, Jan 3, 2019 at 1:58 PM John Wilson 
> wrote:
> >
> > Hi,
> >
> > I'm watching my index directory while indexing million documents. While
> my indexer runs, I see a number of files with extensions like tip, doc,
> tim, fdx, fdt, etc being created. The total number of these files goes up
> and down during the run -- from as high as 1500 in the middle of the run to
> 290 when the indexer completes. Finally, I see that an additional file
> segments_1 being created.
> >
> > My questions:
> >
> > What exactly is a segment?
> > In my case, does it mean that I just have 1 segment since I have just
> one segments_1 file? Or,
> > Is it the case that files of the same type (extension) get merged
> together into bigger files? For example, many fdt files being merged into
> one or bigger fdt files?
> > maxMergeAtOnce specifies the # of many segments at once to merge. In my
> case, what does this mean? If I set it to 10, for example, does it mean
> that once the # of files for a specific file type (e.g. fdt) reaches 10, it
> is combined into a single fdt file?
> >
> > Thanks in advance!
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: PreCommit-LUCENE-Build

2019-01-03 Thread Michael McCandless
Hi Murali,

What do you mean by "Solr builds usually holds them"?  The "ant precommit"
top-level target tests both Lucene and Solr.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Dec 17, 2018 at 11:24 PM Murali Krishna 
wrote:

> Hi,
> https://builds.apache.org/job/PreCommit-LUCENE-Build/ seems to be always
> waiting for executor. Looks like there are only 2 hosts marked for Lucene,
> and Solr builds usually holds them. Who maintains this and can we add more
> hosts?
>
> Thanks,
> Murali
>


[jira] [Commented] (LUCENE-8601) Adding attributes to IndexFieldType

2019-01-03 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733641#comment-16733641
 ] 

Michael McCandless commented on LUCENE-8601:


Hi [~muralikpbhat], I pushed the change to master, thanks!

But the {{git cherry-pick}} back to 7.x was not clean – could you fix up the 
patch to apply to 7.x as well?  Also, the test case uses a FieldInfos API that 
was never back-ported to 7.x ({{getMergedFieldInfos}}).

Also, staring at the code shortly after I pushed, I noticed that the field 
type's attributes will be saved into FieldInfo the first time that field is 
seen for a given segment, but on subsequent occurrences it looks like we will 
fail to copy the attributes again. Can you also add a test case exposing this 
bug, and then fix it?  We can do that in a follow-on issue ... thanks!
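
To make the intended round-trip concrete, a rough sketch of what such a test could 
check is below. It is only a sketch under assumptions: {{FieldType.putAttribute}} is 
assumed from this patch, the index is assumed to end up with a single segment so one 
leaf is enough, and the attribute key/value are made up.
{code}
import java.nio.file.Files;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class FieldTypeAttributeSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Files.createTempDirectory("attr-sketch"));
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      FieldType type = new FieldType(TextField.TYPE_STORED);
      type.putAttribute("my-codec-hint", "special"); // attribute on the field type (added by this patch)

      Document doc = new Document();
      doc.add(new Field("body", "hello world", type));
      writer.addDocument(doc);
      writer.commit();

      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        FieldInfo info = reader.leaves().get(0).reader().getFieldInfos().fieldInfo("body");
        // Expected to survive the round-trip into the segment's FieldInfo.
        System.out.println("attribute = " + info.getAttribute("my-codec-hint"));
      }
    }
  }
}
{code}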

> Adding attributes to IndexFieldType
> ---
>
> Key: LUCENE-8601
> URL: https://issues.apache.org/jira/browse/LUCENE-8601
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.5
>Reporter: Murali Krishna P
>Priority: Major
> Attachments: LUCENE-8601.01.patch, LUCENE-8601.02.patch, 
> LUCENE-8601.03.patch, LUCENE-8601.04.patch, LUCENE-8601.05.patch, 
> LUCENE-8601.06.patch, LUCENE-8601.patch
>
>
> Today, we can write a custom Field using custom IndexFieldType, but when the 
> DefaultIndexingChain converts [IndexFieldType to 
> FieldInfo|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/DefaultIndexingChain.java#L662],
>  only a few key pieces of information, such as indexing options and doc value type, are 
> retained. The [Codec gets the 
> FieldInfo|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/codecs/DocValuesConsumer.java#L90],
>  but not the type details.
>   
>  FieldInfo has support for ['attributes'| 
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java#L47]
>  and it would be great if we could add 'attributes' to IndexFieldType as well and 
> copy them to FieldInfo's 'attributes'.
>   
>  This would allow someone to write a custom codec (extending docvalueformat 
> for example) for only the 'special field' that they want and delegate the rest 
> of the fields to the default codec.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8601) Adding attributes to IndexFieldType

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733635#comment-16733635
 ] 

ASF subversion and git services commented on LUCENE-8601:
-

Commit 63dfba4c7d81a019a4008777beace0d391987ceb in lucene-solr's branch 
refs/heads/master from Michael McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=63dfba4 ]

LUCENE-8601: attributes added to IndexableFieldType during indexing will now be 
preserved in the index and accessible at search time via FieldInfo attributes


> Adding attributes to IndexFieldType
> ---
>
> Key: LUCENE-8601
> URL: https://issues.apache.org/jira/browse/LUCENE-8601
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.5
>Reporter: Murali Krishna P
>Priority: Major
> Attachments: LUCENE-8601.01.patch, LUCENE-8601.02.patch, 
> LUCENE-8601.03.patch, LUCENE-8601.04.patch, LUCENE-8601.05.patch, 
> LUCENE-8601.06.patch, LUCENE-8601.patch
>
>
> Today, we can write a custom Field using custom IndexFieldType, but when the 
> DefaultIndexingChain converts [IndexFieldType to 
> FieldInfo|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/DefaultIndexingChain.java#L662],
>  only a few key pieces of information, such as indexing options and doc value type, are 
> retained. The [Codec gets the 
> FieldInfo|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/codecs/DocValuesConsumer.java#L90],
>  but not the type details.
>   
>  FieldInfo has support for ['attributes'| 
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java#L47]
>  and it would be great if we could add 'attributes' to IndexFieldType as well and 
> copy them to FieldInfo's 'attributes'.
>   
>  This would allow someone to write a custom codec (extending docvalueformat 
> for example) for only the 'special field' that they want and delegate the rest 
> of the fields to the default codec.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 2616 - Unstable

2019-01-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2616/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/252/consoleText

[repro] Revision: 63a6c250d7c0acb45f31dcc420595a0d25c3af65

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=4CDDFACD00E95AAE 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=nl-NL -Dtests.timezone=America/Argentina/ComodRivadavia 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
ec43d100d1dd429829758a4f672a37536e447ed0
[repro] git fetch
[repro] git checkout 63a6c250d7c0acb45f31dcc420595a0d25c3af65

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro] ant compile-test

[...truncated 3592 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestSimTriggerIntegration" -Dtests.showOutput=onerror  
-Dtests.seed=4CDDFACD00E95AAE -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=nl-NL 
-Dtests.timezone=America/Argentina/ComodRivadavia -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 5746 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout ec43d100d1dd429829758a4f672a37536e447ed0

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733618#comment-16733618
 ] 

Lucene/Solr QA commented on SOLR-11126:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  4s{color} 
| {color:red} core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
42s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.handler.TestReplicationHandler |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11126 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953658/SOLR-11126.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  validaterefguide  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ec43d10 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/259/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/259/testReport/ |
| modules | C: solr/core solr/solrj solr/solr-ref-guide U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/259/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733613#comment-16733613
 ] 

Jan Høydahl commented on SOLR-7896:
---

I suppose the RefGuide text could be clarified from
{quote}When authentication is required the Admin UI will presented you with a 
login dialogue.
{quote}
to something like:

"The Admin UI will allow anonymous use for any page or action not requiring 
login, however, when authentication is required, the Admin UI will presented 
you with a login dialogue."

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, Authentication, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Fix For: master (8.0), 7.7
>
> Attachments: dispatchfilter-code.png, login-page.png, 
> login-screen-2.png, logout.png, unknown_scheme.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733608#comment-16733608
 ] 

Jan Høydahl commented on SOLR-7896:
---

{quote}since it seems from reading the docs that if I use any other auth other 
than Basic (such as Kerberos) I can then no longer ever access the UI at all 
after this change, is that true?
{quote}
Not exactly; the UI will start as normal and allow any action that is 
permitted without authentication. If the user opens a page or attempts an 
action that requires authentication, then the login screen is presented with a 
message from whatever Auth plugin is active. I guess this will look like a dead 
end, as the only menu option will be "Login" at this point. But opening a new 
browser tab will bring back the full UI. Ideally the UI should be security 
aware and hide or grey out options that are not available without login.

The situation before was a bunch of errors in the UI and possibly a totally 
defunct user experience. At least now you will be told that the UI does not 
work with the chosen Auth.

I opened SOLR-13116 to add login support for Kerberos.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, Authentication, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Fix For: master (8.0), 7.7
>
> Attachments: dispatchfilter-code.png, login-page.png, 
> login-screen-2.png, logout.png, unknown_scheme.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13116) Add Admin UI login support for Kerberos

2019-01-03 Thread JIRA
Jan Høydahl created SOLR-13116:
--

 Summary: Add Admin UI login support for Kerberos
 Key: SOLR-13116
 URL: https://issues.apache.org/jira/browse/SOLR-13116
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: master (8.0), 7.7
Reporter: Jan Høydahl


Spinoff from SOLR-7896. Kerberos auth plugin should get Admin UI Login support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13114) CVE-2018-8009 Threat Level 7 Against Solr v7.6. org.apache.hadoop : hadoop-common : 2.7.4. Apache Hadoop 3.1.0, 3.0.0-alpha to 3.0.2, 2.9.0 to 2.9.1, 2.8.0 to 2.8.4, 2.

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13114:
-

 Summary: CVE-2018-8009  Threat Level 7 Against Solr v7.6.  
org.apache.hadoop : hadoop-common : 2.7.4. Apache Hadoop 3.1.0, 3.0.0-alpha to 
3.0.2, 2.9.0 to 2.9.1, 2.8.0 to 2.8.4, 2.0.0-alpha to 2.7.6, 0.23.0 to 0.23.11 
is exploitable via the zip slip vulnerability
 Key: SOLR-13114
 URL: https://issues.apache.org/jira/browse/SOLR-13114
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.  May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 7 Against Solr v7.6.  org.apache.hadoop : hadoop-common : 2.7.4
Apache Hadoop 3.1.0, 3.0.0-alpha to 3.0.2, 2.9.0 to 2.9.1, 2.8.0 to 2.8.4, 
2.0.0-alpha to 2.7.6, 0.23.0 to 0.23.11 is exploitable via the zip slip 
vulnerability in places that accept a zip file. 

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8009



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13115) CVE-2012-0881(CVE-2013-4002) Threat Level 7 Against Solr v7.6. xerces : xercesImpl : 2.9.1. Apache Xerces2 Java Parser before 2.12.0 allows remote attackers to cause a

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13115:
-

 Summary: CVE-2012-0881(CVE-2013-4002)  Threat Level 7 Against Solr 
v7.6.  xerces : xercesImpl : 2.9.1. Apache Xerces2 Java Parser before 2.12.0 
allows remote attackers to cause a denial of service (CPU consumption) via a 
crafted message to an XML service...
 Key: SOLR-13115
 URL: https://issues.apache.org/jira/browse/SOLR-13115
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.  May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 7 Against Solr v7.6.  xerces : xercesImpl : 2.9.1

Two Issues arising due to Apache Xerces2 Java Parser before 2.12.0.
h2. CVE-2012-0881


Apache Xerces2 Java Parser before 2.12.0 allows remote attackers to cause a 
denial of service (CPU consumption) via a crafted message to an XML service, 
which triggers hash table collisions.


http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0881
h2. CVE-2013-4002

XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the 
Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 
SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 
and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit 
R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and 
earlier, and possibly other products allows remote attackers to cause a denial 
of service via vectors related to XML attribute names. 



http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13109) CVE-2015-1832 Threat Level 9 Against Solr v7.6. org.apache.derby : derby : 10.9.1.0. XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby befor

2019-01-03 Thread RobertHathaway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RobertHathaway updated SOLR-13109:
--
Summary: CVE-2015-1832 Threat Level 9 Against Solr v7.6.  org.apache.derby 
: derby : 10.9.1.0. XML external entity (XXE) vulnerability in the SqlXmlUtil 
code in Apache Derby before 10.12.1.1, w/o Java Security Manager, ...attackers 
to read arbitrary files or DOS  (was: CVE-2015-1832 Against Solr v7.6)

> CVE-2015-1832 Threat Level 9 Against Solr v7.6.  org.apache.derby : derby : 
> 10.9.1.0. XML external entity (XXE) vulnerability in the SqlXmlUtil code in 
> Apache Derby before 10.12.1.1, w/o Java Security Manager, ...attackers to 
> read arbitrary files or DOS
> -
>
> Key: SOLR-13109
> URL: https://issues.apache.org/jira/browse/SOLR-13109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Priority: Blocker
>
> Threat Level 9/Critical from Sonatype Application Composition Report run Of 
> Solr - 7.6.0, Using Scanner 1.56.0-01.  Enterprise security won't allow us to 
> move past Solr 6.5 unless this is fixed or somehow remediated. Lots of issues 
> in Solr 7.1 also, may be best to move to latest Solr.
> h2. CVE-2015-1832 Detail
> h3. Current Description
> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache 
> Derby before 10.12.1.1, when a Java Security Manager is not in place, allows 
> context-dependent attackers to read arbitrary files or cause a denial of 
> service (resource consumption) via vectors involving XmlVTI and the XML 
> datatype.
> h3. Impact
> *CVSS v3.0 Severity and Metrics:*
>  *Base Score:* [ 9.1 CRITICAL 
> |https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2015-1832=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H]
>  
>  *Vector:* AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H ([V3 
> legend|https://www.first.org/cvss/specification-document]) 
>  *Impact Score:* 5.2 
>  *Exploitability Score:* 3.9
> [https://nvd.nist.gov/vuln/detail/CVE-2015-1832]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13109) CVE-2015-1832 Against Solr v7.6

2019-01-03 Thread RobertHathaway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RobertHathaway updated SOLR-13109:
--
Description: 
Threat Level 9/Critical from Sonatype Application Composition Report run Of 
Solr - 7.6.0, Using Scanner 1.56.0-01.  Enterprise security won't allow us to 
move past Solr 6.5 unless this is fixed or somehow remediated. Lots of issues 
in Solr 7.1 also, may be best to move to latest Solr.
h2. CVE-2015-1832 Detail
h3. Current Description

XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby 
before 10.12.1.1, when a Java Security Manager is not in place, allows 
context-dependent attackers to read arbitrary files or cause a denial of 
service (resource consumption) via vectors involving XmlVTI and the XML 
datatype.
h3. Impact

*CVSS v3.0 Severity and Metrics:*
 *Base Score:* [ 9.1 CRITICAL 
|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2015-1832=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H]
 
 *Vector:* AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H ([V3 
legend|https://www.first.org/cvss/specification-document]) 
 *Impact Score:* 5.2 
 *Exploitability Score:* 3.9

[https://nvd.nist.gov/vuln/detail/CVE-2015-1832]

  was:
Threat Level 9/Critical from Sonatype Applicatiuon Composition Report run Of 
Solr - 7.6.0, Using Scanner 1.56.0-01.  Enterprise security won't allow us to 
move past Solr 6.5 unless this is fixed or somehow remediated. Lots of issues 
in Solr 7.1 also, may be best to move to latest Solr.
h2. CVE-2015-1832 Detail
h3. Current Description

XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby 
before 10.12.1.1, when a Java Security Manager is not in place, allows 
context-dependent attackers to read arbitrary files or cause a denial of 
service (resource consumption) via vectors involving XmlVTI and the XML 
datatype.
h3. Impact
*CVSS v3.0 Severity and Metrics:*
*Base Score:*  [ 9.1 CRITICAL 
|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2015-1832=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H]
 
 *Vector:*   AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H ([V3 
legend|https://www.first.org/cvss/specification-document])  
 *Impact Score:*   5.2  
 *Exploitability Score:*   3.9 

https://nvd.nist.gov/vuln/detail/CVE-2015-1832


> CVE-2015-1832 Against Solr v7.6
> ---
>
> Key: SOLR-13109
> URL: https://issues.apache.org/jira/browse/SOLR-13109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Priority: Blocker
>
> Threat Level 9/Critical from Sonatype Application Composition Report run Of 
> Solr - 7.6.0, Using Scanner 1.56.0-01.  Enterprise security won't allow us to 
> move past Solr 6.5 unless this is fixed or somehow remediated. Lots of issues 
> in Solr 7.1 also, may be best to move to latest Solr.
> h2. CVE-2015-1832 Detail
> h3. Current Description
> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache 
> Derby before 10.12.1.1, when a Java Security Manager is not in place, allows 
> context-dependent attackers to read arbitrary files or cause a denial of 
> service (resource consumption) via vectors involving XmlVTI and the XML 
> datatype.
> h3. Impact
> *CVSS v3.0 Severity and Metrics:*
>  *Base Score:* [ 9.1 CRITICAL 
> |https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2015-1832=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H]
>  
>  *Vector:* AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H ([V3 
> legend|https://www.first.org/cvss/specification-document]) 
>  *Impact Score:* 5.2 
>  *Exploitability Score:* 3.9
> [https://nvd.nist.gov/vuln/detail/CVE-2015-1832]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Quick Questions on Merging

2019-01-03 Thread Erick Erickson
1> A segment is a miniature index that holds part of the total logical
index, each segment is complete in and of itself.
All the files with the same prefix comprise a single segment. I.e.
_0.fdt, _0.fdx, _0.tim... all make up a segment. See:
https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/codecs/lucene70/package-summary.html.
Each extension holds different information about that segment.

2> No. The segments_N file contains a list of the current segments as
of some commit point. In the absence of active indexing, segments_N
will contain all the segments in the index directory. There's a lot of
nuance here that I'm skipping about how segments come and go based on
background merging and the like, how an "index searcher" only "sees"
certain segments until a new searcher is opened and the like, but
that's kind of extraneous at this point.

3> Yes, kind of. Don't think of it as "files" though, think of it as
"segments". IOW, if segments 0, 1, 2, 3 are being merged into segment
4, then _0.fdt, _1.fdt, _2.fdt and _3.fdt will be merged into _4.fdt
and so on for all the different extensions. Once all the merging is
done and a new searcher is opened, _0.*, _1.*, _2.* and _3.* will be
deleted.

4> Pretty much. Again, think of it as segments rather than files
though. Here's Mike McCandless' excellent blog on the topic:
http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html.
TieredMergePolicy (TMP) is the default (third graphic down IIRC).
Basically, your maxMergeAtOnce being set to 10 means that 10 roughly
same-sized segments will be merged into a new segment. The idea here
is that let's say maxMergeAtOnce is 3 ('cause it's easier to enumerate
than 10). Let's further say you have 3 segments, of sizes (in M) 1, 1,
100. It'd be extremely wasteful to rewrite that 100M segment into a
new segment just to add 2 more M, so TMP waits until there are three
smaller segments 1, 1, 1, 100 and merges the three similar sized
segments into one so you wind up with two segments of sizes 3 and 100.
When there are 3 3M segments, they're merged into a 9M segment and so
on. Incidentally, the default max segment  size is 5G so at some point
you'll have segments that won't be merged unless they have a lot of
deleted docs.
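
If it helps to make those knobs concrete, here's a minimal Lucene sketch (in
Solr you'd set the same parameters via mergePolicyFactory in solrconfig.xml);
the values shown are just the defaults, and the class is throwaway:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;

public class MergePolicySketch {
  public static void main(String[] args) {
    TieredMergePolicy tmp = new TieredMergePolicy();
    tmp.setMaxMergeAtOnce(10);           // merge up to 10 similarly sized segments at a time
    tmp.setSegmentsPerTier(10.0);        // allow ~10 segments per tier before a merge kicks in
    tmp.setMaxMergedSegmentMB(5 * 1024); // ~5G cap on merged segments (the default)
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
        .setMergePolicy(tmp);
    System.out.println(iwc.getMergePolicy());
  }
}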

I'm skipping a _lot_ here about how "like sized" segments are chosen.

All that said, by and large you should simply ignore this unless
you're trying to troubleshoot some kind of performance issue...

Best,
Erick

On Thu, Jan 3, 2019 at 1:58 PM John Wilson  wrote:
>
> Hi,
>
> I'm watching my index directory while indexing million documents. While my 
> indexer runs, I see a number of files with extensions like tip, doc, tim, 
> fdx, fdt, etc being created. The total number of these files goes up and down 
> during the run -- from as high as 1500 in the middle of the run to 290 when 
> the indexer completes. Finally, I see that an additional file segments_1 
> being created.
>
> My questions:
>
> What exactly is a segment?
> In my case, does it mean that I just have 1 segment since I have just one 
> segments_1 file? Or,
> Is it the case that files of the same type (extension) get merged together 
> into bigger files? For example, many fdt files being merged into one or 
> bigger fdt files?
> maxMergeAtOnce specifies the # of many segments at once to merge. In my case, 
> what does this mean? If I set it to 10, for example, does it mean that once 
> the # of files for a specific file type (e.g. fdt) reaches 10, it is combined 
> into a single fdt file?
>
> Thanks in advance!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13112) CVE-2018-14718(-14719),sonatype-2017-0312, CVE-2018-14720(-14721) Threat Level 8 Against Solr v7.6. com.fasterxml.jackson.core : jackson-databind : 2.9.6. FasterXML jac

2019-01-03 Thread RobertHathaway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RobertHathaway updated SOLR-13112:
--
Summary: CVE-2018-14718(-14719),sonatype-2017-0312, CVE-2018-14720(-14721)  
Threat Level 8 Against Solr v7.6.  com.fasterxml.jackson.core : 
jackson-databind : 2.9.6. FasterXML jackson-databind 2.x before 2.9.7 Remote 
Hackers...  (was: CVE-2018-14718  Threat Level 8 Against Solr v7.6.  
com.fasterxml.jackson.core : jackson-databind : 2.9.6. FasterXML 
jackson-databind 2.x before 2.9.7 might allow remote attackers to execute 
arbitrary code by...)

> CVE-2018-14718(-14719),sonatype-2017-0312, CVE-2018-14720(-14721)  Threat 
> Level 8 Against Solr v7.6.  com.fasterxml.jackson.core : jackson-databind : 
> 2.9.6. FasterXML jackson-databind 2.x before 2.9.7 Remote Hackers...
> --
>
> Key: SOLR-13112
> URL: https://issues.apache.org/jira/browse/SOLR-13112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Priority: Blocker
>
> We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
> Solr - 7.6.0 Build,
> Using Scanner 1.56.0-01
> Threat Level 8   Against Solr v7.6.  com.fasterxml.jackson.core : 
> jackson-databind : 2.9.6
> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
> execute arbitrary code by leveraging failure to block the slf4j-ext class 
> from polymorphic deserialization.
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Fwd: Solr 7.2.1 Stream API throws null pointer execption when used with collapse filter query

2019-01-03 Thread gopikannan
Hi,
   I am getting a null pointer exception when a streaming search is done with a
collapse filter query. When debugging, the last element in the FixedBitSet array
is null. Please let me know if I should raise an issue.

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/export/ExportWriter.java#L232


http://localhost:8983/stream/?expr=search(coll_a ,sort="field_a
asc",fl="field_a,field_b,field_c,field_d",qt="/export",q="*:*",fq="(filed_b:x)",fq="{!collapse
field=field_c sort='field_d desc'}")

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:61)
at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
at
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at
org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
at
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)


[jira] [Created] (SOLR-13113) CVE-2018-1000632 Threat Level 7 Against Solr v7.6. dom4j : dom4j : 1.6.1. dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class:

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13113:
-

 Summary: CVE-2018-1000632  Threat Level 7 Against Solr v7.6.  
dom4j : dom4j : 1.6.1. dom4j version prior to version 2.1.1 contains a CWE-91: 
XML Injection vulnerability in Class: Element. Methods: addElement, 
addAttribute ...
 Key: SOLR-13113
 URL: https://issues.apache.org/jira/browse/SOLR-13113
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: RedHat Linux.  May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 7 Against Solr v7.6.  dom4j : dom4j : 1.6.1
dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection 
vulnerability in Class: Element. Methods: addElement, addAttribute that can 
result in an attacker tampering with XML documents through XML injection. This 
attack appear to be exploitable via an attacker specifying attributes or 
elements in the XML document. This vulnerability appears to have been fixed in 
2.1.1 or later. 

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13112) CVE-2018-14718(-14719),sonatype-2017-0312, CVE-2018-14720(-14721) Threat Level 8 Against Solr v7.6. com.fasterxml.jackson.core : jackson-databind : 2.9.6. FasterXML j

2019-01-03 Thread RobertHathaway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733578#comment-16733578
 ] 

RobertHathaway commented on SOLR-13112:
---

5 Total CVE's Against jackson-databind : 2.9.6

CVE-2018-14718
com.fasterxml.jackson.core : jackson-databind : 2.9.6 Open
CVE-2018-14719
com.fasterxml.jackson.core : jackson-databind : 2.9.6 Open
sonatype-2017-0312
com.fasterxml.jackson.core : jackson-databind : 2.9.6 Open
7 CVE-2018-14720
com.fasterxml.jackson.core : jackson-databind : 2.9.6 Open
CVE-2018-14721
com.fasterxml.jackson.core : jackson-databind : 2.9.6 Open

> CVE-2018-14718(-14719),sonatype-2017-0312, CVE-2018-14720(-14721)  Threat 
> Level 8 Against Solr v7.6.  com.fasterxml.jackson.core : jackson-databind : 
> 2.9.6. FasterXML jackson-databind 2.x before 2.9.7 Remote Hackers...
> --
>
> Key: SOLR-13112
> URL: https://issues.apache.org/jira/browse/SOLR-13112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Priority: Blocker
>
> We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
> Solr - 7.6.0 Build,
> Using Scanner 1.56.0-01
> Threat Level 8   Against Solr v7.6.  com.fasterxml.jackson.core : 
> jackson-databind : 2.9.6
> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
> execute arbitrary code by leveraging failure to block the slf4j-ext class 
> from polymorphic deserialization.
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13112) CVE-2018-14718 Threat Level 8 Against Solr v7.6. com.fasterxml.jackson.core : jackson-databind : 2.9.6. FasterXML jackson-databind 2.x before 2.9.7 might allow remote a

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13112:
-

 Summary: CVE-2018-14718  Threat Level 8 Against Solr v7.6.  
com.fasterxml.jackson.core : jackson-databind : 2.9.6. FasterXML 
jackson-databind 2.x before 2.9.7 might allow remote attackers to execute 
arbitrary code by...
 Key: SOLR-13112
 URL: https://issues.apache.org/jira/browse/SOLR-13112
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 8   Against Solr v7.6.  com.fasterxml.jackson.core : 
jackson-databind : 2.9.6
FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
execute arbitrary code by leveraging failure to block the slf4j-ext class from 
polymorphic deserialization.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13111) CVE-2017-1000190 Threat Level 9 Against Solr v7.6. org.simpleframework : simple-xml : 2.7.1. SimpleXML (latest version 2.7.1) is vulnerable to an XXE vulnerability resu

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13111:
-

 Summary: CVE-2017-1000190  Threat Level 9 Against Solr v7.6.  
org.simpleframework : simple-xml : 2.7.1. SimpleXML (latest version 2.7.1) is 
vulnerable to an XXE vulnerability resulting SSRF, information disclosure, DoS 
and so on. 
 Key: SOLR-13111
 URL: https://issues.apache.org/jira/browse/SOLR-13111
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 9   Against Solr v7.6.  org.simpleframework 

SimpleXML (latest version 2.7.1) is vulnerable to an XXE vulnerability 
resulting SSRF, information disclosure, DoS and so on. 

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-1000190



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13110) CVE-2017-7525 Threat Level 9 Against Solr v7.6. org.codehaus.jackson : jackson-mapper-asl : 1.9.13. .A deserialization flaw was discovered in the jackson-databind, vers

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13110:
-

 Summary: CVE-2017-7525  Threat Level 9 Against Solr v7.6.  
org.codehaus.jackson : jackson-mapper-asl : 1.9.13. .A deserialization flaw was 
discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, 
...
 Key: SOLR-13110
 URL: https://issues.apache.org/jira/browse/SOLR-13110
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
Solr - 7.6.0 Build,
Using Scanner 1.56.0-01

Threat Level 9   org.codehaus.jackson : jackson-mapper-asl : 1.9.13.   

A deserialization flaw was discovered in the jackson-databind, versions before 
2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to 
perform code execution by sending the maliciously crafted input to the 
readValue method of the ObjectMapper.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Quick Questions on Merging

2019-01-03 Thread John Wilson
Hi,

I'm watching my index directory while indexing a million documents. While my
indexer runs, I see a number of files with extensions like tip, doc, tim,
fdx, fdt, etc. being created. The total number of these files goes up and
down during the run -- from as high as 1500 in the middle of the run to 290
when the indexer completes. Finally, I see an additional file,
segments_1, being created.

My questions:

   1. What exactly is a segment?
   2. In my case, does it mean that I just have 1 segment since I have just
   one segments_1 file? Or,
   3. Is it the case that files of the same type (extension) get merged
   together into bigger files? For example, many fdt files being merged into
   one or bigger fdt files?
   4. maxMergeAtOnce specifies the number of segments to merge at once. In
   my case, what does this mean? If I set it to 10, for example, does it mean
   that once the # of files for a specific file type (e.g. fdt) reaches 10, it
   is combined into a single fdt file?

Thanks in advance!


[jira] [Created] (SOLR-13109) CVE-2015-1832 Against Solr v7.6

2019-01-03 Thread RobertHathaway (JIRA)
RobertHathaway created SOLR-13109:
-

 Summary: CVE-2015-1832 Against Solr v7.6
 Key: SOLR-13109
 URL: https://issues.apache.org/jira/browse/SOLR-13109
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
 Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 but 
this issue is from Sonatype component scan and should be independent of Linux 
platform version.
Reporter: RobertHathaway


Threat Level 9/Critical from Sonatype Applicatiuon Composition Report run Of 
Solr - 7.6.0, Using Scanner 1.56.0-01.  Enterprise security won't allow us to 
move past Solr 6.5 unless this is fixed or somehow remediated. Lots of issues 
in Solr 7.1 also, may be best to move to latest Solr.
h2. CVE-2015-1832 Detail
h3. Current Description

XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby 
before 10.12.1.1, when a Java Security Manager is not in place, allows 
context-dependent attackers to read arbitrary files or cause a denial of 
service (resource consumption) via vectors involving XmlVTI and the XML 
datatype.
h3. Impact
*CVSS v3.0 Severity and Metrics:*
*Base Score:*  [ 9.1 CRITICAL 
|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2015-1832=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H]
 
 *Vector:*   AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H ([V3 
legend|https://www.first.org/cvss/specification-document])  
 *Impact Score:*   5.2  
 *Exploitability Score:*   3.9 

https://nvd.nist.gov/vuln/detail/CVE-2015-1832



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12888) NestedUpdateProcessor code should activate automatically in 8.0

2019-01-03 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733541#comment-16733541
 ] 

Lucene/Solr QA commented on SOLR-12888:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  3m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  3m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  3m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 41s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestSimTriggerIntegration |
|   | solr.cloud.cdcr.CdcrVersionReplicationTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953649/SOLR-12888.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ec43d10 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/258/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/258/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/258/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> NestedUpdateProcessor code should activate automatically in 8.0
> ---
>
> Key: SOLR-12888
> URL: https://issues.apache.org/jira/browse/SOLR-12888
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-12888.patch
>
>
> If the schema supports it, the NestedUpdateProcessor URP should be registered 
> automatically somehow.  The Factory for this already looks for the existence 
> of certain special fields in the schema, so that's good.  But the URP Factory 
> needs to be added to your chain in any of the ways we support that.  _In 8.0 
> the user shouldn't have to do anything to their solrconfig._  
> We might un-URP this and call directly somewhere.  Or perhaps we might add a 
> special named URP chain (needn't document), defined automatically, that 
> activates at RunURP.  Perhaps other things could be added to this in the 
> future.
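
For context, a minimal sketch of the manual wiring in solrconfig.xml that this
issue aims to make unnecessary in 8.0 (the chain name and the neighbouring
processors are illustrative, not a prescribed configuration):
{code:xml}
<updateRequestProcessorChain name="nested" default="true">
  <!-- activates nested-document handling when the schema defines the special fields -->
  <processor class="solr.NestedUpdateProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
{code}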



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 252 - Still Unstable

2019-01-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/252/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeAdded/127.0.0.1:10082_solr should have been deleted

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeAdded/127.0.0.1:10082_solr 
should have been deleted
at 
__randomizedtesting.SeedInfo.seed([4CDDFACD00E95AAE:546772C10EDC9741]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration(TestSimTriggerIntegration.java:892)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14526 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
   

[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2019-01-03 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733508#comment-16733508
 ] 

Cassandra Targett commented on SOLR-7896:
-

I was looking at some commits to the Ref Guide for copy-editing, and came 
across the edits for this.

I really should have paid a bit more attention earlier. From reading the docs, 
it seems that if I use any auth other than Basic (such as Kerberos) I can no 
longer access the UI at all after this change. Is that true?

This is a step back in functionality, since today I can enable Kerberos auth 
and I don't need to access the login page; if my browser has been properly 
configured I can access the Admin UI using my valid ticket.

If that's the case, and we can't figure out anything else, the Ref Guide is 
going to need to be a lot more vocal about this limitation in places other than 
just the auth pages.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, Authentication, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Fix For: master (8.0), 7.7
>
> Attachments: dispatchfilter-code.png, login-page.png, 
> login-screen-2.png, logout.png, unknown_scheme.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13092) Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson

2019-01-03 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733504#comment-16733504
 ] 

David Smiley commented on SOLR-13092:
-

I'm guessing you are using solr-core for embedded Solr, either directly or for 
Solr tests?  Anyway, you can depend on Solr and explicitly exclude those JARs.

If Solr had a plugin/module ecosystem, then I could imagine that this 
dependency and others could be kept separate by default, thus reducing the 
problems here.  Ah well.
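
A minimal sketch of that exclusion approach in a consuming project's pom.xml
(the version and the wildcard exclusion are illustrative; wildcard exclusions
require Maven 3.2.1 or later):
{code:xml}
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-core</artifactId>
  <version>7.6.0</version>
  <exclusions>
    <!-- drop the legacy org.codehaus Jackson artifacts pulled in transitively -->
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}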

> Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson
> -
>
> Key: SOLR-13092
> URL: https://issues.apache.org/jira/browse/SOLR-13092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
>
> The pom.xml in the maven repository of dataimporthandler:
> view-source:https://repo1.maven.org/maven2/org/apache/solr/solr-dataimporthandler/7.6.0/solr-dataimporthandler-7.6.0.pom
> declares both com.fasterxml.jackson and org.codehaus.jackson. This is a bug 
> and it is stopping me from upgrading my app to fasterxml jackson.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13092) Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson

2019-01-03 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733500#comment-16733500
 ] 

David Smiley commented on SOLR-13092:
-

You mentioned the DIH but it's unrelated; solr-core depends on lots of stuff 
and the DIH depends on solr-core and thus gets all its dependencies.  In our 
project we've explicitly listed every required dependency in each pom to make 
dependencies explicit; we don't rely on transitive resolution.

Anyway, it is indeed a shame that Solr depends on the old Jackson libs.  You 
can see context about that here: SOLR-9542.  It would be awesome if we could 
somehow annotate some dependencies as "optional" in the POM.  Perhaps any input 
you have would be better placed on that issue even if it's resolved.

> Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson
> -
>
> Key: SOLR-13092
> URL: https://issues.apache.org/jira/browse/SOLR-13092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
>
> The pom.xml in the maven repository of dataimporthandler:
> view-source:https://repo1.maven.org/maven2/org/apache/solr/solr-dataimporthandler/7.6.0/solr-dataimporthandler-7.6.0.pom
> declares both com.fasterxml.jackson and org.codehaus.jackson. This is a bug 
> and it is stopping me from upgrading my app to fasterxml jackson.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13092) Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson

2019-01-03 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-13092:

Summary: Solr's Maven pom declares both org.codehaus.jackson and 
com.fasterxml.jackson  (was: Dataimporthandler declares both 
org.codehaus.jackson and com.fasterxml.jackson)

> Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson
> -
>
> Key: SOLR-13092
> URL: https://issues.apache.org/jira/browse/SOLR-13092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
>
> The pom.xml in the maven repository of dataimporthandler:
> view-source:https://repo1.maven.org/maven2/org/apache/solr/solr-dataimporthandler/7.6.0/solr-dataimporthandler-7.6.0.pom
> declares both com.fasterxml.jackson and org.codehaus.jackson. This is a bug 
> and it is stopping me from upgrading my app to fasterxml jackson.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13045) Harden TestSimPolicyCloud

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733450#comment-16733450
 ] 

ASF subversion and git services commented on SOLR-13045:


Commit dcc09411a09d0addf972ade9da3db01c7f510232 in lucene-solr's branch 
refs/heads/branch_7_6 from Jason Gerlowski
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dcc0941 ]

SOLR-13045: Allow SimDistribStateManager to create top-level data nodes

While working on a related issue in SimDistribStateManager, I noticed
that `createData()` only worked successfully on nodes nested more than
one level under root.  (i.e. `createData("/foo", someData, mode)` would
fail, while the same with "/foo/bar" wouldn't).  This was due to an edge
case in SimDistribStateManager's path building logic.  This commit fixes
this issue.


> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13045.patch, SOLR-13045.patch, jenkins.log.txt.gz
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13045) Harden TestSimPolicyCloud

2019-01-03 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-13045.

   Resolution: Fixed
Fix Version/s: 7.6.1
   7.7
   master (8.0)

> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: master (8.0), 7.7, 7.6.1
>
> Attachments: SOLR-13045.patch, SOLR-13045.patch, jenkins.log.txt.gz
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature: Solr implicitly defined field types?

2019-01-03 Thread David Smiley
Broadly, you refer to "locale" issues.  Solr's way of dealing with this
today is with optional & configurable use of URPs.  The schema-less /
data-driven mode has some of these enabled; you can see this in its
solrconfig.xml, including the many date formats.  You can look into that for
further info if you like.  The primitive field types are not locale
sensitive.

Update: It's looking like 8.0 will only employ this implicit field type
mechanism for _nest_path_, which probably won't be in the default schema.
Assuming it isn't, it'll only be documented in the context of this
particular feature.  It'd be nice to see the scope of fields expanded, and
at that juncture it could/should be more broadly documented.  That can wait
until people have energy to do it.

On Sun, Dec 30, 2018 at 4:54 AM Jörn Franke  wrote:

> Hi David,
>
> I now get the idea, and yes, this makes sense. It would require some
> tutorial or best practices, though; e.g. overriding a platform data type may
> not make much sense - it may confuse new developers in an existing project
> who know Solr but then get a platform type that does not have the default
> behavior.
>
> Could you deal with different languages in platform types? E.g. for dates it
> does not seem to be a problem, because Solr expects only one specific type of
> date that needs to be somehow converted beforehand (maybe that conversion
> could also be part of a platform type), but decimals and Boolean values are
> different in some languages.
>
> Am 30.12.2018 um 07:01 schrieb David Smiley :
>
> Thanks for your thoughtful response Jörn!
> ...
> On Sat, Dec 29, 2018 at 4:14 AM Jörn Franke  wrote:
>
>> I think it is a good idea, but I see some potential complexity for
>> “deployment” of collections. For instance, in environments where Solr is
>> used as a shared platform amongst several stakeholders, every time you
>> deploy/modify a collection you need to take care that the platform types
>> exist. If it exists in the Test environment then I need to make sure that
>> it exists as well in acceptance/production. The problem is that the
>> platform type could have been defined by somebody else who has not yet (e.g.
>> due to project/sprint delays) updated the other environments. Another
>> issue is if I move to another Solr cluster in the same environment. Then, I
>> have to make sure that all platform types move with me.
>>
>
> RE "the platform type could have been defined by somebody else":  I'm not
> imagining it'd be configurable, thus the "somebody else" is the Solr
> project/committers.
>
> Otherwise, I think I get your point, but perhaps I don't.  It's the same
> point for *any* use of some new feature of Solr.  If you use some new
> feature, you have to take care that all Solr instances you deploy your
> configuration to can handle that new feature.  That's a fairly generic
> point that would apply to just about anything in Solr.
>
>
>> A (minor) issue is that platform types may change (for whatever reasons)
>> and that then potentially all collections have to be reindexed or we have
>> different versions of the same platform type making things not easier.
>>
>
> Yes it's possible.  Though I think that point is apart from the feature I
> propose.  You're saying that you might want to use an "int" field and then
> one day realize you want some newer/better definition of what an "int" is
> (e.g. trie -> points).  Sure.  That's true wether the field type is
> explicit or implicit.  There's nothing stopping you from explicitly
> defining the field type if you want to; the names would not be reserved. If
> you want to stick with your current index running the new Solr version,
> then you would keep luceneMatchVersion what it was, which would effectively
> retain the interpretation of the implicit field types.
>
>
>> Currently we have all our Schema definitions in a version management
>> system (we use the Schema API but the JSON requests are out there) so that
>> projects can draw inspiration from each other. Needless to say, careful type
>> engineering also requires some documentation on technical design and may
>> indeed be very collection specific.
>>
>> Another issue could be that a platform type may also imply a certain
>> platform solrconfig.xml (eg lib directive etc).
>>
>
> I'm imagining platform types would be basic primitive types (int, boolean,
> etc. and some special situations like in the issue I referenced).  They
> would not depend on contrib libs... though I could imagine one day an
> evolution of this in which a contrib could somehow auto-add implicit field
> types.
>
>
>> I am not sure yet what the exact benefits are of referring to types of
>> other collections in the Solr runtime itself instead of having a version
>> system and letting projects decide if they want to adapt types of other
>> collections, but maybe I am overlooking something here.
>>
>
> The notion of implicit field types is not a cross-config
> (cross-collection) thing.  Implicit field types are nothing more than
> 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 1184 - Still Unstable

2019-01-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1184/

4 tests failed.
FAILED:  
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithFLTKNN

Error Message:
expected:<7> but was:<6>

Stack Trace:
java.lang.AssertionError: expected:<7> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([557DE6906840F043:E0B325A2F08C2FCC]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.checkCM(ConfusionMatrixGeneratorTest.java:110)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithFLTKNN(ConfusionMatrixGeneratorTest.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7FC688D660D637F8]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error Message:
Suite timeout exceeded (>= 

[jira] [Commented] (SOLR-13050) SystemLogListener can "lose" record of nodeLost event when node lost is/was .system collection leader

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733453#comment-16733453
 ] 

ASF subversion and git services commented on SOLR-13050:


Commit aee7acdf71444ae7d863dcb2b86a41f604c6a434 in lucene-solr's branch 
refs/heads/branch_7x from Cassandra Targett
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aee7acd ]

SOLR-13050: make italicized note into a real NOTE block


> SystemLogListener can "lose" record of nodeLost event when node lost is/was 
> .system collection leader
> -
>
> Key: SOLR-13050
> URL: https://issues.apache.org/jira/browse/SOLR-13050
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13050.test-workaround.patch, 
> jenkins.sarowe__Lucene-Solr-tests-7.x__7104.log.txt
>
>
> A chicken/egg issue of the way the autoscaling SystemLogListener uses the 
> {{.system}} collection to record event history is that in the case of a 
> {{nodeLost}} event for the {{.system}} collection's leader, there is a window 
> of time during leader election where trying to add the "Document" 
> representing that {{nodeLost}} event to the {{.system}} collection can fail.
> This isn't a silent failure: the SystemLogListener, acting in the role of a 
> Solr client, is informed that the "add" failed, but it doesn't/can't do much 
> to deal with this situation other than to "log" (to the slf4j Logger) that it 
> wasn't able to add the doc.
> 
> I'm not sure how much of a "real world" impact this has on users, but I 
> noticed the issue while diagnosing a jenkins test failure and wanted to track 
> it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13045) Harden TestSimPolicyCloud

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733449#comment-16733449
 ] 

ASF subversion and git services commented on SOLR-13045:


Commit 7a0b0590a0db5def3886ac85b80f2aee4fae85bc in lucene-solr's branch 
refs/heads/branch_7_6 from Jason Gerlowski
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a0b059 ]

SOLR-13045: Harden TestSimPolicyCloud

This commit fixes a race condition in SimClusterStateProvider, fixing
several fails in TestSimPolicyCloud.


> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13045.patch, SOLR-13045.patch, jenkins.log.txt.gz
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13045) Harden TestSimPolicyCloud

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733451#comment-16733451
 ] 

ASF subversion and git services commented on SOLR-13045:


Commit 34d82ed033cccd8120431b73e93554b85b24a278 in lucene-solr's branch 
refs/heads/branch_7_6 from Jason Gerlowski
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34d82ed ]

SOLR-13045: Sim node versioning should start at 0

Prior to this commit, new ZK nodes being simulated by the sim framework
were started with a version of -1.  This causes problems, since -1 is
also coincidentally the flag value used to ignore optimistic concurrency
locking and force overwrite values.
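
For readers unfamiliar with that -1 convention, a minimal sketch against the
plain ZooKeeper client API of what the version argument means (the path and
payload are illustrative; this is not code from the patch):
{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class VersionSemanticsSketch {
  static void update(ZooKeeper zk, byte[] data) throws KeeperException, InterruptedException {
    // version -1 skips the optimistic-concurrency check and overwrites unconditionally
    zk.setData("/example/node", data, -1);

    // any other value must match the node's current dataVersion, otherwise
    // ZooKeeper rejects the write with a KeeperException.BadVersionException
    int current = zk.exists("/example/node", false).getVersion();
    zk.setData("/example/node", data, current);
  }
}
{code}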


> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13045.patch, SOLR-13045.patch, jenkins.log.txt.gz
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13050) SystemLogListener can "lose" record of nodeLost event when node lost is/was .system collection leader

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733452#comment-16733452
 ] 

ASF subversion and git services commented on SOLR-13050:


Commit ec43d100d1dd429829758a4f672a37536e447ed0 in lucene-solr's branch 
refs/heads/master from Cassandra Targett
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec43d10 ]

SOLR-13050: make italicized note into a real NOTE block


> SystemLogListener can "lose" record of nodeLost event when node lost is/was 
> .system collection leader
> -
>
> Key: SOLR-13050
> URL: https://issues.apache.org/jira/browse/SOLR-13050
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13050.test-workaround.patch, 
> jenkins.sarowe__Lucene-Solr-tests-7.x__7104.log.txt
>
>
> A chicken/egg issue of the way the autoscaling SystemLogListener uses the 
> {{.system}} collection to record event history is that in the case of a 
> {{nodeLost}} event for the {{.system}} collection's leader, there is a window 
> of time during leader election where trying to add the "Document" 
> representing that {{nodeLost}} event to the {{.system}} collection can fail.
> This isn't a silent failure: the SystemLogListener, acting in the role of a 
> Solr client, is informed that the "add" failed, but it doesn't/can't do much 
> to deal with this situation other than to "log" (to the slf4j Logger) that it 
> wasn't able to add the doc.
> 
> I'm not sure how much of a "real world" impact this has on users, but I 
> noticed the issue while diagnosing a jenkins test failure and wanted to track 
> it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2019-01-03 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733374#comment-16733374
 ] 

Erick Erickson edited comment on SOLR-12727 at 1/3/19 6:59 PM:
---

Since this is a test-only issue, or at least a configuration issue that can be 
addressed, let's close this ticket and address the test in SOLR-13075


was (Author: erickerickson):
Since this is a test-only issue, or at least a configuration issue that can be 
addressed let's close this ticket and address on the test in SOLR-120756

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Fix For: master (8.0), 7.7
>
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2019-01-03 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12727.
---
Resolution: Fixed

Since this is a test-only issue, or at least a configuration issue that can be 
addressed, let's close this ticket and address the test in SOLR-13075

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Fix For: master (8.0), 7.7
>
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2019-01-03 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733374#comment-16733374
 ] 

Erick Erickson commented on SOLR-12727:
---

Since this is a test-only issue, or at least a configuration issue that can be 
addressed let's close this ticket and address on the test in SOLR-120756

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Fix For: master (8.0), 7.7
>
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11245) Cloud native Dockerfile

2019-01-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-11245:
---
Fix Version/s: (was: master (8.0))

> Cloud native Dockerfile
> ---
>
> Key: SOLR-11245
> URL: https://issues.apache.org/jira/browse/SOLR-11245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 6.6
>Reporter: jay vyas
>Priority: Major
>
> Solr should have its own Dockerfile, ideally one that is cloud native (i.e. 
> doesn't expect anything special from the operating system in terms of user 
> IDs, etc.), for deployment, that we can curate and submit changes to as part 
> of the official ASF process, rather than externally.  The idea here is that 
> testing Solr regressions, as a microservice, is something we should be doing 
> as part of our continuous integration, rather than something done externally.
> We have a team here that would be more than happy to do the work to port 
> whatever existing Solr Dockerfiles are out there into something that is ASF 
> maintainable, cloud native, and easily testable as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12237) Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script

2019-01-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12237.

Resolution: Fixed

> Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script
> --
>
> Key: SOLR-12237
> URL: https://issues.apache.org/jira/browse/SOLR-12237
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 6.6.1, 6.6.2, 6.6.3, 7.0, 7.1, 7.2
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.7
>
>
> Currently the solr start script incorrectly has the variable 
> SOLR_SSL_KEYSTORE_TYPE.  The correct variable name is 
> SOLR_SSL_KEY_STORE_TYPE. Because of this mislabeled variable the key store 
> type is not set properly at startup.
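
For reference, a minimal sketch of the corrected variable name in the context
of a typical solr.in.sh SSL block (paths and passwords are placeholders):
{code}
SOLR_SSL_KEY_STORE=/path/to/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=changeit
SOLR_SSL_KEY_STORE_TYPE=JKS
SOLR_SSL_TRUST_STORE=/path/to/solr-ssl.truststore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=changeit
SOLR_SSL_TRUST_STORE_TYPE=JKS
{code}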



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12237) Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733359#comment-16733359
 ] 

ASF subversion and git services commented on SOLR-12237:


Commit 94f156a173f7f081f6465f5837a3f0a493cfd303 in lucene-solr's branch 
refs/heads/branch_7x from Jan Høydahl
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=94f156a ]

SOLR-12237: Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script

(cherry picked from commit 9488c8f6880a0fee41e2114def51117a2269e1f0)


> Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script
> --
>
> Key: SOLR-12237
> URL: https://issues.apache.org/jira/browse/SOLR-12237
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 6.6.1, 6.6.2, 6.6.3, 7.0, 7.1, 7.2
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> Currently the solr start script incorrectly has the variable 
> SOLR_SSL_KEYSTORE_TYPE.  The correct variable name is 
> SOLR_SSL_KEY_STORE_TYPE. Because of this mislabeled variable the key store 
> type is not set properly at startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12237) Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script

2019-01-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12237:
---
Fix Version/s: (was: 7.6)
   7.7

> Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script
> --
>
> Key: SOLR-12237
> URL: https://issues.apache.org/jira/browse/SOLR-12237
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 6.6.1, 6.6.2, 6.6.3, 7.0, 7.1, 7.2
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.7
>
>
> Currently the solr start script incorrectly has the variable 
> SOLR_SSL_KEYSTORE_TYPE.  The correct variable name is 
> SOLR_SSL_KEY_STORE_TYPE. Because of this mislabeled variable the key store 
> type is not set properly at startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12237) Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733357#comment-16733357
 ] 

ASF subversion and git services commented on SOLR-12237:


Commit 9488c8f6880a0fee41e2114def51117a2269e1f0 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9488c8f ]

SOLR-12237: Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script


> Fix incorrect SOLR_SSL_KEYSTORE_TYPE variable in solr start script
> --
>
> Key: SOLR-12237
> URL: https://issues.apache.org/jira/browse/SOLR-12237
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 6.6.1, 6.6.2, 6.6.3, 7.0, 7.1, 7.2
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> Currently the solr start script incorrectly has the variable 
> SOLR_SSL_KEYSTORE_TYPE.  The correct variable name is 
> SOLR_SSL_KEY_STORE_TYPE. Because of this mislabeled variable the key store 
> type is not set properly at startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12018) Ref Guide: Comment system is offline

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733352#comment-16733352
 ] 

Jan Høydahl commented on SOLR-12018:


Now that we move to Gitbox we'll be able to merge PRs directly from GH, so 
perhaps the time is ripe for adding this link now?

> Ref Guide: Comment system is offline
> 
>
> Key: SOLR-12018
> URL: https://issues.apache.org/jira/browse/SOLR-12018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: RefGuideCommentsBroken.png, SOLR-12018.patch
>
>
> The Ref Guide uses comments.apache.org to allow user comments. Sometime in 
> December/early January, it was taken offline. 
> I filed INFRA-15947 to ask about its long-term status, and recently got an 
> answer that the ETA is mid-March for a permanent INFRA-hosted system. 
> However, it's of course possible changes in priorities or other factors will 
> delay that timeline.
> Every Ref Guide page currently invites users to leave comments, but since the 
> whole Comments area is pulled in via JavaScript from a non-existent server, 
> there's no space to do so (see attached screenshot). While we wait for the 
> permanent server to be online, we have a couple of options:
> # Leave it the way it is and hopefully by mid-March it will be back
> # Change the text to tell users it's not working temporarily on all published 
> versions
> # Remove it from all the published versions and put it back when it's back
> I'm not a great fan of #2 or #3, because it'd be a bit of work for me to 
> backport changes to 4 branches and republish every guide just to fix it again 
> in a month or so. I'm fine with option #1 since I've known about it for about 
> a month at least and as far as I can tell no one else has noticed. But if 
> people feel strongly about it now that they know about it, we can figure 
> something out.
> If for some reason it takes longer than mid-March to get it back, or INFRA 
> chooses to stop supporting it entirely, this issue can morph into what we 
> should do for an alternative permanent solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1677#comment-1677
 ] 

Jan Høydahl commented on SOLR-12727:


Can this be resolved again?

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Fix For: master (8.0), 7.7
>
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11127) Add a Collections API command to migrate the .system collection schema from Trie-based (pre-7.0) to Points-based (7.0+)

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733325#comment-16733325
 ] 

Jan Høydahl commented on SOLR-11127:


Anyone planning to look into this for 8.0?

> Add a Collections API command to migrate the .system collection schema from 
> Trie-based (pre-7.0) to Points-based (7.0+)
> ---
>
> Key: SOLR-11127
> URL: https://issues.apache.org/jira/browse/SOLR-11127
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: master (8.0)
>
>
> SOLR-9 will switch the Trie fieldtypes in the .system collection's schema 
> to Points.
> Users with pre-7.0 .system collections will no longer be able to use them 
> once Trie fields have been removed (8.0).
> Solr should provide a Collections API command MIGRATESYSTEMCOLLECTION to 
> automatically convert a Trie-based .system collection to a Points-based one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12613) Rename "Cloud" tab as "Cluster" in Admin UI

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733269#comment-16733269
 ] 

Jan Høydahl commented on SOLR-12613:


Should we do this or not? The UI needs much more restructuring than just this 
menu renaming, so the question is whether we should start a larger redesign 
effort instead, or just do incremental improvements like this.

> Rename "Cloud" tab as "Cluster" in Admin UI
> ---
>
> Key: SOLR-12613
> URL: https://issues.apache.org/jira/browse/SOLR-12613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Fix For: master (8.0)
>
>
> Spinoff from SOLR-8207. When adding more cluster-wide functionality to the 
> Admin UI, it feels better to name the "Cloud" UI tab as "Cluster".
> In addition to renaming the "Cloud" tab, we should also change the URL part 
> from {{~cloud}} to {{~cluster}}, update reference guide page names, 
> screenshots and references etc.
> I propose this change is not introduced in 7.x due to the impact, so tagged 
> it as fix-version 8.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11087) Get rid of jar duplicates in release

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733274#comment-16733274
 ] 

Jan Høydahl commented on SOLR-11087:


[~thetaphi] what do you think of committing this for the size improvement's 
sake for 8.0, or are you planning on getting rid of start.jar and web apps 
folder already for 8.0?

> Get rid of jar duplicates in release
> 
>
> Key: SOLR-11087
> URL: https://issues.apache.org/jira/browse/SOLR-11087
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-11087.patch
>
>
> Sub task of SOLR-6806
> The {{dist/}} folder contains many duplicate jar files, totalling 10,5M:
> {noformat}
> 4,6M   ./dist/solr-core-6.6.0.jar (WEB-INF/lib)
> 1,2M   ./dist/solr-solrj-6.6.0.jar (WEB-INF/lib)
> 4,7M   ./dist/solrj-lib/* (WEB-INF/lib and server/lib/ext)
> {noformat}
> The rest of the files in dist/ are contrib jars and test-framework.
> To weed out the duplicates and save 10,5M, we can simply add a 
> {{dist/README.md}} file listing what jar files are located where. The file 
> could also contain a bash one-liner to copy them to the dist folder. Another 
> possibility is to ship the binary release tarball with symlinks in the dist 
> folder, and advise people to use {{cp -RL dist mydist}} which will make a 
> copy with the real files. Downside is that this won't work for ZIP archives 
> that do not preserve symlinks, and neither on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11774) langid.map.individual won't work with langid.map.keepOrig

2019-01-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-11774.

Resolution: Fixed

Finally this gets in for 8.0.0 :)

Probably not too many users of {{langid.map.individual}} since this has been 
open so long. But the more important change here is to get a streaming API for 
detection to reduce memory pressure.
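
For anyone configuring this after the fix, a minimal sketch of a processor
definition that combines the two options the change targets (field names,
languages and values are illustrative, not the reporter's original config):
{code:xml}
<processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
  <str name="langid.fl">title,author</str>
  <str name="langid.langField">detected_languages</str>
  <str name="langid.whitelist">de,en</str>
  <bool name="langid.map">true</bool>
  <!-- map each input field to its own detected language... -->
  <bool name="langid.map.individual">true</bool>
  <!-- ...while also keeping the original, unmapped field -->
  <bool name="langid.map.keepOrig">true</bool>
</processor>
{code}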

> langid.map.individual won't work with langid.map.keepOrig
> -
>
> Key: SOLR-11774
> URL: https://issues.apache.org/jira/browse/SOLR-11774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 5.0
>Reporter: Marco Remy
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Tried to get language detection to work.
> *Setting:*
> {code:xml}
> <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
>   <str name="langid.fl">title,author</str>
>   <str name="langid.langField">detected_languages</str>
>   <str name="langid.whitelist">de,en</str>
>   <str name="langid.fallback">txt</str>
>   <bool name="langid.map">true</bool>
>   <bool name="langid.map.individual">true</bool>
>   <bool name="langid.map.keepOrig">true</bool>
> </processor>
> {code}
> Main purpose
> * Map fields individually
> * Keep the original field
> But the fields won't be mapped individually. They are mapped to a single 
> detected language. After some hours of investigation I finally found the 
> reason: *The option langid.map.keepOrig breaks the individual mapping 
> function.* Only if it is disabled will the fields be mapped as expected.
> - Regards



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11774) langid.map.individual won't work with langid.map.keepOrig

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733256#comment-16733256
 ] 

ASF subversion and git services commented on SOLR-11774:


Commit 00f8f3a13acd3c4da491e7169afdfbdc0f38e26d in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=00f8f3a ]

SOLR-11774: langid.map.individual now works together with langid.map.keepOrig


> langid.map.individual won't work with langid.map.keepOrig
> -
>
> Key: SOLR-11774
> URL: https://issues.apache.org/jira/browse/SOLR-11774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 5.0
>Reporter: Marco Remy
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Tried to get language detection to work.
> *Setting:*
> {code:xml}
> <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
>   <str name="langid.fl">title,author</str>
>   <str name="langid.langField">detected_languages</str>
>   <str name="langid.whitelist">de,en</str>
>   <str name="langid.fallback">txt</str>
>   <bool name="langid.map">true</bool>
>   <bool name="langid.map.individual">true</bool>
>   <bool name="langid.map.keepOrig">true</bool>
> </processor>
> {code}
> Main purpose
> * Map fields individually
> * Keep the original field
> But the fields won't be mapped individually. They are mapped to a single 
> detected language. After some hours of investigation I finally found the 
> reason: *The option langid.map.keepOrig breaks the individual mapping 
> function.* Only if it is disabled will the fields be mapped as expected.
> - Regards



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733214#comment-16733214
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Thank you Jan and Shalin for the fruitful discussion above; we do have 
consensus on SOLR_VAR_ROOT, with its default pointing to SOLR_TIP. 

Working on it.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if embedded ZooKeeper is started in SolrCloud mode. It would be great if 
> all writable content could live under the same directory, so as to have 
> separate READ-ONLY and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733214#comment-16733214
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/3/19 4:48 PM:
-

Thank you Jan and Shalin for the fruitful discussion above. So we have 
consensus on SOLR_VAR_ROOT, with its default pointing to SOLR_TIP. 

Working on it.


was (Author: sarkaramr...@gmail.com):
Thank you Jan and Shalin for the fruitful discussion above; we do have 
consensus on SOLR_VAR_ROOT, with its default pointing to SOLR_TIP. 

Working on it.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if embedded ZooKeeper is started in SolrCloud mode. It would be great if 
> all writable content could live under the same directory, so as to have 
> separate READ-ONLY and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733211#comment-16733211
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Fresh patch uploaded, incorporating all suggestions made above. Thank you.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)
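
A quick way to exercise the v1 endpoint is a plain HTTP GET; a minimal sketch, 
assuming a node on localhost:8983 and that a healthy node answers with HTTP 200 
(host, port and the expected status code are illustrative assumptions, not 
taken from the patch):

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Probe the node-level health check over plain HTTP (v1 API).
// Host, port and the "200 == healthy" expectation are assumptions for illustration.
public class HealthCheckProbe {
  public static void main(String[] args) throws IOException {
    URL url = new URL("http://localhost:8983/solr/admin/health");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    int status = conn.getResponseCode();
    System.out.println("GET /solr/admin/health -> HTTP " + status);
    conn.disconnect();
  }
}
{code}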



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread David Smiley
https://issues.apache.org/jira/browse/INFRA-17534
Good questions, Erick; please post them as a comment on the issue.

On Thu, Jan 3, 2019 at 11:21 AM Erick Erickson 
wrote:

> +1 and thanks!
>
> Any time works for me. I assume we'll get some idea of when it'll
> happen. I'm also assuming that git-wip-us.apache.org will just
> completely stop working, so there's no chance of pushing something to
> the wrong place?
>
>
> On Thu, Jan 3, 2019 at 8:15 AM David Smiley 
> wrote:
> >
> > I agree with Uwe's sentiment.  Essentially anywhere in your git remote
> configuration that refers to git-wip-us.apache.org will need to change to
> gitbox.apache.org; open up .git/config to see what I mean.  At your
> prerogative, you may instead work with GitHub's mirror exclusively -- a new
> option.  If you want to do that, see https://gitbox.apache.org which is
> pretty helpful (do read it no matter what you do), and includes a link to
> the "account linking page".  Personally, I intend to commit to gitbox but I
> will also link my accounts as I suspect this will enable more direct use of
> the GitHub website like closing old pull requests (unconfirmed).
> >
> > On Thu, Jan 3, 2019 at 10:57 AM Alan Woodward 
> wrote:
> >>
> >> +1, thanks for volunteering David!
> >>
> >>
> >> On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
> >>
> >> +1
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >>
> >> On 3 Jan 2019, at 14:45, David Smiley wrote:
> >>
> >> I propose we (me) coordinate with them to do this transition on
> Wednesday next week (Jan 9th).  It appears to be a minor inconvenience.  If
> there are problems, we'll have some work days after to deal with it.  And
> doing this ahead of the mass migration may give us more individual
> attention from the busy infra team if there are problems.  Can I get some
> +1's?
> >>
> >> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team <
> infrastruct...@apache.org> wrote:
> >>>
> >>> Hello, lucene folks.
> >>> As stated earlier in 2018, all git repositories must be migrated from
> >>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
> >>> is being decommissioned. Your project is receiving this email because
> >>> you still have repositories on git-wip-us that needs to be migrated.
> >>>
> >>> The following repositories on git-wip-us belong to your project:
> >>>  - lucene-solr.git
> >>>
> >>>
> >>> We are now entering the mandated (coordinated) move stage of the
> roadmap,
> >>> and you are asked to please coordinate migration with the Apache
> >>> Infrastructure Team before February 7th. All repositories not migrated
> >>> on February 7th will be mass migrated without warning, and we'd
> appreciate
> >>> it if we could work together to avoid a big mess that day :-).
> >>>
> >>> Moving to gitbox means you will get full write access on GitHub as
> well,
> >>> and be able to close/merge pull requests and much more.
> >>>
> >>> To have your repositories moved, please follow these steps:
> >>>
> >>> - Ensure consensus on the move (a link to a lists.apache.org thread
> will
> >>>   suffice for us as evidence).
> >>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
> >>>
> >>> Your migration should only take a few minutes. If you wish to migrate
> >>> at a specific time of day or date, please do let us know in the ticket.
> >>>
> >>> As always, we appreciate your understanding and patience as we move
> >>> things around and work to provide better services and features for
> >>> the Apache Family.
> >>>
> >>> Should you wish to contact us with feedback or questions, please do so
> >>> at: us...@infra.apache.org.
> >>>
> >>>
> >>> With regards,
> >>> Apache Infrastructure
> >>>
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >> --
> >> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
> >>
> >>
> >>
> > --
> > Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Erick Erickson
+1 and thanks!

Any time works for me. I assume we'll get some idea of when it'll
happen. I'm also assuming that git-wip-us.apache.org will just
completely stop working, so there's no chance of pushing something to
the wrong place?


On Thu, Jan 3, 2019 at 8:15 AM David Smiley  wrote:
>
> I agree with Uwe's sentiment.  Essentially anywhere in your git remote 
> configuration that refers to git-wip-us.apache.org will need to change to 
> gitbox.apache.org; open up .git/config to see what I mean.  At your 
> prerogative, you may instead work with GitHub's mirror exclusively -- a new 
> option.  If you want to do that, see https://gitbox.apache.org which is 
> pretty helpful (do read it no matter what you do), and includes a link to the 
> "account linking page".  Personally, I intend to commit to gitbox but I will 
> also link my accounts as I suspect this will enable more direct use of the 
> GitHub website like closing old pull requests (unconfirmed).
>
> On Thu, Jan 3, 2019 at 10:57 AM Alan Woodward  wrote:
>>
>> +1, thanks for volunteering David!
>>
>>
>> On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
>>
>> +1
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> On 3 Jan 2019, at 14:45, David Smiley wrote:
>>
>> I propose we (me) coordinate with them to do this transition on Wednesday 
>> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
>> problems, we'll have some work days after to deal with it.  And doing this 
>> ahead of the mass migration may give us more individual attention from the 
>> busy infra team if there are problems.  Can I get some +1's?
>>
>> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
>>  wrote:
>>>
>>> Hello, lucene folks.
>>> As stated earlier in 2018, all git repositories must be migrated from
>>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
>>> is being decommissioned. Your project is receiving this email because
>>> you still have repositories on git-wip-us that needs to be migrated.
>>>
>>> The following repositories on git-wip-us belong to your project:
>>>  - lucene-solr.git
>>>
>>>
>>> We are now entering the mandated (coordinated) move stage of the roadmap,
>>> and you are asked to please coordinate migration with the Apache
>>> Infrastructure Team before February 7th. All repositories not migrated
>>> on February 7th will be mass migrated without warning, and we'd appreciate
>>> it if we could work together to avoid a big mess that day :-).
>>>
>>> Moving to gitbox means you will get full write access on GitHub as well,
>>> and be able to close/merge pull requests and much more.
>>>
>>> To have your repositories moved, please follow these steps:
>>>
>>> - Ensure consensus on the move (a link to a lists.apache.org thread will
>>>   suffice for us as evidence).
>>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
>>>
>>> Your migration should only take a few minutes. If you wish to migrate
>>> at a specific time of day or date, please do let us know in the ticket.
>>>
>>> As always, we appreciate your understanding and patience as we move
>>> things around and work to provide better services and features for
>>> the Apache Family.
>>>
>>> Should you wish to contact us with feedback or questions, please do so
>>> at: us...@infra.apache.org.
>>>
>>>
>>> With regards,
>>> Apache Infrastructure
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>> --
>> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
>> http://www.solrenterprisesearchserver.com
>>
>>
>>
> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread David Smiley
I agree with Uwe's sentiment.  Essentially anywhere in your git remote
configuration that refers to git-wip-us.apache.org will need to change to
gitbox.apache.org; open up .git/config to see what I mean.  At your
prerogative, you may instead work with GitHub's mirror exclusively -- a new
option.  If you want to do that, see https://gitbox.apache.org which is
pretty helpful (do read it no matter what you do), and includes a link to
the "account linking page".  Personally, I intend to commit to gitbox but I
will also link my accounts as I *suspect* this will enable more direct use
of the GitHub website like closing old pull requests (*unconfirmed).*

On Thu, Jan 3, 2019 at 10:57 AM Alan Woodward  wrote:

> +1, thanks for volunteering David!
>
>
> On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
>
> +1
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 3 Jan 2019, at 14:45, David Smiley wrote:
>
> I propose we (me) coordinate with them to do this transition on Wednesday
> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are
> problems, we'll have some work days after to deal with it.  And doing this
> ahead of the mass migration may give us more individual attention from the
> busy infra team if there are problems.  Can I get some +1's?
>
> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team <
> infrastruct...@apache.org> wrote:
>
>> Hello, lucene folks.
>> As stated earlier in 2018, all git repositories must be migrated from
>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
>> is being decommissioned. Your project is receiving this email because
>> you still have repositories on git-wip-us that needs to be migrated.
>>
>> The following repositories on git-wip-us belong to your project:
>>  - lucene-solr.git
>>
>>
>> We are now entering the mandated (coordinated) move stage of the roadmap,
>> and you are asked to please coordinate migration with the Apache
>> Infrastructure Team before February 7th. All repositories not migrated
>> on February 7th will be mass migrated without warning, and we'd appreciate
>> it if we could work together to avoid a big mess that day :-).
>>
>> Moving to gitbox means you will get full write access on GitHub as well,
>> and be able to close/merge pull requests and much more.
>>
>> To have your repositories moved, please follow these steps:
>>
>> - Ensure consensus on the move (a link to a lists.apache.org thread will
>>   suffice for us as evidence).
>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
>>
>> Your migration should only take a few minutes. If you wish to migrate
>> at a specific time of day or date, please do let us know in the ticket.
>>
>> As always, we appreciate your understanding and patience as we move
>> things around and work to provide better services and features for
>> the Apache Family.
>>
>> Should you wish to contact us with feedback or questions, please do so
>> at: us...@infra.apache.org.
>>
>>
>> With regards,
>> Apache Infrastructure
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
>
> --
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Christian Moen
+1

On Fri, Jan 4, 2019 at 12:57 AM Alan Woodward  wrote:

> +1, thanks for volunteering David!
>
>
> On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
>
> +1
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 3 Jan 2019, at 14:45, David Smiley wrote:
>
> I propose we (me) coordinate with them to do this transition on Wednesday
> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are
> problems, we'll have some work days after to deal with it.  And doing this
> ahead of the mass migration may give us more individual attention from the
> busy infra team if there are problems.  Can I get some +1's?
>
> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team <
> infrastruct...@apache.org> wrote:
>
>> Hello, lucene folks.
>> As stated earlier in 2018, all git repositories must be migrated from
>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
>> is being decommissioned. Your project is receiving this email because
>> you still have repositories on git-wip-us that needs to be migrated.
>>
>> The following repositories on git-wip-us belong to your project:
>>  - lucene-solr.git
>>
>>
>> We are now entering the mandated (coordinated) move stage of the roadmap,
>> and you are asked to please coordinate migration with the Apache
>> Infrastructure Team before February 7th. All repositories not migrated
>> on February 7th will be mass migrated without warning, and we'd appreciate
>> it if we could work together to avoid a big mess that day :-).
>>
>> Moving to gitbox means you will get full write access on GitHub as well,
>> and be able to close/merge pull requests and much more.
>>
>> To have your repositories moved, please follow these steps:
>>
>> - Ensure consensus on the move (a link to a lists.apache.org thread will
>>   suffice for us as evidence).
>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
>>
>> Your migration should only take a few minutes. If you wish to migrate
>> at a specific time of day or date, please do let us know in the ticket.
>>
>> As always, we appreciate your understanding and patience as we move
>> things around and work to provide better services and features for
>> the Apache Family.
>>
>> Should you wish to contact us with feedback or questions, please do so
>> at: us...@infra.apache.org.
>>
>>
>> With regards,
>> Apache Infrastructure
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
>
>


[jira] [Commented] (SOLR-12633) JSON Loader: remove anonChildDoc option

2019-01-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733174#comment-16733174
 ] 

ASF subversion and git services commented on SOLR-12633:


Commit 6342ec699e4b5e4d1636fdf20e9b69d0a5099eab in lucene-solr's branch 
refs/heads/master from David Wayne Smiley
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6342ec6 ]

SOLR-12633: remove anonChildDocs update parameter used in nested docs in JSON.


> JSON Loader: remove anonChildDoc option
> ---
>
> Key: SOLR-12633
> URL: https://issues.apache.org/jira/browse/SOLR-12633
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-12633.patch
>
>
> In 8.0/master, we should drop "anonChildDocs" that we added.  It was a 
> temporary flag.  Assume it's not anonymous unless the field name is 
> {{\_childDocuments_\}}.  That exception to the rule should have been added 
> already but was overlooked.
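
For context, the anonymous-children shape this refers to is roughly what SolrJ 
builds via {{addChildDocument}}; a minimal sketch, with made-up field names:

{code:java}
import org.apache.solr.common.SolrInputDocument;

// Sketch only: anonymous nested children in SolrJ are attached with
// addChildDocument(), which is roughly what the JSON loader's _childDocuments_
// key corresponds to.  Field names ("id", "title_s") are made-up examples.
public class AnonymousChildExample {
  public static void main(String[] args) {
    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "book-1");
    parent.addField("title_s", "A parent document");

    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "book-1-chapter-1");
    child.addField("title_s", "An anonymous child document");

    parent.addChildDocument(child);  // anonymous nesting, no named relationship
    System.out.println(parent);
  }
}
{code}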



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Alan Woodward
+1, thanks for volunteering David!

> On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
> 
> +1
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com 
> 
>> On 3 Jan 2019, at 14:45, David Smiley wrote:
>> 
>> I propose we (me) coordinate with them to do this transition on Wednesday 
>> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
>> problems, we'll have some work days after to deal with it.  And doing this 
>> ahead of the mass migration may give us more individual attention from the 
>> busy infra team if there are problems.  Can I get some +1's?
>> 
>> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
>> mailto:infrastruct...@apache.org>> wrote:
>> Hello, lucene folks.
>> As stated earlier in 2018, all git repositories must be migrated from
>> the git-wip-us.apache.org  URL to 
>> gitbox.apache.org , as the old service
>> is being decommissioned. Your project is receiving this email because
>> you still have repositories on git-wip-us that needs to be migrated.
>> 
>> The following repositories on git-wip-us belong to your project:
>>  - lucene-solr.git
>> 
>> 
>> We are now entering the mandated (coordinated) move stage of the roadmap,
>> and you are asked to please coordinate migration with the Apache
>> Infrastructure Team before February 7th. All repositories not migrated
>> on February 7th will be mass migrated without warning, and we'd appreciate
>> it if we could work together to avoid a big mess that day :-).
>> 
>> Moving to gitbox means you will get full write access on GitHub as well,
>> and be able to close/merge pull requests and much more.
>> 
>> To have your repositories moved, please follow these steps:
>> 
>> - Ensure consensus on the move (a link to a lists.apache.org 
>>  thread will
>>   suffice for us as evidence).
>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA 
>> 
>> 
>> Your migration should only take a few minutes. If you wish to migrate
>> at a specific time of day or date, please do let us know in the ticket.
>> 
>> As always, we appreciate your understanding and patience as we move
>> things around and work to provide better services and features for
>> the Apache Family.
>> 
>> Should you wish to contact us with feedback or questions, please do so
>> at: us...@infra.apache.org .
>> 
>> 
>> With regards,
>> Apache Infrastructure
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
>> 
>> For additional commands, e-mail: dev-h...@lucene.apache.org 
>> 
>> 
>> -- 
>> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley 
>>  | Book: 
>> http://www.solrenterprisesearchserver.com 
>> 



[GitHub] lucene-solr pull request #525: LUCENE-8585: Index-time jump-tables for DocVa...

2019-01-03 Thread tokee
Github user tokee commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/525#discussion_r245040952
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/lucene80/IndexedDISI.java ---
@@ -0,0 +1,542 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene80;
+
+import java.io.DataInput;
+import java.io.IOException;
+
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.lucene.store.RandomAccessInput;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.BitSetIterator;
+import org.apache.lucene.util.FixedBitSet;
+import org.apache.lucene.util.RoaringDocIdSet;
+
+/**
+ * Disk-based implementation of a {@link DocIdSetIterator} which can return
+ * the index of the current document, i.e. the ordinal of the current 
document
+ * among the list of documents that this iterator can return. This is 
useful
+ * to implement sparse doc values by only having to encode values for 
documents
+ * that actually have a value.
+ * Implementation-wise, this {@link DocIdSetIterator} is inspired of
+ * {@link RoaringDocIdSet roaring bitmaps} and encodes ranges of {@code 
65536}
+ * documents independently and picks between 3 encodings depending on the
+ * density of the range:
+ *   {@code ALL} if the range contains 65536 documents exactly,
+ *   {@code DENSE} if the range contains 4096 documents or more; in 
that
+ *   case documents are stored in a bit set,
+ *   {@code SPARSE} otherwise, and the lower 16 bits of the doc IDs are
+ *   stored in a {@link DataInput#readShort() short}.
+ * 
+ * Only ranges that contain at least one value are encoded.
+ * This implementation uses 6 bytes per document in the worst-case, 
which happens
+ * in the case that all ranges contain exactly one document.
+ *
+ * 
+ * To avoid O(n) lookup time complexity, with n being the number of 
documents, two lookup
+ * tables are used: A lookup table for block blockCache and index, and a 
rank structure
+ * for DENSE block lookups.
+ *
+ * The lookup table is an array of {@code long}s with an entry for each 
block. It allows for
+ * direct jumping to the block, as opposed to iteration from the current 
position and forward
+ * one block at a time.
+ *
+ * Each long entry consists of 2 logical parts:
+ *
+ * The first 31 bits hold the index (number of set bits in the blocks) up 
to just before the
+ * wanted block. The next 33 bits holds the offset in bytes into the 
underlying slice.
+ * As there is a maximum of 2^16 blocks, it follows that the maximum size 
of any block must
+ * not exceed 2^17 bits to avoid overflow. This is currently the case, 
with the largest
+ * block being DENSE and using 2^16 + 288 bits, and is likely to continue 
to hold as using
+ * more than double the amount of bits is unlikely to be an efficient 
representation.
+ * The cache overhead is numDocs/1024 bytes.
+ *
+ * Note: There are 4 types of blocks: ALL, DENSE, SPARSE and non-existing 
(0 set bits).
+ * In the case of non-existing blocks, the entry in the lookup table has 
index equal to the
+ * previous entry and offset equal to the next non-empty block.
+ *
+ * The block lookup table is stored at the end of the total block 
structure.
+ *
+ *
+ * The rank structure for DENSE blocks is an array of unsigned {@code 
short}s with an entry
+ * for each sub-block of 512 bits out of the 65536 bits in the outer DENSE 
block.
+ *
+ * Each rank-entry states the number of set bits within the block up to 
the bit before the
+ * bit positioned at the start of the sub-block.
+ * Note that that the rank entry of the first sub-block is always 0 and 
that the last entry can
+ * at most be 65536-512 = 65024 and thus will always fit into an unsigned 
short.
+ *
+ * The rank structure 
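
To make the long-entry layout described above concrete, here is a minimal 
sketch of packing a 31-bit index and a 33-bit byte offset into one long and 
unpacking them again. It only illustrates the arithmetic implied by the 
javadoc; whether the patch keeps the index in the upper or the lower bits is 
not asserted here.

{code:java}
// Illustrative only, not the committed code: one way to pack a 31-bit index and
// a 33-bit byte offset into a single long, per the layout described above.
final class JumpTableEntry {
  static long pack(long index, long offset) {
    assert index >= 0 && index < (1L << 31) : "index must fit in 31 bits";
    assert offset >= 0 && offset < (1L << 33) : "offset must fit in 33 bits";
    return (index << 33) | offset;             // upper 31 bits: index, lower 33 bits: offset
  }

  static long index(long entry)  { return entry >>> 33; }               // recover the index
  static long offset(long entry) { return entry & ((1L << 33) - 1); }   // recover the byte offset
}
{code}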

Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Jan Høydahl
+1

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 3 Jan 2019, at 14:45, David Smiley wrote:
> 
> I propose we (me) coordinate with them to do this transition on Wednesday 
> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
> problems, we'll have some work days after to deal with it.  And doing this 
> ahead of the mass migration may give us more individual attention from the 
> busy infra team if there are problems.  Can I get some +1's?
> 
> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
> mailto:infrastruct...@apache.org>> wrote:
> Hello, lucene folks.
> As stated earlier in 2018, all git repositories must be migrated from
> the git-wip-us.apache.org  URL to 
> gitbox.apache.org , as the old service
> is being decommissioned. Your project is receiving this email because
> you still have repositories on git-wip-us that needs to be migrated.
> 
> The following repositories on git-wip-us belong to your project:
>  - lucene-solr.git
> 
> 
> We are now entering the mandated (coordinated) move stage of the roadmap,
> and you are asked to please coordinate migration with the Apache
> Infrastructure Team before February 7th. All repositories not migrated
> on February 7th will be mass migrated without warning, and we'd appreciate
> it if we could work together to avoid a big mess that day :-).
> 
> Moving to gitbox means you will get full write access on GitHub as well,
> and be able to close/merge pull requests and much more.
> 
> To have your repositories moved, please follow these steps:
> 
> - Ensure consensus on the move (a link to a lists.apache.org 
>  thread will
>   suffice for us as evidence).
> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA 
> 
> 
> Your migration should only take a few minutes. If you wish to migrate
> at a specific time of day or date, please do let us know in the ticket.
> 
> As always, we appreciate your understanding and patience as we move
> things around and work to provide better services and features for
> the Apache Family.
> 
> Should you wish to contact us with feedback or questions, please do so
> at: us...@infra.apache.org .
> 
> 
> With regards,
> Apache Infrastructure
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> -- 
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley 
>  | Book: 
> http://www.solrenterprisesearchserver.com 
> 


[jira] [Commented] (SOLR-12888) NestedUpdateProcessor code should activate automatically in 8.0

2019-01-03 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733157#comment-16733157
 ] 

David Smiley commented on SOLR-12888:
-

I kinda like it as an URP so I left it as such.  Even Solr internal things are 
URPs (think distributed search, logging, RunUpdate, and many opt-in features) 
-- it didn't have to be this way but it is and it's fine.  And I think it's 
nicer to fit code into an overarching framework than to add more logic into 
other code that already has complexities to deal with.  And by doing this in an 
URP, we avoid a thread safety bug -- DistributedUpdateProcessor has logic to 
examine remaining URPs and see when to clone the document or not.  We do indeed 
need to clone the doc when this logic is in place, and now it will do so.

I could have hard-coded in the reference to this URP in 
RunUpdateProcessorFactory's constructor (similar to how TRA does so in 
DistributedUpdateProcessorFactory) but I chose to instead make it potentially 
configurable using an internal automatically registered update chain named 
{{\_preRun\_}}.  To make this work I wanted to be able to create an URP chain 
while specifying the "last" (next) URP, so I did this, which was pretty easy.

I noticed that 
{{org.apache.solr.update.processor.UpdateRequestProcessorChain#createProcessor}}
 had surprising logic about a factory's getInstance method returning null.  I 
think that's silly since we pass in a "last" (next) argument, and so a factory 
that chooses to do nothing should, IMO, return that argument instead of 
returning null.  This avoids special-casing and thus simplifies 
createProcessor's logic.  I changed a couple of URPs in Solr that did this over 
to the new approach.
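
For illustration, a minimal sketch of that convention: a factory whose 
getInstance decides it has nothing to do returns the passed-in "next" processor 
rather than null. The shouldActivate() condition is a placeholder, not anything 
from the patch.

{code:java}
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

// Sketch of the convention: return "next" (not null) when there is nothing to do,
// so UpdateRequestProcessorChain#createProcessor needs no null special-casing.
public class MaybeNoOpUpdateProcessorFactory extends UpdateRequestProcessorFactory {

  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                                            SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    if (!shouldActivate(req)) {
      return next;  // do nothing: hand updates straight to the next URP
    }
    return new UpdateRequestProcessor(next) {
      // a real factory would override processAdd(...) etc. here
    };
  }

  private boolean shouldActivate(SolrQueryRequest req) {
    return false;   // placeholder condition, for illustration only
  }
}
{code}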

I mentioned this issue at the committer's meeting at Activate; I think 
[~yo...@apache.org]  and [~hossman] had comments then.  [~moshebla] maybe you 
want to review.

> NestedUpdateProcessor code should activate automatically in 8.0
> ---
>
> Key: SOLR-12888
> URL: https://issues.apache.org/jira/browse/SOLR-12888
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-12888.patch
>
>
> If the schema supports it, the NestedUpdateProcessor URP should be registered 
> automatically somehow.  The Factory for this already looks for the existence 
> of certain special fields in the schema, so that's good.  But the URP Factory 
> needs to be added to your chain in any of the ways we support that.  _In 8.0 
> the user shouldn't have to do anything to their solrconfig._  
> We might un-URP this and call directly somewhere.  Or perhaps we might add a 
> special named URP chain (needn't document), defined automatically, that 
> activates at RunURP.  Perhaps other things could be added to this in the 
> future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13099) Support a new type of unit 'WEEK ' for DateMathParser

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733074#comment-16733074
 ] 

Jan Høydahl edited comment on SOLR-13099 at 1/3/19 3:36 PM:


Please also add a line to CHANGES.txt to describe your change. And if you want 
to attempt Reference Guide documentation update for 
[https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#date-math-syntax]
 that would be perfect, others can language wash and check correctness. 

We should find a way to configure start of week. Since Solr already abides by 
[ISO-8601|https://en.wikipedia.org/wiki/ISO_8601] and UTC when it comes to 
dates, it makes more sense to let the default first day of week be Monday as 
defined in the standard ([https://en.wikipedia.org/wiki/Week,] 
[https://www.timeanddate.com/calendar/days/monday.html,] 
[https://docs.oracle.com/javase/8/docs/api/java/time/DayOfWeek.html]).

In order to ask Solr to use Sunday as the first day for date math, I suggest a 
new request parameter {{FDOW=}}, where FDOW=7 would set Sunday as the first day. 
That could be described in the section [Request Parameters That Affect Date 
Math|https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#request-parameters-that-affect-date-math]
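
As a rough illustration (not the attached patch), rounding down to the start of 
the week with java.time looks like this, with the first day of the week as a 
parameter and Monday as the ISO-8601 default:

{code:java}
import java.time.DayOfWeek;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.TemporalAdjusters;

// Sketch only: round a UTC timestamp down to the start of its week,
// parameterized by the first day of the week.
public class WeekRounding {

  static ZonedDateTime startOfWeek(ZonedDateTime dt, DayOfWeek firstDay) {
    return dt.with(TemporalAdjusters.previousOrSame(firstDay))
             .truncatedTo(ChronoUnit.DAYS);
  }

  public static void main(String[] args) {
    ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);
    System.out.println(startOfWeek(now, DayOfWeek.MONDAY));  // ISO-8601 default
    System.out.println(startOfWeek(now, DayOfWeek.SUNDAY));  // e.g. an FDOW=7 override
  }
}
{code}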

 


was (Author: janhoy):
Please also add a line to CHANGES.txt to describe your change. And if you want 
to attempt Reference Guide documentation update for 
[https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#date-math-syntax]
 that would be perfect, others can language wash and check correctness. 

We should find a way to configure start of week. Since Solr already abides by 
[ISO-8601|https://en.wikipedia.org/wiki/ISO_8601] and UTC when it comes to 
dates, it makes more sense to let the default first day of week be Monday as 
defined in the standard ([https://en.wikipedia.org/wiki/Week,] 
[https://www.timeanddate.com/calendar/days/monday.html,] 
[https://docs.oracle.com/javase/8/docs/api/java/time/DayOfWeek.html]).

In order to ask Solr to use Sunday as first day for date math I suggest a new 
request parameter {{FDOW=Sun}} that could be described in the section [Request 
Parameters That Affect Date 
Math|https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#request-parameters-that-affect-date-math]

 

> Support a new type of unit 'WEEK ' for DateMathParser
> -
>
> Key: SOLR-13099
> URL: https://issues.apache.org/jira/browse/SOLR-13099
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Haochao Zhuang
>Priority: Major
> Attachments: SOLR-13099.patch
>
>
> For convenience purposes, I think a WEEK unit is necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13108) RelatednessAgg ignores cacheDf, consults filterCache for every bucket/term

2019-01-03 Thread Michael Gibney (JIRA)
Michael Gibney created SOLR-13108:
-

 Summary: RelatednessAgg ignores cacheDf, consults filterCache for 
every bucket/term
 Key: SOLR-13108
 URL: https://issues.apache.org/jira/browse/SOLR-13108
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 7.4, master (8.0)
Reporter: Michael Gibney


The {{relatedness}} aggregation function in the JSON Facet API ignores the 
{{cacheDf}} setting and consults the filterCache for every bucket. This is OK 
for, e.g., the "Query" facet type, where buckets are explicitly enumerated (and 
thus probably of relatively low cardinality). But for the "Terms" facet type, 
where the bucket count is determined by the corpus, this can be a problem. When 
used over even modestly high-cardinality fields, it is very likely to blow out 
the filterCache.

See also issue with similar consequences: SOLR-9350
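
For reference, the kind of request being described looks roughly like the 
following SolrJ sketch; the collection, field and foreground/background queries 
are made-up examples. A terms facet with a relatedness() sub-aggregation like 
this performs one filterCache lookup per term bucket:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Illustrative only: collection name, field names and fore/back queries are made up.
// Each of the (up to) 100 "keywords" buckets resolves relatedness() against the
// filterCache, which is the behaviour this issue describes.
public class RelatednessFacetExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/books").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.add("fore", "category:fiction");   // foreground set
      q.add("back", "*:*");                // background set
      q.add("json.facet",
          "{ keywords: { type: terms, field: keywords, limit: 100,"
        + "  facet: { r: \"relatedness($fore,$back)\" } } }");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}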



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12888) NestedUpdateProcessor code should activate automatically in 8.0

2019-01-03 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12888:

Attachment: SOLR-12888.patch

> NestedUpdateProcessor code should activate automatically in 8.0
> ---
>
> Key: SOLR-12888
> URL: https://issues.apache.org/jira/browse/SOLR-12888
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-12888.patch
>
>
> If the schema supports it, the NestedUpdateProcessor URP should be registered 
> automatically somehow.  The Factory for this already looks for the existence 
> of certain special fields in the schema, so that's good.  But the URP Factory 
> needs to be added to your chain in any of the ways we support that.  _In 8.0 
> the user shouldn't have to do anything to their solrconfig._  
> We might un-URP this and call directly somewhere.  Or perhaps we might add a 
> special named URP chain (needn't document), defined automatically, that 
> activates at RunURP.  Perhaps other things could be added to this in the 
> future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 256 - Still Unstable

2019-01-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/256/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [MMapDirectory, 
InternalHttpClient, SolrCore, MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:508)  
at org.apache.solr.core.SolrCore.(SolrCore.java:959)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1088)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:158)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411)
  at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:305)  
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)  
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:267)  at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:421) 
 at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:237) 
 at 

Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Uwe Schindler
+1
This should be completely hassle-free for committers. Just reconfigure your 
push location, done.

Uwe

Am January 3, 2019 2:59:54 PM UTC schrieb Adrien Grand :
>+1 to migrate next Wednesday, thanks David for offering your help! Any
>time works for me.
>
>On Thu, Jan 3, 2019 at 2:54 PM David Smiley 
>wrote:
>>
>> I propose we (me) coordinate with them to do this transition on
>Wednesday next week (Jan 9th).  It appears to be a minor inconvenience.
>If there are problems, we'll have some work days after to deal with it.
>And doing this ahead of the mass migration may give us more individual
>attention from the busy infra team if there are problems.  Can I get
>some +1's?
>>
>> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team
> wrote:
>>>
>>> Hello, lucene folks.
>>> As stated earlier in 2018, all git repositories must be migrated
>from
>>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old
>service
>>> is being decommissioned. Your project is receiving this email
>because
>>> you still have repositories on git-wip-us that needs to be migrated.
>>>
>>> The following repositories on git-wip-us belong to your project:
>>>  - lucene-solr.git
>>>
>>>
>>> We are now entering the mandated (coordinated) move stage of the
>roadmap,
>>> and you are asked to please coordinate migration with the Apache
>>> Infrastructure Team before February 7th. All repositories not
>migrated
>>> on February 7th will be mass migrated without warning, and we'd
>appreciate
>>> it if we could work together to avoid a big mess that day :-).
>>>
>>> Moving to gitbox means you will get full write access on GitHub as
>well,
>>> and be able to close/merge pull requests and much more.
>>>
>>> To have your repositories moved, please follow these steps:
>>>
>>> - Ensure consensus on the move (a link to a lists.apache.org thread
>will
>>>   suffice for us as evidence).
>>> - Create a JIRA ticket at
>https://issues.apache.org/jira/browse/INFRA
>>>
>>> Your migration should only take a few minutes. If you wish to
>migrate
>>> at a specific time of day or date, please do let us know in the
>ticket.
>>>
>>> As always, we appreciate your understanding and patience as we move
>>> things around and work to provide better services and features for
>>> the Apache Family.
>>>
>>> Should you wish to contact us with feedback or questions, please do
>so
>>> at: us...@infra.apache.org.
>>>
>>>
>>> With regards,
>>> Apache Infrastructure
>>>
>>>
>>>
>-
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>> --
>> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>http://www.solrenterprisesearchserver.com
>
>
>
>-- 
>Adrien
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Adrien Grand
+1 to migrate next Wednesday, thanks David for offering your help! Any
time works for me.

On Thu, Jan 3, 2019 at 2:54 PM David Smiley  wrote:
>
> I propose we (me) coordinate with them to do this transition on Wednesday 
> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
> problems, we'll have some work days after to deal with it.  And doing this 
> ahead of the mass migration may give us more individual attention from the 
> busy infra team if there are problems.  Can I get some +1's?
>
> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
>  wrote:
>>
>> Hello, lucene folks.
>> As stated earlier in 2018, all git repositories must be migrated from
>> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
>> is being decommissioned. Your project is receiving this email because
>> you still have repositories on git-wip-us that needs to be migrated.
>>
>> The following repositories on git-wip-us belong to your project:
>>  - lucene-solr.git
>>
>>
>> We are now entering the mandated (coordinated) move stage of the roadmap,
>> and you are asked to please coordinate migration with the Apache
>> Infrastructure Team before February 7th. All repositories not migrated
>> on February 7th will be mass migrated without warning, and we'd appreciate
>> it if we could work together to avoid a big mess that day :-).
>>
>> Moving to gitbox means you will get full write access on GitHub as well,
>> and be able to close/merge pull requests and much more.
>>
>> To have your repositories moved, please follow these steps:
>>
>> - Ensure consensus on the move (a link to a lists.apache.org thread will
>>   suffice for us as evidence).
>> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
>>
>> Your migration should only take a few minutes. If you wish to migrate
>> at a specific time of day or date, please do let us know in the ticket.
>>
>> As always, we appreciate your understanding and patience as we move
>> things around and work to provide better services and features for
>> the Apache Family.
>>
>> Should you wish to contact us with feedback or questions, please do so
>> at: us...@infra.apache.org.
>>
>>
>> With regards,
>> Apache Infrastructure
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com



-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #531: SOLR-12768

2019-01-03 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/531
  
Nonetheless we could stop trying to solve this ambiguous case further _for 
now_ (thus commit what you have here) since (a) this syntax is very 
experimental, (b) it's documented nowhere, and (c) it wasn't developed very 
openly.  RE openness: given (a) & (b), the non-openness of (c) is okay, but if 
we _really_ want to make this feature known, it deserves its own issue to 
discuss publicly what the syntax ought to be.   A syntax to match paths is big 
enough that it shouldn't be buried within the scope of some other issue.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13099) Support a new type of unit 'WEEK ' for DateMathParser

2019-01-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16733074#comment-16733074
 ] 

Jan Høydahl commented on SOLR-13099:


Please also add a line to CHANGES.txt to describe your change. And if you want 
to attempt a Reference Guide documentation update for 
[https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#date-math-syntax]
 that would be perfect; others can polish the language and check correctness. 

We should find a way to configure the start of the week. Since Solr already abides by 
[ISO-8601|https://en.wikipedia.org/wiki/ISO_8601] and UTC when it comes to 
dates, it makes more sense to let the default first day of the week be Monday, as 
defined in the standard ([https://en.wikipedia.org/wiki/Week], 
[https://www.timeanddate.com/calendar/days/monday.html], 
[https://docs.oracle.com/javase/8/docs/api/java/time/DayOfWeek.html]).

In order to ask Solr to use Sunday as the first day for date math, I suggest a new 
request parameter {{FDOW=Sun}} that could be described in the section [Request 
Parameters That Affect Date 
Math|https://lucene.apache.org/solr/guide/7_6/working-with-dates.html#request-parameters-that-affect-date-math].
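
To make the Monday-vs-Sunday distinction concrete, here is a minimal java.time 
sketch of what "round down to the start of the week" would mean under the 
ISO-8601 default versus a hypothetical {{FDOW=Sun}} override. Note that both the 
WEEK unit and the {{FDOW}} parameter are still only proposals; the class below is 
purely illustrative and not part of any patch.
{code}
import java.time.DayOfWeek;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.TemporalAdjusters;

public class WeekRoundingSketch {
  public static void main(String[] args) {
    ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);

    // ISO-8601 default: NOW/WEEK would round down to the previous (or same) Monday.
    ZonedDateTime isoWeekStart = now.truncatedTo(ChronoUnit.DAYS)
        .with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));

    // Hypothetical FDOW=Sun override: round to the previous (or same) Sunday instead.
    ZonedDateTime sundayWeekStart = now.truncatedTo(ChronoUnit.DAYS)
        .with(TemporalAdjusters.previousOrSame(DayOfWeek.SUNDAY));

    System.out.println("ISO week start:    " + isoWeekStart);
    System.out.println("Sunday week start: " + sundayWeekStart);
  }
}
{code}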

 

> Support a new type of unit 'WEEK ' for DateMathParser
> -
>
> Key: SOLR-13099
> URL: https://issues.apache.org/jira/browse/SOLR-13099
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Haochao Zhuang
>Priority: Major
> Attachments: SOLR-13099.patch
>
>
> For convenience purposes, I think a WEEK unit is necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #531: SOLR-12768

2019-01-03 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/531
  
> Perhaps we could test whether "foo" is a defined field in the current 
collection?

Eh; that sounds too much like a "guess what the user might mean" kind of 
solution.  Even if "foo" is in the schema, it doesn't prevent an element "foo" 
from being used.

> I personally prefer keeping the API as simple as possible (I'm a believer 
> in the KISS principle).

At the expense of ambiguity?  Trade-offs.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #531: SOLR-12768

2019-01-03 Thread moshebla
Github user moshebla commented on the issue:

https://github.com/apache/lucene-solr/pull/531
  
Perhaps we could test whether "foo" is a defined field in the current 
collection?
I'll try and test this on Sunday as I have got to go.
I personally prefer keeping the API as simple as possible (I'm a believer in 
the KISS principle).


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 1183 - Unstable

2019-01-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1183/

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerTest.testShardLeaderChange

Error Message:
Captured an uncaught exception in thread: Thread[id=23631, 
name=OverseerCollectionConfigSetProcessor-72559407733342211-127.0.0.1:40624_solr-n_01,
 state=RUNNABLE, group=Overseer collection creation process.]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=23631, 
name=OverseerCollectionConfigSetProcessor-72559407733342211-127.0.0.1:40624_solr-n_01,
 state=RUNNABLE, group=Overseer collection creation process.]
at 
__randomizedtesting.SeedInfo.seed([F9BD36BC2D875FB6:27EEB14B371FAA47]:0)
Caused by: org.apache.solr.common.AlreadyClosedException
at __randomizedtesting.SeedInfo.seed([F9BD36BC2D875FB6]:0)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:69)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:358)
at 
org.apache.solr.cloud.OverseerTaskProcessor.amILeader(OverseerTaskProcessor.java:416)
at 
org.apache.solr.cloud.OverseerTaskProcessor.run(OverseerTaskProcessor.java:156)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14214 lines...]
   [junit4] Suite: org.apache.solr.cloud.OverseerTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.OverseerTest_F9BD36BC2D875FB6-001/init-core-data-001
   [junit4]   2> 1422713 WARN  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
startTrackingSearchers: numOpens=17 numCloses=17
   [junit4]   2> 1422713 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1422715 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1422715 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
SecureRandom sanity checks: test.solr.allowed.securerandom=null & 
java.security.egd=file:/dev/./urandom
   [junit4]   2> 1422716 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.c.ZkTestServer 
STARTING ZK TEST SERVER
   [junit4]   2> 1422716 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1422716 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1422816 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.c.ZkTestServer 
start zk server on port:40624
   [junit4]   2> 1422816 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.c.ZkTestServer 
parse host and port list: 127.0.0.1:40624
   [junit4]   2> 1422816 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.c.ZkTestServer 
connecting to 127.0.0.1 40624
   [junit4]   2> 1422820 INFO  (zkConnectionManagerCallback-6967-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422823 INFO  (zkConnectionManagerCallback-6969-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422823 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
initCore
   [junit4]   2> 1422823 INFO  
(SUITE-OverseerTest-seed#[F9BD36BC2D875FB6]-worker) [] o.a.s.SolrTestCaseJ4 
initCore end
   [junit4]   2> 1422829 INFO  
(TEST-OverseerTest.testShardLeaderChange-seed#[F9BD36BC2D875FB6]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testShardLeaderChange
   [junit4]   2> 1422909 INFO  (zkConnectionManagerCallback-6973-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422915 INFO  (zkConnectionManagerCallback-6977-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422926 INFO  (zkConnectionManagerCallback-6983-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422932 INFO  (zkConnectionManagerCallback-6989-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422950 INFO  (zkConnectionManagerCallback-6994-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422950 INFO  (zkConnectionManagerCallback-6999-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1422954 INFO  
(TEST-OverseerTest.testShardLeaderChange-seed#[F9BD36BC2D875FB6]) [] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:40624/solr ready
   [junit4]   2> 1422955 INFO  (Thread-5409) [] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 

Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Cassandra Targett
+1, and thanks for offering to coordinate that.

Can we get an idea from Infra first about what we need to do in our local Git 
checkouts for this migration? Like, do we need to re-check out the project, or can 
we edit our .gitconfigs to point to GitHub instead of git-wip-us?

Another question I have is what might happen to any forks that currently exist 
- the ones I'm thinking of were forked from the GH repo, which I think would 
mean nothing changes for them, but if there is any migration needed there, a 
heads up would be helpful.
On Jan 3, 2019, 7:59 AM -0600, Martin Gainty , wrote:
> +1
> From: David Smiley 
> Sent: Thursday, January 3, 2019 8:45 AM
> To: dev@lucene.apache.org
> Subject: Re: [NOTICE] Mandatory migration of git repositories to 
> gitbox.apache.org
>
> I propose we (me) coordinate with them to do this transition on Wednesday 
> next week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
> problems, we'll have some work days after to deal with it.  And doing this 
> ahead of the mass migration may give us more individual attention from the 
> busy infra team if there are problems.  Can I get some +1's?
>
> On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
>  wrote:
> > Hello, lucene folks.
> > As stated earlier in 2018, all git repositories must be migrated from
> > the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
> > is being decommissioned. Your project is receiving this email because
> > you still have repositories on git-wip-us that need to be migrated.
> >
> > The following repositories on git-wip-us belong to your project:
> >  - lucene-solr.git
> >
> >
> > We are now entering the mandated (coordinated) move stage of the roadmap,
> > and you are asked to please coordinate migration with the Apache
> > Infrastructure Team before February 7th. All repositories not migrated
> > on February 7th will be mass migrated without warning, and we'd appreciate
> > it if we could work together to avoid a big mess that day :-).
> >
> > Moving to gitbox means you will get full write access on GitHub as well,
> > and be able to close/merge pull requests and much more.
> >
> > To have your repositories moved, please follow these steps:
> >
> > - Ensure consensus on the move (a link to a lists.apache.org thread will
> >   suffice for us as evidence).
> > - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
> >
> > Your migration should only take a few minutes. If you wish to migrate
> > at a specific time of day or date, please do let us know in the ticket.
> >
> > As always, we appreciate your understanding and patience as we move
> > things around and work to provide better services and features for
> > the Apache Family.
> >
> > Should you wish to contact us with feedback or questions, please do so
> > at: us...@infra.apache.org.
> >
> >
> > With regards,
> > Apache Infrastructure
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-13107) Math Expressions for custom Apache Zeppelin visualizations

2019-01-03 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13107:
--
Summary: Math Expressions for custom Apache Zeppelin visualizations   (was: 
Math Expressions for Custom Apache Zeppelin visualizations )

> Math Expressions for custom Apache Zeppelin visualizations 
> ---
>
> Key: SOLR-13107
> URL: https://issues.apache.org/jira/browse/SOLR-13107
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for creating Math Expressions for custom Apache 
> Zeppelin Visualizations
> Apache Zeppelin supports a growing number of visualizations out of the box 
> which can all be accessed using the *zplot* Math Expression. 
> Apache Zeppelin also has support for adding Angular code to support custom 
> visualizations. There are a number of areas of interest:
> 1) More advanced map visualizations. Math Expressions can cluster lat/lon 
> points, and draw convex hulls and enclosing circles around clusters. Apache 
> Zeppelin mapping doesn't currently support drawing circles or polygons on a 
> map. There is an interesting post that describes using google maps inside 
> Zeppelin: 
> [https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]
> 2) Graph visualizations. The nodes expression can flexibly walk a graph and 
> export nodes and edges. It would be great to be able to visualize this graph 
> in Zeppelin. Currently the Zeppelin network visualization does not seem 
> robust enough to work with larger networks, so a custom visualization 
> may be needed.
> 3) Clustering visualizations.
> 4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13107) Math Expressions for Custom Apache Zeppelin visualizations

2019-01-03 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13107:
--
Description: 
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a number of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Graph visualizations. The nodes expression can flexibly walk a graph and 
export nodes and edges. It would be great to be able to visualize this graph in 
Zeppelin. Currently the Zeppelin network visualization does not seem robust 
enough to work with larger networks, so a custom visualization may be 
needed.

3) Clustering visualizations.

4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.

  was:
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Graph visualizations. The nodes expression can flexibly walk a graph and 
export nodes and edges. It would be great to be able to visualize this graph in 
Zeppelin. Currently the Zeppelin network visualization does not seem robust 
enough to work with larger networks, so a custom visualization may be 
needed.

3) Clustering visualizations.

4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.


> Math Expressions for Custom Apache Zeppelin visualizations 
> ---
>
> Key: SOLR-13107
> URL: https://issues.apache.org/jira/browse/SOLR-13107
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for creating Math Expressions for custom Apache 
> Zeppelin Visualizations
> Apache Zeppelin supports a growing number of visualizations out of the box 
> which can all be accessed using the *zplot* Math Expression. 
> Apache Zeppelin also has support for adding Angular code to support custom 
> visualizations. There are a number of areas of interest:
> 1) More advanced map visualizations. Math Expressions can cluster lat/lon 
> points, and draw convex hulls and enclosing circles around clusters. Apache 
> Zeppelin mapping doesn't currently support drawing circles or polygons on a 
> map. There is an interesting post that describes using google maps inside 
> Zeppelin: 
> [https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]
> 2) Graph visualizations. The nodes expression can flexibly walk a graph and 
> export nodes and edges. It would be great to be able to visualize this graph 
> in Zeppelin. Currently the Zeppelin network visualization does not seem 
> robust enough to work with larger networks, so a custom visualization 
> may be needed.
> 3) Clustering visualizations.
> 4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13107) Math Expressions for Custom Apache Zeppelin visualizations

2019-01-03 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13107:
--
Description: 
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Graph visualizations. The nodes expression can flexibly walk a graph and 
export nodes and edges. It would be great to be able to visualize this graph in 
Zeppelin. Currently the Zeppelin network visualization does not seem robust 
enough to work with larger networks, so a custom visualization may be 
needed.

3) Clustering visualizations.

4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.

  was:
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Graph visualizations. The nodes expression can flexibly walk a graph and 
export nodes and edges. It would be great to be able to visualize this graph in 
Zeppelin. Currently the Zeppelin network visualization does not seem robust 
enough to work with larger networks, so a custom visualization may be 
needed.

2) Clustering visualizations.

3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.


> Math Expressions for Custom Apache Zeppelin visualizations 
> ---
>
> Key: SOLR-13107
> URL: https://issues.apache.org/jira/browse/SOLR-13107
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for creating Math Expressions for custom Apache 
> Zeppelin Visualizations
> Apache Zeppelin supports a growing number of visualizations out of the box 
> which can all be accessed using the *zplot* Math Expression. 
> Apache Zeppelin also has support for adding Angular code to support custom 
> visualizations. There are a couple of areas of interest:
> 1) More advanced map visualizations. Math Expressions can cluster lat/lon 
> points, and draw convex hulls and enclosing circles around clusters. Apache 
> Zeppelin mapping doesn't currently support drawing circles or polygons on a 
> map. There is an interesting post that describes using google maps inside 
> Zeppelin: 
> [https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]
> 2) Graph visualizations. The nodes expression can flexibly walk a graph and 
> export nodes and edges. It would be great to be able to visualize this graph 
> in Zeppelin. Currently the Zeppelin network visualization does not seem 
> robust enough to work with larger networks, so a custom visualization 
> may be needed.
> 3) Clustering visualizations.
> 4) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13107) Math Expressions for Custom Apache Zeppelin visualizations

2019-01-03 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13107:
--
Description: 
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Graph visualizations. The nodes expression can flexibly walk a graph and 
export nodes and edges. It would be great to be able to visualize this graph in 
Zeppelin. Currently the Zeppelin network visualization does not seem robust 
enough to work with larger networks, so a custom visualization may be 
needed.

2) Clustering visualizations.

3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.

  was:
This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Clustering visualizations.

3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.


> Math Expressions for Custom Apache Zeppelin visualizations 
> ---
>
> Key: SOLR-13107
> URL: https://issues.apache.org/jira/browse/SOLR-13107
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for creating Math Expressions for custom Apache 
> Zeppelin Visualizations
> Apache Zeppelin supports a growing number of visualizations out of the box 
> which can all be accessed using the *zplot* Math Expression. 
> Apache Zeppelin also has support for adding Angular code to support custom 
> visualizations. There are a couple of areas of interest:
> 1) More advanced map visualizations. Math Expressions can cluster lat/lon 
> points, and draw convex hulls and enclosing circles around clusters. Apache 
> Zeppelin mapping doesn't currently support drawing circles or polygons on a 
> map. There is an interesting post that describes using google maps inside 
> Zeppelin: 
> [https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]
> 2) Graph visualizations. The nodes expression can flexibly walk a graph and 
> export nodes and edges. It would be great to be able to visualize this graph 
> in Zeppelin. Currently the Zeppelin network visualization does not seem 
> robust enough to work with larger networks, so a custom visualization 
> may be needed.
> 2) Clustering visualizations.
> 3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23455 - Unstable!

2019-01-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23455/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test

Error Message:
.facet_counts.facet_fields.t_s.null:0!=null

Stack Trace:
junit.framework.AssertionFailedError: 
.facet_counts.facet_fields.t_s.null:0!=null
at 
__randomizedtesting.SeedInfo.seed([200A3530822A081E:A85E0AEA2CD665E6]:0)
at junit.framework.Assert.fail(Assert.java:57)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:987)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1014)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:668)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:631)
at 
org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
at 
org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1070)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1042)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] lucene-solr pull request #:

2019-01-03 Thread dsmiley
Github user dsmiley commented on the pull request:


https://github.com/apache/lucene-solr/commit/c04a3ac306b017054173d986e32ea217e2fc6595#commitcomment-31829970
  
In 
solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformerFactory.java
 on line 162:
I think this illustrates a problem with this syntax that you invented.  The 
code here says the absence of any '/' means it's a "regular filter, not 
hierarchy based".  But how then do you articulate a path filter for all 
elements named "foo" regardless of parentage?  So I think "childFilter" is 
trying to do double-duty here leading to some edge cases as we try to guess 
what was intended.  I propose instead we have a different local param 
"pathFilter".  Both filters will be AND'ed together.  WDYT?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Martin Gainty
+1

From: David Smiley 
Sent: Thursday, January 3, 2019 8:45 AM
To: dev@lucene.apache.org
Subject: Re: [NOTICE] Mandatory migration of git repositories to 
gitbox.apache.org

I propose we (me) coordinate with them to do this transition on Wednesday next 
week (Jan 9th).  It appears to be a minor inconvenience.  If there are 
problems, we'll have some work days after to deal with it.  And doing this 
ahead of the mass migration may give us more individual attention from the busy 
infra team if there are problems.  Can I get some +1's?

On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team 
mailto:infrastruct...@apache.org>> wrote:
Hello, lucene folks.
As stated earlier in 2018, all git repositories must be migrated from
the git-wip-us.apache.org URL to 
gitbox.apache.org, as the old service
is being decommissioned. Your project is receiving this email because
you still have repositories on git-wip-us that need to be migrated.

The following repositories on git-wip-us belong to your project:
 - lucene-solr.git


We are now entering the mandated (coordinated) move stage of the roadmap,
and you are asked to please coordinate migration with the Apache
Infrastructure Team before February 7th. All repositories not migrated
on February 7th will be mass migrated without warning, and we'd appreciate
it if we could work together to avoid a big mess that day :-).

Moving to gitbox means you will get full write access on GitHub as well,
and be able to close/merge pull requests and much more.

To have your repositories moved, please follow these steps:

- Ensure consensus on the move (a link to a 
lists.apache.org thread will
  suffice for us as evidence).
- Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA

Your migration should only take a few minutes. If you wish to migrate
at a specific time of day or date, please do let us know in the ticket.

As always, we appreciate your understanding and patience as we move
things around and work to provide better services and features for
the Apache Family.

Should you wish to contact us with feedback or questions, please do so
at: us...@infra.apache.org.


With regards,
Apache Infrastructure


-
To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 
dev-h...@lucene.apache.org

--
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-13107) Math Expressions for Custom Apache Zeppelin visualizations

2019-01-03 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13107:
--
Summary: Math Expressions for Custom Apache Zeppelin visualizations   (was: 
Math Expressions for Custom Apache Zeppelin Visualizations )

> Math Expressions for Custom Apache Zeppelin visualizations 
> ---
>
> Key: SOLR-13107
> URL: https://issues.apache.org/jira/browse/SOLR-13107
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for creating Math Expressions for custom Apache 
> Zeppelin Visualizations
> Apache Zeppelin supports a growing number of visualizations out of the box 
> which can all be accessed using the *zplot* Math Expression. 
> Apache Zeppelin also has support for adding Angular code to support custom 
> visualizations. There are a couple of areas of interest:
> 1) More advanced map visualizations. Math Expressions can cluster lat/lon 
> points, and draw convex hulls and enclosing circles around clusters. Apache 
> Zeppelin mapping doesn't currently support drawing circles or polygons on a 
> map. There is an interesting post that describes using google maps inside 
> Zeppelin: 
> [https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]
> 2) Clustering visualizations.
> 3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13107) Math Expressions for Custom Apache Zeppelin Visualizations

2019-01-03 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-13107:
-

 Summary: Math Expressions for Custom Apache Zeppelin 
Visualizations 
 Key: SOLR-13107
 URL: https://issues.apache.org/jira/browse/SOLR-13107
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This is an umbrella ticket for creating Math Expressions for custom Apache 
Zeppelin Visualizations

Apache Zeppelin supports a growing number of visualizations out of the box 
which can all be accessed using the *zplot* Math Expression. 

Apache Zeppelin also has support for adding Angular code to support custom 
visualizations. There are a couple of areas of interest:

1) More advanced map visualizations. Math Expressions can cluster lat/lon 
points, and draw convex hulls and enclosing circles around clusters. Apache 
Zeppelin mapping doesn't currently support drawing circles or polygons on a 
map. There is an interesting post that describes using google maps inside 
Zeppelin: 
[https://community.hortonworks.com/articles/75834/using-angular-within-apache-zeppelin-to-create-cus.html.]

2) Clustering visualizations.

3) 3D Visualizations, particularly Multivariate Normal Distribution plotting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #531: SOLR-12768

2019-01-03 Thread moshebla
Github user moshebla commented on the issue:

https://github.com/apache/lucene-solr/pull/531
  
> I wonder... hmmm... maybe the nest path should always start with a '/'? 
It would seem more correct since paths in general usually start with one, and 
it would also make implementing this case simpler.

Uploaded a new commit with the above requested change.
I also fixed TestNestedUpdateProcessor and TestChildDocTransformerHierarchy.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread David Smiley
I propose we (me) coordinate with them to do this transition on Wednesday
next week (Jan 9th).  It appears to be a minor inconvenience.  If there are
problems, we'll have some work days after to deal with it.  And doing this
ahead of the mass migration may give us more individual attention from the
busy infra team if there are problems.  Can I get some +1's?

On Thu, Jan 3, 2019 at 8:18 AM Apache Infrastructure Team <
infrastruct...@apache.org> wrote:

> Hello, lucene folks.
> As stated earlier in 2018, all git repositories must be migrated from
> the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
> is being decommissioned. Your project is receiving this email because
> you still have repositories on git-wip-us that need to be migrated.
>
> The following repositories on git-wip-us belong to your project:
>  - lucene-solr.git
>
>
> We are now entering the mandated (coordinated) move stage of the roadmap,
> and you are asked to please coordinate migration with the Apache
> Infrastructure Team before February 7th. All repositories not migrated
> on February 7th will be mass migrated without warning, and we'd appreciate
> it if we could work together to avoid a big mess that day :-).
>
> Moving to gitbox means you will get full write access on GitHub as well,
> and be able to close/merge pull requests and much more.
>
> To have your repositories moved, please follow these steps:
>
> - Ensure consensus on the move (a link to a lists.apache.org thread will
>   suffice for us as evidence).
> - Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA
>
> Your migration should only take a few minutes. If you wish to migrate
> at a specific time of day or date, please do let us know in the ticket.
>
> As always, we appreciate your understanding and patience as we move
> things around and work to provide better services and features for
> the Apache Family.
>
> Should you wish to contact us with feedback or questions, please do so
> at: us...@infra.apache.org.
>
>
> With regards,
> Apache Infrastructure
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: about testing distributed features

2019-01-03 Thread David Smiley
No, it's not done on a real/physical set of servers.  Instead the test
harness creates multiple Solr/Jetty servers in one JVM; the same harness the
unit tests use.
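
For anyone curious what that looks like in practice, here is a minimal sketch of
such an in-JVM cluster test. The class and helper names are from the
solr-test-framework as I remember them (SolrCloudTestCase and friends), so treat
the details as illustrative rather than copy-paste ready:

```java
// Illustrative sketch of an in-JVM SolrCloud test; exact helper names may differ
// slightly between branches.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyCloudFeatureTest extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    // Starts two Jetty-based Solr nodes plus an embedded ZooKeeper inside this JVM.
    configureCluster(2)
        .addConfig("conf", configset("cloud-minimal"))
        .configure();
  }

  @Test
  public void testCollectionIsQueryable() throws Exception {
    CollectionAdminRequest.createCollection("test", "conf", 2, 1)
        .process(cluster.getSolrClient());
    // An empty, freshly created collection should simply return zero hits.
    assertEquals(0, cluster.getSolrClient()
        .query("test", new SolrQuery("*:*"))
        .getResults().getNumFound());
  }
}
```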

On Thu, Jan 3, 2019 at 5:49 AM Jose Raul Perez Rodriguez <
joseraul.subscript...@gmail.com> wrote:

> Hi David, thanks for the answer.
>
> Let me reformulate the doubt in a different way: what I was trying to ask
> is whether the testing of Solr's distributed features is performed in a
> distributed environment, like deploying a test SolrCloud cluster on some
> machines to run tests on sharding, partitioning, etc.
>
> If this is done in some way here in Solr, could you explain a bit how I can
> get started with it, what technologies are used, etc.?
>
> Thanks a lot in advance
>
>
>
> On 12/31/18 5:37 PM, David Smiley wrote:
>
> Hi Jose,
>
> Just about everything is tested.  If you are looking to improve some
> aspect of SolrCloud's internals, then I suggest looking for existing tests
> of that particular aspect.  If you are working on a feature that wants
> SolrCloud (whole system integration test) then I suggest
> subclassing SolrCloudTestCase.  You will see many such subclasses already,
> and you can learn from them.  You can have an external project that depends
> on solr-test-framework.  Note that it may be essential to depend on that
> before any explicit dependencies on solr core or solrj that you may have.
>
> ~ David
>
> On Sat, Dec 29, 2018 at 7:29 AM Jose Raul Perez Rodriguez <
> joseraul.subscript...@gmail.com> wrote:
>
>> Hi all,
>>
>> I am interested in contributing to Solr, and there are a couple of details I
>> would like to know about but can't find the answer to in the docs. If it's
>> possible: what kinds of aspects of Solr's distributed features
>> (replication, Lucene index sharding, etc.) are tested in the tests
>> included in the repo https://github.com/apache/lucene-solr? And is
>> there anything like an external tool for testing Solr's distributed features
>> as a whole system?
>>
>> Many Thanks in advance,
>>
>> Jr
>>
>> On 12/29/18 1:23 PM, dev-h...@lucene.apache.org wrote:
>> > Hi! This is the ezmlm program. I'm managing the
>> > dev@lucene.apache.org mailing list.
>> >
>> > Acknowledgment: I have added the address
>> >
>> > joseraul.subscript...@gmail.com
>> >
>> > to the dev mailing list.
>> >
>> > Welcome to dev@lucene.apache.org!
>> >
>> > Please save this message so that you know the address you are
>> > subscribed under, in case you later want to unsubscribe or change your
>> > subscription address.
>> >
>> >
>> > --- Administrative commands for the dev list ---
>> >
>> > I can handle administrative requests automatically. Please
>> > do not send them to the list address! Instead, send
>> > your message to the correct command address:
>> >
>> > To subscribe to the list, send a message to:
>> > 
>> >
>> > To remove your address from the list, send a message to:
>> > 
>> >
>> > Send mail to the following for info and FAQ for this list:
>> > 
>> > 
>> >
>> > Similar addresses exist for the digest list:
>> > 
>> > 
>> >
>> > To get messages 123 through 145 (a maximum of 100 per request), mail:
>> > 
>> >
>> > To get an index with subject and author for messages 123-456 , mail:
>> > 
>> >
>> > They are always returned as sets of 100, max 2000 per request,
>> > so you'll actually get 100-499.
>> >
>> > To receive all messages with the same subject as message 12345,
>> > send a short message to:
>> > 
>> >
>> > The messages should contain one line or word of text to avoid being
>> > treated as sp@m, but I will ignore their content.
>> > Only the ADDRESS you send to is important.
>> >
>> > You can start a subscription for an alternate address,
>> > for example "john@host.domain" , just add a hyphen
>> and your
>> > address (with '=' instead of '@') after the command word:
>> > 
>> >
>> > To stop subscription for this address, mail:
>> > 
>> >
>> > In both cases, I'll send a confirmation message to that address. When
>> > you receive it, simply reply to it to complete your subscription.
>> >
>> > If despite following these instructions, you do not get the
>> > desired results, please contact my owner at
>> > dev-ow...@lucene.apache.org. Please be patient, my owner is a
>> > lot slower than I am ;-)
>> >
>> > --- Enclosed is a copy of the request I received.
>> >
>> > Return-Path: 
>> > Received: (qmail 73053 invoked by uid 99); 29 Dec 2018 12:23:23 -
>> > Received: from pnap-us-west-generic-nat.apache.org (HELO
>> spamd3-us-west.apache.org) (209.188.14.142)
>> >  by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 29 Dec 2018
>> 12:23:23 +
>> > Received: from localhost (localhost [127.0.0.1])
>> >   by spamd3-us-west.apache.org (ASF Mail Server at
>> spamd3-us-west.apache.org) with ESMTP id 2F7E0180F8C
>> >   for > gmail@lucene.apache.org>; Sat, 29 Dec 2018 12:23:23 + (UTC)
>> > X-Virus-Scanned: Debian amavisd-new at spamd3-us-west.apache.org
>> > X-Spam-Flag: NO
>> > X-Spam-Score: -0.203
>> > 

Re: Congratulations to the new Lucene/Solr PMC chair, Cassandra Targett

2019-01-03 Thread Nhat Nguyen
Congratulations Cassandra!

> On Jan 3, 2019, at 6:19 AM, Jason Gerlowski  wrote:
> 
> Congrats!
> 
> On Wed, Jan 2, 2019 at 2:09 PM Kevin Risden  wrote:
>> 
>> Congrats!
>> 
>> Kevin Risden
>> 
>> On Wed, Jan 2, 2019 at 1:36 PM Anshum Gupta  wrote:
>>> 
>>> Congratulations, Cassandra!
>>> 
>>> On Sun, Dec 30, 2018 at 11:38 PM Adrien Grand  wrote:
 
 Every year, the Lucene PMC rotates the Lucene PMC chair and Apache
 Vice President position.
 
 This year we have nominated and elected Cassandra Targett as the
 chair, a decision that the board approved in its December 2018
 meeting.
 
 Congratulations, Cassandra!
 
 --
 Adrien
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
>>> 
>>> 
>>> --
>>> Anshum Gupta
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-03 Thread Apache Infrastructure Team
Hello, lucene folks.
As stated earlier in 2018, all git repositories must be migrated from
the git-wip-us.apache.org URL to gitbox.apache.org, as the old service
is being decommissioned. Your project is receiving this email because
you still have repositories on git-wip-us that need to be migrated.

The following repositories on git-wip-us belong to your project:
 - lucene-solr.git


We are now entering the mandated (coordinated) move stage of the roadmap,
and you are asked to please coordinate migration with the Apache
Infrastructure Team before February 7th. All repositories not migrated
on February 7th will be mass migrated without warning, and we'd appreciate
it if we could work together to avoid a big mess that day :-).

Moving to gitbox means you will get full write access on GitHub as well,
and be able to close/merge pull requests and much more.

To have your repositories moved, please follow these steps:

- Ensure consensus on the move (a link to a lists.apache.org thread will
  suffice for us as evidence).
- Create a JIRA ticket at https://issues.apache.org/jira/browse/INFRA

Your migration should only take a few minutes. If you wish to migrate
at a specific time of day or date, please do let us know in the ticket.

As always, we appreciate your understanding and patience as we move
things around and work to provide better services and features for
the Apache Family.

Should you wish to contact us with feedback or questions, please do so
at: us...@infra.apache.org.


With regards,
Apache Infrastructure


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-03 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732977#comment-16732977
 ] 

Shalin Shekhar Mangar commented on SOLR-13035:
--

Okay I understand your point. I'm +1 if we default SOLR_VAR_ROOT to SOLR_TIP.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732987#comment-16732987
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Thanks Shalin for the feedback. I see there are some details left to clean up 
and add. On it.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732971#comment-16732971
 ] 

Shalin Shekhar Mangar commented on SOLR-11126:
--

Thanks Amrit. A few comments:

# The CommonParams used to have {{/admin/health}} but now has 
{{/admin/info/health}}. It is okay to change the path because this API has 
never been released, but there is some inconsistency because 
ImplicitPlugins.json still has {{"/admin/health"}}
# HealthCheckHandler -- the {{cores != null}} check is redundant and the if 
condition can be simplified to {{if (cores == null || cores.isShutDown())}} (see 
the sketch below)
# HealthCheckHandler -- redundant return statement at the end of the 
handleRequestBody method
# Please make a note in the reference guide that this health check handler is 
only available in SolrCloud mode
# There should be at least one test which exercises the v2 API
# The tests can make use of the expectThrows pattern. See the changes made in 
SOLR-12555 for examples.
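
For illustration, here is a rough sketch of the simplified guard from point 2. The 
class and method below are illustrative only; the real code lives in 
{{org.apache.solr.handler.admin.HealthCheckHandler}} and may differ in detail.
{code}
import org.apache.solr.common.SolrException;
import org.apache.solr.core.CoreContainer;

// Illustrative fragment, not the committed handler code.
class HealthCheckGuardSketch {
  static void ensureNodeIsServing(CoreContainer cores) {
    // Single, simplified condition: no separate null check is needed before isShutDown().
    if (cores == null || cores.isShutDown()) {
      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
          "CoreContainer is either not initialized or shutting down");
    }
  }
}
{code}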

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13099) Support a new type of unit 'WEEK ' for DateMathParser

2019-01-03 Thread Haochao Zhuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haochao Zhuang updated SOLR-13099:
--
Attachment: SOLR-13099.patch

> Support a new type of unit 'WEEK ' for DateMathParser
> -
>
> Key: SOLR-13099
> URL: https://issues.apache.org/jira/browse/SOLR-13099
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Haochao Zhuang
>Priority: Major
> Attachments: SOLR-13099.patch
>
>
> For convenience purposes, I think a WEEK unit is necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13099) Support a new type of unit 'WEEK ' for DateMathParser

2019-01-03 Thread Haochao Zhuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haochao Zhuang updated SOLR-13099:
--
Attachment: (was: DateMathParserTest.java)

> Support a new type of unit 'WEEK ' for DateMathParser
> -
>
> Key: SOLR-13099
> URL: https://issues.apache.org/jira/browse/SOLR-13099
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Haochao Zhuang
>Priority: Major
>
> For convenience purposes, I think a WEEK unit is necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


