[jira] [Commented] (SOLR-13952) Separate out Gradle-specific code from other (mostly test) changes and commit separately

2019-11-22 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980670#comment-16980670
 ] 

David Smiley commented on SOLR-13952:
-

So basically you ultimately plan on committing one issue with a bunch of random 
Solr changes?  That's not cool.  Commits & corresponding issues should have a 
particular subject, mostly.

> Separate out Gradle-specific code from other (mostly test) changes and commit 
> separately
> 
>
> Key: SOLR-13952
> URL: https://issues.apache.org/jira/browse/SOLR-13952
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: fordavid.patch
>
>
> The gradle_8 branch has many changes unrelated to gradle. It would be much 
> easier to work on the gradle parts if these were separated. So here's my plan:
> - establish a branch to use for the non-gradle parts of the gradle_8 branch 
> and commit separately. For a first cut, I'll make all the changes I'm 
> confident of, and mark the others with nocommits so we can iterate and decide 
> when to merge to master and 8x.
> - create a "gradle_9" branch that hosts only the gradle changes for us all to 
> iterate on.
> I hope to have a preliminary cut at this over the weekend. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13952) Separate out Gradle-specific code from other (mostly test) changes and commit separately

2019-11-22 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980665#comment-16980665
 ] 

David Smiley commented on SOLR-13952:
-

Ignore his change to XmlOffsetCorrector.  AFAICT he may have barely started trying 
to make this class not depend on Woodstox for some reason, but didn't really 
follow through.  That's my guess.  Harmless.

> Separate out Gradle-specific code from other (mostly test) changes and commit 
> separately
> 
>
> Key: SOLR-13952
> URL: https://issues.apache.org/jira/browse/SOLR-13952
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: fordavid.patch
>
>
> The gradle_8 branch has many changes unrelated to gradle. It would be much 
> easier to work on the gradle parts if these were separated. So here's my plan:
> - establish a branch to use for the non-gradle parts of the gradle_8 branch 
> and commit separately. For a first cut, I'll make all the changes I'm 
> confident of, and mark the others with nocommits so we can iterate and decide 
> when to merge to master and 8x.
> - create a "gradle_9" branch that hosts only the gradle changes for us all to 
> iterate on.
> I hope to have a preliminary cut at this over the weekend. 






[jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980654#comment-16980654
 ] 

David Smiley commented on SOLR-13961:
-

Makes sense to me; thanks for working on this Thomas.  Do tests pass?  I added 
one comment to the PR.  WDYT [~moshebla] ?

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .






[jira] [Assigned] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-13961:
---

Assignee: David Smiley

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Assignee: David Smiley
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1030: SOLR-13961: Fix Atomic Update unset nested documents

2019-11-22 Thread GitBox
dsmiley commented on a change in pull request #1030: SOLR-13961: Fix Atomic 
Update unset nested documents
URL: https://github.com/apache/lucene-solr/pull/1030#discussion_r349857587
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/update/processor/NestedAtomicUpdateTest.java
 ##
 @@ -642,6 +642,118 @@ public void testBlockAtomicRemove() throws Exception {
     );
   }
 
+  @Test
+  public void testBlockAtomicSetToNull() throws Exception {
+    SolrInputDocument doc = sdoc("id", "1",
+        "cat_ss", new String[] {"aaa", "ccc"},
+        "child1", sdocs(sdoc("id", "2", "cat_ss", "child"), sdoc("id", "3", "cat_ss", "child")));
+    assertU(adoc(doc));
+
+    BytesRef rootDocId = new BytesRef("1");
+    SolrCore core = h.getCore();
+    SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, rootDocId,
+        RealTimeGetComponent.Resolution.ROOT_WITH_CHILDREN);
+    // assert block doc has child docs
+    assertTrue(block.containsKey("child1"));
+
+    assertJQ(req("q", "id:1"), "/response/numFound==0");
+
+    // commit the changes
+    assertU(commit());
+
+    SolrInputDocument committedBlock = RealTimeGetComponent.getInputDocument(core, rootDocId,
+        RealTimeGetComponent.Resolution.ROOT_WITH_CHILDREN);
+    BytesRef childDocId = new BytesRef("2");
+    // ensure the whole block is returned when resolveBlock is true and the id of a child doc is provided
+    assertEquals(committedBlock.toString(), RealTimeGetComponent
+        .getInputDocument(core, childDocId, RealTimeGetComponent.Resolution.ROOT_WITH_CHILDREN).toString());
+
+    assertJQ(req("q", "id:1"), "/response/numFound==1");
+
+    assertJQ(req("qt", "/get", "id", "1", "fl", "id, cat_ss, child1, [child]"),
+        "=={\"doc\":{'id':\"1\", cat_ss:[\"aaa\",\"ccc\"], child1:[{\"id\":\"2\",\"cat_ss\":[\"child\"]}, {\"id\":\"3\",\"cat_ss\":[\"child\"]}]}}");
+
+    assertU(commit());
+
+    assertJQ(req("qt", "/get", "id", "1", "fl", "id, cat_ss, child1, [child]"),
+        "=={\"doc\":{'id':\"1\", cat_ss:[\"aaa\",\"ccc\"], child1:[{\"id\":\"2\",\"cat_ss\":[\"child\"]}, {\"id\":\"3\",\"cat_ss\":[\"child\"]}]}}");
+
+    doc = sdoc("id", "1", "child1", Collections.singletonMap("set", null));
+    addAndGetVersion(doc, params("wt", "json"));
+
+    assertJQ(req("qt", "/get", "id", "1", "fl", "id, cat_ss, child1, [child]"),
+        "=={\"doc\":{'id':\"1\", cat_ss:[\"aaa\",\"ccc\"]}}");
+
+    assertU(commit());
+
+    // a cut-n-paste of the first big query, but this time it will be retrieved from the index rather than the
+    // transaction log
+    // this requires ChildDocTransformer to get the whole block, since the document is retrieved using an index lookup
+    assertJQ(req("qt", "/get", "id", "1", "fl", "id, cat_ss, child1, [child]"),
+        "=={'doc':{'id':'1', cat_ss:[\"aaa\",\"ccc\"]}}");
+
+    // ensure the whole block has been committed correctly to the index.
+    assertJQ(req("q", "id:1", "fl", "*, [child]"),
+        "/response/numFound==1",
+        "/response/docs/[0]/id=='1'",
+        "/response/docs/[0]/cat_ss/[0]==\"aaa\"",
+        "/response/docs/[0]/cat_ss/[1]==\"ccc\"");
+  }
+
+  @Test
+  public void testBlockAtomicSetToEmpty() throws Exception {
 Review comment:
   this is a repetition of the above method with a slight change; right?  
Instead of repeating code, can you refactor to a method with a boolean to say 
null or empty?
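[Editor's note: the refactor the reviewer suggests might look roughly like the following sketch. The names (`buildUnsetValue`, `useNull`) are illustrative, not the actual test code; the point is that the only difference between the two tests is whether `child1` is unset with null or with an empty list, which a boolean parameter can select.]

```java
import java.util.Collections;
import java.util.Map;

public class RefactorSketch {
    // One parameterized helper can replace the two near-identical tests:
    // the flag selects whether the field is unset with null or with an
    // empty list, which is the only difference between the two methods.
    static Map<String, Object> buildUnsetValue(boolean useNull) {
        return Collections.singletonMap("set",
                useNull ? null : Collections.emptyList());
    }

    public static void main(String[] args) {
        System.out.println(buildUnsetValue(true));   // {set=null}
        System.out.println(buildUnsetValue(false));  // {set=[]}
    }
}
```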


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9049) Remove FST cachedRootArcs now redundant with direct-addressing

2019-11-22 Thread Robert Muir (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980643#comment-16980643
 ] 

Robert Muir commented on LUCENE-9049:
-

Since removing the cache seems to work out here, I'm curious (can be a separate 
issue) whether the ~30k han cache in kuromoji is redundant after the LUCENE-8920 
changes 
(https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/TokenInfoFST.java#L35-L38)

Actually the entire linked file's purpose is this caching, so if it's not needed 
anymore it would be a nice cleanup. But it was definitely needed for good 
performance before, so we should be careful. The Nori analyzer has the exact 
same thing (a file with the same name) for ~10k hangul syllables.

> Remove FST cachedRootArcs now redundant with direct-addressing
> --
>
> Key: LUCENE-9049
> URL: https://issues.apache.org/jira/browse/LUCENE-9049
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: LUCENE-9049.patch
>
>
> With LUCENE-8920 FST most often encodes top level nodes with 
> direct-addressing (instead of array for binary search). This probably made 
> the cachedRootArcs redundant. So they should be removed, and this will reduce 
> the code.
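[Editor's note: for background on the two arc-lookup strategies the issue contrasts, here is a minimal illustration, not Lucene's actual FST code, of binary search over a sorted arc-label array versus direct addressing by label offset.]

```java
import java.util.Arrays;

public class ArcLookupSketch {
    // Sorted labels of outgoing arcs from a node (illustrative data).
    static final int[] LABELS = {'a', 'c', 'f', 'z'};

    // Strategy 1: binary search over the sorted label array, O(log n) per lookup.
    static int binarySearchArc(int label) {
        int idx = Arrays.binarySearch(LABELS, label);
        return idx >= 0 ? idx : -1;
    }

    // Strategy 2: direct addressing -- a presence table indexed by
    // (label - firstLabel), O(1) per lookup at the cost of some space.
    static final boolean[] PRESENT = new boolean['z' - 'a' + 1];
    static {
        for (int l : LABELS) PRESENT[l - 'a'] = true;
    }

    static boolean directAddressArc(int label) {
        int off = label - 'a';
        return off >= 0 && off < PRESENT.length && PRESENT[off];
    }

    public static void main(String[] args) {
        System.out.println(binarySearchArc('c'));   // 1
        System.out.println(binarySearchArc('b'));   // -1
        System.out.println(directAddressArc('f'));  // true
        System.out.println(directAddressArc('b'));  // false
    }
}
```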






[jira] [Commented] (LUCENE-8985) SynonymGraphFilter cannot handle input stream with tokens filtered.

2019-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/LUCENE-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980596#comment-16980596
 ] 

Jan Høydahl commented on LUCENE-8985:
-

I'd love to see this go in 8.4 but I need help reviewing. [~msoko...@gmail.com]?

> SynonymGraphFilter cannot handle input stream with tokens filtered.
> ---
>
> Key: LUCENE-8985
> URL: https://issues.apache.org/jira/browse/LUCENE-8985
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Chongchen Chen
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.3
>
> Attachments: SGF_SF_interaction.patch.txt
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> [~janhoy] find the bug.
> In an analyzer with e.g. stopFilter where tokens are removed from the stream 
> and replaced with a “hole”, synonymgraphfilter will not preserve these holes 
> but remove them, resulting in certain phrase queries failing.






[jira] [Commented] (SOLR-13465) CoreContainer.auditloggerPlugin should be volatile

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980587#comment-16980587
 ] 

ASF subversion and git services commented on SOLR-13465:


Commit 3adb0903bf629f07cfc4f42fafab24c0fb9718b4 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3adb090 ]

SOLR-13465 CoreContainer.auditloggerPlugin should be volatile (#672)

(cherry picked from commit 312431b1821a67c9ddb7e219b9203d6fd7bdd5df)


> CoreContainer.auditloggerPlugin should be volatile
> --
>
> Key: SOLR-13465
> URL: https://issues.apache.org/jira/browse/SOLR-13465
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CoreContainer.auditloggerPlugin needs to be declared as volatile, see Hoss' 
> comment in SOLR-12120 
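[Editor's note: the hazard `volatile` fixes here can be sketched as follows. `PluginHolder` is an illustrative stand-in, not Solr's actual `CoreContainer` code: without `volatile`, a reader thread may observe a stale (null) reference to a field another thread has already assigned.]

```java
// Minimal sketch of the visibility guarantee volatile provides.
class PluginHolder {
    // Without volatile, a thread calling get() concurrently with set()
    // may keep seeing a stale (null) reference; volatile guarantees that
    // a write by one thread is visible to subsequent reads by others.
    private volatile Runnable auditloggerPlugin;

    void set(Runnable plugin) { auditloggerPlugin = plugin; }
    Runnable get() { return auditloggerPlugin; }
}

public class VolatileSketch {
    public static void main(String[] args) throws InterruptedException {
        PluginHolder holder = new PluginHolder();
        Thread writer = new Thread(() -> holder.set(() -> {}));
        writer.start();
        writer.join();
        // After join() the write is guaranteed visible in any case;
        // volatile matters for readers that do NOT synchronize, as in
        // CoreContainer where requests read the field without locking.
        System.out.println(holder.get() != null);
    }
}
```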






[jira] [Resolved] (SOLR-13465) CoreContainer.auditloggerPlugin should be volatile

2019-11-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13465.

Resolution: Fixed

> CoreContainer.auditloggerPlugin should be volatile
> --
>
> Key: SOLR-13465
> URL: https://issues.apache.org/jira/browse/SOLR-13465
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CoreContainer.auditloggerPlugin needs to be declared as volatile, see Hoss' 
> comment in SOLR-12120 






[jira] [Updated] (SOLR-13465) CoreContainer.auditloggerPlugin should be volatile

2019-11-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13465:
---
Fix Version/s: 8.4

> CoreContainer.auditloggerPlugin should be volatile
> --
>
> Key: SOLR-13465
> URL: https://issues.apache.org/jira/browse/SOLR-13465
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 8.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CoreContainer.auditloggerPlugin needs to be declared as volatile, see Hoss' 
> comment in SOLR-12120 






[jira] [Commented] (SOLR-13465) CoreContainer.auditloggerPlugin should be volatile

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980585#comment-16980585
 ] 

ASF subversion and git services commented on SOLR-13465:


Commit 312431b1821a67c9ddb7e219b9203d6fd7bdd5df in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=312431b ]

SOLR-13465 CoreContainer.auditloggerPlugin should be volatile (#672)



> CoreContainer.auditloggerPlugin should be volatile
> --
>
> Key: SOLR-13465
> URL: https://issues.apache.org/jira/browse/SOLR-13465
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CoreContainer.auditloggerPlugin needs to be declared as volatile, see Hoss' 
> comment in SOLR-12120 






[GitHub] [lucene-solr] janhoy merged pull request #672: SOLR-13465 CoreContainer.auditloggerPlugin should be volatile

2019-11-22 Thread GitBox
janhoy merged pull request #672: SOLR-13465 CoreContainer.auditloggerPlugin 
should be volatile
URL: https://github.com/apache/lucene-solr/pull/672
 
 
   





[jira] [Updated] (SOLR-13837) AuditLogger must handle V2 requests better

2019-11-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13837:
---
Description: 
Spinoff from SOLR-13741

Turns out that Audit logger does not log the body of V2 Admin API requests and 
needs a general improvement in how V2 requests are handled, i.e:
 * We do not audit log the BODY of the request (which is where the action is)
 * We do not detect what collections the request is for (so the 
AuditEvent#collections array is null)
 * -The resource path is internal format {{/v2/c}} instead of {{/api/c}} 
(should we convert the prefix in the AuditEvent?)-

  was:
Spinoff from SOLR-13741

Turns out that Audit logger does not log the body of V2 Admin API requests and 
needs a general improvement in how V2 requests are handled, i.e:
 * We do not audit log the BODY of the request (which is where the action is)
 * We do not detect what collections the request is for (so the 
AuditEvent#collections array is null)
 * The resource path is internal format {{/v2/c}} instead of {{/api/c}} 
(should we convert the prefix in the AuditEvent?)


> AuditLogger must handle V2 requests better
> --
>
> Key: SOLR-13837
> URL: https://issues.apache.org/jira/browse/SOLR-13837
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Affects Versions: 8.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> Spinoff from SOLR-13741
> Turns out that Audit logger does not log the body of V2 Admin API requests 
> and needs a general improvement in how V2 requests are handled, i.e:
>  * We do not audit log the BODY of the request (which is where the action is)
>  * We do not detect what collections the request is for (so the 
> AuditEvent#collections array is null)
>  * -The resource path is internal format {{/v2/c}} instead of {{/api/c}} 
> (should we convert the prefix in the AuditEvent?)-






[jira] [Resolved] (SOLR-13905) Make findRequestType in AuditEvent more robust

2019-11-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13905.

Resolution: Fixed

> Make findRequestType in AuditEvent more robust
> --
>
> Key: SOLR-13905
> URL: https://issues.apache.org/jira/browse/SOLR-13905
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Affects Versions: 8.3
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In SOLR-13941 we fixed the root cause for a NullPointer exception in 
> findRequestType for certain AuditEvents.
> In this issue we make it even more robust and make the pattern matching more 
> performant at the same time as detecting some more patterns for ADMIN 
> requests.






[jira] [Commented] (SOLR-13905) Make findRequestType in AuditEvent more robust

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980563#comment-16980563
 ] 

ASF subversion and git services commented on SOLR-13905:


Commit 29e172f6e2b4c5f8e3cf57ee1754777323ccdb86 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=29e172f ]

SOLR-13905 Make findRequestType in AuditEvent more robust (#1014)

(cherry picked from commit e45c5ce9b9e70650f119976b8b2d91b3c760cb48)


> Make findRequestType in AuditEvent more robust
> --
>
> Key: SOLR-13905
> URL: https://issues.apache.org/jira/browse/SOLR-13905
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Affects Versions: 8.3
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In SOLR-13941 we fixed the root cause for a NullPointer exception in 
> findRequestType for certain AuditEvents.
> In this issue we make it even more robust and make the pattern matching more 
> performant at the same time as detecting some more patterns for ADMIN 
> requests.






[jira] [Commented] (SOLR-13905) Make findRequestType in AuditEvent more robust

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980550#comment-16980550
 ] 

ASF subversion and git services commented on SOLR-13905:


Commit e45c5ce9b9e70650f119976b8b2d91b3c760cb48 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e45c5ce ]

SOLR-13905 Make findRequestType in AuditEvent more robust (#1014)




> Make findRequestType in AuditEvent more robust
> --
>
> Key: SOLR-13905
> URL: https://issues.apache.org/jira/browse/SOLR-13905
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Affects Versions: 8.3
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In SOLR-13941 we fixed the root cause for a NullPointer exception in 
> findRequestType for certain AuditEvents.
> In this issue we make it even more robust and make the pattern matching more 
> performant at the same time as detecting some more patterns for ADMIN 
> requests.






[GitHub] [lucene-solr] janhoy merged pull request #1014: SOLR-13905 Make findRequestType in AuditEvent more robust

2019-11-22 Thread GitBox
janhoy merged pull request #1014: SOLR-13905 Make findRequestType in AuditEvent 
more robust
URL: https://github.com/apache/lucene-solr/pull/1014
 
 
   





[jira] [Comment Edited] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980520#comment-16980520
 ] 

Thomas Wöckinger edited comment on SOLR-13961 at 11/22/19 9:42 PM:
---

To have a consistent API unsetting nested documents should work the same as 
unsetting regular fields. For regular fields this is possible and also 
documented. 

More important is the fact that otherwise all nested documents under the given 
path must be known in order to remove the whole sub-tree.

Performance is also much better, because no queries are required to fetch the 
ids of the nested documents to remove.

From 
[https://lucene.apache.org/solr/guide/8_3/updating-parts-of-documents.html#atomic-updates]:

{{set}}
Set or replace the field value(s) with the specified value(s), or remove the 
values if 'null' or empty list is specified as the new value.

May be specified as a single value, or as a list for multiValued fields.


was (Author: thomas.woeckinger):
To have a consistent API unsetting nested documents should work the same as 
unsetting regular fields. For regular fields this is possible and also 
documented. 
 
From 
https://lucene.apache.org/solr/guide/8_3/updating-parts-of-documents.html#atomic-updates
 
{{set}}
Set or replace the field value(s) with the specified value(s), or remove the 
values if 'null' or empty list is specified as the new value.

May be specified as a single value, or as a list for multiValued fields.

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .






[jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980520#comment-16980520
 ] 

Thomas Wöckinger commented on SOLR-13961:
-

To have a consistent API unsetting nested documents should work the same as 
unsetting regular fields. For regular fields this is possible and also 
documented. 
 
From 
https://lucene.apache.org/solr/guide/8_3/updating-parts-of-documents.html#atomic-updates
 
{{set}}
Set or replace the field value(s) with the specified value(s), or remove the 
values if 'null' or empty list is specified as the new value.

May be specified as a single value, or as a list for multiValued fields.
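[Editor's note: concretely, per the documentation quoted above, the atomic-update request body for unsetting the nested children would look like the following. The field names match this issue's example ("child1" holding the child documents of document "1"); `{"set": null}` could equivalently be `{"set": []}`.]

```json
[
  {
    "id": "1",
    "child1": { "set": null }
  }
]
```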

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .






[jira] [Updated] (LUCENE-9060) Fix the files generated python scripts in lucene/util/packed to not use RamUsageEstimator.NUM_BYTES_INT

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9060:
---
Attachment: LUCENE-9060.patch
Status: Open  (was: Open)

Haven't even tried to compile, but I think this is the right general direction. 
If nothing else you won't have to go poking around to find the files I saw.

> Fix the files generated python scripts in lucene/util/packed to not use 
> RamUsageEstimator.NUM_BYTES_INT
> ---
>
> Key: LUCENE-9060
> URL: https://issues.apache.org/jira/browse/LUCENE-9060
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Erick Erickson
>Priority: Major
> Attachments: LUCENE-9060.patch
>
>
> RamUsageEstimator.NUM_BYTES_INT has been removed. But the Python code still 
> puts it in the generated code. Once you run "ant regenerate" (and I had to 
> run it with 24G!) you can no longer build.
> We should verify that warnings against hand-editing end up in the generated 
> code, although they weren't hand-edited in this case.
> It looks like the constants were removed as part of LUCENE-8745.
> I think it's just a straightforward substitution of "Integer.BYTES".






[jira] [Updated] (LUCENE-9060) Fix the files generated python scripts in lucene/util/packed to not use RamUsageEstimator.NUM_BYTES_INT

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9060:
---
Environment: (was: RamUsageEstimator.NUM_BYTES_INT has been removed. 
But the Python code still puts it in the generated code. Once you run "ant 
regenerate" (and I had to run it with 24G!) you can no longer build.

We should verify that warnings against hand-editing end up in the generated 
code, although they weren't hand-edited in this case.

It looks like the constants were removed as part of LUCENE-8745.

I think it's just a straightforward substitution of "Integer.BYTES".)

> Fix the files generated python scripts in lucene/util/packed to not use 
> RamUsageEstimator.NUM_BYTES_INT
> ---
>
> Key: LUCENE-9060
> URL: https://issues.apache.org/jira/browse/LUCENE-9060
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Erick Erickson
>Priority: Major
>







[jira] [Updated] (LUCENE-9060) Fix the files generated python scripts in lucene/util/packed to not use RamUsageEstimator.NUM_BYTES_INT

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9060:
---
Description: 
RamUsageEstimator.NUM_BYTES_INT has been removed. But the Python code still 
puts it in the generated code. Once you run "ant regenerate" (and I had to run 
it with 24G!) you can no longer build.

We should verify that warnings against hand-editing end up in the generated 
code, although they weren't hand-edited in this case.

It looks like the constants were removed as part of LUCENE-8745.

I think it's just a straightforward substitution of "Integer.BYTES".
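The suggested substitution can be sanity-checked outside the build: Integer.BYTES (a JDK 8+ constant) is the drop-in replacement for the removed RamUsageEstimator.NUM_BYTES_INT, both being the size of an int in bytes. A minimal sketch (the variable names and the value count are illustrative, not taken from the generated code):

```java
// Minimal sketch of the proposed substitution: the removed
// RamUsageEstimator.NUM_BYTES_INT was the size of an int (4 bytes),
// which the JDK exposes directly as Integer.BYTES since Java 8.
public class NumBytesIntSubstitution {
    public static void main(String[] args) {
        // was: int bytesPerValue = RamUsageEstimator.NUM_BYTES_INT;
        int bytesPerValue = Integer.BYTES;
        int count = 1024; // illustrative value count, not from the issue
        System.out.println(bytesPerValue);         // prints 4
        System.out.println(count * bytesPerValue); // prints 4096
    }
}
```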

> Fix the files generated by Python scripts in lucene/util/packed to not use 
> RamUsageEstimator.NUM_BYTES_INT
> ---
>
> Key: LUCENE-9060
> URL: https://issues.apache.org/jira/browse/LUCENE-9060
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Erick Erickson
>Priority: Major
>
> RamUsageEstimator.NUM_BYTES_INT has been removed. But the Python code still 
> puts it in the generated code. Once you run "ant regenerate" (and I had to 
> run it with 24G!) you can no longer build.
> We should verify that warnings against hand-editing end up in the generated 
> code, although they weren't hand-edited in this case.
> It looks like the constants were removed as part of LUCENE-8745.
> I think it's just a straightforward substitution of "Integer.BYTES".






[jira] [Updated] (LUCENE-9060) Fix the files generated by Python scripts in lucene/util/packed to not use RamUsageEstimator.NUM_BYTES_INT

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9060:
---
Summary: Fix the files generated by Python scripts in lucene/util/packed to 
not use RamUsageEstimator.NUM_BYTES_INT  (was: Fix the files generated by 
Python scripts in lucene/util/packed to not use 
RamUsageEstimator.RamUsageEstimator.NUM_BYTES_INT)

> Fix the files generated by Python scripts in lucene/util/packed to not use 
> RamUsageEstimator.NUM_BYTES_INT
> ---
>
> Key: LUCENE-9060
> URL: https://issues.apache.org/jira/browse/LUCENE-9060
> Project: Lucene - Core
>  Issue Type: Bug
> Environment: RamUsageEstimator.NUM_BYTES_INT has been removed. But 
> the Python code still puts it in the generated code. Once you run "ant 
> regenerate" (and I had to run it with 24G!) you can no longer build.
> We should verify that warnings against hand-editing end up in the generated 
> code, although they weren't hand-edited in this case.
> It looks like the constants were removed as part of LUCENE-8745.
> I think it's just a straightforward substitution of "Integer.BYTES".
>Reporter: Erick Erickson
>Priority: Major
>







[jira] [Created] (LUCENE-9060) Fix the files generated by Python scripts in lucene/util/packed to not use RamUsageEstimator.RamUsageEstimator.NUM_BYTES_INT

2019-11-22 Thread Erick Erickson (Jira)
Erick Erickson created LUCENE-9060:
--

 Summary: Fix the files generated by Python scripts in 
lucene/util/packed to not use RamUsageEstimator.RamUsageEstimator.NUM_BYTES_INT
 Key: LUCENE-9060
 URL: https://issues.apache.org/jira/browse/LUCENE-9060
 Project: Lucene - Core
  Issue Type: Bug
 Environment: RamUsageEstimator.NUM_BYTES_INT has been removed. But the 
Python code still puts it in the generated code. Once you run "ant regenerate" 
(and I had to run it with 24G!) you can no longer build.

We should verify that warnings against hand-editing end up in the generated 
code, although they weren't hand-edited in this case.

It looks like the constants were removed as part of LUCENE-8745.

I think it's just a straightforward substitution of "Integer.BYTES".
Reporter: Erick Erickson









[jira] [Updated] (LUCENE-9058) IntervalQuery.matches() doesn't emit alt field under I.or(I.fixField()) at least

2019-11-22 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-9058:
-
Summary: IntervalQuery.matches() doesn't emit alt field under 
I.or(I.fixField()) at least  (was: IntervalQuery.matches() doesn't emit alt 
field under I,or(I.fixField()) at least)

> IntervalQuery.matches() doesn't emit alt field under I.or(I.fixField()) at 
> least
> ---
>
> Key: LUCENE-9058
> URL: https://issues.apache.org/jira/browse/LUCENE-9058
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/queries
>Reporter: Mikhail Khludnev
>Priority: Major
>
> matches FieldOffsetStrategy.createOffsetsEnumsWeightMatcher() doesn't have 
> alt fields supposed to be provided by underneath Intervals.fixField(). I drop 
> off impacted tests from LUCENE-9031
> cc [~romseygeek]






[jira] [Updated] (LUCENE-9058) IntervalQuery.matches() doesn't emit alt field under I.or(I.fixField()) at least

2019-11-22 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-9058:
-
Description: 
matches FieldOffsetStrategy.createOffsetsEnumsWeightMatcher() doesn't have alt 
fields supposed to be provided by underneath Intervals.fixField(). I drop off 
impacted tests from LUCENE-9031
cc [~romseygeek]

  was:
matches FieldOffsetStrategy.createOffsetsEnumsWeightMatcher() doesn't have alt 
fields supposed to provided by underneath Intervals.fixField(). I drop off 
impacted tests from LUCENE-9031
cc [~romseygeek]


> IntervalQuery.matches() doesn't emit alt field under I.or(I.fixField()) at 
> least
> ---
>
> Key: LUCENE-9058
> URL: https://issues.apache.org/jira/browse/LUCENE-9058
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/queries
>Reporter: Mikhail Khludnev
>Priority: Major
>
> matches FieldOffsetStrategy.createOffsetsEnumsWeightMatcher() doesn't have 
> alt fields supposed to be provided by underneath Intervals.fixField(). I drop 
> off impacted tests from LUCENE-9031
> cc [~romseygeek]






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) shows the 
power of combining a composable streaming and mathematics language with a 
search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Function Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with it's meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) shows the 
power of combining a composable streaming and mathematics language with a 
search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions (https://bitly.com/32srTpA) shows the 
> power of combining a composable streaming and mathematics language with a 
> search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. And stream sources will be developed that stream this 
> information so that it 

[jira] [Resolved] (SOLR-13947) Documentation on configuring StreamHandler incorrect

2019-11-22 Thread Tomas Eduardo Fernandez Lobbe (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Eduardo Fernandez Lobbe resolved SOLR-13947.
--
Fix Version/s: 8.4
   master (9.0)
   Resolution: Fixed

> Documentation on configuring StreamHandler incorrect
> 
>
> Key: SOLR-13947
> URL: https://issues.apache.org/jira/browse/SOLR-13947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: David Eric Pugh
>Priority: Minor
> Fix For: master (9.0), 8.4
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The JavaDocs for {{StreamHandler}} are not correct for how to add additional 
> Streaming Expressions.   It refers to a .
> This configuration DOES work for {{GraphHandler}}, which is configured 
> differently than {{StreamHandler}}, and maybe is a separate bug!






[jira] [Commented] (SOLR-13947) Documentation on configuring StreamHandler incorrect

2019-11-22 Thread Tomas Eduardo Fernandez Lobbe (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980462#comment-16980462
 ] 

Tomas Eduardo Fernandez Lobbe commented on SOLR-13947:
--

I forgot to mention this Jira issue in the commit to master: 
https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=537862d5bb42d710208209a403dd1a207a0426f3

> Documentation on configuring StreamHandler incorrect
> 
>
> Key: SOLR-13947
> URL: https://issues.apache.org/jira/browse/SOLR-13947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: David Eric Pugh
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The JavaDocs for {{StreamHandler}} are not correct for how to add additional 
> Streaming Expressions.   It refers to a .
> This configuration DOES work for {{GraphHandler}}, which is configured 
> differently than {{StreamHandler}}, and maybe is a separate bug!






[jira] [Commented] (SOLR-13947) Documentation on configuring StreamHandler incorrect

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980460#comment-16980460
 ] 

ASF subversion and git services commented on SOLR-13947:


Commit 4b37fb0c8f39e92dfe60b00321d03356e1716480 in lucene-solr's branch 
refs/heads/branch_8x from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4b37fb0 ]

SOLR-13947: Document how to load your own streaming plugins (#1025)


> Documentation on configuring StreamHandler incorrect
> 
>
> Key: SOLR-13947
> URL: https://issues.apache.org/jira/browse/SOLR-13947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: David Eric Pugh
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The JavaDocs for {{StreamHandler}} are not correct for how to add additional 
> Streaming Expressions.   It refers to a .
> This configuration DOES work for {{GraphHandler}}, which is configured 
> differently than {{StreamHandler}}, and maybe is a separate bug!






[GitHub] [lucene-solr] tflobbe merged pull request #1025: SOLR-13947 document how to load your own streaming plugins

2019-11-22 Thread GitBox
tflobbe merged pull request #1025: SOLR-13947 document how to load your own 
streaming plugins
URL: https://github.com/apache/lucene-solr/pull/1025
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (SOLR-13960) Reproducible failure in HdfsBasicDistributedZk2Test

2019-11-22 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13960:

Component/s: hdfs
 Hadoop Integration

> Reproducible failure in HdfsBasicDistributedZk2Test 
> 
>
> Key: SOLR-13960
> URL: https://issues.apache.org/jira/browse/SOLR-13960
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Erick Erickson
>Priority: Major
>
> This reproduces for me very consistently on a fresh checkout of master.
> ant test-nocompile  -Dtestcase=HdfsBasicDistributedZk2Test 
> -Dtests.method=test -Dtests.seed=67263E0CD3327A11 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr 
> -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII






[GitHub] [lucene-solr] yonik commented on issue #1009: SOLR-13926

2019-11-22 Thread GitBox
yonik commented on issue #1009: SOLR-13926
URL: https://github.com/apache/lucene-solr/pull/1009#issuecomment-557657500
 
 
   It's verbose (including the use of binary over hex ;-) but fine I think.
   Since you have something else pending @dsmiley , I'll let you commit when 
you're ready. 





[jira] [Resolved] (SOLR-13950) ZkStateReader’s getLeaderRetry method swallowed InterruptedException

2019-11-22 Thread Tomas Eduardo Fernandez Lobbe (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Eduardo Fernandez Lobbe resolved SOLR-13950.
--
Fix Version/s: 8.4
   Resolution: Fixed

Resolving. Thanks [~andy_vuong]!

> ZkStateReader’s getLeaderRetry method swallowed InterruptedException
> 
>
> Key: SOLR-13950
> URL: https://issues.apache.org/jira/browse/SOLR-13950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (9.0), 8.3
>Reporter: Andy Vuong
>Priority: Minor
> Fix For: master (9.0), 8.4
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ZkStateReader’s getLeaderRetry(String collection, String shard, int timeout) 
> swallows the InterruptedException and doesn’t interrupt the current thread 
> despite declaring throws InterruptedException.
>  
> This small patch calls Thread.currentThread().interrupt() and passes the 
> InterruptedException up.
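The pattern the patch describes, restoring the interrupt flag and rethrowing rather than swallowing, can be sketched in isolation. The class, method, and return value below are illustrative stand-ins, not the actual ZkStateReader code:

```java
// Illustrative sketch of the fix pattern: on InterruptedException, restore
// the thread's interrupt status and rethrow, so callers declared with
// "throws InterruptedException" actually observe the interrupt.
public class LeaderLookupSketch {
    public String getLeaderRetry(long timeoutMs) throws InterruptedException {
        try {
            Thread.sleep(timeoutMs); // stand-in for waiting on cluster state
            return "http://host:8983/solr"; // hypothetical leader URL
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag...
            throw e;                            // ...and pass the exception up
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new LeaderLookupSketch().getLeaderRetry(1));
    }
}
```

Catching InterruptedException clears the thread's interrupt status, so without the explicit interrupt() call the information is lost even when the exception is rethrown later in a wrapped form.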






[jira] [Commented] (SOLR-13950) ZkStateReader’s getLeaderRetry method swallowed InterruptedException

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980425#comment-16980425
 ] 

ASF subversion and git services commented on SOLR-13950:


Commit 37512dad4823049cc6e7e0bd832c61df265b6ee1 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=37512da ]

SOLR-13950: Add attribution


> ZkStateReader’s getLeaderRetry method swallowed InterruptedException
> 
>
> Key: SOLR-13950
> URL: https://issues.apache.org/jira/browse/SOLR-13950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (9.0), 8.3
>Reporter: Andy Vuong
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ZkStateReader’s getLeaderRetry(String collection, String shard, int timeout) 
> swallows the InterruptedException and doesn’t interrupt the current thread 
> despite declaring throws InterruptedException.
>  
> This small patch calls Thread.currentThread().interrupt() and passes the 
> InterruptedException up.






[jira] [Commented] (SOLR-13950) ZkStateReader’s getLeaderRetry method swallowed InterruptedException

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980426#comment-16980426
 ] 

ASF subversion and git services commented on SOLR-13950:


Commit a25ecd7f309f6476359408d59b607eea5f2f909e in lucene-solr's branch 
refs/heads/branch_8x from Andy Vuong
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a25ecd7 ]

SOLR-13950: Fix getLeaderRetry swallowing interrupt in ZkStateReader (#1023)

Let InterruptedException bubble up


> ZkStateReader’s getLeaderRetry method swallowed InterruptedException
> 
>
> Key: SOLR-13950
> URL: https://issues.apache.org/jira/browse/SOLR-13950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (9.0), 8.3
>Reporter: Andy Vuong
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ZkStateReader’s getLeaderRetry(String collection, String shard, int timeout) 
> swallows the InterruptedException and doesn’t interrupt the current thread 
> despite declaring throws InterruptedException.
>  
> This small patch calls Thread.currentThread().interrupt() and passes the 
> InterruptedException up.






[jira] [Commented] (SOLR-13950) ZkStateReader’s getLeaderRetry method swallowed InterruptedException

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980427#comment-16980427
 ] 

ASF subversion and git services commented on SOLR-13950:


Commit 65888d0542c9d7a4576a59cce980b0deaa63d641 in lucene-solr's branch 
refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=65888d0 ]

SOLR-13950: Add attribution


> ZkStateReader’s getLeaderRetry method swallowed InterruptedException
> 
>
> Key: SOLR-13950
> URL: https://issues.apache.org/jira/browse/SOLR-13950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (9.0), 8.3
>Reporter: Andy Vuong
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ZkStateReader’s getLeaderRetry(String collection, String shard, int timeout) 
> swallows the InterruptedException and doesn’t interrupt the current thread 
> despite declaring throws InterruptedException.
>  
> This small patch calls Thread.currentThread().interrupt() and passes the 
> InterruptedException up.






[jira] [Commented] (SOLR-13950) ZkStateReader’s getLeaderRetry method swallowed InterruptedException

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980424#comment-16980424
 ] 

ASF subversion and git services commented on SOLR-13950:


Commit 4910c0f558bbf72419e61e3b9bb413348eaea606 in lucene-solr's branch 
refs/heads/master from Andy Vuong
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4910c0f ]

SOLR-13950: Fix getLeaderRetry swallowing interrupt in ZkStateReader (#1023)

Let InterruptedException bubble up


> ZkStateReader’s getLeaderRetry method swallowed InterruptedException
> 
>
> Key: SOLR-13950
> URL: https://issues.apache.org/jira/browse/SOLR-13950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (9.0), 8.3
>Reporter: Andy Vuong
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> ZkStateReader’s getLeaderRetry(String collection, String shard, int timeout) 
> swallows the InterruptedException and doesn’t interrupt the current thread 
> despite declaring throws InterruptedException.
>  
> This small patch calls Thread.currentThread().interrupt() and passes the 
> InterruptedException up.






[GitHub] [lucene-solr] tflobbe merged pull request #1023: SOLR-13950: Fix getLeaderRetry swallowing interrupt in ZkStateReader

2019-11-22 Thread GitBox
tflobbe merged pull request #1023: SOLR-13950: Fix getLeaderRetry swallowing 
interrupt in ZkStateReader
URL: https://github.com/apache/lucene-solr/pull/1023
 
 
   





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-11-22 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r349748411
 
 

 ##
 File path: 
lucene/grouping/src/java/org/apache/lucene/search/grouping/FirstPassGroupingCollector.java
 ##
 @@ -132,17 +132,28 @@ public ScoreMode scoreMode() {
 final Collection<SearchGroup<T>> result = new ArrayList<>();
 int upto = 0;
 final int sortFieldCount = comparators.length;
+assert sortFieldCount > 0; // this must always be true because fields Sort 
must contain at least a field
 for(CollectedSearchGroup<T> group : orderedGroups) {
   if (upto++ < groupOffset) {
 continue;
   }
   // System.out.println("  group=" + (group.groupValue == null ? "null" : 
group.groupValue.toString()));
   SearchGroup<T> searchGroup = new SearchGroup<>();
   searchGroup.groupValue = group.groupValue;
+  // We pass this around so that we can get the corresponding solr id when 
serializing the search group to send to the federator
+  searchGroup.topDocLuceneId = group.topDoc;
   searchGroup.sortValues = new Object[sortFieldCount];
   for(int sortFieldIDX=0;sortFieldIDX<sortFieldCount;sortFieldIDX++) {

https://github.com/cpoerschke/lucene-solr/commits/github-bloomberg-SOLR-11831-cpoerschke-13
 explores the 'extend FirstPassGroupingCollector and SearchGroup' route a 
little further:
* presume LUCENE-8728 changes or equivalent are available
* add SolrFirstPassGroupingCollector and SolrSearchGroup classes
* on the `group.skip.second.step` code paths always have SolrSearchGroup 
instead of SearchGroup
* not yet done:
  * creation of SolrFirstPassGroupingCollector instead of 
FirstPassGroupingCollector if-and-only-if appropriate
  * tests need to pass again
  * consider if this would actually be comprehensible, maintainable and 
helpful e.g. w.r.t. (Solr)SearchGroup fields always being filled in.
   
   What do you think?





[jira] [Commented] (SOLR-12859) DocExpirationUpdateProcessorFactory does not work with BasicAuth

2019-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980416#comment-16980416
 ] 

Jan Høydahl commented on SOLR-12859:


I have the same questions. The concept of {{isSolrThread}} was introduced in 
SOLR-7849. [~noble.paul] do you recall?

> DocExpirationUpdateProcessorFactory does not work with BasicAuth
> 
>
> Key: SOLR-12859
> URL: https://issues.apache.org/jira/browse/SOLR-12859
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Varun Thacker
>Priority: Major
>
> I setup a cluster with basic auth and then wanted to use Solr's TTL feature ( 
> DocExpirationUpdateProcessorFactory ) to auto-delete documents.
>  
> Turns out it doesn't work when Basic Auth is enabled. I get the following 
> stacktrace from the logs
> {code:java}
> 2018-10-12 22:06:38.967 ERROR (autoExpireDocs-42-thread-1) [   ] 
> o.a.s.u.p.DocExpirationUpdateProcessorFactory Runtime error in periodic 
> deletion of expired docs: Async exception during distributed update: Error 
> from server at http://192.168.0.8:8983/solr/gettingstarted_shard2_replica_n6: 
> require authentication
> request: 
> http://192.168.0.8:8983/solr/gettingstarted_shard2_replica_n6/update?update.distrib=TOLEADER=http%3A%2F%2F192.168.0.8%3A8983%2Fsolr%2Fgettingstarted_shard1_replica_n2%2F=javabin=2
> org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  Async exception during distributed update: Error from server at 
> http://192.168.0.8:8983/solr/gettingstarted_shard2_replica_n6: require 
> authentication
> request: 
> http://192.168.0.8:8983/solr/gettingstarted_shard2_replica_n6/update?update.distrib=TOLEADER=http%3A%2F%2F192.168.0.8%3A8983%2Fsolr%2Fgettingstarted_shard1_replica_n2%2F=javabin=2
>     at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:964)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1976)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> jimczi - 2018-09-18 13:07:55]
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
>  ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - 
> 

[jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Bar Rotstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980409#comment-16980409
 ] 

Bar Rotstein commented on SOLR-13961:
-

Does Solr support setting a value to null at the moment?

 

I wonder whether this is the way to go, since you can delete the child document 
using the nested atomic update delete operation.
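For reference, the unset form being discussed looks roughly like this (plain maps standing in for the JSON / `SolrInputDocument` payload; the parent id and the `children` field name are made-up examples, not taken from the issue):

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicUpdateShapes {
    /** The documented unset shape: {"id": ..., "children": {"set": null}}. */
    public static Map<String, Object> unsetWithNull() {
        Map<String, Object> op = new HashMap<>();
        op.put("set", null);                  // null value -> unset the field
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "parent-1");            // hypothetical parent document id
        doc.put("children", op);              // hypothetical nested-document field
        return doc;
    }
}
```

Note `HashMap` rather than `Map.of`, since `Map.of` rejects null values; this is the shape the reference documentation suggests and that the issue reports as failing with "undefined field".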

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-13959) Upgrade log4j2 to the current version, presently 2.12.1

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13959:
--
Description: 
First, we still have some interesting logging bleed leaks that we can't account 
for; it's always possible an upgrade helps anyway, and I'm an optimist.

Second, there's a long discussion about leaked threads (albeit under Tomcat) 
and some vague references to rethinking how threads are handled in Log4j2. We 
already have a hack to not fail on leaked logger threads in our tests; perhaps 
this will address that need.

Third, "Version 2.12.0 introduces support for accessing Docker container 
information", which may be increasingly interesting.

I won't get to this soon, so anyone who wants to pick it up, please do. Might be 
wise to wait until after the 8.4 branch is cut; I'm always a little leery of 
upgrading things just before a release.

  was:
First, we still have some interesting logging bleed leaks that we can't account 
for; it's always possible an upgrade helps anyway, and I'm an optimist.

Second, there's a long discussion about leaked threads (albeit under Tomcat) 
and some vague references to rethinking how threads are handled in Log4j2. We 
already have a hack to not fail on leaked logger threads in our tests; perhaps 
this will address that need.

Third, "Version 2.12.0 introduces support for accessing Docker container 
information", which may be increasingly interesting.

I won't get to this soon, so anyone who wants to pick it up, please do.


> Upgrade log4j2 to the current version, presently 2.12.1 
> 
>
> Key: SOLR-13959
> URL: https://issues.apache.org/jira/browse/SOLR-13959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Priority: Major
>
> First, we still have some interesting logging bleed leaks that we can't 
> account for; it's always possible an upgrade helps anyway, and I'm an optimist.
> Second, there's a long discussion about leaked threads (albeit under Tomcat) 
> and some vague references to rethinking how threads are handled in Log4j2. We 
> already have a hack to not fail on leaked logger threads in our tests, 
> perhaps this will address that need.
> Third, "Version 2.12.0 introduces support for accessing Docker container 
> information", which may be increasingly interesting.
> I won't get to this soon, so anyone who wants to pick it up, please do. Might 
> be wise to wait until after the 8.4 branch is cut; I'm always a little leery 
> of upgrading things just before a release.






[GitHub] [lucene-solr] jpountz commented on a change in pull request #1031: LUCENE-9059: Reduce garbage created by ByteBuffersDataOutput.

2019-11-22 Thread GitBox
jpountz commented on a change in pull request #1031: LUCENE-9059: Reduce 
garbage created by ByteBuffersDataOutput.
URL: https://github.com/apache/lucene-solr/pull/1031#discussion_r349741552
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/store/ByteBuffersDataOutput.java
 ##
 @@ -279,11 +279,12 @@ public ByteBuffersDataInput toDataInput() {
* Copy the current content of this object into another {@link DataOutput}.
*/
   public void copyTo(DataOutput output) throws IOException {
-for (ByteBuffer bb : toBufferList()) {
+for (ByteBuffer bb : blocks) {
   if (bb.hasArray()) {
 
 Review comment:
   this condition was always `false` before because toBufferList calls 
`asReadOnlyBuffer` on the buffers
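To make that review point concrete, here is a small self-contained sketch using plain `java.nio` (a stand-in, not the actual Lucene class): a read-only view of a heap buffer reports `hasArray() == false`, so an `hasArray()` fast path applied after `asReadOnlyBuffer()` can never trigger.

```java
import java.nio.ByteBuffer;

public class ReadOnlyBufferDemo {

    // Stand-in for the copyTo fast-path condition in the patch under review.
    public static boolean fastPathReachable(ByteBuffer bb) {
        return bb.hasArray();
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(16);       // backed by a byte[]
        ByteBuffer readOnly = heap.asReadOnlyBuffer();   // array access disallowed

        System.out.println(fastPathReachable(heap));     // true
        System.out.println(fastPathReachable(readOnly)); // false
    }
}
```

Iterating over the underlying `blocks` directly, as the patch does, keeps array-backed buffers visible to the fast path.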


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] jpountz commented on a change in pull request #1031: LUCENE-9059: Reduce garbage created by ByteBuffersDataOutput.

2019-11-22 Thread GitBox
jpountz commented on a change in pull request #1031: LUCENE-9059: Reduce 
garbage created by ByteBuffersDataOutput.
URL: https://github.com/apache/lucene-solr/pull/1031#discussion_r349741288
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/store/ByteBuffersDataOutput.java
 ##
 @@ -412,7 +413,11 @@ public long ramBytesUsed() {
* lead to hard-to-debug issues, use with great care.
*/
   public void reset() {
-blocks.stream().forEach(blockReuse);
+if (blockReuse != NO_REUSE) {
 
 Review comment:
   This check isn't related to what I saw in the profile, though it looks like 
it could be an easy win in some cases. The important change is the move from 
`forEach` to a `for` loop.
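A hedged sketch of the shape of that change (the names `blocks`, `blockReuse`, and `NO_REUSE` mirror `ByteBuffersDataOutput`, but this is a stand-in class, not the real implementation): guard out the shared no-op consumer by identity, and iterate with a plain loop instead of `blocks.stream().forEach(...)`, which allocates stream machinery on every `reset()`.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.function.Consumer;

public class ResetSketch {
    // Shared no-op, compared by identity as in the patch under review.
    static final Consumer<ByteBuffer> NO_REUSE = bb -> {};

    final ArrayDeque<ByteBuffer> blocks = new ArrayDeque<>();
    final Consumer<ByteBuffer> blockReuse;

    public ResetSketch(Consumer<ByteBuffer> blockReuse) {
        this.blockReuse = blockReuse;
    }

    public void add(ByteBuffer bb) {
        blocks.addLast(bb);
    }

    /** Returns how many blocks were offered back to the reuse consumer. */
    public int reset() {
        int recycled = 0;
        if (blockReuse != NO_REUSE) {         // easy win: skip the loop for the no-op
            for (ByteBuffer bb : blocks) {    // plain for-loop: no Stream allocation
                blockReuse.accept(bb);
                recycled++;
            }
        }
        blocks.clear();
        return recycled;
    }
}
```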





[GitHub] [lucene-solr] jpountz opened a new pull request #1031: LUCENE-9059: Reduce garbage created by ByteBuffersDataOutput.

2019-11-22 Thread GitBox
jpountz opened a new pull request #1031: LUCENE-9059: Reduce garbage created by 
ByteBuffersDataOutput.
URL: https://github.com/apache/lucene-solr/pull/1031
 
 
   





[jira] [Updated] (SOLR-12217) Add support for shards.preference in SolrJ for single shard cases

2019-11-22 Thread Tomas Eduardo Fernandez Lobbe (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Eduardo Fernandez Lobbe updated SOLR-12217:
-
Description: 
SOLR-11982 Added support for {{shards.preference}}, a way to define the sorting 
of replicas within a shard by preference (replica types/location). This only 
works on multi-shard cases. We should add support for the case of single shards 
when using CloudSolrClient.

*NOTE:* This Jira doesn't cover the non-CloudSolrClient cases (i.e. if you do 
a _curl_ request to a random node in the cluster, the {{shards.preference}} 
parameter is not considered in the case of single shards collections).

  was:SOLR-11982 Added support for {{shards.preference}}, a way to define the 
sorting of replicas within a shard by preference (replica types/location). This 
only works on multi-shard cases. We should add support for the case of single 
shards when using CloudSolrClient


> Add support for shards.preference in SolrJ for single shard cases
> -
>
> Key: SOLR-12217
> URL: https://issues.apache.org/jira/browse/SOLR-12217
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tomas Eduardo Fernandez Lobbe
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> SOLR-11982 Added support for {{shards.preference}}, a way to define the 
> sorting of replicas within a shard by preference (replica types/location). 
> This only works on multi-shard cases. We should add support for the case of 
> single shards when using CloudSolrClient.
> *NOTE:* This Jira doesn't cover the non-CloudSolrClient cases (i.e. if you do 
> a _curl_ request to a random node in the cluster, the {{shards.preference}} 
> parameter is not considered in the case of single shards collections).
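As a concrete illustration of the parameter in question (a plain parameter map rather than SolrJ's `SolrQuery`; the particular preference rules are just documented example values, not mandated by this issue):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ShardsPreferenceParams {
    /** Builds the request parameters whose single-shard handling this issue adds. */
    public static Map<String, String> build() {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("q", "*:*");
        // Sort replicas within a shard: PULL replicas first,
        // then replicas co-located with the querying node.
        params.put("shards.preference", "replica.type:PULL,replica.location:local");
        return params;
    }
}
```

Before this change, CloudSolrClient only applied this sorting when the collection had more than one shard.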






[jira] [Created] (LUCENE-9059) Reduce garbage created by ByteBuffersDataOutput

2019-11-22 Thread Adrien Grand (Jira)
Adrien Grand created LUCENE-9059:


 Summary: Reduce garbage created by ByteBuffersDataOutput
 Key: LUCENE-9059
 URL: https://issues.apache.org/jira/browse/LUCENE-9059
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand


When profiling indexing because of LUCENE-9027, I noticed that indexing 
produces a lot of unnecessary garbage because of ByteBuffersDataOutput, which 
can easily get fixed:
 - reset() is implemented using streams, which apparently create lots of objects
 - copyTo has an optimization for the case where the ByteBuffers are backed by 
an array, but it never gets used because toBufferList makes the buffers 
read-only, which in turn disallows access to the array






[jira] [Commented] (SOLR-3274) ZooKeeper related SolrCloud problems

2019-11-22 Thread Rajeswari Natarajan (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980401#comment-16980401
 ] 

Rajeswari Natarajan commented on SOLR-3274:
---

Still seeing this issue in Solr 7.6 with ZooKeeper 3.4.12.

> ZooKeeper related SolrCloud problems
> 
>
> Key: SOLR-3274
> URL: https://issues.apache.org/jira/browse/SOLR-3274
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
> Environment: Any
>Reporter: Per Steffensen
>Priority: Major
>
> Same setup as in SOLR-3273. Well, if I have to tell the entire truth, we have 7 
> Solr servers running 28 slices of the same collection (collA) - all slices 
> have one replica (two shards all in all - leader + replica) - 56 cores all in 
> all (8 shards on each Solr instance). But anyway...
> Besides the problem reported in SOLR-3273, the system seems to run fine under 
> high load for several hours, but eventually errors like the ones shown below 
> start to occur. I might be wrong, but they all seem to indicate some kind of 
> instability in the collaboration between Solr and ZooKeeper. I have to say 
> that I haven't been there to check ZooKeeper "at the moment those exceptions 
> occur", but basically I don't believe the exceptions occur because ZooKeeper 
> itself is unstable - at least when I go and check ZooKeeper through other 
> "channels" (e.g. my Eclipse ZK plugin) it is always accepting my connection 
> and generally seems to be doing fine.
> Exception 1) Often the first error we see in solr.log is something like this
> {code}
> Mar 22, 2012 5:06:43 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - 
> Updates are disabled.
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.zkCheck(DistributedUpdateProcessor.java:678)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:250)
> at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:140)
> at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:80)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:407)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
> I believe this error basically occurs because SolrZkClient.isConnected 
> reports false, which means that its internal "keeper.getState" does not 
> return ZooKeeper.States.CONNECTED. I'm pretty sure that it had been CONNECTED 
> for a long time, since this error starts occurring after several hours of 
> processing without this problem showing. But why is it suddenly not connected 
> anymore?!
> Exception 2) We also see errors like the following, and if I'm not mistaken, 
> they start occurring shortly after "Exception 1)" (above) shows for the first 
> time
> {code}
> Mar 22, 2012 5:07:26 AM org.apache.solr.common.SolrException log
> SEVERE: 

[jira] [Updated] (SOLR-12217) Add support for shards.preference in SolrJ for single shard cases

2019-11-22 Thread Tomas Eduardo Fernandez Lobbe (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Eduardo Fernandez Lobbe updated SOLR-12217:
-
Summary: Add support for shards.preference in SolrJ for single shard cases  
(was: Add support for shards.preference in single shard cases)

> Add support for shards.preference in SolrJ for single shard cases
> -
>
> Key: SOLR-12217
> URL: https://issues.apache.org/jira/browse/SOLR-12217
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tomas Eduardo Fernandez Lobbe
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> SOLR-11982 Added support for {{shards.preference}}, a way to define the 
> sorting of replicas within a shard by preference (replica types/location). 
> This only works on multi-shard cases. We should add support for the case of 
> single shards when using CloudSolrClient






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) shows the 
power of combining a composable streaming and mathematics language with a 
search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions (https://bitly.com/32srTpA) shows the 
> power of combining a composable streaming and mathematics language with a 
> search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. And stream sources will be developed that stream this 
> information so that 

[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions (https://bitly.com/32srTpA) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

 



> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions (https://bitly.com/32srTpA) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. And stream sources will be 

[jira] [Created] (SOLR-13962) DIH: fields added by update processors to $deleteDocById documents trigger warnings

2019-11-22 Thread Marco Remy (Jira)
Marco Remy created SOLR-13962:
-

 Summary: DIH: fields added by update processors to $deleteDocById 
documents trigger warnings
 Key: SOLR-13962
 URL: https://issues.apache.org/jira/browse/SOLR-13962
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: UpdateRequestProcessors
Affects Versions: 7.7.1
Reporter: Marco Remy


Hello,

We are processing XML data with the DIH. Deleted documents are also coming in 
with XML. Hence the data-config.xml below.
{code:xml}

  
  


  


  

  

{code}
 

We also configured a DefaultValueUpdateProcessor to add an update timestamp to 
all documents.
{code:xml}


  <.../>

  
  
update_timestamp
NOW
  

  

  <.../>

{code}
 

Even though the document is marked to be deleted, the update processor adds the 
timestamp field, which triggers the warning below.
{noformat}
2019-11-22 18:28:19.241 WARN  (qtp436532993-17) [   x:core] 
o.a.s.h.d.SolrWriter Error creating document : SolrInputDocument(fields: 
[update_timestamp=NOW])
org.apache.solr.common.SolrException: Document is missing mandatory uniqueKey 
field: id
{noformat}
 

However, the document is deleted properly.
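One possible workaround shape for the warning (a hypothetical guard sketched with plain maps, not Solr's actual UpdateRequestProcessor API; the field names are the ones from this report): only apply the default value when the incoming document actually carries the uniqueKey field, so pure delete commands pass through untouched.

```java
import java.util.Map;

public class DefaultValueGuard {
    /** Adds the default only to documents that carry the uniqueKey (hypothetical guard). */
    public static void applyDefault(Map<String, Object> doc,
                                    String uniqueKey, String field, Object value) {
        if (doc.containsKey(uniqueKey) && !doc.containsKey(field)) {
            doc.put(field, value);
        }
    }
}
```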






[jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980384#comment-16980384
 ] 

Thomas Wöckinger commented on SOLR-13961:
-

[~dsmiley] or [~gerlowskija], not sure who is responsible for Atomic Update. 
Maybe you can forward this issue. Thx a lot

> Unsetting Nested Documents using Atomic Update leads to SolrException: 
> undefined field
> --
>
> Key: SOLR-13961
> URL: https://issues.apache.org/jira/browse/SOLR-13961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests, UpdateRequestProcessors
>Affects Versions: master (9.0), 8.3, 8.4
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: easyfix
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using null or empty collection to unset nested documents (as suggested by 
> documentation) leads to SolrException: undefined field ... .






[jira] [Updated] (SOLR-13952) Separate out Gradle-specific code from other (mostly test) changes and commit separately

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13952:
--
Attachment: fordavid.patch
Status: Open  (was: Open)

[~dsmiley] The "fordavid.patch" contains the changes Mark made to 
XmlOffsetCorrector.java in the gradle_8 build. Do you have an opinion about 
whether these are valid/dangerous/need-to-be-tested/etc.?

IOW should I revert this change or just commit to master when the time comes?

Thanks

> Separate out Gradle-specific code from other (mostly test) changes and commit 
> separately
> 
>
> Key: SOLR-13952
> URL: https://issues.apache.org/jira/browse/SOLR-13952
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: fordavid.patch
>
>
> The gradle_8 branch has many changes unrelated to gradle. It would be much 
> easier to work on the gradle parts if these were separated. So here's my plan:
> - establish a branch to use for the non-gradle parts of the gradle_8 branch 
> and commit separately. For a first cut, I'll make all the changes I'm 
> confident of, and mark the others with nocommits so we can iterate and decide 
> when to merge to master and 8x.
> - create a "gradle_9" branch that hosts only the gradle changes for us all to 
> iterate on.
> I hope to have a preliminary cut at this over the weekend. 






[GitHub] [lucene-solr] thomaswoeckinger opened a new pull request #1030: SOLR-13961: Fix Atomic Update unset nested documents

2019-11-22 Thread GitBox
thomaswoeckinger opened a new pull request #1030: SOLR-13961: Fix Atomic Update 
unset nested documents
URL: https://github.com/apache/lucene-solr/pull/1030
 
 
   
   
   
   # Description
   
   Please provide a short description of the changes you're making with this 
pull request.
   
   # Solution
   
   Please provide a short description of the approach taken to implement your 
solution.
   
   # Tests
   
   Please describe the tests you've developed or run to confirm this patch 
implements the feature or solves the problem.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [ ] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [ ] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [ ] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [ ] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [ ] I have developed this patch against the `master` branch.
   - [ ] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] andyvuong opened a new pull request #1029: SOLR-13101: Address flakiness of tests using async pulls and handle i…

2019-11-22 Thread GitBox
andyvuong opened a new pull request #1029: SOLR-13101: Address flakiness of 
tests using async pulls and handle i…
URL: https://github.com/apache/lucene-solr/pull/1029
 
 
   …nterrupt properly
   
   Summary
   - Addressing test flakiness from two issues:
 - Tests that rely on async pulls have test scaffolding that allows a 
custom pulling mechanism to be injected into mini-cluster nodes and count down 
CDLs on pull completion. We shouldn't instantiate a new callback object on 
each call to getCorePullTaskCallback.
 - The InterruptedException was being swallowed, causing thread leaks
   
   Changes
   - Refactored and moved the async test logic into 
SolrCloudSharedStoreTestCase. Only instantiate the callback once, not once 
per call.
   - Don't catch InterruptedException in ZkStateReader. I opened a JIRA/PR in 
master for this (https://github.com/apache/lucene-solr/pull/1023) but it hasn't 
been merged yet, so it is included here. Ideally we'll pick it up on the next upgrade.
   





[jira] [Commented] (SOLR-13952) Separate out Gradle-specific code from other (mostly test) changes and commit separately

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980373#comment-16980373
 ] 

ASF subversion and git services commented on SOLR-13952:


Commit 38c2ccf951e13886accecfa7ca2f1b0d1a2af8b3 in lucene-solr's branch 
refs/heads/jira/SOLR-13952 from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=38c2ccf ]

SOLR-13952: Resolved issue with QueryParser.jj and executed the 'ant javacc' 
target


> Separate out Gradle-specific code from other (mostly test) changes and commit 
> separately
> 
>
> Key: SOLR-13952
> URL: https://issues.apache.org/jira/browse/SOLR-13952
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> The gradle_8 branch has many changes unrelated to gradle. It would be much 
> easier to work on the gradle parts if these were separated. So here's my plan:
> - establish a branch to use for the non-gradle parts of the gradle_8 branch 
> and commit separately. For a first cut, I'll make all the changes I'm 
> confident of, and mark the others with nocommits so we can iterate and decide 
> when to merge to master and 8x.
> - create a "gradle_9" branch that hosts only the gradle changes for us all to 
> iterate on.
> I hope to have a preliminary cut at this over the weekend. 






[jira] [Created] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Jira
Thomas Wöckinger created SOLR-13961:
---

 Summary: Unsetting Nested Documents using Atomic Update leads to 
SolrException: undefined field
 Key: SOLR-13961
 URL: https://issues.apache.org/jira/browse/SOLR-13961
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests, UpdateRequestProcessors
Affects Versions: 8.3, master (9.0), 8.4
Reporter: Thomas Wöckinger


Using null or empty collection to unset nested documents (as suggested by 
documentation) leads to SolrException: undefined field ... .






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then also be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. 

[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Function Reference Guide will flow from this as the 
streams of meta-data from the functions are indexed to Solr Cloud collections. 
The full power of Streaming Expressions and Math Expressions can then also be 
used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions are indexed to Solr Cloud collections. The full 
power of Streaming Expressions and Math Expressions can then also be used to 
visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. And 

[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable *Function Reference Guide* will flow from this as 
the streams of meta-data from the functions are indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then also be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Function Reference Guide will flow from this as the 
streams of meta-data from the functions are indexed to Solr Cloud collections. 
The full power of Streaming Expressions and Math Expressions can then also be 
used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions 

[jira] [Updated] (SOLR-13957) Add sensible defaults for the facet, random and update Streaming Expressions

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13957:
--
Summary: Add sensible defaults for the facet, random and update Streaming 
Expressions  (was: Add sensible defaults for the facet and random Streaming 
Expressions)

> Add sensible defaults for the facet, random and update Streaming Expressions
> 
>
> Key: SOLR-13957
> URL: https://issues.apache.org/jira/browse/SOLR-13957
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 8.4
>
>
> This ticket will add sensible defaults to the *facet* and *random* streaming 
> expressions so that users can type in fewer parameters and receive sensible 
> results. This is part of an overall set of changes designed to make Streaming 
> Expressions and Math Expressions as easy as possible to use to drive adoption.






[jira] [Updated] (SOLR-13957) Add sensible defaults for the facet, random and update Streaming Expressions

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13957:
--
Description: This ticket will add sensible defaults to the *facet*, 
*random* and *update* streaming expressions so that users can type in fewer 
parameters and receive sensible results. This is part of an overall set of 
changes designed to make Streaming Expressions and Math Expressions as easy as 
possible to use to drive adoption.  (was: This ticket will add sensible 
defaults to the *facet* and *random* streaming expressions so that users can 
type in fewer parameters and receive sensible results. This is part of an 
overall set of changes designed to make Streaming Expressions and Math 
Expressions as easy as possible to use to drive adoption.)

> Add sensible defaults for the facet, random and update Streaming Expressions
> 
>
> Key: SOLR-13957
> URL: https://issues.apache.org/jira/browse/SOLR-13957
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 8.4
>
>
> This ticket will add sensible defaults to the *facet*, *random* and *update* 
> streaming expressions so that users can type in fewer parameters and receive 
> sensible results. This is part of an overall set of changes designed to make 
> Streaming Expressions and Math Expressions as easy as possible to use to 
> drive adoption.






[jira] [Commented] (LUCENE-9049) Remove FST cachedRootArcs now redundant with direct-addressing

2019-11-22 Thread Jack Conradson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980329#comment-16980329
 ] 

Jack Conradson commented on LUCENE-9049:


That perf test from luceneutil was done with the default of 20 iterations.

> Remove FST cachedRootArcs now redundant with direct-addressing
> --
>
> Key: LUCENE-9049
> URL: https://issues.apache.org/jira/browse/LUCENE-9049
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: LUCENE-9049.patch
>
>
> With LUCENE-8920 FST most often encodes top level nodes with 
> direct-addressing (instead of array for binary search). This probably made 
> the cachedRootArcs redundant. So they should be removed, and this will reduce 
> the code.






[jira] [Created] (SOLR-13960) Reproducible failure in HdfsBasicDistributedZk2Test

2019-11-22 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-13960:
-

 Summary: Reproducible failure in HdfsBasicDistributedZk2Test 
 Key: SOLR-13960
 URL: https://issues.apache.org/jira/browse/SOLR-13960
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


This reproduces for me very consistently on a fresh checkout of master.

ant test-nocompile  -Dtestcase=HdfsBasicDistributedZk2Test -Dtests.method=test 
-Dtests.seed=67263E0CD3327A11 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr -Dtests.timezone=America/St_Lucia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII








[jira] [Resolved] (SOLR-13955) SHARED replicas can recover on clean disk

2019-11-22 Thread Yonik Seeley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-13955.
-
Resolution: Fixed

Thanks Bilal!
Context for others: I committed this one quickly to the feature branch since it 
was already reviewed/committed within Salesforce. We're working on having more 
of the discussion & review for future development "in the open"... stay tuned!

> SHARED replicas can recover on clean disk
> -
>
> Key: SOLR-13955
> URL: https://issues.apache.org/jira/browse/SOLR-13955
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Bilal Waheed
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> One of the benefits of SHARED replica is that it provides the ability to run 
> SolrCloud with ephemeral disks (containers). Since the source of truth is in 
> shared store, we can safely recover index from there. But currently there is 
> no support to reason about the core descriptors that do not live locally on 
> the disk. The purpose of this task is to discover missing core descriptors 
> for SHARED replicas from ZK.






[jira] [Commented] (SOLR-13952) Separate out Gradle-specific code from other (mostly test) changes and commit separately

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980311#comment-16980311
 ] 

ASF subversion and git services commented on SOLR-13952:


Commit 6474ea3c2296f3a4656c2d58ba6df277b2e2ba59 in lucene-solr's branch 
refs/heads/jira/SOLR-13952 from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6474ea3 ]

SOLR-13952: Removed all of the nocommits associated with ThreadLeakFilters. 
These are well-known by Log4j and IBM so seem safe


> Separate out Gradle-specific code from other (mostly test) changes and commit 
> separately
> 
>
> Key: SOLR-13952
> URL: https://issues.apache.org/jira/browse/SOLR-13952
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> The gradle_8 branch has many changes unrelated to gradle. It would be much 
> easier to work on the gradle parts if these were separated. So here's my plan:
> - establish a branch to use for the non-gradle parts of the gradle_8 branch 
> and commit separately. For a first cut, I'll make all the changes I'm 
> confident of, and mark the others with nocommits so we can iterate and decide 
> when to merge to master and 8x.
> - create a "gradle_9" branch that hosts only the gradle changes for us all to 
> iterate on.
> I hope to have a preliminary cut at this over the weekend. 






[jira] [Commented] (SOLR-13813) Shared storage online split support

2019-11-22 Thread Yonik Seeley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980307#comment-16980307
 ] 

Yonik Seeley commented on SOLR-13813:
-

If I ignore indexing failures during the split, then the test actually does 
work (so the current NPE doesn't lead to data loss, at least).
I've committed the test, but will leave this issue open until the NPE issue is 
resolved and the test is updated to fail on all indexing failures (see the 
TODOs in the test).

> Shared storage online split support
> ---
>
> Key: SOLR-13813
> URL: https://issues.apache.org/jira/browse/SOLR-13813
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The strategy for online shard splitting is the same as that for normal 
> (non-SHARED) shards.
> During a split, the leader will forward updates to sub-shard leaders, those 
> updates will be buffered by the transaction log while the split is in 
> progress, and then the buffered updates are replayed.
> One change that was added was to push the local index to blob store after 
> buffered updates are applied (but before it is marked as ACTIVE):
> See 
> https://github.com/apache/lucene-solr/commit/fe17c813f5fe6773c0527f639b9e5c598b98c7d4#diff-081b7c2242d674bb175b41b6afc21663
> This issue is about adding tests and ensuring that online shard splitting 
> (while updates are flowing) works reliably.
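The buffer-then-replay sequence described above can be sketched as a toy model. The class and method names below are illustrative only, not Solr's actual implementation:

```python
# Toy model of the split flow: forwarded updates are buffered while the
# sub-shard is under construction, then replayed before it goes ACTIVE.
class SubShard:
    def __init__(self):
        self.index = {}           # doc id -> doc
        self.tlog_buffer = []     # transaction-log buffer used during the split
        self.state = "CONSTRUCTION"

    def receive_update(self, doc_id, doc):
        if self.state == "CONSTRUCTION":
            self.tlog_buffer.append((doc_id, doc))   # buffer, don't index yet
        else:
            self.index[doc_id] = doc

    def finish_split(self, split_docs):
        self.index.update(split_docs)                # docs split off the parent
        for doc_id, doc in self.tlog_buffer:         # replay buffered updates
            self.index[doc_id] = doc
        self.tlog_buffer.clear()
        # In the SHARED-storage case the local index would be pushed to the
        # blob store at this point, before the state flips to ACTIVE.
        self.state = "ACTIVE"

sub = SubShard()
sub.receive_update("doc2", {"v": 2})     # arrives mid-split, gets buffered
sub.finish_split({"doc1": {"v": 1}})
print(sub.state, sorted(sub.index))      # ACTIVE ['doc1', 'doc2']
```

The ordering property being tested is that replay happens after the copied index is installed but before the sub-shard is marked ACTIVE, which is also where the push to blob store was added.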






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links, categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions are indexed to Solr Cloud collections. The full 
power of Streaming Expressions and Math Expressions can then also be used to 
visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
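Step (c) above can be illustrated with a minimal, hypothetical decomposition: an expression becomes a stream of (function, depth) records that could be indexed into a collection. The real *parse* stream and its metadata do not exist yet, so this is only a sketch of the idea:

```python
import re

# Hypothetical sketch of a "parse" stream: walk a streaming expression and
# emit each function call together with its nesting depth.
def parse_stream(expr):
    """Yield (function_name, nesting_depth) for each function call."""
    depth = 0
    for token in re.finditer(r"(\w+)\s*\(|\)", expr):
        if token.group(0) == ")":
            depth -= 1
        else:
            yield (token.group(1), depth)
            depth += 1

expr = "select(search(collection1, q=*:*), add(fieldA, fieldB))"
print(list(parse_stream(expr)))   # [('select', 0), ('search', 1), ('add', 1)]
```

Once flattened like this, each record could carry the function's reference documentation and category, ready for indexing and for the modeling described in step (d).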
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions are indexed to Solr Cloud collections. The full 
power of Streaming Expressions and Math Expressions can then also be used to 
visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links, categories and other meta-data will be added to the 
> functions themselves. And stream 

[jira] [Commented] (SOLR-13813) Shared storage online split support

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980304#comment-16980304
 ] 

ASF subversion and git services commented on SOLR-13813:


Commit d403b4a1261b31f2bde4cbdd30935e5f0042f8ba in lucene-solr's branch 
refs/heads/jira/SOLR-13101 from Yonik Seeley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d403b4a ]

SOLR-13813: add test for shared storage live split (#1003)



> Shared storage online split support
> ---
>
> Key: SOLR-13813
> URL: https://issues.apache.org/jira/browse/SOLR-13813
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The strategy for online shard splitting is the same as that for normal 
> (non-SHARED) shards.
> During a split, the leader will forward updates to sub-shard leaders, those 
> updates will be buffered by the transaction log while the split is in 
> progress, and then the buffered updates are replayed.
> One change that was added was to push the local index to blob store after 
> buffered updates are applied (but before it is marked as ACTIVE):
> See 
> https://github.com/apache/lucene-solr/commit/fe17c813f5fe6773c0527f639b9e5c598b98c7d4#diff-081b7c2242d674bb175b41b6afc21663
> This issue is about adding tests and ensuring that online shard splitting 
> (while updates are flowing) works reliably.






[GitHub] [lucene-solr] yonik merged pull request #1003: SOLR-13813: add test for shared storage live split

2019-11-22 Thread GitBox
yonik merged pull request #1003: SOLR-13813: add test for shared storage live 
split
URL: https://github.com/apache/lucene-solr/pull/1003
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions are indexed to Solr Cloud collections. The full 
power of Streaming Expressions and Math Expressions can then also be used to 
visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A *computable, searchable Reference Guide* will flow from this as the 
streams of meta-data from the functions can be indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then also be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links,  categories and other meta-data will be added to the 
> functions themselves. And stream 

[jira] [Updated] (LUCENE-9042) Refactor TopGroups.merge tests

2019-11-22 Thread Diego Ceccarelli (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Ceccarelli updated LUCENE-9042:
-
Attachment: (was: LUCENE-9042.patch)

> Refactor TopGroups.merge tests
> --
>
> Key: LUCENE-9042
> URL: https://issues.apache.org/jira/browse/LUCENE-9042
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Diego Ceccarelli
>Priority: Minor
> Attachments: LUCENE-9042.patch
>
>
> This task proposes a refactoring of the test coverage for the 
> {{TopGroups.merge}} method implemented in LUCENE-9010. For now it will cover 
> only 3 main cases. 
> 1. Merging to empty TopGroups
> 2. Merging a TopGroups with scores and a TopGroups without scores (currently 
> broken because of LUCENE-8996 bug) 
> 3. Merging two TopGroups with scores.
> I'm planning to increase the coverage testing also invalid inputs but I would 
> do that in a separate PR to keep the code readable. 
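The three cases can be modeled with a toy per-group merge; this is a simplified stand-in for {{TopGroups.merge}}, not Lucene's code, with None standing in for a missing score:

```python
# Toy stand-in for the merge under test: per-group (doc, score) lists from
# two shards are concatenated, sorted by score, and trimmed to top_n.
def merge_groups(a, b, top_n=2):
    merged = {}
    for group in sorted(set(a) | set(b)):
        hits = a.get(group, []) + b.get(group, [])
        # A missing score (None) sorts last here; in Lucene, mixing scored and
        # unscored sides currently yields NaN scores (the LUCENE-8996 bug).
        hits.sort(key=lambda h: h[1] if h[1] is not None else float("-inf"),
                  reverse=True)
        merged[group] = hits[:top_n]
    return merged

shard1 = {"g1": [("d1", 0.9), ("d2", 0.5)]}
shard2 = {"g1": [("d3", 0.7)]}
shard3 = {"g1": [("d4", None)]}           # a shard that carries no scores

# Case 1: merging with an empty TopGroups is the identity
print(merge_groups(shard1, {}))           # {'g1': [('d1', 0.9), ('d2', 0.5)]}
# Case 2: scored + unscored -- unscored hits fall to the bottom in this model
print(merge_groups(shard1, shard3))       # {'g1': [('d1', 0.9), ('d2', 0.5)]}
# Case 3: two scored sides interleave by score
print(merge_groups(shard1, shard2))       # {'g1': [('d1', 0.9), ('d3', 0.7)]}
```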






[jira] [Updated] (LUCENE-9042) Refactor TopGroups.merge tests

2019-11-22 Thread Diego Ceccarelli (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Ceccarelli updated LUCENE-9042:
-
Attachment: LUCENE-9042.patch

> Refactor TopGroups.merge tests
> --
>
> Key: LUCENE-9042
> URL: https://issues.apache.org/jira/browse/LUCENE-9042
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Diego Ceccarelli
>Priority: Minor
> Attachments: LUCENE-9042.patch
>
>
> This task proposes a refactoring of the test coverage for the 
> {{TopGroups.merge}} method implemented in LUCENE-9010. For now it will cover 
> only 3 main cases. 
> 1. Merging to empty TopGroups
> 2. Merging a TopGroups with scores and a TopGroups without scores (currently 
> broken because of LUCENE-8996 bug) 
> 3. Merging two TopGroups with scores.
> I'm planning to increase the coverage testing also invalid inputs but I would 
> do that in a separate PR to keep the code readable. 






[jira] [Created] (SOLR-13959) Upgrade log4j2 to the current version, presently 2.12.1

2019-11-22 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-13959:
-

 Summary: Upgrade log4j2 to the current version, presently 2.12.1 
 Key: SOLR-13959
 URL: https://issues.apache.org/jira/browse/SOLR-13959
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Erick Erickson


First, we still have some interesting logging bleed/leaks that we can't account 
for; it's always possible an upgrade will help, and I'm an optimist.

Second, there's a long discussion about leaked threads (albeit under Tomcat) 
and some vague references to rethinking how threads are handled in Log4J2. We 
already have a hack to not fail on leaked logger threads in our tests, perhaps 
this will address that need.

Third, "Version 2.12.0 introduces support for accessing Docker container 
information", which may be increasingly interesting.

I won't get to this soon, so anyone who wants to pick it up please do.






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A *computable, searchable Reference Guide* will flow from this as the 
streams of meta-data from the functions can be indexed to Solr Cloud 
collections. The full power of Streaming Expressions and Math Expressions can 
then also be used to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links,  categories and other meta-data will be added to the 
> functions themselves. And 

[jira] [Resolved] (SOLR-13956) Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-13956.
---
Resolution: Invalid

The JIRA issue tracker is used for known code issues/changes; it is not a 
support portal. Please raise this question on the users' list at 
solr-u...@lucene.apache.org (see 
http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are a 
_lot_ more people watching that list who may be able to help, and you'll 
probably get responses much more quickly.


If it's determined that this really is a code issue or enhancement to Solr and 
not a configuration/usage problem, we can raise a new JIRA or reopen this one.


> Solr 4 to Solr7 migration DIH behavior change
> -
>
> Key: SOLR-13956
> URL: https://issues.apache.org/jira/browse/SOLR-13956
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.5
>Reporter: shashank bellary
>Priority: Major
> Attachments: serviceprofile-data-import.xml
>
>
> I migrated from Solr 4 to 7.5 and I see an issue with the way DIH is working. 
> I use `JdbcDataSource`; the config file is attached.
> 1) I started seeing an *OutOfMemory* issue, since the MySQL JDBC driver has 
> the known issue of not respecting `batchSize` (though Solr 4 didn't show this 
> behavior). So I added `batchSize=-1` for that.
> 2) After adding that, I'm running into a ResultSet closed exception, shown 
> below, while fetching the *child entity*:
>  
> getNext() failed for query '  SELECT REVIEW AS REVIEWS  FROM 
> SOLR_SITTER_SERVICE_PROFILE_REVIEWS  WHERE SERVICE_PROFILE_ID = '17' ; 
> ':org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.sql.SQLException: Operation not allowed after ResultSet closed
>   at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
>   at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:33)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:424)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
>   at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:794)
>   at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:7145)
>   at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2078)
>   at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2062)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:458)
>   ... 13 more
>  
> Is this a known issue? How do I fix this? Any help is greatly appreciated.
> I searched the user/developer group and didn't find an answer. 
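A plausible explanation, sketched as a toy model below, is that MySQL's streaming mode (which `batchSize=-1` enables via a fetch size of `Integer.MIN_VALUE`) permits only one open streaming result set per connection, so the child-entity query invalidates the parent's ResultSet mid-iteration. The classes here are illustrative, not the JDBC API; a common workaround is to give the child entity its own dataSource (a second connection).

```python
# Toy model (illustrative names, not the JDBC API): a streaming connection
# allows only one open result set, so a nested query closes the parent's.
class ResultSet:
    def __init__(self, rows):
        self.rows = iter(rows)
        self.closed = False

    def next(self):
        if self.closed:
            raise RuntimeError("Operation not allowed after ResultSet closed")
        return next(self.rows, None)

class StreamingConnection:
    def __init__(self):
        self.open_rs = None

    def execute(self, rows):
        if self.open_rs is not None:
            self.open_rs.closed = True    # streaming: prior result set is lost
        self.open_rs = ResultSet(rows)
        return self.open_rs

conn = StreamingConnection()
parent = conn.execute([{"id": 17}, {"id": 18}])   # parent-entity query
first = parent.next()                             # first parent row is fine
conn.execute([{"review": "..."}])                 # child-entity query, same conn
try:
    parent.next()                                 # parent iteration now fails
except RuntimeError as e:
    print(e)             # Operation not allowed after ResultSet closed
```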






[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The [Visual Guide to Streaming Expressions and Math 
expressions|https://bitly.com/32srTpA] is designed to show the power of 
combining a composable streaming and mathematics language with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links,  categories and other meta-data will be added to the 
> functions themselves. And stream sources will be 

[jira] [Updated] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13958:
--
Description: 
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.

  was:
*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The Visual Guide to Streaming Expressions and Math 
expressions ([https://bitly.com/32srTpA]) is designed 
to show the power of combining a composable streaming and mathematics language 
with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentations, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.


> Computing on the language: Streaming Expressions phase II
> -
>
> Key: SOLR-13958
> URL: https://issues.apache.org/jira/browse/SOLR-13958
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> *Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
> The goal of this phase was to build a superior tool for analyzing and 
> visualizing data. The Visual Guide to Streaming Expressions and Math 
> expressions ([https://bitly.com/32srTpA]) is 
> designed to show the power of combining a composable streaming and 
> mathematics language with a search engine.
> In *Phase II*  we will turn this power onto the language itself. This will 
> happen with the following steps:
> a) Streaming Expressions and Math Expressions will become a Stream Source 
> containing information about the language itself. Reference documentation, 
> visualization links,  categories and other meta-data will be added to the 
> functions themselves. And 

[jira] [Commented] (LUCENE-9027) SIMD-based decoding of postings lists

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980279#comment-16980279
 ] 

ASF subversion and git services commented on LUCENE-9027:
-

Commit bc758601cd8f77136e0f8bb8467927c3e37c7ddf in lucene-solr's branch 
refs/heads/branch_8x from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bc75860 ]

LUCENE-9027: Try to get back some indexing speed.


> SIMD-based decoding of postings lists
> -
>
> Key: LUCENE-9027
> URL: https://issues.apache.org/jira/browse/LUCENE-9027
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.4
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> [~rcmuir] has been mentioning the idea for quite some time that we might be 
> able to write the decoding logic in such a way that Java would use SIMD 
> instructions. More recently [~paul.masurel] wrote a [blog 
> post|https://fulmicoton.com/posts/bitpacking/] that raises the point that 
> Lucene could still decode multiple ints at once in a single instruction by 
> packing two ints in a long and we've had some discussions about what we could 
> try in Lucene to speed up the decoding of postings. This made me want to look 
> a bit deeper at what we could do.
> Our current decoding logic reads data in a byte[] and decodes packed integers 
> from it. Unfortunately it doesn't make use of SIMD instructions and looks 
> like 
> [this|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/NaiveByteDecoder.java].
> I confirmed by looking at the generated assembly that if I take an array of 
> integers and shift them all by the same number of bits then Java will use 
> SIMD instructions to shift multiple integers at once. This led me to writing 
> this 
> [implementation|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/SimpleSIMDDecoder.java]
>  that tries as much as possible to shift long sequences of ints by the same 
> number of bits to speed up decoding. It is indeed faster than the current 
> logic we have, up to about 2x faster for some numbers of bits per value.
> Currently the best 
> [implementation|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/SIMDDecoder.java]
>  I've been able to come up with combines the above idea with the idea that 
> Paul mentioned in his blog that consists of emulating SIMD from Java by 
> packing multiple integers into a long: 2 ints, 4 shorts or 8 bytes. It is a 
> bit harder to read but gives another speedup on top of the above 
> implementation.
> I have a [JMH 
> benchmark|https://github.com/jpountz/decode-128-ints-benchmark/] available in 
> case someone would like to play with this and maybe even come up with an even 
> faster implementation. It is 2-2.5x faster than our current implementation 
> for most numbers of bits per value. I'm copying results here:
> {noformat}
>  * `readLongs` just reads 2*bitsPerValue longs from the ByteBuffer, it serves 
> as
>a baseline.
>  * `decodeNaiveFromBytes` reads a byte[] and decodes from it. This is what the
>current Lucene codec does.
>  * `decodeNaiveFromLongs` decodes from longs on the fly.
>  * `decodeSimpleSIMD` is a simple implementation that relies on how Java
>recognizes some patterns and uses SIMD instructions.
>  * `decodeSIMD` is a more complex implementation that both relies on the C2
>compiler to generate SIMD instructions and encodes 8 bytes, 4 shorts or
>2 ints in a long in order to decompress multiple values at once.
> Benchmark   (bitsPerValue)  (byteOrder)   
> Mode  Cnt   Score   Error   Units
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   1   LE  
> thrpt5  12.912 ± 0.393  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   1   BE  
> thrpt5  12.862 ± 0.395  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   2   LE  
> thrpt5  13.040 ± 1.162  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   2   BE  
> thrpt5  13.027 ± 0.270  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   3   LE  
> thrpt5  12.409 ± 0.637  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   3   BE  
> thrpt5  12.268 ± 0.947  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   4   LE  
> thrpt5  14.177 ± 2.263  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   4   BE  
> thrpt5  11.457 ± 0.150  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   5   LE  
> thrpt5  10.988 ± 1.179  ops/us
> 
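As a rough, self-contained illustration of the "emulate SIMD by packing multiple integers into a long" idea discussed above (this is NOT Lucene's actual decoder; the names below are invented for the sketch, and the real implementations live in the linked benchmark repo):

```java
// Poor man's SIMD: pack two 32-bit ints into one long so that a single
// shift plus mask operates on both values at once.
public class PackedShiftDemo {

    // Pack two ints into one long: `hi` in the upper 32 bits, `lo` in the lower.
    static long pack(int hi, int lo) {
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }

    // Logical right shift applied to both 32-bit lanes at once. The mask clears
    // the bits of `hi` that would otherwise bleed into the top of the `lo` lane.
    static long shiftRightBoth(long packed, int bits) {
        long laneMask = 0xFFFFFFFFL >>> bits;
        return (packed >>> bits) & (laneMask | (laneMask << 32));
    }

    public static void main(String[] args) {
        long packed = pack(0xF0, 0xF0);
        long shifted = shiftRightBoth(packed, 4);
        System.out.println((int) (shifted >>> 32)); // prints 15
        System.out.println((int) shifted);          // prints 15
    }
}
```

Decoding 2 ints (or 4 shorts, or 8 bytes) per shift is what gives the extra speedup on top of the compiler-autovectorized variant.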

[jira] [Created] (SOLR-13958) Computing on the language: Streaming Expressions phase II

2019-11-22 Thread Joel Bernstein (Jira)
Joel Bernstein created SOLR-13958:
-

 Summary: Computing on the language: Streaming Expressions phase II
 Key: SOLR-13958
 URL: https://issues.apache.org/jira/browse/SOLR-13958
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Reporter: Joel Bernstein


*Phase I* of Streaming Expressions and Math Expressions is coming to an end. 
The goal of this phase was to build a superior tool for analyzing and 
visualizing data. The [Visual Guide to Streaming Expressions and Math 
expressions|https://bitly.com/32srTpA] is designed to show the power of 
combining a composable streaming and mathematics language with a search engine.

In *Phase II*  we will turn this power onto the language itself. This will 
happen with the following steps:

a) Streaming Expressions and Math Expressions will become a Stream Source 
containing information about the language itself. Reference documentation, 
visualization links,  categories and other meta-data will be added to the 
functions themselves. And stream sources will be developed that stream this 
information so that it can be operated on by the full power of the language.

b) A computable, searchable Reference Guide will flow from this as the streams 
of meta-data from the functions can be indexed to Solr Cloud collections. The 
full power of Streaming Expressions and Math Expressions can then also be used 
to visualize, analyze and model the Reference Guide.

c) Complete programs will also become Stream Sources. A *parse* Stream will 
parse entire expressions and stream back each function in the expression along 
with its meta-data. This allows complex programs to be decomposed and 
understood easily and indexed into Solr Cloud collections.

d) Mathematical models in Math Expressions can then be applied to the 
decomposed expressions that are saved in Solr Cloud indexes to predict the next 
function a user wants, and recommend alternative functions. The language begins 
to write itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9027) SIMD-based decoding of postings lists

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980280#comment-16980280
 ] 

ASF subversion and git services commented on LUCENE-9027:
-

Commit c51006c3c48e41dfb68b62cdaf39916d5eed65b8 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c51006c ]

LUCENE-9027: Try to get back some indexing speed.


> SIMD-based decoding of postings lists
> -
>
> Key: LUCENE-9027
> URL: https://issues.apache.org/jira/browse/LUCENE-9027
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.4
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> [~rcmuir] has been mentioning the idea for quite some time that we might be 
> able to write the decoding logic in such a way that Java would use SIMD 
> instructions. More recently [~paul.masurel] wrote a [blog 
> post|https://fulmicoton.com/posts/bitpacking/] that raises the point that 
> Lucene could still decode multiple ints at once in a single instruction by 
> packing two ints in a long and we've had some discussions about what we could 
> try in Lucene to speed up the decoding of postings. This made me want to look 
> a bit deeper at what we could do.
> Our current decoding logic reads data in a byte[] and decodes packed integers 
> from it. Unfortunately it doesn't make use of SIMD instructions and looks 
> like 
> [this|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/NaiveByteDecoder.java].
> I confirmed by looking at the generated assembly that if I take an array of 
> integers and shift them all by the same number of bits then Java will use 
> SIMD instructions to shift multiple integers at once. This led me to writing 
> this 
> [implementation|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/SimpleSIMDDecoder.java]
>  that tries as much as possible to shift long sequences of ints by the same 
> number of bits to speed up decoding. It is indeed faster than the current 
> logic we have, up to about 2x faster for some numbers of bits per value.
> Currently the best 
> [implementation|https://github.com/jpountz/decode-128-ints-benchmark/blob/master/src/main/java/jpountz/SIMDDecoder.java]
>  I've been able to come up with combines the above idea with the idea that 
> Paul mentioned in his blog that consists of emulating SIMD from Java by 
> packing multiple integers into a long: 2 ints, 4 shorts or 8 bytes. It is a 
> bit harder to read but gives another speedup on top of the above 
> implementation.
> I have a [JMH 
> benchmark|https://github.com/jpountz/decode-128-ints-benchmark/] available in 
> case someone would like to play with this and maybe even come up with an even 
> faster implementation. It is 2-2.5x faster than our current implementation 
> for most numbers of bits per value. I'm copying results here:
> {noformat}
>  * `readLongs` just reads 2*bitsPerValue longs from the ByteBuffer, it serves 
> as
>a baseline.
>  * `decodeNaiveFromBytes` reads a byte[] and decodes from it. This is what the
>current Lucene codec does.
>  * `decodeNaiveFromLongs` decodes from longs on the fly.
>  * `decodeSimpleSIMD` is a simple implementation that relies on how Java
>recognizes some patterns and uses SIMD instructions.
>  * `decodeSIMD` is a more complex implementation that both relies on the C2
>compiler to generate SIMD instructions and encodes 8 bytes, 4 shorts or
>2 ints in a long in order to decompress multiple values at once.
> Benchmark   (bitsPerValue)  (byteOrder)   
> Mode  Cnt   Score   Error   Units
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   1   LE  
> thrpt5  12.912 ± 0.393  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   1   BE  
> thrpt5  12.862 ± 0.395  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   2   LE  
> thrpt5  13.040 ± 1.162  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   2   BE  
> thrpt5  13.027 ± 0.270  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   3   LE  
> thrpt5  12.409 ± 0.637  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   3   BE  
> thrpt5  12.268 ± 0.947  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   4   LE  
> thrpt5  14.177 ± 2.263  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   4   BE  
> thrpt5  11.457 ± 0.150  ops/us
> PackedIntsDecodeBenchmark.decodeNaiveFromBytes   5   LE  
> thrpt5  10.988 ± 1.179  ops/us
> 

[jira] [Commented] (SOLR-13912) Support Count aggregation in JSON facet module

2019-11-22 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980261#comment-16980261
 ] 

Munendra S N commented on SOLR-13912:
-

I usually resolve issues as Fixed only for bugs, whereas I resolve tasks, 
improvements and new features as Done, since we are not usually fixing anything 
in those issues. 
Thanks for the suggestion. From now on, I will follow the convention of 
resolving them as Fixed.

> Support Count aggregation in JSON facet module
> --
>
> Key: SOLR-13912
> URL: https://issues.apache.org/jira/browse/SOLR-13912
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.4
>
> Attachments: SOLR-13912.patch, SOLR-13912.patch, SOLR-13912.patch, 
> SOLR-13912.patch
>
>
> Add a count aggregation in JSON Facet module which behaves similar to 
> StatsComponent's count
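For readers unfamiliar with the JSON Facet module, aggregations are attached as sub-facets of a terms facet. The exact function name and semantics added by this patch are not shown in the thread, so treat `count(price)` below as a hypothetical placeholder that merely follows the existing aggregation-function syntax (like `sum`, `avg`, `min`):

```
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "facet": {
        "price_count": "count(price)"
      }
    }
  }
}
```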






[jira] [Updated] (SOLR-13957) Add sensible defaults for the facet and random Streaming Expressions

2019-11-22 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13957:
--
Summary: Add sensible defaults for the facet and random Streaming 
Expressions  (was: Add sensible defaults for facet and random Streaming 
Expressions)

> Add sensible defaults for the facet and random Streaming Expressions
> 
>
> Key: SOLR-13957
> URL: https://issues.apache.org/jira/browse/SOLR-13957
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 8.3
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 8.4
>
>
> This ticket will add sensible defaults to the *facet* and *random* streaming 
> expressions so that users can type in fewer parameters and receive sensible 
> results. This is part of an overall set of changes designed to make Streaming 
> Expressions and Math Expressions as easy as possible to use to drive adoption.






[jira] [Commented] (LUCENE-9054) reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job under-reports the number of failures

2019-11-22 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980250#comment-16980250
 ] 

Chris M. Hostetter commented on LUCENE-9054:


yep, sorry folks -- dumb mistake.

> reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job 
> under-reports the number of failures
> ---
>
> Key: LUCENE-9054
> URL: https://issues.apache.org/jira/browse/LUCENE-9054
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: master (9.0), 8.4
>
> Attachments: LUCENE-9054.patch, 
> apache_Lucene-Solr-repro_3760.log.txt, 
> apache_Lucene-Solr-repro_3760.testReport.xml
>
>
> Our {{reproduceJenkinsFailures.py}} script as used by the 
> [https://builds.apache.org/job/Lucene-Solr-repro/] runs the tests multiple 
> times, overwriting the same junit {{TEST-*.xml}} test result files each time, 
> causing the jenkins job to under-report how many times the various test(s) 
> fail.






[jira] [Created] (SOLR-13957) Add sensible defaults for facet and random Streaming Expressions

2019-11-22 Thread Joel Bernstein (Jira)
Joel Bernstein created SOLR-13957:
-

 Summary: Add sensible defaults for facet and random Streaming 
Expressions
 Key: SOLR-13957
 URL: https://issues.apache.org/jira/browse/SOLR-13957
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Affects Versions: 8.3
Reporter: Joel Bernstein
Assignee: Joel Bernstein
 Fix For: 8.4


This ticket will add sensible defaults to the *facet* and *random* streaming 
expressions so that users can type in fewer parameters and receive sensible 
results. This is part of an overall set of changes designed to make Streaming 
Expressions and Math Expressions as easy as possible to use to drive adoption.






[jira] [Created] (SOLR-13956) Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread shashank bellary (Jira)
shashank bellary created SOLR-13956:
---

 Summary: Solr 4 to Solr7 migration DIH behavior change
 Key: SOLR-13956
 URL: https://issues.apache.org/jira/browse/SOLR-13956
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - DataImportHandler
Affects Versions: 7.5
Reporter: shashank bellary
 Attachments: serviceprofile-data-import.xml

I migrated from Solr 4 to 7.5 and I see an issue with the way DIH is working. I 
use `JdbcDataSource`; the config file is attached.

1) I started seeing an *OutOfMemory* issue, since the MySQL JDBC driver has the 
known issue of not respecting `batchSize` (though Solr 4 didn't show this 
behavior). So I added `batchSize=-1` for that.

2) After adding that, I'm running into a ResultSet closed exception, as shown 
below, while fetching the *child entity*

 
getNext() failed for query '  SELECT REVIEW AS REVIEWS  FROM 
SOLR_SITTER_SERVICE_PROFILE_REVIEWS  WHERE SERVICE_PROFILE_ID = '17' ; 
':org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.sql.SQLException: Operation not allowed after ResultSet closed
at 
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at 
org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:33)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:424)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
at 
org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:794)
at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:7145)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2078)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2062)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:458)
... 13 more
 

Is this a known issue? How do I fix it? Any help is greatly appreciated.

I searched the user/developer group and didn't find an answer. 
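For reference, the `batchSize=-1` workaround described in step 1 is applied on the DIH data-source element. A minimal sketch (the connection details here are placeholders, not taken from the attached config):

```xml
<dataConfig>
  <!-- batchSize="-1" makes JdbcDataSource set fetchSize to Integer.MIN_VALUE,
       which tells the MySQL driver to stream rows one at a time instead of
       buffering the entire result set in memory. -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="solr"
              password="***"
              batchSize="-1"/>
  <!-- <document><entity .../></document> as in the attached config -->
</dataConfig>
```

Note that with streaming result sets, MySQL only allows one open streaming ResultSet per connection, which is why nested child-entity queries on the same connection can hit "Operation not allowed after ResultSet closed".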






[jira] [Created] (LUCENE-9058) IntervalQuery.matches() doesn't emit alt field under I.or(I.fixField()) at least

2019-11-22 Thread Mikhail Khludnev (Jira)
Mikhail Khludnev created LUCENE-9058:


 Summary: IntervalQuery.matches() doesn't emit alt field under 
I.or(I.fixField()) at least
 Key: LUCENE-9058
 URL: https://issues.apache.org/jira/browse/LUCENE-9058
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: modules/queries
Reporter: Mikhail Khludnev


The matches-based FieldOffsetStrategy.createOffsetsEnumsWeightMatcher() doesn't 
have the alt fields that are supposed to be provided by the underlying 
Intervals.fixField(). I dropped the impacted tests from LUCENE-9031.
cc [~romseygeek]






[jira] [Commented] (LUCENE-9054) reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job under-reports the number of failures

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980234#comment-16980234
 ] 

ASF subversion and git services commented on LUCENE-9054:
-

Commit 9302b98baef66df690f515f1c7e58314972459db in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9302b98 ]

LUCENE-9054: fix stupid nocommit comment

(cherry picked from commit acd56b350d74ba9e746dba82d6cb44cfaf2ff68b)


> reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job 
> under-reports the number of failures
> ---
>
> Key: LUCENE-9054
> URL: https://issues.apache.org/jira/browse/LUCENE-9054
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: master (9.0), 8.4
>
> Attachments: LUCENE-9054.patch, 
> apache_Lucene-Solr-repro_3760.log.txt, 
> apache_Lucene-Solr-repro_3760.testReport.xml
>
>
> Our {{reproduceJenkinsFailures.py}} script as used by the 
> [https://builds.apache.org/job/Lucene-Solr-repro/] runs the tests multiple 
> times, overwriting the same junit {{TEST-*.xml}} test result files each time, 
> causing the jenkins job to under-report how many times the various test(s) 
> fail.






[jira] [Commented] (LUCENE-9054) reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job under-reports the number of failures

2019-11-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980232#comment-16980232
 ] 

ASF subversion and git services commented on LUCENE-9054:
-

Commit acd56b350d74ba9e746dba82d6cb44cfaf2ff68b in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=acd56b3 ]

LUCENE-9054: fix stupid nocommit comment


> reproduceJenkinsFailures.py usage in the Lucene-Solr-repro jenkins job 
> under-reports the number of failures
> ---
>
> Key: LUCENE-9054
> URL: https://issues.apache.org/jira/browse/LUCENE-9054
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: master (9.0), 8.4
>
> Attachments: LUCENE-9054.patch, 
> apache_Lucene-Solr-repro_3760.log.txt, 
> apache_Lucene-Solr-repro_3760.testReport.xml
>
>
> Our {{reproduceJenkinsFailures.py}} script as used by the 
> [https://builds.apache.org/job/Lucene-Solr-repro/] runs the tests multiple 
> times, overwriting the same junit {{TEST-*.xml}} test result files each time, 
> causing the jenkins job to under-report how many times the various test(s) 
> fail.






[jira] [Commented] (LUCENE-9049) Remove FST cachedRootArcs now redundant with direct-addressing

2019-11-22 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980192#comment-16980192
 ] 

Adrien Grand commented on LUCENE-9049:
--

I think Mike M's point is a good one as well; this could save quite some memory 
for users who have many fields.

> Remove FST cachedRootArcs now redundant with direct-addressing
> --
>
> Key: LUCENE-9049
> URL: https://issues.apache.org/jira/browse/LUCENE-9049
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: LUCENE-9049.patch
>
>
> With LUCENE-8920 FST most often encodes top level nodes with 
> direct-addressing (instead of array for binary search). This probably made 
> the cachedRootArcs redundant. So they should be removed, and this will reduce 
> the code.






[jira] [Commented] (LUCENE-9049) Remove FST cachedRootArcs now redundant with direct-addressing

2019-11-22 Thread Bruno Roustant (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980188#comment-16980188
 ] 

Bruno Roustant commented on LUCENE-9049:


I tested with both the "worst case for direct-addressing" and "English 
dictionary words" FSTs. The removal of the root arcs reduces FST size by 0.25%. 
The main advantage is removing the code for clarity (if we confirm the same 
perf with the nightly benchmark).

> Remove FST cachedRootArcs now redundant with direct-addressing
> --
>
> Key: LUCENE-9049
> URL: https://issues.apache.org/jira/browse/LUCENE-9049
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: LUCENE-9049.patch
>
>
> With LUCENE-8920 FST most often encodes top level nodes with 
> direct-addressing (instead of array for binary search). This probably made 
> the cachedRootArcs redundant. So they should be removed, and this will reduce 
> the code.






[jira] [Commented] (SOLR-13948) Tooltip popup for replica information in cloud view clipping

2019-11-22 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980182#comment-16980182
 ] 

Erick Erickson commented on SOLR-13948:
---

I suspect the precommit failure is unrelated to this patch BTW.

[~gus] Are you going to push this? I hate to lose changes like this by having 
them fall through the cracks...

> Tooltip popup for replica information in cloud view clipping
> 
>
> Key: SOLR-13948
> URL: https://issues.apache.org/jira/browse/SOLR-13948
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2
>Reporter: Richard Goodman
>Priority: Minor
> Attachments: SOLR-13948.patch, SOLR-13948.patch, after.png, before.png
>
>
> Our replicas typically have long names, which has never really been a problem 
> for us. But with the new feature in 7.7.2 _(at least it was new for us compared 
> to our previous Solr version)_, in the cloud view, when looking at the status of 
> collections and hovering over a node IP to see the tooltip popup of the 
> replica information, the information will overflow the tooltip window if the 
> replica name is longer than a certain size.
> This small patch fixes that; I've attached before and after screenshots, 
> as well as the patch.






[jira] [Commented] (SOLR-13948) Tooltip popup for replica information in cloud view clipping

2019-11-22 Thread Richard Goodman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980174#comment-16980174
 ] 

Richard Goodman commented on SOLR-13948:


Hmm, I'm not sure why this is failing; I couldn't see anything obvious as 
to why, just:
{code}
validate-source-patterns:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.codehaus.groovy.reflection.CachedClass 
(file:/home/jenkins/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.17.jar)
 to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of 
org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
[source-patterns] nocommit: dev-tools/scripts/reproduceJenkinsFailures.py
{code}

Which doesn't sound related to this patch?
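The `[source-patterns]` line in the log points at a `nocommit` marker in `dev-tools/scripts/reproduceJenkinsFailures.py`, i.e. the check tripped on a file outside the patch. A minimal sketch of that kind of forbidden-pattern scan, hypothetical Python rather than the actual Lucene/Solr Groovy task:

```python
# Hypothetical sketch of a validate-source-patterns-style check: scan
# source files for forbidden markers (e.g. "nocommit") and report hits.
import re

FORBIDDEN = re.compile(r"nocommit", re.IGNORECASE)

def scan(files):
    """Return (pattern, path) pairs for every file containing a marker."""
    hits = []
    for path, text in files.items():
        if FORBIDDEN.search(text):
            hits.append(("nocommit", path))
    return hits

# The first path is taken from the log above; the second is made up.
files = {
    "dev-tools/scripts/reproduceJenkinsFailures.py": "x = 1  # nocommit fix later",
    "solr/webapp/web/css/some.css": ".tooltip { overflow: visible; }",
}
for pattern, path in scan(files):
    print(f"[source-patterns] {pattern}: {path}")
```

A check like this fails the build on any hit, regardless of which change introduced the marker, which is consistent with the failure being unrelated to this patch.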

> Tooltip popup for replica information in cloud view clipping
> 
>
> Key: SOLR-13948
> URL: https://issues.apache.org/jira/browse/SOLR-13948
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2
>Reporter: Richard Goodman
>Priority: Minor
> Attachments: SOLR-13948.patch, SOLR-13948.patch, after.png, before.png
>
>
> Our replicas typically have long names, which has never really been a problem 
> for us. But with the new feature in 7.7.2 _(at least it was new for us compared 
> to our previous Solr version)_, in the cloud view, when looking at the status of 
> collections and hovering over a node IP to see the tooltip popup of the 
> replica information, the information will overflow the tooltip window if the 
> replica name is longer than a certain size.
> This small patch fixes that; I've attached before and after screenshots, 
> as well as the patch.






[jira] [Commented] (LUCENE-9049) Remove FST cachedRootArcs now redundant with direct-addressing

2019-11-22 Thread Bruno Roustant (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980173#comment-16980173
 ] 

Bruno Roustant commented on LUCENE-9049:


+1. Good point with luceneutil, and we'll confirm with the nightly benchmark.

> Remove FST cachedRootArcs now redundant with direct-addressing
> --
>
> Key: LUCENE-9049
> URL: https://issues.apache.org/jira/browse/LUCENE-9049
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: LUCENE-9049.patch
>
>
> With LUCENE-8920 FST most often encodes top level nodes with 
> direct-addressing (instead of array for binary search). This probably made 
> the cachedRootArcs redundant. So they should be removed, and this will reduce 
> the code.






[jira] [Commented] (SOLR-13912) Support Count aggregation in JSON facet module

2019-11-22 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980164#comment-16980164
 ] 

Erick Erickson commented on SOLR-13912:
---

[~munendrasn] I don't think there's any formal requirement, but people usually 
resolve these as "Fixed" when the code's been pushed, rather than "Done". No big 
deal.

> Support Count aggregation in JSON facet module
> --
>
> Key: SOLR-13912
> URL: https://issues.apache.org/jira/browse/SOLR-13912
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.4
>
> Attachments: SOLR-13912.patch, SOLR-13912.patch, SOLR-13912.patch, 
> SOLR-13912.patch
>
>
> Add a count aggregation in JSON Facet module which behaves similar to 
> StatsComponent's count






[jira] [Updated] (SOLR-11695) JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" features

2019-11-22 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-11695:

Fix Version/s: 8.4

> JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" 
> features
> --
>
> Key: SOLR-11695
> URL: https://issues.apache.org/jira/browse/SOLR-11695
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
> Fix For: 8.4
>
>
> StatsComponent supports stats named "count" and "missing":
> * count: for the set of documents we're computing stats over, "how many 
> _non-distinct_ values exist in those documents in the specified field?" (or 
> in the case of an arbitrary function: "in how many of these documents does 
> true==ValueSource.exist()" ?)
> ** not to be confused with the number of _unique_ values (approx "cardinality" 
> or exact "countDistinct")
> * missing: for the set of documents we're computing stats over, "how many of 
> those documents do not have any value in the specified field?" (or in the 
> case of an arbitrary function: "in how many of these documents does 
> false==ValueSource.exist()" ?)
> (NOTE: for a single-valued field, these are essentially inverses of each 
> other, but for multivalued fields "count" actually returns the total number 
> of "value instances", not just the number of docs that have at least one value)
> AFAICT there is no equivalent functionality supported by the JSON 
> FacetModule, which will be a blocker preventing some users from migrating 
> from using stats.field (or facet.pivot+stats.field) to json.facet.
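The count/missing semantics described above can be sketched in a few lines. This is an illustrative Python sketch of the semantics only, not Solr code; the field name and documents are made up:

```python
# Sketch of StatsComponent-style "count" vs "missing" for a multivalued field.
docs = [
    {"tags": ["a", "b"]},   # 2 value instances
    {"tags": ["c"]},        # 1 value instance
    {},                     # no value in "tags" -> missing
]

def stats_count(docs, field):
    """Total number of (non-distinct) value instances across all docs."""
    return sum(len(d.get(field, [])) for d in docs)

def stats_missing(docs, field):
    """Number of docs with no value at all in the field."""
    return sum(1 for d in docs if not d.get(field))

print(stats_count(docs, "tags"))    # 3: counts value instances, not docs
print(stats_missing(docs, "tags"))  # 1
```

Note that count + missing is 4 here against only 3 documents, which illustrates the NOTE above: for multivalued fields the two are not inverses of each other, since "count" tallies value instances rather than documents.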






[jira] [Updated] (SOLR-13912) Support Count aggregation in JSON facet module

2019-11-22 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13912:

Fix Version/s: 8.4
 Assignee: Munendra S N
   Resolution: Done
   Status: Resolved  (was: Patch Available)

> Support Count aggregation in JSON facet module
> --
>
> Key: SOLR-13912
> URL: https://issues.apache.org/jira/browse/SOLR-13912
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.4
>
> Attachments: SOLR-13912.patch, SOLR-13912.patch, SOLR-13912.patch, 
> SOLR-13912.patch
>
>
> Add a count aggregation in JSON Facet module which behaves similar to 
> StatsComponent's count






[jira] [Resolved] (SOLR-11695) JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" features

2019-11-22 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-11695.
-
Resolution: Done

> JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" 
> features
> --
>
> Key: SOLR-11695
> URL: https://issues.apache.org/jira/browse/SOLR-11695
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
>
> StatsComponent supports stats named "count" and "missing":
> * count: for the set of documents we're computing stats over, "how many 
> _non-distinct_ values exist in those documents in the specified field?" (or 
> in the case of an arbitrary function: "in how many of these documents does 
> true==ValueSource.exist()" ?)
> ** not to be confused with the number of _unique_ values (approx "cardinality" 
> or exact "countDistinct")
> * missing: for the set of documents we're computing stats over, "how many of 
> those documents do not have any value in the specified field?" (or in the 
> case of an arbitrary function: "in how many of these documents does 
> false==ValueSource.exist()" ?)
> (NOTE: for a single-valued field, these are essentially inverses of each 
> other, but for multivalued fields "count" actually returns the total number 
> of "value instances", not just the number of docs that have at least one value)
> AFAICT there is no equivalent functionality supported by the JSON 
> FacetModule, which will be a blocker preventing some users from migrating 
> from using stats.field (or facet.pivot+stats.field) to json.facet.





