[jira] [Commented] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-12 Thread Mahesh (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570932#comment-15570932
 ] 

Mahesh commented on LUCENE-7493:


Should I add test case as an attachment in this bug?


> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code:
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I only want the facet information, so I pass the limit as zero, but I get the 
> error "numHits must be > 0; please use TotalHitCountCollector if you just need 
> the total hit count".
> With FacetsCollector there is no way to have a 'TotalHitCountCollector' used. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way for a 
> 'TotalHitCountCollector' to be used instead. 
> A better approach would be to provide an API that takes a query and a collector 
> as inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-12 Thread Mahesh (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570925#comment-15570925
 ] 

Mahesh commented on LUCENE-7493:


This is the working copy of the code that I have right now:

FacetsCollector facetCollector = new FacetsCollector();
TopDocs topDocs;
if (limit == 0) {
  // Only the facet counts are needed: collect hits with a TotalHitCountCollector
  // alongside the FacetsCollector, then build an empty TopDocs from the count.
  TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector();
  indexSearcher.search(filterQuery,
      MultiCollector.wrap(totalHitCountCollector, facetCollector));
  topDocs = new TopDocs(totalHitCountCollector.getTotalHits(),
      new ScoreDoc[0], Float.NaN);
} else {
  // Normal case: FacetsCollector.search requires numHits > 0.
  topDocs = FacetsCollector.search(indexSearcher, filterQuery,
      first + limit, facetCollector);
}


> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code:
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I only want the facet information, so I pass the limit as zero, but I get the 
> error "numHits must be > 0; please use TotalHitCountCollector if you just need 
> the total hit count".
> With FacetsCollector there is no way to have a 'TotalHitCountCollector' used. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way for a 
> 'TotalHitCountCollector' to be used instead. 
> A better approach would be to provide an API that takes a query and a collector 
> as inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1936 - Failure!

2016-10-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1936/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrVersionReplicationTest.testCdcrDocVersions

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:40627/solr within 1 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:40627/solr within 1 ms
at 
__randomizedtesting.SeedInfo.seed([D1BB79DE3251762C:292D727CC0379930]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:106)
at 
org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:226)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:567)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForCollectionToDisappear(BaseCdcrDistributedZkTest.java:494)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.startServers(BaseCdcrDistributedZkTest.java:596)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.createSourceCollection(BaseCdcrDistributedZkTest.java:346)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.baseBefore(BaseCdcrDistributedZkTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-12 Thread HYUNCHANG LEE (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570465#comment-15570465
 ] 

HYUNCHANG LEE commented on SOLR-9615:
-

This happens in the Solr code.
When I convert a NamedList to a map with the NamedList.asMap() method, the 
nested NamedList is not converted.
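
For reference, a minimal sketch of the structure from the report (assuming the 
NamedList.asMap(int maxDepth) method available in Solr 5.5; the expected vs. 
actual behavior is taken from the issue description):

{code}
import java.util.Arrays;
import java.util.Map;

import org.apache.solr.common.util.NamedList;

public class AsMapRepro {
  public static void main(String[] args) {
    NamedList<Object> inner = new NamedList<>();
    inner.add("leaf", "value");

    NamedList<Object> outer = new NamedList<>();
    // The inner NamedList is wrapped in a List, matching the layout in the issue.
    outer.add("wrapped", Arrays.asList(inner));

    Map<String, Object> map = outer.asMap(10);
    // Per the report, the element inside the List is still a NamedList here
    // rather than a Map, i.e. asMap() does not descend into List values.
    System.out.println(map.get("wrapped"));
  }
}
{code}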


> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outermost NamedList.
> {noformat}
> NamedList
>  - List
>- NamedList
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7494) Explore making PointValues a per-field API like doc values

2016-10-12 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570296#comment-15570296
 ] 

David Smiley commented on LUCENE-7494:
--

+1.

Related conceptually is another idea I've had kicking around: does {{Fields}} 
need to exist?  It seems like a pointless intermediary.  Why not have 
{{LeafReader.getTerms(fieldName)}} instead?  One loses the ability to get the 
count of indexed fields and to iterate over them, but it's not clear what the 
real use cases for that are, and such rare needs could be met with FieldInfos.  
If it sounds reasonable to you all, I'll file a separate issue.
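
A rough sketch of the difference, against the current Lucene 6.x API; the 
proposed method is hypothetical here:

{code}
import java.io.IOException;

import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.Terms;

class TermsAccess {
  // Today: terms for a field are reached through the Fields intermediary.
  static Terms viaFields(LeafReader reader, String field) throws IOException {
    return reader.fields().terms(field);
  }

  // Hypothetical shape of the idea above (does not exist yet):
  // static Terms direct(LeafReader reader, String field) throws IOException {
  //   return reader.getTerms(field);
  // }
}
{code}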

> Explore making PointValues a per-field API like doc values
> --
>
> Key: LUCENE-7494
> URL: https://issues.apache.org/jira/browse/LUCENE-7494
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
>
> This is a follow-up to LUCENE-7491. Maybe we could simplify things a bit by 
> changing {{LeafReader.getPointValues()}} to 
> {{LeafReader.getPointValues(String fieldName)}} and removing all {{String 
> fieldName}} parameters from {{PointValues}}?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5753) Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize ".local" among others

2016-10-12 Thread Maxim Vladimirskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570228#comment-15570228
 ] 

Maxim Vladimirskiy commented on LUCENE-5753:


Are there plans to fix this? We are hitting this issue with the `.solutions` TLD.
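
For anyone wanting to reproduce this directly against Lucene rather than through 
Elasticsearch, a minimal sketch (assuming the Lucene 6.x UAX29URLEmailTokenizer 
API; the expected splitting behavior is taken from the issue description):

{code}
import java.io.StringReader;

import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class UaxTldRepro {
  public static void main(String[] args) throws Exception {
    try (UAX29URLEmailTokenizer tok = new UAX29URLEmailTokenizer()) {
      tok.setReader(new StringReader("First Last lname@section.mycorp.local"));
      CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
      tok.reset();
      // Per the report, the ".local" address is split into separate tokens
      // instead of being kept as a single email token.
      while (tok.incrementToken()) {
        System.out.println(term.toString());
      }
      tok.end();
    }
  }
}
{code}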

> Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize 
> ".local" among others
> ---
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
>
> The uax_url_email analyzer appears unable to recognize the ".local" TLD, among 
> others. The bug can be reproduced with
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> which parses "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which recognizes "ln...@section.mycorp.org" as a single token.
> Can this be fixed by updating to a newer version? I am running Elasticsearch 
> 0.90.5 and whatever Lucene version sits underneath it. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569979#comment-15569979
 ] 

Jan Høydahl commented on SOLR-9325:
---

If you see other issues with 6.3-SNAPSHOT, please report them on the solr-user 
mailing list, or directly in JIRA if you are certain they are bugs.
I normally search 
http://search-lucene.com/?fc_project=Solr_project=Lucene= for the error 
message to try to locate an existing JIRA issue before creating a new one.

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569931#comment-15569931
 ] 

Tim Parker commented on SOLR-9325:
--

Further testing shows that the logs all appear to be going to the right places. 
I haven't tried this on Windows 7 yet, but I have seen several other 
(apparently unrelated) problems with 6.3, including '...possible analysis 
error...' when indexing PDFs (as parsed by Tika) and some 
AlreadyClosedException entries. Should I write these up separately, or are 
they just fallout from this being an interim build?

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8154) Bulk Schema API incorrectly accepts a dynamic field creation request with required=true and/or a default value

2016-10-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8154.
--
Resolution: Duplicate

The same changes in my latest patch were committed (a year later) under 
SOLR-9411.

> Bulk Schema API incorrectly accepts a dynamic field creation request with 
> required=true and/or a default value
> --
>
> Key: SOLR-8154
> URL: https://issues.apache.org/jira/browse/SOLR-8154
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-8154.patch, SOLR-8154.patch
>
>
> The schema API refuses to create a dynamic field with required=true, but 
> accepts one that has a default value. This creates a schema that cannot be 
> loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569912#comment-15569912
 ] 

Tim Parker commented on SOLR-9325:
--

that fixed it... thank you.

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9295) Remove Unicode BOM (U+FEFF) from text files in codebase

2016-10-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-9295.
--
Resolution: Fixed

> Remove Unicode BOM (U+FEFF) from text files in codebase
> ---
>
> Key: SOLR-9295
> URL: https://issues.apache.org/jira/browse/SOLR-9295
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: master (7.0)
>Reporter: Shawn Heisey
>Priority: Trivial
> Fix For: master (7.0), 6.2
>
>
> When starting Solr built from the master branch on Windows, this is what you 
> see:
> {noformat}
> C:\Users\elyograg\git\lucene-solr\solr>bin\solr start
> C:\Users\elyograg\git\lucene-solr\solr>@REM
> '@REM' is not recognized as an internal or external command,
> operable program or batch file.
> {noformat}
> The three extra characters, found at the very beginning of the solr.cmd 
> script, are a Unicode BOM, and are invisible to vim, notepad, and notepad++.  
> The problem does not exist in 6.1.0, but IS present in branch_6x and master.
> Using grep to find this character in the entire codebase, I found one other 
> relevant file with a BOM.  All others were binary (images, jars, git pack 
> files, etc):
> ./solr/webapp/web/js/lib/jquery.blockUI.js
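
As an aside, a small helper (not part of this issue or its fix) that reports 
whether a file starts with the UTF-8 encoding of U+FEFF, i.e. the three extra 
bytes described above:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BomCheck {
  public static void main(String[] args) throws IOException {
    byte[] head = Files.readAllBytes(Paths.get(args[0]));
    // UTF-8 encodes U+FEFF as the three bytes EF BB BF.
    boolean hasBom = head.length >= 3
        && (head[0] & 0xFF) == 0xEF
        && (head[1] & 0xFF) == 0xBB
        && (head[2] & 0xFF) == 0xBF;
    System.out.println(args[0] + (hasBom ? ": starts with a UTF-8 BOM" : ": no BOM"));
  }
}
{code}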



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9636) Use javabin for /stream internode communication

2016-10-12 Thread Noble Paul (JIRA)
Noble Paul created SOLR-9636:


 Summary: Use javabin for /stream internode communication
 Key: SOLR-9636
 URL: https://issues.apache.org/jira/browse/SOLR-9636
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul


Currently it uses JSON, which is verbose and slow.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-12 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569832#comment-15569832
 ] 

Alexandre Rafalovitch commented on SOLR-9516:
-

Could you try that against Solr 6.2? A large number of issues have been fixed 
since then, both in the UI and in the various security components.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter a 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569730#comment-15569730
 ] 

Uwe Schindler commented on LUCENE-5168:
---

I think you need a Facebook account. It was a post by Robert.

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-10-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6871.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.3

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the Solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at the same time 
> and be confident that it was clear which version of Solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a consensus to move to 
> keeping "tutorial" content on the website, where it can be integrated 
> directly in with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to Solr & the Solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of Solr come out and changes are made that mandate corresponding changes to the 
> tutorial.  This process _should_ also account for things like having multiple 
> versions of the tutorial live at one time (ie: at some point in the future, 
> we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569715#comment-15569715
 ] 

Shawn Heisey commented on SOLR-9635:


I had some follow-up thoughts about collection.properties.

Instead of living in the /collections/XXX ZooKeeper path, it could live in the 
*config* path, sitting next to solrconfig.xml, as a combination of 
collection.properties superseded by collection.XXX.properties, where XXX is the 
collection name.  This would make it a whole lot easier for the user to upload 
and would offer a little more flexibility.
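
A quick illustration of the layering being proposed (purely a sketch; the file 
names and lookup mechanism are assumptions from this comment, not an existing 
Solr API):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

class CollectionPropsSketch {
  // Loads collection.properties from the config dir, then lets a per-collection
  // collection.<name>.properties override any keys it repeats.
  static Properties load(Path configDir, String collectionName) throws IOException {
    Properties props = new Properties();
    Path base = configDir.resolve("collection.properties");
    if (Files.exists(base)) {
      try (InputStream in = Files.newInputStream(base)) {
        props.load(in);
      }
    }
    Path specific = configDir.resolve("collection." + collectionName + ".properties");
    if (Files.exists(specific)) {
      try (InputStream in = Files.newInputStream(specific)) {
        props.load(in);  // later load wins, so the per-collection file supersedes the base
      }
    }
    return props;
  }
}
{code}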

> Implement Solr as two java processes -- one process to manage the other.
> 
>
> Key: SOLR-9635
> URL: https://issues.apache.org/jira/browse/SOLR-9635
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>
> One idea that Mark Miller mentioned some time ago that I really like is the 
> idea of implementing Solr as two java processes, with one managing the other.
> When I think about this idea, what I imagine is a manager process with a 
> *very* small heap (I'm thinking single-digit megabytes) that is responsible 
> for starting a separate Solr process with configured values for many 
> different options, which would include the heap size.
> Basically, the manager process would replace most of bin/solr as we know it, 
> would be able to restart a crashed Solr, and the admin UI could have options 
> for changing heap size, restarting Solr, and other things that are currently 
> impossible.  It is likely that this idea would absorb or replace the SolrCLI 
> class.
> Initially, I intend this issue for discussion, and if the idea looks 
> workable, then we can work towards implementation.  There are plenty of 
> bikesheds to paint as we work the details.  I have some preliminary ideas 
> about some parts of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569691#comment-15569691
 ] 

Jan Høydahl commented on SOLR-9325:
---

I welcome any other comments on this approach. I plan to commit on Friday.

The only weaknesses I can see with this now are:
* If Solr is started some other way than through bin/solr, people will need to 
supply {{-Dsolr.log.dir}} manually
* If someone uses a logging framework other than log4j, they are on their own. If 
that framework supports variable substitution, they can insert a $\{solr.log.dir\} 
placeholder in its config
* The Windows part has only been tested on Windows 10; could it break on other 
Windows versions?

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569678#comment-15569678
 ] 

Jan Høydahl commented on SOLR-9325:
---

Thanks for testing the build.
The empty server/2 folder is a bug in bin/solr where I used {{2&>/dev/null}} 
instead of {{2>/dev/null}} to redirect stderr. You can fix it with this 
oneliner:
{code}
sed -i "" 's|2&>/dev/null|2>/dev/null|g' bin/solr
{code}

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569678#comment-15569678
 ] 

Jan Høydahl edited comment on SOLR-9325 at 10/12/16 7:38 PM:
-

Thanks for testing the build.
The empty server/2 folder is a bug in bin/solr where I used {{2&>/dev/null}} 
instead of {{2>/dev/null}} to redirect stderr. You can fix it with this 
oneliner:
{code}
sed -i "" 's|2&>/dev/null|2>/dev/null|g' bin/solr
{code}


was (Author: janhoy):
Thanks for testing the build.
The empty server/2 folder is a bug in bin/solr where I used {{2&>/dev/null}} 
instead of {{2>/dev/null}} to redirect stderr. You can fix it with this 
oneliner:
{code}
sed -i "" 's|2&>/dev/null|2>/dev/null|g' bin/solr
{coxe}

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569652#comment-15569652
 ] 

Michael McCandless commented on LUCENE-5168:


[~thetaphi] I tried to click on your FB link but it's broken :)

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569564#comment-15569564
 ] 

Tim Parker commented on SOLR-9325:
--

Installed the snapshot... I'm chasing a couple of unrelated things on my end, but 
what I've seen so far is:

1) the logs all appear to go to the right place
2) Solr is creating an empty directory '2' under the '.../server' directory - 
not sure whether this relates to the specified home directory being empty at 
startup, but it shouldn't be there

Some config info:
SOLR_PID_DIR = /home/content/private/keys
SOLR_LOGS_DIR = /home/content/private/logs
startup command line: /opt/solr/latest/bin/solr start -s 
/home/content/private/solr -p 8987 -force
/opt/solr/latest is a symlink to /media/sf_common/solr/latest, which is itself 
a symbolic link to the latest Solr build

/media/sf_common is a VirtualBox shared folder

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569525#comment-15569525
 ] 

Uwe Schindler edited comment on LUCENE-5168 at 10/12/16 6:42 PM:
-

BTW, JDK 9 b138 is already in the Policeman Jenkins beer brewery 
(https://www.facebook.com/ThetaPh1/posts/1547975418562090 => this bug was fixed 
in b137): https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/


was (Author: thetaphi):
BTW, JDK 9 b138 is already in the Policeman Jenkins beer brewery (this bug was 
fixed in b137): https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569525#comment-15569525
 ] 

Uwe Schindler commented on LUCENE-5168:
---

BTW, JDK 9 b138 is already in the Policeman Jenkins beer brewery (this bug was 
fixed in b137): https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569520#comment-15569520
 ] 

Uwe Schindler commented on LUCENE-5168:
---

Thanks [~rcmuir] for the update. Great news :-)

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 174 - Still Failing

2016-10-12 Thread Dawid Weiss
No heapdump, but the log will tell you what happened:
https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/174/consoleText

At the end you see this:

   [junit4] ERROR: JVM J0 ended with an exception: Quit event not
received from the forked process? This may indicate JVM crash or
runner bugs.

And indeed, when you look at the logs, this is what it has to say:

   [junit4] <<< JVM J0: EOF 
   [junit4] JVM J0: stderr was not empty, see:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/checkout/solr/build/solr-core/test/temp/junit4-J0-20161012_135246_796.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4]
   [junit4] Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"OldIndexDirectoryCleanupThreadForCore-awholynewstresscollection_collection1_2_shard3_replica2"
   [junit4] WARN: Unhandled exception in event serialization. ->
java.lang.OutOfMemoryError: GC overhead limit exceeded

These are the types of exceptions that are very difficult to guard
against -- GC overhead limit simply happens when the JVM is low on
memory and GC can't keep up with cleaning up. Here it happened on
event serialization back to the master process controlling JUnit
execution.

The runner has to have enough room to serialize events, otherwise the
results will be hard to predict or control.

Dawid

P.S. The entire log dump contains more stack dumps, but these seem to
be dumped from Solr -- perhaps there is an obscured reason for running
so low on memory there.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5168.
-
Resolution: Fixed

Fixed in https://bugs.openjdk.java.net/browse/JDK-8038348

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, i didnt try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569422#comment-15569422
 ] 

Dawid Weiss commented on LUCENE-5168:
-

Sorry, Robert updated me here -- this bug has been fixed by Tobias Hartmann, 
see here:
https://bugs.openjdk.java.net/browse/JDK-8038348

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32bit + G1GC (other 
> combinations do not seem to trip it, I didn't try looping or anything really 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7490) SimpleQueryParser should parse "*" as MatchAllDocsQuery

2016-10-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7490.

   Resolution: Fixed
Fix Version/s: (was: 6.x)
   6.3

Thanks [~dakrone]!

> SimpleQueryParser should parse "*" as MatchAllDocsQuery
> ---
>
> Key: LUCENE-7490
> URL: https://issues.apache.org/jira/browse/LUCENE-7490
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 6.2.1
>Reporter: Lee Hinman
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: 
> 0001-Parse-as-MatchAllDocsQuery-in-SimpleQueryParser.patch
>
>
> It would be beneficial for SimpleQueryString to parse as a MatchAllDocsQuery, 
> rather than a "field:*" query.
> Related discussion on the Elasticsearch project about this: 
> https://github.com/elastic/elasticsearch/issues/10632
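
A minimal sketch of the behavior in question, assuming the lucene-queryparser and 
lucene-analyzers-common modules are on the classpath; the field name "body" is only an 
example:

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.simple.SimpleQueryParser;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

public class StarParseSketch {
  public static void main(String[] args) {
    SimpleQueryParser parser = new SimpleQueryParser(new StandardAnalyzer(), "body");
    Query q = parser.parse("*");
    // Before this change a lone "*" became a "field:*" query; after it, a MatchAllDocsQuery.
    System.out.println(q instanceof MatchAllDocsQuery);
  }
}
{code}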



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7490) SimpleQueryParser should parse "*" as MatchAllDocsQuery

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569333#comment-15569333
 ] 

ASF subversion and git services commented on LUCENE-7490:
-

Commit 4ae1643f66bb2d90b04b1dd7c12c55d9c24bcd33 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4ae1643 ]

LUCENE-7490: SimpleQueryParser now parses '*' as MatchAllDocsQuery


> SimpleQueryParser should parse "*" as MatchAllDocsQuery
> ---
>
> Key: LUCENE-7490
> URL: https://issues.apache.org/jira/browse/LUCENE-7490
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 6.2.1
>Reporter: Lee Hinman
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: 
> 0001-Parse-as-MatchAllDocsQuery-in-SimpleQueryParser.patch
>
>
> It would be beneficial for SimpleQueryString to parse as a MatchAllDocsQuery, 
> rather than a "field:*" query.
> Related discussion on the Elasticsearch project about this: 
> https://github.com/elastic/elasticsearch/issues/10632



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7490) SimpleQueryParser should parse "*" as MatchAllDocsQuery

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569336#comment-15569336
 ] 

ASF subversion and git services commented on LUCENE-7490:
-

Commit 67d206c665dd476501d696474a393d6588e7c56d in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=67d206c ]

LUCENE-7490: SimpleQueryParser now parses '*' as MatchAllDocsQuery


> SimpleQueryParser should parse "*" as MatchAllDocsQuery
> ---
>
> Key: LUCENE-7490
> URL: https://issues.apache.org/jira/browse/LUCENE-7490
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 6.2.1
>Reporter: Lee Hinman
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: 
> 0001-Parse-as-MatchAllDocsQuery-in-SimpleQueryParser.patch
>
>
> It would be beneficial for SimpleQueryString to parse as a MatchAllDocsQuery, 
> rather than a "field:*" query.
> Related discussion on the Elasticsearch project about this: 
> https://github.com/elastic/elasticsearch/issues/10632



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7494) Explore making PointValues a per-field API like doc values

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569229#comment-15569229
 ] 

Michael McCandless commented on LUCENE-7494:


+1

> Explore making PointValues a per-field API like doc values
> --
>
> Key: LUCENE-7494
> URL: https://issues.apache.org/jira/browse/LUCENE-7494
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
>
> This is a follow-up to LUCENE-7491. Maybe we could simplify things a bit by 
> changing {{LeafReader.getPointValues()}} to 
> {{LeafReader.getPointValues(String fieldName)}} and removing all {{String 
> fieldName}} parameters from {{PointValues}}?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9103) Restore ability for users to add custom Streaming Expressions

2016-10-12 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat closed SOLR-9103.
--

> Restore ability for users to add custom Streaming Expressions
> -
>
> Key: SOLR-9103
> URL: https://issues.apache.org/jira/browse/SOLR-9103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Dennis Gove
> Fix For: 6.3
>
> Attachments: HelloStream.class, SOLR-9103.PATCH, SOLR-9103.PATCH, 
> SOLR-9103.patch, SOLR-9103.patch
>
>
> StreamHandler is an implicit handler. So to make it extensible, we can 
> introduce the below syntax in solrconfig.xml. 
> {code}
> 
> {code}
> This will add hello function to streamFactory of StreamHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9103) Restore ability for users to add custom Streaming Expressions

2016-10-12 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569144#comment-15569144
 ] 

Cao Manh Dat edited comment on SOLR-9103 at 10/12/16 4:19 PM:
--

Thanks Dennis for reviewing the patch.


was (Author: caomanhdat):
[~dpgove] Thanks Dennis for reviewing the patch.

> Restore ability for users to add custom Streaming Expressions
> -
>
> Key: SOLR-9103
> URL: https://issues.apache.org/jira/browse/SOLR-9103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Dennis Gove
> Fix For: 6.3
>
> Attachments: HelloStream.class, SOLR-9103.PATCH, SOLR-9103.PATCH, 
> SOLR-9103.patch, SOLR-9103.patch
>
>
> StreamHandler is an implicit handler. So to make it extensible, we can 
> introduce the below syntax in solrconfig.xml. 
> {code}
> 
> {code}
> This will add hello function to streamFactory of StreamHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569150#comment-15569150
 ] 

Shawn Heisey commented on SOLR-9635:


Possibly even better:  Support both service.properties and 
service.XXX.properties, with entries from the service file superseding the 
global file.

> Implement Solr as two java processes -- one process to manage the other.
> 
>
> Key: SOLR-9635
> URL: https://issues.apache.org/jira/browse/SOLR-9635
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>
> One idea that Mark Miller mentioned some time ago that I really like is the 
> idea of implementing Solr as two java processes, with one managing the other.
> When I think about this idea, what I imagine is a manager process with a 
> *very* small heap (I'm thinking single-digit megabytes) that is responsible 
> for starting a separate Solr process with configured values for many 
> different options, which would include the heap size.
> Basically, the manager process would replace most of bin/solr as we know it, 
> would be able to restart a crashed Solr, and the admin UI could have options 
> for changing heap size, restarting Solr, and other things that are currently 
> impossible.  It is likely that this idea would absorb or replace the SolrCLI 
> class.
> Initially, I intend this issue for discussion, and if the idea looks 
> workable, then we can work towards implementation.  There are plenty of 
> bikesheds to paint as we work the details.  I have some preliminary ideas 
> about some parts of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9103) Restore ability for users to add custom Streaming Expressions

2016-10-12 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569144#comment-15569144
 ] 

Cao Manh Dat commented on SOLR-9103:


[~dpgove] Thanks Dennis for reviewing the patch.

> Restore ability for users to add custom Streaming Expressions
> -
>
> Key: SOLR-9103
> URL: https://issues.apache.org/jira/browse/SOLR-9103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Dennis Gove
> Fix For: 6.3
>
> Attachments: HelloStream.class, SOLR-9103.PATCH, SOLR-9103.PATCH, 
> SOLR-9103.patch, SOLR-9103.patch
>
>
> StreamHandler is an implicit handler. So to make it extensible, we can 
> introduce the below syntax in solrconfig.xml. 
> {code}
> 
> {code}
> This will add hello function to streamFactory of StreamHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569131#comment-15569131
 ] 

Shawn Heisey commented on SOLR-9635:


I realize that what I've written above assumes that one install directory only 
handles one Solr service, and that currently it is possible to run multiple 
services out of one directory.  I personally prefer one service per install 
directory, but I'm guessing that this might need modifying.  Perhaps 
service.properties can become service.XXX.properties, where XXX is the service 
name, and the file would most commonly be named service.solr.properties.

> Implement Solr as two java processes -- one process to manage the other.
> 
>
> Key: SOLR-9635
> URL: https://issues.apache.org/jira/browse/SOLR-9635
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>
> One idea that Mark Miller mentioned some time ago that I really like is the 
> idea of implementing Solr as two java processes, with one managing the other.
> When I think about this idea, what I imagine is a manager process with a 
> *very* small heap (I'm thinking single-digit megabytes) that is responsible 
> for starting a separate Solr process with configured values for many 
> different options, which would include the heap size.
> Basically, the manager process would replace most of bin/solr as we know it, 
> would be able to restart a crashed Solr, and the admin UI could have options 
> for changing heap size, restarting Solr, and other things that are currently 
> impossible.  It is likely that this idea would absorb or replace the SolrCLI 
> class.
> Initially, I intend this issue for discussion, and if the idea looks 
> workable, then we can work towards implementation.  There are plenty of 
> bikesheds to paint as we work the details.  I have some preliminary ideas 
> about some parts of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569115#comment-15569115
 ] 

Shawn Heisey commented on SOLR-9635:


User Customization:

I think we should go with Java properties files for user customizations.  Only 
when we reach the core config and the schema should we switch to a structured 
format (currently XML, possibly moving to JSON).

With a little bit of thought and work, it should be possible to provide the 
user with several layers of property configuration.  Here are the properties 
filenames I can imagine:

service.properties:  At the root of the install directory.  Would contain 
things currently handled in solr.in.sh and solr.xml.  Accessible in structured 
config as $\{solr.service.XXX\}.  Might want to replace "service" with 
"instance".

cloud.properties: At the root of the zookeeper tree.  In addition to 
configuring settings for the whole cloud, would be accessible as 
$\{solr.cloud.XXX\}.  In conjunction with service.properties, might replace 
solr.xml.

collection.properties: In the collection path in zookeeper.  In addition to 
providing collection settings, would be accessible as $\{solr.collection.XXX\}.

core.properties: No real change here.  Still accessible as $\{solr.core.XXX\}.

This would be an ideal time to implement the service breadcrumbs idea I have 
mentioned previously.  A /etc/default/solr.properties file could be 
added/modified by the service install script and have simple mappings of 
service name to install directory, where service.properties would fill in the 
rest of the blanks.
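
To make the layering concrete, here is an illustrative sketch (not Solr code) of the kind 
of lookup described above, where more specific property files override more general ones; 
the file names follow the comment, and the loading order and the example property name 
are assumptions:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

class LayeredPropertiesSketch {
  static Properties load(Path... filesMostSpecificLast) throws IOException {
    Properties merged = new Properties();
    for (Path p : filesMostSpecificLast) {   // later files override earlier ones
      if (Files.exists(p)) {
        try (InputStream in = Files.newInputStream(p)) {
          merged.load(in);
        }
      }
    }
    return merged;
  }

  public static void main(String[] args) throws IOException {
    Properties props = load(
        Paths.get("service.properties"),     // instance-level settings
        Paths.get("cloud.properties"),       // whole-cloud settings
        Paths.get("collection.properties"),  // per-collection settings
        Paths.get("core.properties"));       // per-core settings win last
    System.out.println(props.getProperty("solr.service.port", "8983"));
  }
}
{code}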

> Implement Solr as two java processes -- one process to manage the other.
> 
>
> Key: SOLR-9635
> URL: https://issues.apache.org/jira/browse/SOLR-9635
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>
> One idea that Mark Miller mentioned some time ago that I really like is the 
> idea of implementing Solr as two java processes, with one managing the other.
> When I think about this idea, what I imagine is a manager process with a 
> *very* small heap (I'm thinking single-digit megabytes) that is responsible 
> for starting a separate Solr process with configured values for many 
> different options, which would include the heap size.
> Basically, the manager process would replace most of bin/solr as we know it, 
> would be able to restart a crashed Solr, and the admin UI could have options 
> for changing heap size, restarting Solr, and other things that are currently 
> impossible.  It is likely that this idea would absorb or replace the SolrCLI 
> class.
> Initially, I intend this issue for discussion, and if the idea looks 
> workable, then we can work towards implementation.  There are plenty of 
> bikesheds to paint as we work the details.  I have some preliminary ideas 
> about some parts of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-12 Thread loushang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569094#comment-15569094
 ] 

loushang edited comment on SOLR-9516 at 10/12/16 4:02 PM:
--

I get the same problem now. The Solr version is 5.5.2.

See QQ20161012-0.png in the attachment.


was (Author: loushang):
i get the same problem now. the solr version is 5.5.2

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-12 Thread loushang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

loushang updated SOLR-9516:
---
Attachment: QQ20161012-0.png

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-12 Thread loushang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569094#comment-15569094
 ] 

loushang commented on SOLR-9516:


I get the same problem now. The Solr version is 5.5.2.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569088#comment-15569088
 ] 

Shawn Heisey commented on SOLR-9635:


Inter-Process Communication:

I would prefer a strictly local communication method, not TCP.  The ideas that 
come to mind are sockets and named pipes ... but I haven't yet researched 
available options in Java.  A cross-platform option is best.
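
For discussion only: Java 8 has no built-in named pipe or Unix domain socket API, so a 
pure-Java, cross-platform sketch tends to fall back to a socket bound to the loopback 
interface (still TCP, just not reachable from other hosts). The command set mentioned in 
the comment below is a placeholder:

{code}
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

class LoopbackListenerSketch {
  public static void main(String[] args) throws IOException {
    // Port 0 lets the OS pick a free port; the manager could record it somewhere the
    // managed Solr process can read.
    try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
      System.out.println("manager listening on 127.0.0.1:" + server.getLocalPort());
      // server.accept() ... exchange simple commands such as "status", "stop", "restart"
    }
  }
}
{code}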

> Implement Solr as two java processes -- one process to manage the other.
> 
>
> Key: SOLR-9635
> URL: https://issues.apache.org/jira/browse/SOLR-9635
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>
> One idea that Mark Miller mentioned some time ago that I really like is the 
> idea of implementing Solr as two java processes, with one managing the other.
> When I think about this idea, what I imagine is a manager process with a 
> *very* small heap (I'm thinking single-digit megabytes) that is responsible 
> for starting a separate Solr process with configured values for many 
> different options, which would include the heap size.
> Basically, the manager process would replace most of bin/solr as we know it, 
> would be able to restart a crashed Solr, and the admin UI could have options 
> for changing heap size, restarting Solr, and other things that are currently 
> impossible.  It is likely that this idea would absorb or replace the SolrCLI 
> class.
> Initially, I intend this issue for discussion, and if the idea looks 
> workable, then we can work towards implementation.  There are plenty of 
> bikesheds to paint as we work the details.  I have some preliminary ideas 
> about some parts of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9635) Implement Solr as two java processes -- one process to manage the other.

2016-10-12 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-9635:
--

 Summary: Implement Solr as two java processes -- one process to 
manage the other.
 Key: SOLR-9635
 URL: https://issues.apache.org/jira/browse/SOLR-9635
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shawn Heisey


One idea that Mark Miller mentioned some time ago that I really like is the 
idea of implementing Solr as two java processes, with one managing the other.

When I think about this idea, what I imagine is a manager process with a *very* 
small heap (I'm thinking single-digit megabytes) that is responsible for 
starting a separate Solr process with configured values for many different 
options, which would include the heap size.

Basically, the manager process would replace most of bin/solr as we know it, 
would be able to restart a crashed Solr, and the admin UI could have options 
for changing heap size, restarting Solr, and other things that are currently 
impossible.  It is likely that this idea would absorb or replace the SolrCLI 
class.

Initially, I intend this issue for discussion, and if the idea looks workable, 
then we can work towards implementation.  There are plenty of bikesheds to 
paint as we work the details.  I have some preliminary ideas about some parts 
of it, which I will discuss in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7494) Explore making PointValues a per-field API like doc values

2016-10-12 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7494:


 Summary: Explore making PointValues a per-field API like doc values
 Key: LUCENE-7494
 URL: https://issues.apache.org/jira/browse/LUCENE-7494
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor


This is a follow-up to LUCENE-7491. Maybe we could simplify things a bit by 
changing {{LeafReader.getPointValues()}} to {{LeafReader.getPointValues(String 
fieldName)}} and removing all {{String fieldName}} parameters from 
{{PointValues}}?
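
A rough sketch against the current 6.x API, with the proposed per-field shape shown only 
as a comment (it is not committed API); the field name "price" is a placeholder:

{code}
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.PointValues;

class PointValuesAccessSketch {
  static byte[] minPackedValue(LeafReader reader, String field) throws IOException {
    PointValues points = reader.getPointValues();  // current: one object for all fields
    if (points == null) {
      return null;                                 // no points indexed in this segment
    }
    return points.getMinPackedValue(field);        // current: field name on every call
    // proposed: reader.getPointValues(field).getMinPackedValue();
  }
}
{code}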



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 174 - Still Failing

2016-10-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/174/

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=24687, name=Thread-11778, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=24687, name=Thread-11778, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:51065/xvo/mi/collection1
at __randomizedtesting.SeedInfo.seed([C3DFCCDCCB98882]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:644)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:51065/xvo/mi/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:604)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:642)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:498)
... 5 more


FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:35901

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:35901
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:604)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:399)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:437)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1932 - Failure!

2016-10-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1932/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

17 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([84B01BD5627B9468]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=2086, 
name=Scheduler-25750801, state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2086, name=Scheduler-25750801, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=2066, 
name=solr-idle-connections-evictor, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2066, name=solr-idle-connections-evictor, 
state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=2031, 
name=Scheduler-2218166, state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2031, name=Scheduler-2218166, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=6807, 
name=qtp13514556-6807, state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6807, name=qtp13514556-6807, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=2120, 
name=Scheduler-10413639, state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2120, name=Scheduler-10413639, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=6818, 
name=Scheduler-2218166, state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6818, name=Scheduler-2218166, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=2135, 
name=httpShardExecutor-1429-thread-4-processing-n:127.0.0.1:38477_ 
[https:127.0.0.1:35360] https:127.0.0.1:35360, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2135, 
name=httpShardExecutor-1429-thread-4-processing-n:127.0.0.1:38477_ 
[https:127.0.0.1:35360] https:127.0.0.1:35360, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=1990, name=qtp7529702-1990, 
state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1990, name=qtp7529702-1990, state=RUNNABLE, 
group=TGRP-CdcrVersionReplicationTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
Captured an uncaught exception in thread: Thread[id=6817, name=qtp6904305-6817, 
state=RUNNABLE, group=TGRP-CdcrVersionReplicationTest]

Stack Trace:

[jira] [Updated] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-12 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9182:

Attachment: SOLR-9182-solrj-supprel.patch

Here's a patch removing SuppressSSL from all solrj tests, as a start.

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182-solrj-supprel.patch, SOLR-9182.patch, SOLR-9182.patch, 
> SOLR-9182.patch, SOLR-9182.patch
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7492) Javadoc example of LRUQueryCache doesn't work.

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568909#comment-15568909
 ] 

ASF subversion and git services commented on LUCENE-7492:
-

Commit f10312a4c451a54e1bae77daabd52ffeb087d155 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f10312a ]

LUCENE-7492: Fix LRUQueryCache javadocs.


> Javadoc example of LRUQueryCache doesn't work.
> --
>
> Key: LUCENE-7492
> URL: https://issues.apache.org/jira/browse/LUCENE-7492
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Affects Versions: 6.2.1
>Reporter: Florian Hopf
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7492.patch
>
>
> The Javadoc example in LRUQueryCache still uses a Query, the implementation 
> uses a Weight instead.
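
For context, a minimal usage sketch of LRUQueryCache (this mirrors the intent of the 
javadoc example rather than quoting it; the size limits are arbitrary):

{code}
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.LRUQueryCache;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

class QueryCacheSketch {
  static IndexSearcher newCachingSearcher(DirectoryReader reader) {
    // Cache up to 1000 queries or 64 MB of cached doc id sets, whichever limit is hit first.
    LRUQueryCache cache = new LRUQueryCache(1000, 64 * 1024 * 1024);
    QueryCachingPolicy policy = new UsageTrackingQueryCachingPolicy();
    IndexSearcher searcher = new IndexSearcher(reader);
    searcher.setQueryCache(cache);
    searcher.setQueryCachingPolicy(policy);
    return searcher;
  }
}
{code}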



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7492) Javadoc example of LRUQueryCache doesn't work.

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568910#comment-15568910
 ] 

ASF subversion and git services commented on LUCENE-7492:
-

Commit 175370f232503644d176461253b8d604ae7cde97 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=175370f ]

LUCENE-7492: Fix LRUQueryCache javadocs.


> Javadoc example of LRUQueryCache doesn't work.
> --
>
> Key: LUCENE-7492
> URL: https://issues.apache.org/jira/browse/LUCENE-7492
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Affects Versions: 6.2.1
>Reporter: Florian Hopf
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7492.patch
>
>
> The Javadoc example in LRUQueryCache still uses a Query, the implementation 
> uses a Weight instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9634) Deprecate collection methods on MiniSolrCloudCluster

2016-10-12 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9634:

Attachment: SOLR-9634.patch

Patch deprecating the collection methods, and cutting tests over to use 
CollectionAdminRequests instead.

This also deprecates the uploadConfig() method that uses java.io.File in favour 
of one using java.nio.Path
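
A small sketch of the suggested replacement call path in test code (collection name, 
config name and shard/replica counts below are placeholders, not taken from the patch):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

class CreateCollectionSketch {
  static void createTestCollection(SolrClient solrClient) throws Exception {
    // Instead of MiniSolrCloudCluster.createCollection(...):
    CollectionAdminRequest.createCollection("mycollection", "myconfig", 2, 1)
        .process(solrClient);
  }
}
{code}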

> Deprecate collection methods on MiniSolrCloudCluster
> 
>
> Key: SOLR-9634
> URL: https://issues.apache.org/jira/browse/SOLR-9634
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9634.patch
>
>
> MiniSolrCloudCluster has a bunch of createCollection() and deleteCollection() 
> special methods, which aren't really necessary given that we expose a 
> solrClient.  We should deprecate these, and point users to the 
> CollectionAdminRequest API instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9634) Deprecate collection methods on MiniSolrCloudCluster

2016-10-12 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-9634:
---

 Summary: Deprecate collection methods on MiniSolrCloudCluster
 Key: SOLR-9634
 URL: https://issues.apache.org/jira/browse/SOLR-9634
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Alan Woodward
Assignee: Alan Woodward


MiniSolrCloudCluster has a bunch of createCollection() and deleteCollection() 
special methods, which aren't really necessary given that we expose a 
solrClient.  We should deprecate these, and point users to the 
CollectionAdminRequest API instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7492) Javadoc example of LRUQueryCache doesn't work.

2016-10-12 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7492.
--
   Resolution: Fixed
Fix Version/s: 6.3
   master (7.0)

> Javadoc example of LRUQueryCache doesn't work.
> --
>
> Key: LUCENE-7492
> URL: https://issues.apache.org/jira/browse/LUCENE-7492
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Affects Versions: 6.2.1
>Reporter: Florian Hopf
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7492.patch
>
>
> The Javadoc example in LRUQueryCache still uses a Query, the implementation 
> uses a Weight instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9633) Limit FastLRUCache by RAM Usage

2016-10-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9633:

Attachment: SOLR-9633.patch

First cut. This is slightly different from the LRUCache implementation because it 
completely ignores the size limit when maxRamMB is specified. (We should probably throw 
an exception in that case rather than ignoring it.) Also, the eviction logic is not as 
optimized as the one for the size-based policy.
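
A generic illustration (not the patch) of evicting by accumulated RAM rather than by 
entry count; the per-entry size estimate is supplied by the caller:

{code}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

class RamBoundedLruSketch<K, V> {
  private final long maxRamBytes;
  private final ToLongFunction<V> sizeOf;
  private long ramBytesUsed;
  // Access order so iteration starts at the least recently used entry.
  private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

  RamBoundedLruSketch(long maxRamBytes, ToLongFunction<V> sizeOf) {
    this.maxRamBytes = maxRamBytes;
    this.sizeOf = sizeOf;
  }

  synchronized V get(K key) {
    return map.get(key);
  }

  synchronized void put(K key, V value) {
    V old = map.put(key, value);
    if (old != null) {
      ramBytesUsed -= sizeOf.applyAsLong(old);
    }
    ramBytesUsed += sizeOf.applyAsLong(value);
    // Evict least recently used entries until we are back under the RAM limit.
    Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
    while (ramBytesUsed > maxRamBytes && it.hasNext()) {
      Map.Entry<K, V> eldest = it.next();
      ramBytesUsed -= sizeOf.applyAsLong(eldest.getValue());
      it.remove();
    }
  }
}
{code}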

> Limit FastLRUCache by RAM Usage
> ---
>
> Key: SOLR-9633
> URL: https://issues.apache.org/jira/browse/SOLR-9633
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9633.patch
>
>
> SOLR-7372 added a maxRamMB parameter to LRUCache to evict items based on 
> memory usage. That helps with the query result cache but not with the filter 
> cache which defaults to FastLRUCache. This issue intends to add the same 
> feature to FastLRUCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9633) Limit FastLRUCache by RAM Usage

2016-10-12 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-9633:
---

 Summary: Limit FastLRUCache by RAM Usage
 Key: SOLR-9633
 URL: https://issues.apache.org/jira/browse/SOLR-9633
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shalin Shekhar Mangar
 Fix For: 6.3, master (7.0)


SOLR-7372 added a maxRamMB parameter to LRUCache to evict items based on memory 
usage. That helps with the query result cache but not with the filter cache 
which defaults to FastLRUCache. This issue intends to add the same feature to 
FastLRUCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7491.

Resolution: Fixed

Thanks [~jpountz].

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}
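
For readers following along, a sketch of the indexing pattern the report describes 
(RAMDirectory and StandardAnalyzer are used only to keep the example self-contained; 
field names are placeholders):

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;

class SparsePointsMergeSketch {
  public static void main(String[] args) throws Exception {
    try (RAMDirectory dir = new RAMDirectory();
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document first = new Document();
      first.add(new IntPoint("other", 1));  // this segment has points, but not for "id"
      writer.addDocument(first);
      writer.commit();                      // segment 1

      Document second = new Document();
      second.add(new IntPoint("id", 2));    // a later segment where "id" does index points
      writer.addDocument(second);
      writer.commit();                      // segment 2

      writer.forceMerge(1);                 // before the fix, merging could hit the exception above
    }
  }
}
{code}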



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9632) Add a deleteAllCollections() method to MiniSolrCloudCluster

2016-10-12 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9632:

Attachment: SOLR-9632.patch

Patch, adding the method as described and changing RulesTest to use it.

> Add a deleteAllCollections() method to MiniSolrCloudCluster
> ---
>
> Key: SOLR-9632
> URL: https://issues.apache.org/jira/browse/SOLR-9632
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9632.patch
>
>
> This would make test tearDown easier in lots of places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9632) Add a deleteAllCollections() method to MiniSolrCloudCluster

2016-10-12 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568746#comment-15568746
 ] 

Alan Woodward commented on SOLR-9632:
-

From the mailing list, re SOLR-9132:

[~hossman] wrote:
It's not immediately obvious to me why these collection deletions can't be 
done in an @After method -- but if they need to live in each test method 
can we at least have an @After method that asserts no collections exist 
(via a STATUS call) so if someone writes a new test method but forgets to 
delete that collection then the @After method will catch it and give them 
a self-explanatory failure instead of some future confusing/trappy error 
that depends on test order or what not?

[~romseygeek] wrote:
They all have different collection names, which is why we can’t do it in an 
@After method, but you’re right, it is trappy.  How about instead we add a 
.deleteAllCollections() command to MiniSolrCloudCluster, which will ensure that 
each test starts up in an empty cluster?
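
A rough sketch of what the helper could look like; how the collection names are listed is 
deliberately left open here (e.g. via CollectionAdminRequest.List or the cluster state), 
so the method takes them as a parameter:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

class DeleteAllCollectionsSketch {
  static void deleteAllCollections(SolrClient solrClient, Iterable<String> collectionNames)
      throws Exception {
    for (String name : collectionNames) {
      CollectionAdminRequest.deleteCollection(name).process(solrClient);
    }
  }
}
{code}

An @After (or @Before) hook in each test class could then call such a method so every 
test method starts against an empty cluster.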

> Add a deleteAllCollections() method to MiniSolrCloudCluster
> ---
>
> Key: SOLR-9632
> URL: https://issues.apache.org/jira/browse/SOLR-9632
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>
> This would make test tearDown easier in lots of places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9632) Add a deleteAllCollections() method to MiniSolrCloudCluster

2016-10-12 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-9632:
---

 Summary: Add a deleteAllCollections() method to 
MiniSolrCloudCluster
 Key: SOLR-9632
 URL: https://issues.apache.org/jira/browse/SOLR-9632
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Alan Woodward
Assignee: Alan Woodward


This would make test tearDown easier in lots of places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-12 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568729#comment-15568729
 ] 

Alan Woodward commented on SOLR-9182:
-

Now that SOLR-9604 is in, I think we can start removing the SuppressSSL 
annotations from various test cases.  We should try deleting a half-dozen, and 
checking that we don't start to see test failures.
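
Concretely, "removing the SuppressSSL annotations" means deleting class-level markers 
like the one below so SSL (and clientAuth) randomization applies again; the test class 
name is a placeholder:

{code}
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.cloud.SolrCloudTestCase;

// Removing this annotation re-enables randomized SSL + clientAuth for the test.
@SolrTestCaseJ4.SuppressSSL(bugUrl = "https://issues.apache.org/jira/browse/SOLR-9182")
public class SomeDistribTest extends SolrCloudTestCase {
  public void testSomething() throws Exception {
    // ... distributed assertions that previously hit OOM with ssl + clientAuth ...
  }
}
{code}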

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182.patch, SOLR-9182.patch, SOLR-9182.patch, SOLR-9182.patch
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568697#comment-15568697
 ] 

ASF subversion and git services commented on LUCENE-7491:
-

Commit 86b03358d59c584c89823e187b8806da48eb82af in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=86b0335 ]

LUCENE-7491: fix merge exception if the same field has points in some segments 
but not in others


> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568692#comment-15568692
 ] 

Michael McCandless commented on LUCENE-7493:


Thank you [~maahi333] ... maybe you could make a test case in a patch showing 
the exception when you pass limit=0?  I think the fix should be simple enough, 
basically the code you posted on the mailing list (once we debug it!)...
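
A minimal sketch of the kind of test being asked for (assumes a LuceneTestCase-style test 
that has already built an IndexSearcher over a small index; the helper name is made up):

{code}
import org.apache.lucene.facet.FacetsCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.util.LuceneTestCase;

class ZeroLimitFacetSearchSketch {
  // Today this fails with "numHits must be > 0 ..."; the issue asks for a supported path.
  static void assertZeroLimitRejected(IndexSearcher searcher) {
    LuceneTestCase.expectThrows(IllegalArgumentException.class,
        () -> FacetsCollector.search(searcher, new MatchAllDocsQuery(), 0, new FacetsCollector()));
  }
}
{code}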

> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code:
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I only want the facet information, so I pass limit as zero, but I get the error 
> "numHits must be > 0; please use TotalHitCountCollector if you just need the 
> total hit count".
> With FacetsCollector there is no way to use a 'TotalHitCountCollector'. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way to use a 
> 'TotalHitCountCollector' instead. 
> A better way would be to provide an API that takes a query and a collector as 
> inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568648#comment-15568648
 ] 

ASF subversion and git services commented on LUCENE-7491:
-

Commit 1b7a88f61ea44ecc873d7c7d135ce5c6ab88bb0a in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1b7a88f ]

LUCENE-7491: fix merge exception if the same field has points in some segments 
but not in others


> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-12 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568644#comment-15568644
 ] 

Dawid Weiss commented on LUCENE-5168:
-

No, not much. I still have a virtual machine where I used to reproduce this, but 
it would require an update to the most recent OpenJDK, recompilation, and some 
sweat to reproduce the original issue (it wasn't always reproducible).

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests) if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32-bit + G1GC (other 
> combinations do not seem to trip it; I didn't try looping or anything, really).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9631) Log cleanup -- On startup, SolrCloud should log precise node info that will be registered in zookeeper

2016-10-12 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-9631:
--

 Summary: Log cleanup -- On startup, SolrCloud should log precise 
node info that will be registered in zookeeper
 Key: SOLR-9631
 URL: https://issues.apache.org/jira/browse/SOLR-9631
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Shawn Heisey


This is similar to what SOLR-9548 did for all modes, but only applies to cloud 
mode.

On startup, the precise information that will be used when registering the node 
in zookeeper should be logged before the registration happens.  This info 
includes (but might not be limited to) host, port, context path, and protocol, 
which will be http or https.  This will make it easier to troubleshoot problems 
with the mechanism that determines the node name/address for registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9630) Kerberos delegation tokens requires missing winutils.exe on Windows

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568470#comment-15568470
 ] 

Uwe Schindler commented on SOLR-9630:
-

Please disable all those tests on Windows using assumeFalse(Constants.WINDOWS). 
We already did this for all the other Hadoop-related tests.
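
For reference, a rough sketch of the kind of guard meant here, placed at the top of a 
Hadoop-dependent test class (the method name is made up; it assumes a LuceneTestCase 
subclass with org.apache.lucene.util.Constants imported):

{code}
@BeforeClass
public static void skipOnWindows() {
  // Skip the whole suite on Windows, where the Hadoop code wants winutils.exe.
  assumeFalse("Hadoop-dependent tests don't run on Windows", Constants.WINDOWS);
}
{code}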

> Kerberos delegation tokens requires missing winutils.exe on Windows
> ---
>
> Key: SOLR-9630
> URL: https://issues.apache.org/jira/browse/SOLR-9630
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6175/
> {code}
>[junit4]   2> 1072871 ERROR (jetty-launcher-1462-thread-2) 
> [n:127.0.0.1:64463_solr] o.a.h.u.Shell Failed to locate the winutils 
> binary in the hadoop binary path
>[junit4]   2> java.io.IOException: Could not locate executable 
> null\bin\winutils.exe in the Hadoop binaries.
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.(Shell.java:364)
>[junit4]   2>at 
> org.apache.hadoop.util.StringUtils.(StringUtils.java:80)
>[junit4]   2>at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:115)
> {code}
> Original comment on SOLR-9200, 
> https://issues.apache.org/jira/browse/SOLR-9200?focusedCommentId=15567838=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567838



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr

2016-10-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568463#comment-15568463
 ] 

Uwe Schindler commented on SOLR-9200:
-

Please add an assumeFalse(Constants.WINDOWS) for *all* Hadoop tests. We already 
do this in all the other tests.

> Add Delegation Token Support to Solr
> 
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, 
> SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, 
> SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch
>
>
> SOLR-7468 added support for kerberos authentication via the hadoop 
> authentication filter.  Hadoop also has support for an authentication filter 
> that supports delegation tokens, which allow authenticated users the ability 
> to grab/renew/delete a token that can be used to bypass the normal 
> authentication path for a time.  This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access 
> to the user's kerberos credentials.  Instead, the job runner can grab a 
> delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can 
> avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more 
> privileged user can request a delegation token that can be passed to the less 
> privileged user.
> Note to self:
> In 
> https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636
>  I made the following comment which I need to investigate further, since I 
> don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin 
> moving forward (I understand this is more a generic auth question than 
> kerberos specific). For example, in the latest version of the filter we are 
> using at Cloudera, we play around with the ServletContext in order to pass 
> information around 
> (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106).
>  Is there any way we can get the actual ServletContext in a plugin?{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-12 Thread Mahesh (JIRA)
Mahesh created LUCENE-7493:
--

 Summary: Support of TotalHitCountCollector for 
FacetCollector.search api if numdocs passed as zero.
 Key: LUCENE-7493
 URL: https://issues.apache.org/jira/browse/LUCENE-7493
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mahesh


Hi, 

I want to do a drill-down search using FacetsCollector; below is the code:

FacetsCollector facetCollector = new FacetsCollector();
TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, facetCollector);

I only want the facet information, so I pass limit as zero, but I get the error 
"numHits must be > 0; please use TotalHitCountCollector if you just need the total 
hit count".

With FacetsCollector there is no way to use a 'TotalHitCountCollector'. Internally 
it always creates either a 'TopFieldCollector' or a 'TopScoreDocCollector', neither 
of which allows a limit of 0.

So if the limit is zero, there should be a way to use a 'TotalHitCountCollector' 
instead.

A better way would be to provide an API that takes a query and a collector as 
inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.
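
For comparison, this is roughly the DrillSideways call shape referred to above, next to 
the kind of FacetsCollector overload being requested (searcher, facetsConfig, taxoReader 
and drillDownQuery are placeholders, and the requested overload is only a sketch, not an 
existing API):

{code}
// Existing API: DrillSideways lets the caller pass any Collector.
DrillSideways drillSideways = new DrillSideways(searcher, facetsConfig, taxoReader);
TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector();
drillSideways.search(drillDownQuery, totalHitCountCollector);

// Requested (does not exist today): a FacetsCollector.search variant without "int n".
// FacetsCollector.search(searcher, filterQuery, someCollector);
{code}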



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Drill down facet using numhits as zero.

2016-10-12 Thread maahi333
Sorry for the wrong version of the code :( Below is the correct code:

FacetsCollector facetCollector = new FacetsCollector();
TopDocs topDocs = null;
TotalHitCountCollector totalHitCountCollector = null;
if (limit == 0) {
    // Only the hit count and facet counts are needed: collect into both a
    // TotalHitCountCollector and the FacetsCollector, then build an empty
    // TopDocs from the total hit count.
    totalHitCountCollector = new TotalHitCountCollector();
    indexSearcher.search(query, MultiCollector.wrap(totalHitCountCollector, facetCollector));
    topDocs = new TopDocs(totalHitCountCollector.getTotalHits(), new ScoreDoc[0], Float.NaN);
} else {
    topDocs = FacetsCollector.search(st.searcher, filterQuery, first + limit, facetCollector);
}



Yes, initially I was looking for a search method like the one in DrillSideways
that takes a query and a collector, but I could not find one. I also saw that it is
not possible to use the FacetsCollector search API, since there is no way to pass a
TotalHitCountCollector, which is why I wrote the code above, but the end result is
not as expected.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Drill-down-facet-using-numhits-as-zero-tp4300838p4300845.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568386#comment-15568386
 ] 

Adrien Grand commented on LUCENE-7491:
--

+1 to the patch

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7489) Improve sparsity support of Lucene70DocValuesFormat

2016-10-12 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7489:
-
Attachment: LUCENE-7489.patch

Here is a prototype that passes tests. It uses the same sparse DISI as norms in 
order to be able to only store actual values. Other than that it is mostly the 
same as the old format; it doesn't yet leverage the fact that we have an iterator 
in order to do RLE, for instance (this should be explored in a different issue, I 
think). I still need to review it a bit more carefully and work on the format 
docs.
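
To illustrate what the sparse encoding buys a consumer, a small sketch against the 
iterator-based doc values API (the field name "price" is made up): only documents 
that actually have a value are ever visited.

{code}
NumericDocValues values = DocValues.getNumeric(leafReader, "price");
for (int doc = values.nextDoc();
     doc != DocIdSetIterator.NO_MORE_DOCS;
     doc = values.nextDoc()) {
  long value = values.longValue();
  // ... consume the value for document 'doc' ...
}
{code}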

> Improve sparsity support of Lucene70DocValuesFormat
> ---
>
> Key: LUCENE-7489
> URL: https://issues.apache.org/jira/browse/LUCENE-7489
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7489.patch
>
>
> Like Lucene70NormsFormat, it should be able to only encode actual values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568357#comment-15568357
 ] 

Michael McCandless commented on LUCENE-7491:


+1

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Drill down facet using numhits as zero.

2016-10-12 Thread Michael McCandless
I think we should just make FacetsCollector robust when you pass
limit=0?  Under the hood, it should use a TotalHitCountCollector
instead.  Can you open a Lucene Jira issue?
https://issues.apache.org/jira/browse/lucene

Alternatively, or maybe in addition, FacetsCollector should have
search methods that only take a Collector and not an "int n", so you
can pass your own TotalHitCountCollector.

In the limit=0 case you are still passing the same "int n" down to
FacetsCollector.search, as "first + limit", so shouldn't that case
also hit the same exception?

I'm not sure why you see different facet results in the two branches.
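
Just to make the idea concrete, here is a rough sketch (an assumption about how it
could look, not an actual patch) of a search helper that falls back when n == 0:

static TopDocs doSearch(IndexSearcher searcher, Query q, int n, Collector fc) throws IOException {
  if (n == 0) {
    // No hits requested: just count them while still feeding the facets collector.
    TotalHitCountCollector totalHits = new TotalHitCountCollector();
    searcher.search(q, MultiCollector.wrap(totalHits, fc));
    return new TopDocs(totalHits.getTotalHits(), new ScoreDoc[0], Float.NaN);
  }
  TopScoreDocCollector hits = TopScoreDocCollector.create(n);
  searcher.search(q, MultiCollector.wrap(hits, fc));
  return hits.topDocs();
}

The real FacetsCollector.search would also have to handle the sorting/doDocScores
variants, of course.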

Mike McCandless

http://blog.mikemccandless.com


On Wed, Oct 12, 2016 at 6:13 AM, maahi333  wrote:
> Hi,
>
> Is there a way, using FacetsCollector, to get the result with numHits as 0?
> Below is the code that I am using for facet search.
>
> FacetsCollector facetCollector = new FacetsCollector();
> FacetsCollector.search(searcher, drillDownQuery, limit, facetCollector);
>
> Here, if we pass limit as 0, we get the error "numHits must be > 0; please
> use TotalHitCountCollector if you just need the total hit count".
>
> So, to work around this, I changed the code as follows:
>
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = null;
> TotalHitCountCollector totalHitCountCollector = null;
> if (limit == 0) {
>     totalHitCountCollector = new TotalHitCountCollector();
>     topDocs = FacetsCollector.search(st.searcher, filterQuery, first + limit,
>         MultiCollector.wrap(totalHitCountCollector, facetCollector));
> } else {
>     topDocs = FacetsCollector.search(st.searcher, filterQuery, first + limit,
>         facetCollector);
> }
>
> But there is a difference in output when limit is 0 and when limit is greater
> than 0.
>
> E.g. if we provide a facet filter that does not match any record, then for
> limit greater than 0, which uses the FacetsCollector search, we do not get any
> facet information, since no result is returned.
>
> But for limit=0 we get facet information even though no result is
> present.
>
>
>
>
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Drill-down-facet-using-numhits-as-zero-tp4300838.html
> Sent from the Lucene - Java Developer mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568354#comment-15568354
 ] 

Adrien Grand commented on LUCENE-7491:
--

bq. But would this API return null if that field did not index points in that 
segment?

Yes, like doc values?

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568327#comment-15568327
 ] 

Michael McCandless commented on LUCENE-7491:


bq. we could have LeafReader.getPointValues(String fieldName) and remove all 
String fieldName parameters from PointValues?

I think that's compelling!  Maybe open a new issue?

But would this API return null if that field did not index points in that 
segment?

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Drill down facet using numhits as zero.

2016-10-12 Thread maahi333
Hi,

Is there a way, using FacetsCollector, to get the result with numHits as 0?
Below is the code that I am using for facet search.

FacetsCollector facetCollector = new FacetsCollector();
FacetsCollector.search(searcher, drillDownQuery, limit, facetCollector);

Here, if we pass limit as 0, we get the error "numHits must be > 0; please
use TotalHitCountCollector if you just need the total hit count".

So, to work around this, I changed the code as follows:

FacetsCollector facetCollector = new FacetsCollector();
TopDocs topDocs = null;
TotalHitCountCollector totalHitCountCollector = null;
if (limit == 0) {
    totalHitCountCollector = new TotalHitCountCollector();
    topDocs = FacetsCollector.search(st.searcher, filterQuery, first + limit,
        MultiCollector.wrap(totalHitCountCollector, facetCollector));
} else {
    topDocs = FacetsCollector.search(st.searcher, filterQuery, first + limit,
        facetCollector);
}

But there is a difference in output when limit is 0 and when limit is greater
than 0.

E.g. if we provide a facet filter that does not match any record, then for
limit greater than 0, which uses the FacetsCollector search, we do not get any
facet information, since no result is returned.

But for limit=0 we get facet information even though no result is
present.








--
View this message in context: 
http://lucene.472066.n3.nabble.com/Drill-down-facet-using-numhits-as-zero-tp4300838.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5149) Query facet to respect mincount

2016-10-12 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-5149:

Attachment: SOLR-5149.patch

Updated patch for 6.2.1.

> Query facet to respect mincount
> ---
>
> Key: SOLR-5149
> URL: https://issues.apache.org/jira/browse/SOLR-5149
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, 
> SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, SOLR-5149-trunk.patch, 
> SOLR-5149.patch, SOLR-5149.patch, SOLR-5149.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9200) Add Delegation Token Support to Solr

2016-10-12 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev closed SOLR-9200.
--
Resolution: Fixed

> Add Delegation Token Support to Solr
> 
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: master (7.0), 6.2
>
> Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, 
> SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, 
> SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch
>
>
> SOLR-7468 added support for kerberos authentication via the hadoop 
> authentication filter.  Hadoop also has support for an authentication filter 
> that supports delegation tokens, which allow authenticated users the ability 
> to grab/renew/delete a token that can be used to bypass the normal 
> authentication path for a time.  This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access 
> to the user's kerberos credentials.  Instead, the job runner can grab a 
> delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can 
> avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more 
> privileged user can request a delegation token that can be passed to the less 
> privileged user.
> Note to self:
> In 
> https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636
>  I made the following comment which I need to investigate further, since I 
> don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin 
> moving forward (I understand this is more a generic auth question than 
> kerberos specific). For example, in the latest version of the filter we are 
> using at Cloudera, we play around with the ServletContext in order to pass 
> information around 
> (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106).
>  Is there any way we can get the actual ServletContext in a plugin?{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9630) Kerberos delegation tokens requires missing winutils.exe on Windows

2016-10-12 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9630:
---
Description: 
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6175/
{code}
   [junit4]   2> 1072871 ERROR (jetty-launcher-1462-thread-2) 
[n:127.0.0.1:64463_solr] o.a.h.u.Shell Failed to locate the winutils binary 
in the hadoop binary path
   [junit4]   2> java.io.IOException: Could not locate executable 
null\bin\winutils.exe in the Hadoop binaries.
   [junit4]   2>at 
org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
   [junit4]   2>at 
org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
   [junit4]   2>at org.apache.hadoop.util.Shell.(Shell.java:364)
   [junit4]   2>at 
org.apache.hadoop.util.StringUtils.(StringUtils.java:80)
   [junit4]   2>at 
org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
   [junit4]   2>at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:115)

{code}

Original comment on SOLR-9200, 
https://issues.apache.org/jira/browse/SOLR-9200?focusedCommentId=15567838=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567838

> Kerberos delegation tokens requires missing winutils.exe on Windows
> ---
>
> Key: SOLR-9630
> URL: https://issues.apache.org/jira/browse/SOLR-9630
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6175/
> {code}
>[junit4]   2> 1072871 ERROR (jetty-launcher-1462-thread-2) 
> [n:127.0.0.1:64463_solr] o.a.h.u.Shell Failed to locate the winutils 
> binary in the hadoop binary path
>[junit4]   2> java.io.IOException: Could not locate executable 
> null\bin\winutils.exe in the Hadoop binaries.
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
>[junit4]   2>at 
> org.apache.hadoop.util.Shell.(Shell.java:364)
>[junit4]   2>at 
> org.apache.hadoop.util.StringUtils.(StringUtils.java:80)
>[junit4]   2>at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
>[junit4]   2>at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:115)
> {code}
> Original comment on SOLR-9200, 
> https://issues.apache.org/jira/browse/SOLR-9200?focusedCommentId=15567838=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567838



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr

2016-10-12 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568209#comment-15568209
 ] 

Ishan Chattopadhyaya commented on SOLR-9200:


How about we re-close this, since this has already been released, and use 
another issue to track this failure? I've created SOLR-9630 for this.

> Add Delegation Token Support to Solr
> 
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, 
> SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, 
> SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch
>
>
> SOLR-7468 added support for kerberos authentication via the hadoop 
> authentication filter.  Hadoop also has support for an authentication filter 
> that supports delegation tokens, which allow authenticated users the ability 
> to grab/renew/delete a token that can be used to bypass the normal 
> authentication path for a time.  This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access 
> to the user's kerberos credentials.  Instead, the job runner can grab a 
> delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can 
> avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more 
> privileged user can request a delegation token that can be passed to the less 
> privileged user.
> Note to self:
> In 
> https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636
>  I made the following comment which I need to investigate further, since I 
> don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin 
> moving forward (I understand this is more a generic auth question than 
> kerberos specific). For example, in the latest version of the filter we are 
> using at Cloudera, we play around with the ServletContext in order to pass 
> information around 
> (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106).
>  Is there any way we can get the actual ServletContext in a plugin?{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9630) Kerberos delegation tokens requires missing winutils.exe on Windows

2016-10-12 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-9630:
--

 Summary: Kerberos delegation tokens requires missing winutils.exe 
on Windows
 Key: SOLR-9630
 URL: https://issues.apache.org/jira/browse/SOLR-9630
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568192#comment-15568192
 ] 

Adrien Grand commented on LUCENE-7491:
--

bq. I'm generally not really a fan of returning fake empty "should be null but 
caller can't be trusted" objects though

I don't disagree with this statement, but I like the current situation even 
less. It makes things hard to test because of the branches it creates. Say you 
want to test the point range query on field {{foo}}: you need to test what 
happens when no fields have points, when foo has points, and when foo does not 
have points but other fields from the same segment do. If you don't like 
returning non-null even when no fields have points, then maybe we should 
consider making points work per field like doc values, so instead of having 
{{LeafReader.getPointValues()}} and all methods of {{PointValues}} that take a 
{{String fieldName}} parameter, we could have 
{{LeafReader.getPointValues(String fieldName)}} and remove all {{String 
fieldName}} parameters from {{PointValues}}?
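
For illustration only, the per-field shape proposed above might look roughly like this 
from the caller's side (a sketch of the proposal, not a committed API):

{code}
PointValues points = leafReader.getPointValues("foo");   // proposed per-field accessor
if (points == null) {
  // field "foo" did not index points in this segment
} else {
  long numPoints = points.size();       // no String fieldName parameter anymore
  int docCount = points.getDocCount();
}
{code}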

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9432) JSON Facet refactoring to support refinement

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568175#comment-15568175
 ] 

Michael McCandless commented on SOLR-9432:
--

Thanks [~ysee...@gmail.com].

Yeah the numerous warnings are annoying...

> JSON Facet refactoring to support refinement
> 
>
> Key: SOLR-9432
> URL: https://issues.apache.org/jira/browse/SOLR-9432
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Yonik Seeley
> Attachments: SOLR-9432.patch, SOLR-9432.patch
>
>
> Refactor the faceting code, add methods to support facet refinement.
> Committing some of the work of the parent issue in smaller chunks will make 
> it easier for others to introduce additional changes/refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568160#comment-15568160
 ] 

Michael McCandless commented on LUCENE-7491:


bq. To make things less trappy, I'm wondering that maybe 
LeafReader.getPointValues() should never return null.

We could maybe separately consider that?  I don't think that would have 
prevented this particular bug.

I'm generally not really a fan of returning fake empty "should be null but 
caller can't be trusted" objects though: I think it's a degree of API leniency 
that if you take it to its limit, never ends, i.e. how deeply do you keep 
returning null as you dig into the fake object?  These are quite expert APIs 
and I think it's reasonable to expect the caller to be careful with the return 
result...

Today, a null return from {{LeafReader.getPointValues}} is meaningful: it 
notifies you this segment has no points indexed at all.  We would be hiding 
that information if instead we returned a fake empty object.

Not helping matters, I do realize we are inconsistent here, e.g. 
{{LeafReader.fields()}} is not null even if there were no postings in that 
segment, yet {{Fields.terms(String field)}} is null if the postings didn't have 
that field.
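
To spell out the contract being defended here, caller-side code under today's API 
looks roughly like this (a sketch, with "id" standing in for any points field):

{code}
PointValues points = leafReader.getPointValues();
if (points == null) {
  // this segment indexed no points at all
} else {
  // Per-field calls can still fail for fields that did not index points in this
  // segment, which is exactly the trap the merge code fell into:
  long numIdPoints = points.size("id");
}
{code}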

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-4720) Admin UI - Empty List of Iterations on Slave

2016-10-12 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-4720.
---
   Resolution: Incomplete
Fix Version/s: (was: 6.0)
   (was: 4.9)

> Admin UI - Empty List of Iterations on Slave
> 
>
> Key: SOLR-4720
> URL: https://issues.apache.org/jira/browse/SOLR-4720
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Stefan Matheis (steffkes)
>Assignee: Stefan Matheis (steffkes)
>Priority: Trivial
>
> If you start your slave and have a look at the Replication page, the list of 
> iterations may be empty - but it's not crystal clear whether it's a bug 
> (iterations happened, info available but not shown) or simply the fact that 
> nothing has happened.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7491:
---
Attachment: LUCENE-7491.patch

Patch w/ the fix ... the problem was that the merge logic was assuming that, just 
because one segment had points for a given field, all segments must have 
points for that field, which is clearly not the case!
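
Roughly, the guard the merge logic needed looks like this (a sketch of the idea only, 
not the actual patch):

{code}
for (int i = 0; i < mergeState.pointsReaders.length; i++) {
  PointsReader reader = mergeState.pointsReaders[i];
  if (reader == null) {
    continue;   // this segment indexed no points at all
  }
  FieldInfo segFieldInfo = mergeState.fieldInfos[i].fieldInfo(fieldInfo.name);
  if (segFieldInfo == null || segFieldInfo.getPointDimensionCount() == 0) {
    continue;   // this segment did not index points for this field
  }
  // only now is it safe to ask this segment's reader about fieldInfo.name
}
{code}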

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch, LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, when we try to 
> merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1127 - Still unstable

2016-10-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1127/

8 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([DC469116CE6EC137]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([DC469116CE6EC137]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchAddsWithDelete

Error Message:
Timeout while trying to assert number of documents @ target_collection

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert number of documents @ 
target_collection
at 
__randomizedtesting.SeedInfo.seed([1CF269DB07F9BC4A:6BE8107FAFE0BD66]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:273)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchAddsWithDelete(CdcrReplicationDistributedZkTest.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+138) - Build # 1919 - Unstable!

2016-10-12 Thread Michael McCandless
This is a nasty-looking failure ... it tripped an assert that I
don't think should be possible :)

[junit4]   2> NOTE: reproduce with: ant test
-Dtestcase=TestBagOfPostings -Dtests.method=test
-Dtests.seed=ACF487FB7141D4A9 -Dtests.multiplier=3 -Dtests.slow=true
-Dtests.locale=fr-BI -Dtests.timezone=America/Porto_Velho
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   5.97s J1 | TestBagOfPostings.test <<<
   [junit4]> Throwable #1:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an
uncaught exception in thread: Thread[id=302, name=Thread-226,
state=RUNNABLE, group=TGRP-TestBagOfPostings]
   [junit4]> at
__randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9:24A0B821DFBDB951]:0)
   [junit4]> Caused by: java.lang.AssertionError
   [junit4]> at __randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9]:0)
   [junit4]> at
org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:409)
   [junit4]> at
org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2087)
   [junit4]> at
org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2051)
   [junit4]> at
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4953)
   [junit4]> at
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
   [junit4]> at
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4991)
   [junit4]> at
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4982)
   [junit4]> at
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1565)
   [junit4]> at
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1307)
   [junit4]> at
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
   [junit4]> at
org.apache.lucene.index.TestBagOfPostings$1.run(TestBagOfPostings.java:111)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62):
{field=PostingsFormat(name=LuceneVarGapDocFreqInterval)},
docValues:{}, maxPointsInLeafNode=153,
maxMBSortInHeap=6.690350232411388, sim=ClassicSimilarity,
locale=fr-BI, timezone=America/Porto_Velho
   [junit4]   2> NOTE: Linux 4.4.0-36-generic amd64/Oracle Corporation
9-ea (64-bit)/cpus=12,threads=1,free=204277432,total=508887040
   [junit4]   2> NOTE: All tests run in this JVM: [TestFieldReuse,
TestLucene60PointsFormat, TestDuelingCodecs, TestIsCurrent,
TestIndexWriterCommit, TestQueryRescorer, TestLSBRadixSorter,
TestSimilarity2, TestPerFieldPostingsFormat, TestPostingsOffsets,
TestDocumentsWriterDeleteQueue, TestDateTools, TestDemo,
TestReadOnlyIndex, TestSearchForDuplicates, TestConjunctions,
TestSimpleFSDirectory, MultiCollectorTest, TestBytesRefHash,
TestPagedBytes, TestMixedDocValuesUpdates,
TestLucene50StoredFieldsFormat, TestTotalHitCountCollector,
TestAllFilesHaveChecksumFooter, TestRegexpRandom2,
TestScoreCachingWrappingScorer, TestIndexWriterUnicode,
TestPriorityQueue, TestBagOfPostings]

It's this assert:

  // We should never see an empty candidate: we iterated over maxMergeAtOnce
  // segments, and already pre-excluded the too-large segments:
  assert candidate.size() > 0;

candidate is an ArrayList, and based on the (admittedly rather hairy)
logic above it should always have at least one element ... I
suspect there is an exciting Java 9 hotspot bug here.  The failure
doesn't repro on either Java 1.8.0_101 or Java 9-ea+139.
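
For readers unfamiliar with that code path, here is a deliberately simplified,
self-contained sketch of the invariant the assert protects; it is not the actual
TieredMergePolicy logic, and the segment names are made up:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of the selection step: too-large segments are assumed to be filtered
// out already, so as long as the eligible list is non-empty, the loop must add
// at least one entry to `candidate`.
public class CandidateInvariantSketch {
  public static void main(String[] args) {
    List<String> eligibleSegments = Arrays.asList("_0", "_1", "_2"); // made-up names
    int maxMergeAtOnce = 10;

    List<String> candidate = new ArrayList<>();
    for (int i = 0; i < eligibleSegments.size() && candidate.size() < maxMergeAtOnce; i++) {
      candidate.add(eligibleSegments.get(i));
    }

    // Same shape of assert as above; with a correct JVM and a non-empty
    // eligible list it cannot trip.
    assert candidate.size() > 0;
    System.out.println("candidate=" + candidate);
  }
}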

Mike McCandless

http://blog.mikemccandless.com


On Mon, Oct 10, 2016 at 6:45 PM, Policeman Jenkins Server
 wrote:
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1919/
> Java: 64bit/jdk-9-ea+138 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.index.TestBagOfPostings.test
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=302, name=Thread-226, 
> state=RUNNABLE, group=TGRP-TestBagOfPostings]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=302, name=Thread-226, state=RUNNABLE, 
> group=TGRP-TestBagOfPostings]
> at 
> __randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9:24A0B821DFBDB951]:0)
> Caused by: java.lang.AssertionError
> at __randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9]:0)
> at 
> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:409)
> at 
> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2087)
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2051)
> at 
> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4953)
> at 
> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4991)
> at 
> 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 468 - Still Unstable!

2016-10-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/468/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<152> but was:<135>

Stack Trace:
java.lang.AssertionError: expected:<152> but was:<135>
at 
__randomizedtesting.SeedInfo.seed([EFDE2E82746845B5:678A1158DA94284D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:280)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-4720) Admin UI - Empty List of Iterations on Slave

2016-10-12 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568009#comment-15568009
 ] 

Stefan Matheis (steffkes) commented on SOLR-4720:
-

Go for it - this was basically more of a reminder than a real issue. At least 
I've never heard anyone talking about it, and it's an edge case.

> Admin UI - Empty List of Iterations on Slave
> 
>
> Key: SOLR-4720
> URL: https://issues.apache.org/jira/browse/SOLR-4720
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Stefan Matheis (steffkes)
>Assignee: Stefan Matheis (steffkes)
>Priority: Trivial
> Fix For: 4.9, 6.0
>
>
> If you start your slave and have a look at the Replication page, the list of 
> iterations may be empty - but it's not crystal clear whether it's a bug (an 
> iteration happened and the info is available but not shown) or simply the fact 
> that nothing has happened yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1930 - Unstable!

2016-10-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1930/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.DistributedVersionInfoTest.testReplicaVersionHandling

Error Message:
Captured an uncaught exception in thread: Thread[id=5344, name=Thread-1423, 
state=RUNNABLE, group=TGRP-DistributedVersionInfoTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5344, name=Thread-1423, state=RUNNABLE, 
group=TGRP-DistributedVersionInfoTest]
at 
__randomizedtesting.SeedInfo.seed([7631F3C14C3F151C:AAC8243BEE44DF5D]:0)
Caused by: java.lang.IllegalArgumentException: bound must be positive
at __randomizedtesting.SeedInfo.seed([7631F3C14C3F151C]:0)
at java.util.Random.nextInt(Random.java:388)
at 
org.apache.solr.cloud.DistributedVersionInfoTest$3.run(DistributedVersionInfoTest.java:204)
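
The "bound must be positive" message points at a Random.nextInt(bound) call whose
argument reached zero. A minimal sketch of that failure mode and a guard (the
variable name is illustrative, not taken from the test):

import java.util.Random;

public class NextIntGuardSketch {
  public static void main(String[] args) {
    Random random = new Random();
    int maxDoc = 0; // e.g. nothing committed/visible yet

    // This is the shape of the call that throws "bound must be positive":
    // int docId = random.nextInt(maxDoc);

    // A defensive variant:
    int docId = maxDoc > 0 ? random.nextInt(maxDoc) : 0;
    System.out.println("docId=" + docId);
  }
}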




Build Log:
[...truncated 11294 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistributedVersionInfoTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.DistributedVersionInfoTest_7631F3C14C3F151C-001/init-core-data-001
   [junit4]   2> 584982 INFO  
(SUITE-DistributedVersionInfoTest-seed#[7631F3C14C3F151C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 584982 INFO  
(SUITE-DistributedVersionInfoTest-seed#[7631F3C14C3F151C]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.DistributedVersionInfoTest_7631F3C14C3F151C-001/tempDir-001
   [junit4]   2> 584983 INFO  
(SUITE-DistributedVersionInfoTest-seed#[7631F3C14C3F151C]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 584983 INFO  (Thread-1387) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 584983 INFO  (Thread-1387) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 585083 INFO  
(SUITE-DistributedVersionInfoTest-seed#[7631F3C14C3F151C]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:45128
   [junit4]   2> 585097 INFO  (jetty-launcher-875-thread-2) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 585097 INFO  (jetty-launcher-875-thread-3) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 585098 INFO  (jetty-launcher-875-thread-1) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 585098 INFO  (jetty-launcher-875-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1184506{/solr,null,AVAILABLE}
   [junit4]   2> 585099 INFO  (jetty-launcher-875-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1433bd1{/solr,null,AVAILABLE}
   [junit4]   2> 585099 INFO  (jetty-launcher-875-thread-2) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@b92bd6{HTTP/1.1,[http/1.1]}{127.0.0.1:35495}
   [junit4]   2> 585099 INFO  (jetty-launcher-875-thread-2) [] 
o.e.j.s.Server Started @587325ms
   [junit4]   2> 585099 INFO  (jetty-launcher-875-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=35495}
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@1f48f97{HTTP/1.1,[http/1.1]}{127.0.0.1:34969}
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.e.j.s.Server Started @587335ms
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=34969}
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.3.0
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 585109 INFO  (jetty-launcher-875-thread-3) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2016-10-12T07:24:03.378Z
   [junit4]   2> 585110 INFO  (jetty-launcher-875-thread-2) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.3.0
   [junit4]   2> 585110 INFO  (jetty-launcher-875-thread-2) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 585110 INFO  (jetty-launcher-875-thread-2) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 585110 INFO  (jetty-launcher-875-thread-2) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2016-10-12T07:24:03.379Z
   [junit4]   

[jira] [Commented] (LUCENE-7491) Unexpected merge exception when merging sparse points fields

2016-10-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567920#comment-15567920
 ] 

Adrien Grand commented on LUCENE-7491:
--

To make things less trappy, I'm wondering whether 
{{LeafReader.getPointValues()}} should simply never return {{null}}. Otherwise the 
code takes different branches depending on whether *other* fields index 
points or not.
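
For illustration, a hedged sketch of the branching this forces on callers in 6.x
(the per-field {{PointValues}} calls are as I recall them for this era and should
be treated as assumptions, not quoted from the patch):

{code}
import java.io.IOException;

import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.PointValues;

public class PointValuesBranchingSketch {
  // Whether `values` is null depends on whether *any* field in this segment
  // indexed points, not on whether the field we ask about did.
  static long idPointCount(LeafReader leafReader) throws IOException {
    PointValues values = leafReader.getPointValues(); // may be null in 6.x
    if (values == null) {
      return 0; // no field in this segment indexed points
    }
    // "id" may still have indexed no points here even though values != null; per
    // the stack trace in the description, this is the case that blew up during merging.
    return values.size("id");
  }
}
{code}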

> Unexpected merge exception when merging sparse points fields
> 
>
> Key: LUCENE-7491
> URL: https://issues.apache.org/jira/browse/LUCENE-7491
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7491.patch
>
>
> Spinoff from this user thread: http://markmail.org/thread/vwdvjgupyz6heep5
> If you have a segment that has points, but a given field ("id") didn't index 
> points, and a later segment where field "id" does index points, then when we try 
> to merge those segments we hit this (incorrect!) exception:
> {noformat}
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalArgumentException: field="id" did not index point values
>   at __randomizedtesting.SeedInfo.seed([9F3E7B030EF482BD]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
> Caused by: java.lang.IllegalArgumentException: field="id" did not index point 
> values
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.getBKDReader(Lucene60PointsReader.java:126)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.size(Lucene60PointsReader.java:224)
>   at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:169)
>   at 
> org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4287)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3864)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9610) New AssertTool in SolrCLI

2016-10-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9610.
---
Resolution: Fixed

Pushed this. It is still not advertised as a tool in the main help, as it is perhaps 
not as much for end users as for developers. It would be easy to add the word 
"assert" to the end of the COMMAND list, though.

> New AssertTool in SolrCLI
> -
>
> Key: SOLR-9610
> URL: https://issues.apache.org/jira/browse/SOLR-9610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9610.patch, SOLR-9610.patch
>
>
> Moving some code from SOLR-7826 over here. This is a new AssertTool which can 
> be used to assert that we are (not) root user and more. Usage:
> {noformat}
> usage: bin/solr assert [-m ] [-e] [-rR] [-s ] [-S ] [-u
> ] [-x ] [-X ]
>  -e,--exitcode Return an exit code instead of printing
>error message on assert fail.
>  -help Print this message
>  -m,--message Exception message to be used in place of
>the default error message
>  -R,--not-root Asserts that we are NOT the root user
>  -r,--root Asserts that we are the root user
>  -S,--not-started Asserts that Solr is NOT started on a
>certain URL
>  -s,--started Asserts that Solr is started on a certain
>URL
>  -u,--same-user Asserts that we run as same user that owns
>
>  -x,--existsAsserts that directory  exists
>  -X,--not-existsAsserts that directory  does NOT
> {noformat}
> This can then also be used from bin/solr through e.g. {{run_tool assert -r}}, 
> or from Java Code static methods such as 
> {{AssertTool.assertSolrRunning(String url)}}
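
A hedged sketch of the programmatic use named above; the import location of
AssertTool (nested in SolrCLI) and its exact return/exception behavior are
assumptions here, not taken from the committed patch:

{code}
import org.apache.solr.util.SolrCLI;

public class EnsureSolrUpSketch {
  public static void main(String[] args) throws Exception {
    // Fails the assertion if nothing answers at this (illustrative) URL.
    SolrCLI.AssertTool.assertSolrRunning("http://localhost:8983/solr");
  }
}
{code}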



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9610) New AssertTool in SolrCLI

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567895#comment-15567895
 ] 

ASF subversion and git services commented on SOLR-9610:
---

Commit 6512d0c62024177cc5d6c8b7086faaa149565dfb in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6512d0c ]

SOLR-9610: New AssertTool in SolrCLI for easier cross platform assertions from 
command line


> New AssertTool in SolrCLI
> -
>
> Key: SOLR-9610
> URL: https://issues.apache.org/jira/browse/SOLR-9610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9610.patch, SOLR-9610.patch
>
>
> Moving some code from SOLR-7826 over here. This is a new AssertTool which can 
> be used to assert that we are (not) root user and more. Usage:
> {noformat}
> usage: bin/solr assert [-m ] [-e] [-rR] [-s ] [-S ] [-u
> ] [-x ] [-X ]
>  -e,--exitcode Return an exit code instead of printing
>error message on assert fail.
>  -help Print this message
>  -m,--message Exception message to be used in place of
>the default error message
>  -R,--not-root Asserts that we are NOT the root user
>  -r,--root Asserts that we are the root user
>  -S,--not-started Asserts that Solr is NOT started on a
>certain URL
>  -s,--started Asserts that Solr is started on a certain
>URL
>  -u,--same-user Asserts that we run as same user that owns
>
>  -x,--existsAsserts that directory  exists
>  -X,--not-existsAsserts that directory  does NOT
> {noformat}
> This can then also be used from bin/solr through e.g. {{run_tool assert -r}}, 
> or from Java Code static methods such as 
> {{AssertTool.assertSolrRunning(String url)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9610) New AssertTool in SolrCLI

2016-10-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567900#comment-15567900
 ] 

ASF subversion and git services commented on SOLR-9610:
---

Commit df4170629587ff60e10b93dbe16d607ca798e894 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df41706 ]

SOLR-9610: New AssertTool in SolrCLI for easier cross platform assertions from 
command line

(cherry picked from commit 6512d0c)


> New AssertTool in SolrCLI
> -
>
> Key: SOLR-9610
> URL: https://issues.apache.org/jira/browse/SOLR-9610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9610.patch, SOLR-9610.patch
>
>
> Moving some code from SOLR-7826 over here. This is a new AssertTool which can 
> be used to assert that we are (not) root user and more. Usage:
> {noformat}
> usage: bin/solr assert [-m ] [-e] [-rR] [-s ] [-S ] [-u
> ] [-x ] [-X ]
>  -e,--exitcode Return an exit code instead of printing
>error message on assert fail.
>  -help Print this message
>  -m,--message Exception message to be used in place of
>the default error message
>  -R,--not-root Asserts that we are NOT the root user
>  -r,--root Asserts that we are the root user
>  -S,--not-started Asserts that Solr is NOT started on a
>certain URL
>  -s,--started Asserts that Solr is started on a certain
>URL
>  -u,--same-user Asserts that we run as same user that owns
>
>  -x,--existsAsserts that directory  exists
>  -X,--not-existsAsserts that directory  does NOT
> {noformat}
> This can then also be used from bin/solr through e.g. {{run_tool assert -r}}, 
> or from Java Code static methods such as 
> {{AssertTool.assertSolrRunning(String url)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7492) Javadoc example of LRUQueryCache doesn't work.

2016-10-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567893#comment-15567893
 ] 

Adrien Grand commented on LUCENE-7492:
--

+1 I'll merge it later today. Thanks Florian!

> Javadoc example of LRUQueryCache doesn't work.
> --
>
> Key: LUCENE-7492
> URL: https://issues.apache.org/jira/browse/LUCENE-7492
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Affects Versions: 6.2.1
>Reporter: Florian Hopf
>Priority: Minor
> Attachments: LUCENE-7492.patch
>
>
> The Javadoc example in LRUQueryCache still uses a Query, while the implementation 
> uses a Weight instead.
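
For context, a hedged sketch of wiring an {{LRUQueryCache}} into an
{{IndexSearcher}}; the cache sizes are illustrative, and this is not the javadoc
example the patch fixes:

{code}
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.LRUQueryCache;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

public class CachedSearcherSketch {
  // Cache up to 1000 queries / 64 MB of cached doc id sets (illustrative sizes).
  private static final LRUQueryCache QUERY_CACHE = new LRUQueryCache(1000, 64 * 1024 * 1024);

  public static IndexSearcher newSearcher(IndexReader reader) {
    IndexSearcher searcher = new IndexSearcher(reader);
    searcher.setQueryCache(QUERY_CACHE);
    searcher.setQueryCachingPolicy(new UsageTrackingQueryCachingPolicy());
    return searcher;
  }
}
{code}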



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9629) Fix SolrJ warnings and use of deprecated methods in org.apache.solr.client.solrj.impl package

2016-10-12 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9629:
---
Attachment: SOLR-9629.patch

Updated patch.  Added CHANGES.txt entry.

I noticed a wildcard import in BinaryRequestWriter, so I opted to re-organize 
the imports and change the wildcard to specific imports.

> Fix SolrJ warnings and use of deprecated methods in 
> org.apache.solr.client.solrj.impl package
> -
>
> Key: SOLR-9629
> URL: https://issues.apache.org/jira/browse/SOLR-9629
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (7.0)
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9629.patch, SOLR-9629.patch
>
>
> There are some warnings (generic types and deprecation) that appear in the 
> org.apache.solr.client.solrj.impl package which can be easily fixed. Besides 
> those simple fixes, this includes a change to build the entity with a 
> MultipartEntityBuilder rather than using a deprecated constructor on MultipartEntity.
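
For readers unfamiliar with the HttpClient mime API, a hedged sketch of the general
replacement described above; the part name and value are illustrative, not SolrJ's
actual request fields:

{code}
import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;

public class MultipartBuilderSketch {
  // Replaces the deprecated `new MultipartEntity(...)` construction.
  public static HttpEntity buildEntity() {
    return MultipartEntityBuilder.create()
        .addTextBody("commit", "true", ContentType.TEXT_PLAIN)
        .build();
  }
}
{code}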



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-9200) Add Delegation Token Support to Solr

2016-10-12 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reopened SOLR-9200:


> Add Delegation Token Support to Solr
> 
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, 
> SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, 
> SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch
>
>
> SOLR-7468 added support for kerberos authentication via the hadoop 
> authentication filter.  Hadoop also has support for an authentication filter 
> that supports delegation tokens, which allow authenticated users to 
> grab/renew/delete a token that can be used to bypass the normal 
> authentication path for a time.  This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access 
> to the user's kerberos credentials.  Instead, the job runner can grab a 
> delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can 
> avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more 
> privileged user can request a delegation token that can be passed to the less 
> privileged user.
> Note to self:
> In 
> https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636
>  I made the following comment which I need to investigate further, since I 
> don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin 
> moving forward (I understand this is more a generic auth question than 
> kerberos specific). For example, in the latest version of the filter we are 
> using at Cloudera, we play around with the ServletContext in order to pass 
> information around 
> (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106).
>  Is there any way we can get the actual ServletContext in a plugin?{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr

2016-10-12 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567838#comment-15567838
 ] 

Mikhail Khludnev commented on SOLR-9200:


https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6175/
{code}
   [junit4]   2> 1072871 ERROR (jetty-launcher-1462-thread-2) 
[n:127.0.0.1:64463_solr] o.a.h.u.Shell Failed to locate the winutils binary 
in the hadoop binary path
   [junit4]   2> java.io.IOException: Could not locate executable 
null\bin\winutils.exe in the Hadoop binaries.
   [junit4]   2>at 
org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
   [junit4]   2>at 
org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
   [junit4]   2>at org.apache.hadoop.util.Shell.(Shell.java:364)
   [junit4]   2>at 
org.apache.hadoop.util.StringUtils.(StringUtils.java:80)
   [junit4]   2>at 
org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
   [junit4]   2>at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:115)
{code}
[~thetaphi], is it possible to provide -Dhadoop.home.dir=C:\hadoop pointing to where 
bin\winutils.exe is located? Or should this test just be ignored on Windows runs?
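
Until the Jenkins box passes that system property, one hedged workaround used in
Hadoop-dependent tests is to set it programmatically before any Hadoop class
initializes; the C:\hadoop path below is the assumption from the question above:

{code}
public class WinutilsWorkaroundSketch {
  static {
    // Only relevant on Windows, and only if the property was not already provided.
    if (System.getProperty("os.name", "").startsWith("Windows")
        && System.getProperty("hadoop.home.dir") == null) {
      System.setProperty("hadoop.home.dir", "C:\\hadoop"); // must contain bin\winutils.exe
    }
  }

  public static void main(String[] args) {
    System.out.println("hadoop.home.dir=" + System.getProperty("hadoop.home.dir"));
  }
}
{code}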

> Add Delegation Token Support to Solr
> 
>
> Key: SOLR-9200
> URL: https://issues.apache.org/jira/browse/SOLR-9200
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, 
> SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, 
> SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch
>
>
> SOLR-7468 added support for kerberos authentication via the hadoop 
> authentication filter.  Hadoop also has support for an authentication filter 
> that supports delegation tokens, which allow authenticated users to 
> grab/renew/delete a token that can be used to bypass the normal 
> authentication path for a time.  This is useful in a variety of use cases:
> 1) distributed clients (e.g. MapReduce) where each client may not have access 
> to the user's kerberos credentials.  Instead, the job runner can grab a 
> delegation token and use that during task execution.
> 2) If the load on the kerberos server is too high, delegation tokens can 
> avoid hitting the kerberos server after the first request
> 3) If requests/permissions need to be delegated to another user: the more 
> privileged user can request a delegation token that can be passed to the less 
> privileged user.
> Note to self:
> In 
> https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636
>  I made the following comment which I need to investigate further, since I 
> don't know if anything changed in this area:
> {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin 
> moving forward (I understand this is more a generic auth question than 
> kerberos specific). For example, in the latest version of the filter we are 
> using at Cloudera, we play around with the ServletContext in order to pass 
> information around 
> (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106).
>  Is there any way we can get the actual ServletContext in a plugin?{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567828#comment-15567828
 ] 

Jan Høydahl commented on SOLR-9325:
---

I made a 6.3.0-SNAPSHOT build from current branch_6x with this patch added, and 
uploaded to https://dl.dropboxusercontent.com/u/20080302/solr-6.3.0-SNAPSHOT.tgz
MD5 checksum: dc6a7ec7b2d6daf6016588134772ce31
SHA checksum: e434d14e49bb5965bfbb3e105210fa4c1a5178ee

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This requires that Solr be installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9548) solr.log should start with informative welcome message

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567818#comment-15567818
 ] 

Jan Høydahl commented on SOLR-9548:
---

Please open a new issue. I think you may want to add some logging in 
{{ZkContainer#initZookeeper()}}, by adding the extra info you need to this line, 
which is already there:
{code}
log.info("Zookeeper client=" + zookeeperHost);  
{code}
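
For example, the extra detail could be appended to that same line; the sketch below
is illustrative only (the logger setup and field names are not ZkContainer's):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ZkStartupLogSketch {
  private static final Logger log = LoggerFactory.getLogger(ZkStartupLogSketch.class);

  public static void main(String[] args) {
    String zookeeperHost = "localhost:2181"; // illustrative values
    int zkClientTimeout = 15000;
    log.info("Zookeeper client=" + zookeeperHost + " clientTimeout=" + zkClientTimeout + "ms");
  }
}
{code}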

> solr.log should start with informative welcome message
> --
>
> Key: SOLR-9548
> URL: https://issues.apache.org/jira/browse/SOLR-9548
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9548-detailversion.patch, SOLR-9548.patch
>
>
> When starting Solr, the first log line should be more informative, such as
> {code}
> Welcome to Apache Solr™ version 7.0.0, running in standalone mode on port 
> 8983 from folder /Users/janhoy/git/lucene-solr/solr
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9629) Fix SolrJ warnings and use of deprecated methods in org.apache.solr.client.solrj.impl package

2016-10-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567811#comment-15567811
 ] 

Shawn Heisey commented on SOLR-9629:


Tests pass with this patch applied, at least on my Windows machine.  I can't 
run precommit on Windows, so I will need to do it again on Linux and then check 
precommit there.  That's going to be tomorrow -- right now it's after bedtime.  
I see that you popped into the IRC channel earlier.  I idle there almost all 
the time, and check in on most days.

> Fix SolrJ warnings and use of deprecated methods in 
> org.apache.solr.client.solrj.impl package
> -
>
> Key: SOLR-9629
> URL: https://issues.apache.org/jira/browse/SOLR-9629
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (7.0)
>Reporter: Michael Braun
>Priority: Trivial
> Attachments: SOLR-9629.patch
>
>
> There are some warnings (generic types and deprecation) that appear in the 
> org.apache.solr.client.solrj.impl package which can be easily fixed. Besides 
> those simple fixes, this includes a change to build the entity with a 
> MultipartEntityBuilder rather than using a deprecated constructor on MultipartEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


