[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839374#comment-15839374
 ] 

ASF subversion and git services commented on LUCENE-7659:
-

Commit 733060121dc6f5cbc1b0e0e1412e396a3241240b in lucene-solr's branch 
refs/heads/apiv2 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7330601 ]

LUCENE-7659: Added IndexWriter#getFieldNames() to return all visible field names


> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.
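
For illustration, a minimal sketch of how the new accessor can drive that decision, assuming only the method added by this commit ({{IndexWriter#getFieldNames()}}, returning the set of visible field names); the id/field names are made up:

{code}
import java.io.IOException;
import java.util.Set;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

class InPlaceUpdateCheck {
  /** Applies a numeric docValues update only if the field is already visible to the writer. */
  static boolean tryInPlaceUpdate(IndexWriter writer, String id, String dvField, long value)
      throws IOException {
    Set<String> fields = writer.getFieldNames();  // field names known to this IndexWriter
    if (!fields.contains(dvField)) {
      return false;  // caller falls back to a regular full document update
    }
    writer.updateNumericDocValue(new Term("id", id), dvField, value);
    return true;
  }
}
{code}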






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839375#comment-15839375
 ] 

ASF subversion and git services commented on SOLR-5944:
---

Commit 5375410807aecf3cc67f82ca1e9ee591f39d0ac7 in lucene-solr's branch 
refs/heads/apiv2 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5375410 ]

SOLR-5944: In-place updates of Numeric DocValues


> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: defensive-checks.log.gz, 
> demo-why-dynamic-fields-cannot-be-inplace-updated-first-time.patch, 
> DUP.patch, hoss.62D328FA1DEA57FD.fail2.txt, hoss.62D328FA1DEA57FD.fail3.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt, master-vs-5944-regular-updates.png, 
> regular-vs-dv-updates.png, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.
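
For illustration only (not part of the committed change), a SolrJ sketch of the user-facing side: an atomic {{inc}} on a numeric field that is docValues=true, indexed=false, stored=false, which is the case this issue turns into an in-place docValues update. The URL, collection, and field names are assumptions:

{code}
import java.io.IOException;
import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InPlaceDocValuesUpdate {
  public static void main(String[] args) throws IOException, SolrServerException {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/books").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "book1");
      // Atomic "inc" on a docValues-only numeric field; with this change it is
      // applied in place instead of re-indexing the whole document.
      doc.addField("popularity_i", Collections.singletonMap("inc", 1));
      client.add(doc);
      client.commit();
    }
  }
}
{code}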






[jira] [Commented] (SOLR-10026) JavaBinCodec should initialize maps and namedLists with known capacity

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839369#comment-15839369
 ] 

ASF subversion and git services commented on SOLR-10026:


Commit 9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9899cbd ]

SOLR-10026: JavaBinCodec should initialize maps and namedLists with known 
capacity


> JavaBinCodec should initialize maps and namedLists with known capacity
> --
>
> Key: SOLR-10026
> URL: https://issues.apache.org/jira/browse/SOLR-10026
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.3, master (7.0)
>Reporter: John Call
>Assignee: Noble Paul
>Priority: Minor
>  Labels: javabincodec
> Fix For: master (7.0), 6.5
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When unmarshalling maps and namedLists, the size of these collections is known, but 
> the constructors that initialize these maps and namedLists with an 
> initial size are not used.
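
The idea, sketched rather than quoting the actual patch: once the element count has been read from the stream, size the container so it never has to grow. The HashMap arithmetic below is the standard load-factor math; how the NamedList side is pre-sized is left to the patch itself.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class PreSizedContainers {
  /** Returns a map that can hold 'count' entries without rehashing (default load factor 0.75). */
  static <K, V> Map<K, V> withKnownCapacity(int count) {
    return new LinkedHashMap<>((int) (count / 0.75f) + 1);
  }
}
{code}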






[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839373#comment-15839373
 ] 

ASF subversion and git services commented on SOLR-9969:
---

Commit ae269f13162119c8105020a6481b800377297764 in lucene-solr's branch 
refs/heads/apiv2 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae269f1 ]

SOLR-9969: Plugins/Stats section of the UI doesn't display empty metric types


> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, UI
>Affects Versions: 6.4
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 6.4.1
>
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding, but they don't populate correctly.






[jira] [Commented] (LUCENE-7657) Queries that reference a TermContext can cause a memory leak when they are cached

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839370#comment-15839370
 ] 

ASF subversion and git services commented on LUCENE-7657:
-

Commit f5301428452ee5f9145ef4ecb889442d4e09f1cb in lucene-solr's branch 
refs/heads/apiv2 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f530142 ]

LUCENE-7657: Fixed potential memory leak when a (Span)TermQuery that wraps a 
TermContext is cached.


> Queries that reference a TermContext can cause a memory leak when they are 
> cached
> -
>
> Key: LUCENE-7657
> URL: https://issues.apache.org/jira/browse/LUCENE-7657
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Fix For: master (7.0), 6.5, 6.4.1
>
> Attachments: LUCENE-7657.patch, LUCENE-7657.patch
>
>
> The {{TermContext}} class has a reference to the top reader context of the 
> IndexReader that was used to build it. So if you build a {{(Span)TermQuery}} 
> that references a {{TermContext}} and this query gets cached, then it will 
> keep holding a reference to the index reader, even after the latter gets 
> closed. 
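
A minimal sketch of the pattern being fixed, using the expert {{TermQuery}} constructor; the reader setup and term are illustrative:

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermContext;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class TermContextLeakSketch {
  /** Builds a TermQuery that holds the reader's top-level context via its TermContext. */
  static Query expertTermQuery(IndexReader reader, Term term) throws IOException {
    TermContext ctx = TermContext.build(reader.getContext(), term);
    // If this query later lands in a long-lived query cache, the TermContext keeps
    // the (possibly already closed) reader reachable -- the leak described above.
    return new TermQuery(term, ctx);
  }
}
{code}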






[jira] [Commented] (LUCENE-7543) Make changes-to-html target an offline operation

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839372#comment-15839372
 ] 

ASF subversion and git services commented on LUCENE-7543:
-

Commit 1b80691f28b045c7a8d9552f3c63f7bafdf52d48 in lucene-solr's branch 
refs/heads/apiv2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1b80691 ]

LUCENE-7543: Treat product name passed into changes2html.pl case-insensitively, 
and validate that the product name is either 'lucene' or 'solr'


> Make changes-to-html target an offline operation
> 
>
> Key: LUCENE-7543
> URL: https://issues.apache.org/jira/browse/LUCENE-7543
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 
> 6.3.1, 6.5, 6.4.1
>
> Attachments: LUCENE-7543-drop-XML-Simple.patch, LUCENE-7543.patch, 
> LUCENE-7543.patch, LUCENE-7543.patch
>
>
> Currently changes-to-html pulls release dates from JIRA, and so fails when 
> JIRA is inaccessible (e.g. from behind a firewall).
> SOLR-9711 advocates adding a build sysprop to ignore JIRA connection 
> failures, but I'd rather make the operation always offline.
> In an offline discussion, [~hossman] advocated moving Lucene's and Solr's 
> {{doap.rdf}} files, which contain all of the release dates that the 
> changes-to-html now pulls from JIRA, from the CMS Subversion repository 
> (downloadable from the website at http://lucene.apache.org/core/doap.rdf and 
> http://lucene.apache.org/solr/doap.rdf) to the Lucene/Solr git repository. If 
> we did that, then the process could be entirely offline if release dates were 
> taken from the local {{doap.rdf}} files instead of downloaded from JIRA.
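
A rough sketch of the offline lookup, under the assumption that each release entry in {{doap.rdf}} is a {{Version}} element carrying {{revision}} and {{created}}; the real tool is the existing {{changes2html.pl}} script, so this only makes the idea concrete:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class DoapReleaseDates {
  /** Maps release revision -> release date from a local doap.rdf, with no network access. */
  static Map<String, String> parse(String doapPath) throws IOException {
    String rdf = new String(Files.readAllBytes(Paths.get(doapPath)), StandardCharsets.UTF_8);
    Matcher version = Pattern.compile("<Version>(.*?)</Version>", Pattern.DOTALL).matcher(rdf);
    Pattern revision = Pattern.compile("<revision>([^<]+)</revision>");
    Pattern created = Pattern.compile("<created>([^<]+)</created>");
    Map<String, String> dates = new LinkedHashMap<>();
    while (version.find()) {
      Matcher r = revision.matcher(version.group(1));
      Matcher c = created.matcher(version.group(1));
      if (r.find() && c.find()) {
        dates.put(r.group(1).trim(), c.group(1).trim());
      }
    }
    return dates;
  }
}
{code}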






[jira] [Commented] (LUCENE-7647) CompressingStoredFieldsFormat should reclaim memory more aggressively

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839371#comment-15839371
 ] 

ASF subversion and git services commented on LUCENE-7647:
-

Commit 94530940e4de8b476a5886f284578c933a8f33ef in lucene-solr's branch 
refs/heads/apiv2 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9453094 ]

LUCENE-7647: CompressingStoredFieldsFormat should reclaim memory more 
aggressively.


> CompressingStoredFieldsFormat should reclaim memory more aggressively
> -
>
> Key: LUCENE-7647
> URL: https://issues.apache.org/jira/browse/LUCENE-7647
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Fix For: master (7.0), 6.5, 6.4.1
>
> Attachments: LUCENE-7647.patch
>
>
> When stored fields are configured with {{BEST_COMPRESSION}}, we rely on 
> garbage collection to reclaim Deflater/Inflater instances. However, these 
> classes use little JVM memory but may use significant native memory, so it 
> may happen that the OS runs out of native memory before the JVM collects 
> these unreachable Deflater/Inflater instances. We should look into reclaiming 
> native memory more aggressively.
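
For illustration only, the general remedy in plain {{java.util.zip}} terms: release the native resources explicitly instead of waiting for the finalizer. The actual change is in the attached LUCENE-7647.patch; this just shows the mechanism.

{code}
import java.util.zip.Deflater;

class EagerDeflaterRelease {
  /** Compresses into 'output' and frees the native zlib state immediately. */
  static int compress(byte[] input, byte[] output) {
    Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
    try {
      deflater.setInput(input);
      deflater.finish();
      return deflater.deflate(output);
    } finally {
      deflater.end();  // reclaims native memory now, not at some future GC
    }
  }
}
{code}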






[JENKINS] Lucene-Solr-Tests-6.4 - Build # 10 - Unstable

2017-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.4/10/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor170.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor170.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([5570611C89273B46]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] lucene-solr pull request #143: Rc

2017-01-25 Thread yaoyaowd
GitHub user yaoyaowd opened a pull request:

https://github.com/apache/lucene-solr/pull/143

Rc



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yaoyaowd/lucene-solr rc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #143


commit 5d65deca53e51d7b0ed131686a0ad7adab543f73
Author: Steve Rowe 
Date:   2016-09-13T15:25:00Z

LUCENE-7446: don't ask about version back-compatibility when we know it's 
not applicable (i.e., the version to be added is greater than the latest on the 
branch)

commit 82af82c50f589156da51f3a09d21e6cc315e0378
Author: Noble Paul 
Date:   2016-09-14T07:13:27Z

Extract out the ExclusiveSliceProperty as a top level class

commit f87a677dc7522f856bcb0ec43cc8e11b932f3d50
Author: Shalin Shekhar Mangar 
Date:   2016-09-14T19:39:55Z

Adding version 5.5.3

commit 375aea8286b067db99c3f955ec19e1d767715a37
Author: Shalin Shekhar Mangar 
Date:   2016-09-14T19:42:09Z

Add 5.5.3 back compat test indexes

commit 34b1f65c4d0d884528620c96430096539e9fb743
Author: Shalin Shekhar Mangar 
Date:   2016-09-15T05:34:48Z

SOLR-9484: The modify collection API should wait for the modified 
properties to show up in the cluster state

(cherry picked from commit 70fd627)

commit bd9962aba6437dda4d9119bd2cba1fc743187bf5
Author: Nicholas Knize 
Date:   2016-09-15T16:29:57Z

fix RangeField tests so they use actual ranges, not just 0 ranges

commit 471f90cf825ee3106fef1fa4c1094d0ca461e7fb
Author: Mike McCandless 
Date:   2016-09-15T19:45:41Z

LUCENE-7439: FuzzyQuery now matches all terms within the specified edit 
distance, even if they are short

commit 526551ff2a77a05e5fb35770f43d1bb6bf38247f
Author: Mike McCandless 
Date:   2016-09-15T22:44:47Z

Ignore flaky test

commit e55b6f49913cb962cc40b3578951a23283317b29
Author: Noble Paul 
Date:   2016-09-16T11:38:55Z

shallowMap() should behave like a map. testcase added

commit 8352ff21cd3a21db5174b6e7af4b00fd2d373d5b
Author: Alan Woodward 
Date:   2016-09-16T12:33:07Z

SOLR-9507: Correctly set MDC values for CoreContainer threads

commit f728a646f388733cfb57f8d4d9a0d9217f42fd38
Author: Varun Thacker 
Date:   2016-09-16T13:17:06Z

SOLR-9522: Improve error handling in ZKPropertiesWriter

commit 380800261009fd04df8ffb73f030846b6d0d5bf9
Author: Mike McCandless 
Date:   2016-09-16T13:54:17Z

make test less evil: don't use random codec, even for the last IndexWriter

commit e8eadedb85c577ec2aed84d0281d45774f75bdc9
Author: Varun Thacker 
Date:   2016-09-16T13:41:59Z

SOLR-9451: Make clusterstatus command logging less verbose

commit 68d9d97510c8c46992cca06c0874cbe0169cdd22
Author: Noble Paul 
Date:   2016-09-17T07:32:09Z

SOLR-9523: Refactor CoreAdminOperation into smaller classes

commit 924e2da5e3e32e3703a471cfea6a8ab5b4d7c6c3
Author: Noble Paul 
Date:   2016-09-17T07:32:32Z

Merge remote-tracking branch 'origin/branch_6x' into branch_6x

commit 1a3bacfc0f55fba0a00fbc03eb49cd19f68167f2
Author: Noble Paul 
Date:   2016-09-19T12:15:17Z

SOLR-9502: ResponseWriters should natively support MapSerializable

commit f96017d9e10c665e7ab6b9161f2af08efc491946
Author: Alan Woodward 
Date:   2016-09-19T14:29:14Z

SOLR-9512: CloudSolrClient tries other replicas if a cached leader is down

commit b67a062f9db6372cf654a4366233e953c89f2722
Author: Uwe Schindler 
Date:   2016-09-19T22:01:45Z

LUCENE-7292: Fix build to use "--release 8" instead of "-release 8" on Java 
9 (this changed with recent EA build b135)

commit 09d399791a37681b5233248841bae738b799d8e1
Author: Jan Høydahl 
Date:   2016-09-20T08:56:25Z

SOLR-8080: bin/solr start script now exits with informative message if 
using wrong Java version

(cherry picked from commit 4574cb8)

commit 74bf88f8fe50b59e666f9387ca65ec26f821089d
Author: Jan Høydahl 
Date:   2016-09-20T09:22:53Z

SOLR-9475: Add install script support for CentOS and better distro 
detection under Docker

(cherry picked from commit a1bbc99)

commit a4293ce7c4e849b171430a34f36b830a84927a93
Author: Alan Woodward 
Date:   2016-09-20T13:33:38Z

Revert "SOLR-9512: CloudSolrClient tries other replicas if a cached leader 
is down"

This reverts commit f96017d9e10c665e7ab6b9161f2af08efc491946.

commit aeb1a173c7cf7f83b2ef2d45aa1b431580238edd
Author: Shalin Shekhar 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1098 - Still unstable!

2017-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1098/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([D7CE5BBDD31F896E:5F9A64677DE3E496]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest

2017-01-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839131#comment-15839131
 ] 

Erick Erickson commented on SOLR-10036:
---

Is there any chance you could give it a try and submit a patch since you have a 
test setup?


> Revise jackson-core version from 2.5.4 to latest
> 
>
> Key: SOLR-10036
> URL: https://issues.apache.org/jira/browse/SOLR-10036
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shashank Pedamallu
>Priority: Blocker
>
> The current jackson-core dependency in Solr is not compatible with the Amazon AWS 
> S3 SDK. The AWS S3 SDK requires jackson-core 2.6.6 while Solr ships 
> jackson-core 2.5.4. This blocks us from picking up the latest updates 
> from S3.
> It would be greatly helpful if someone could upgrade the jackson-core jar in 
> Solr to the latest version. This is a showstopper for our company.
> Details of my setup:
> Solr version: 6.3
> AWS SDK version: 1.11.76






[jira] [Commented] (SOLR-9510) child level facet exclusions

2017-01-25 Thread Hyun Goo Kang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839067#comment-15839067
 ] 

Hyun Goo Kang commented on SOLR-9510:
-

Hi [~mkhludnev], do we have an ETA for this feature? It would allow us to 
generate multi-select facets with our block-join queries!

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as the engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..=color:Red=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}=text:word={!tag=color}color:Red=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, an alternative approach might be to move this into the 
> {{domain:\{..}}} instruction of JSON facets. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose that might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}=comment_t:good={!tag=author}author_s:yonik={!tag=stars}stars_i:(5
>  3)=json=on={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2017-01-25 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839058#comment-15839058
 ] 

Ben Manes commented on SOLR-8241:
-

[~elyograg]: Solr 6.4.0 was just released. Do you think we can make a 
commitment to resolve this for 6.5.0? We've iterated on the patch for about a 
year now.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: proposal.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity, and it uses 
> LRU to capture recency while operating in O(1) time. On the available 
> academic traces the policy provides a near-optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
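
For reference, consuming the policy through Caffeine is a one-liner; the cache size and loader below are illustrative, and the Solr cache plumbing is what the attached patches cover:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

class WindowTinyLfuCacheExample {
  // Size-bounded cache; eviction uses the Window TinyLFU policy described above.
  private static final Cache<String, Object> CACHE = Caffeine.newBuilder()
      .maximumSize(10_000)
      .build();

  static Object get(String key) {
    return CACHE.get(key, k -> load(k));  // compute-if-absent through the cache
  }

  private static Object load(String key) {
    return key.length();  // placeholder loader
  }
}
{code}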






[jira] [Assigned] (SOLR-9987) Implement support for multi-valued DocValues in PointFields

2017-01-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-9987:
---

Assignee: Tomás Fernández Löbbe

> Implement support for multi-valued DocValues in PointFields
> ---
>
> Key: SOLR-9987
> URL: https://issues.apache.org/jira/browse/SOLR-9987
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>
> This is not currently supported, and since PointFields can't use the FieldCache, 
> faceting, stats, etc. are not supported on multi-valued point fields. Follow-up 
> task of SOLR-8396.






[jira] [Assigned] (SOLR-5944) Support updates of numeric DocValues

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-5944:
--

Assignee: Ishan Chattopadhyaya  (was: Shalin Shekhar Mangar)

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: defensive-checks.log.gz, 
> demo-why-dynamic-fields-cannot-be-inplace-updated-first-time.patch, 
> DUP.patch, hoss.62D328FA1DEA57FD.fail2.txt, hoss.62D328FA1DEA57FD.fail3.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt, master-vs-5944-regular-updates.png, 
> regular-vs-dv-updates.png, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838991#comment-15838991
 ] 

Ishan Chattopadhyaya commented on SOLR-8396:


[~tomasflobbe], are you planning to backport now? AFAICT, this is stable and 
tests are passing fine.

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.
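
At the Lucene level, the point-based fields that Solr's PointFields would wrap look roughly like this (field names illustrative); the point handles range queries while a parallel docValues field covers sorting and faceting:

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.search.Query;

class PointFieldSketch {
  static Document priceDoc(int price) {
    Document doc = new Document();
    doc.add(new IntPoint("price", price));                // indexed as a point (BKD tree)
    doc.add(new NumericDocValuesField("price", price));   // docValues for sorting/faceting
    return doc;
  }

  static Query priceBetween(int min, int max) {
    return IntPoint.newRangeQuery("price", min, max);
  }
}
{code}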






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838969#comment-15838969
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


Planning to backport to 6x after SOLR-8396 is backported.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: defensive-checks.log.gz, 
> demo-why-dynamic-fields-cannot-be-inplace-updated-first-time.patch, 
> DUP.patch, hoss.62D328FA1DEA57FD.fail2.txt, hoss.62D328FA1DEA57FD.fail3.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt, master-vs-5944-regular-updates.png, 
> regular-vs-dv-updates.png, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838967#comment-15838967
 ] 

ASF subversion and git services commented on SOLR-5944:
---

Commit 5375410807aecf3cc67f82ca1e9ee591f39d0ac7 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5375410 ]

SOLR-5944: In-place updates of Numeric DocValues


> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: defensive-checks.log.gz, 
> demo-why-dynamic-fields-cannot-be-inplace-updated-first-time.patch, 
> DUP.patch, hoss.62D328FA1DEA57FD.fail2.txt, hoss.62D328FA1DEA57FD.fail3.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt, master-vs-5944-regular-updates.png, 
> regular-vs-dv-updates.png, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838966#comment-15838966
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 5a8dfd96a28bc316d74b5b7e74b28f16b5bd3f4b in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a8dfd9 ]

SOLR-8029: fixing precommit errors


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]






[jira] [Commented] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838956#comment-15838956
 ] 

Adrien Grand commented on LUCENE-6959:
--

I think I'm the one responsible for the decimated tests. Agreed it would be 
nice to restore them using an API that is able to compute top groups.

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.






[jira] [Commented] (LUCENE-7656) Implement geo box and distance queries using doc values.

2017-01-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838949#comment-15838949
 ] 

Adrien Grand commented on LUCENE-7656:
--

bq. I suppose it would help "big" distance queries more and maybe hurt "tiny" 
distance queries, since it does the up front work

I think the only scenario that gets worse is when the distance is so tiny that 
the distance range is always contained in a single BKD cell. As soon as you 
start having crossing cells, that cost is quickly amortized. For instance, if 
your index has 30 segments with one crossing cell each (which is still a 
best-case scenario), we already need to perform 30*1024~=30k distance 
computations. On the other hand, this change needs to do 4096*4~=16k up-front 
distance computations (regardless of the number of segments, since they are 
computed once for the whole query), so if it saves even half of the distance 
computations, its cost is already amortized.

bq. the same up front work is done twice, and one of them won't be used

True, this should be easy to fix!

bq. Since you use bit shifting, it looks like the number of effective cells may 
be anywhere between 1024 and 4096 right? Do you think two straight integer 
divisions instead, which could get us usually to 4096 cells, is too costly per 
hit?

You are right about the fact that there are lost cells. Avoiding integer 
divisions was one reason in favor of bit shifting, but there was another one, 
which is that they do not create boxes that cross the dateline.

That said, you make a good point that we should not have to both store and 
compute relations for those lost cells, let me look into fixing that.

> Implement geo box and distance queries using doc values.
> 
>
> Key: LUCENE-7656
> URL: https://issues.apache.org/jira/browse/LUCENE-7656
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7656.patch, LUCENE-7656.patch
>
>
> Having geo box and distance queries available as both point and 
> doc-values-based queries means we could use them with 
> {{IndexOrDocValuesQuery}}.
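
A sketch of the intended combination, assuming the doc-values-side factory takes the shape proposed here (the {{LatLonDocValuesField.newSlowDistanceQuery}} name is an assumption) and that {{IndexOrDocValuesQuery}} is available to pick the cheaper execution per segment:

{code}
import org.apache.lucene.document.LatLonDocValuesField;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

class GeoDistanceQuerySketch {
  static Query distance(String field, double lat, double lon, double radiusMeters) {
    Query onPoints = LatLonPoint.newDistanceQuery(field, lat, lon, radiusMeters);
    Query onDocValues = LatLonDocValuesField.newSlowDistanceQuery(field, lat, lon, radiusMeters);
    // The searcher uses the points query when it leads iteration and the
    // doc-values query when it only needs to verify candidate hits.
    return new IndexOrDocValuesQuery(onPoints, onDocValues);
  }
}
{code}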






[jira] [Resolved] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved LUCENE-7659.
--
   Resolution: Fixed
Fix Version/s: 6.5
   master (7.0)

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.






[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838930#comment-15838930
 ] 

ASF subversion and git services commented on LUCENE-7659:
-

Commit aa467e39f04a5592e97c11c15fc936be60ad2f10 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa467e3 ]

LUCENE-7659: Added IndexWriter#getFieldNames() to return all visible field names


> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.






[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838904#comment-15838904
 ] 

ASF subversion and git services commented on LUCENE-7659:
-

Commit 733060121dc6f5cbc1b0e0e1412e396a3241240b in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7330601 ]

LUCENE-7659: Added IndexWriter#getFieldNames() to return all visible field names


> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3798 - Still Unstable!

2017-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3798/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting

Error Message:
Should have exactly 4 documents returned expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: Should have exactly 4 documents returned expected:<4> 
but was:<3>
at 
__randomizedtesting.SeedInfo.seed([36B7C7B063946B0B:288FCFB81F3FD18B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.checkSortOrder(DocValuesNotIndexedTest.java:259)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting(DocValuesNotIndexedTest.java:244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Updated] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7659:
-
Attachment: LUCENE-7659.patch

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7659:
-
Attachment: LUCENE-7659.patch

Thanks [~mikemccand] and [~jpountz] for your reviews. I've updated the patch 
here based on your reviews.

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch, 
> LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10041) Leader Initiated Recovery happening when the leader also fails to index the content

2017-01-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-10041:
-

Assignee: Noble Paul

> Leader Initiated Recovery happening when the leader also fails to index the 
> content
> ---
>
> Key: SOLR-10041
> URL: https://issues.apache.org/jira/browse/SOLR-10041
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Grant Ingersoll
>Assignee: Noble Paul
> Fix For: 6.3
>
>
> 1 shard, 3 replica setup.  Documents are being fairly rapidly sent in for 
> indexing which are being rejected (due to a too long of a string field) by 
> the leader, which is then cascading outwards to put the replicas into Leader 
> Initiated Recovery, from which they never recover.
> the stacktrace
> {code}
> 2017-01-25 20:44:46.796 ERROR  [c: s:shard1 r:core_node2 
> x:lucidfind_shard1_replica1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Exception writing document id 
> <.jenkins@crius> to the index; possible analysis error: Document 
> contains at least one immense term in field="body_display" (whose UTF8 
> encoding is longer than the max length 32766), all of which were skipped.  
> Please correct the analyzer to not produce such terms.  The prefix of the 
> first immense term is: '[74, 105, 114, 97, 58, 32, 104, 116, 116, 112, 115, 
> 58, 47, 47, 105, 115, 115, 117, 101, 115, 46, 97, 112, 97, 99, 104, 101, 46, 
> 111, 114]...', original message: bytes can be at most 32766 in length; got 
> 65085. Perhaps the document has an indexed string field (solr.StrField) which 
> is too large
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:171)
> at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:335)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:74)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:957)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1112)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:738)
> at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessor
> Factory.java:91)
> {code}  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-

[jira] [Updated] (SOLR-10041) Leader Initiated Recovery happening when the leader also fails to index the content

2017-01-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-10041:
--
Description: 
1 shard, 3 replica setup.  Documents are being sent in fairly rapidly for 
indexing and are being rejected (due to a string field that is too long) by the 
leader, which is then cascading outwards to put the replicas into Leader 
Initiated Recovery, from which they never recover.

the stacktrace
{code}
2017-01-25 20:44:46.796 ERROR  [c: s:shard1 r:core_node2 
x:lucidfind_shard1_replica1] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Exception writing document id 
<.jenkins@crius> to the index; possible analysis error: Document 
contains at least one immense term in field="body_display" (whose UTF8 encoding 
is longer than the max length 32766), all of which were skipped.  Please 
correct the analyzer to not produce such terms.  The prefix of the first 
immense term is: '[74, 105, 114, 97, 58, 32, 104, 116, 116, 112, 115, 58, 47, 
47, 105, 115, 115, 117, 101, 115, 46, 97, 112, 97, 99, 104, 101, 46, 111, 
114]...', original message: bytes can be at most 32766 in length; got 65085. 
Perhaps the document has an indexed string field (solr.StrField) which is too 
large
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:171)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:335)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:74)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:957)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1112)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:738)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessor
Factory.java:91)
{code}  

  was:1 shard, 3 replica setup.  Documents are being fairly rapidly sent in for 
indexing which are being rejected (due to a too long of a string field) by the 
leader, which is then cascading outwards to put the replicas into Leader 
Initiated Recovery, from which they never recover.


> Leader Initiated Recovery happening when the leader also fails to index the 
> content
> ---
>
> Key: SOLR-10041
> URL: https://issues.apache.org/jira/browse/SOLR-10041
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Grant Ingersoll
> Fix For: 6.3
>
>
> 1 shard, 3 replica 

[jira] [Updated] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest

2017-01-25 Thread Shashank Pedamallu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashank Pedamallu updated SOLR-10036:
--
Description: 
The current jackson-core dependency in Solr is not compatible with Amazon AWS 
S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses 
jackson-core-dependency-2.5.4. This is blocking the usage of latest updates 
from S3.

It would be greatly helpful if someone could revise the jackson-core jar in 
Solr to the latest version. This is a ShowStopper for our Public company.

Details of my Setup:
Solr Version: 6.3
AWS SDK version: 1.11.76

  was:
The current jackson-core dependency in Solr is not compatible with Amazon AWS 
S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses 
jackson-core-dependency-2.5.4. This is blocking the usage of latest updates 
from S3.

It would be greatly helpful if someone could revise the jackson-core jar in 
Solr to the latest version.

Details of my Setup:
Solr Version: 6.3
AWS SDK version: 1.11.76


> Revise jackson-core version from 2.5.4 to latest
> 
>
> Key: SOLR-10036
> URL: https://issues.apache.org/jira/browse/SOLR-10036
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shashank Pedamallu
>Priority: Blocker
>
> The current jackson-core dependency in Solr is not compatible with Amazon AWS 
> S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses 
> jackson-core-dependency-2.5.4. This is blocking the usage of latest updates 
> from S3.
> It would be greatly helpful if someone could revise the jackson-core jar in 
> Solr to the latest version. This is a ShowStopper for our Public company.
> Details of my Setup:
> Solr Version: 6.3
> AWS SDK version: 1.11.76



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest

2017-01-25 Thread Shashank Pedamallu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashank Pedamallu updated SOLR-10036:
--
Priority: Blocker  (was: Major)

> Revise jackson-core version from 2.5.4 to latest
> 
>
> Key: SOLR-10036
> URL: https://issues.apache.org/jira/browse/SOLR-10036
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shashank Pedamallu
>Priority: Blocker
>
> The current jackson-core dependency in Solr is not compatible with Amazon AWS 
> S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses 
> jackson-core-dependency-2.5.4. This is blocking the usage of latest updates 
> from S3.
> It would be greatly helpful if someone could revise the jackson-core jar in 
> Solr to the latest version.
> Details of my Setup:
> Solr Version: 6.3
> AWS SDK version: 1.11.76



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838861#comment-15838861
 ] 

Adrien Grand commented on LUCENE-7659:
--

I think this change is not thread-safe: it currently returns a view 
({{Map.keySet()}}) of the field numbers map, which may be written to at any time 
by {{IndexWriter}}. I think it should rather take a snapshot under the lock? 
I.e. something like this:
{code}
+synchronized Set<String> getFieldNames() {
+  return Collections.unmodifiableSet(new HashSet<>(nameToNumber.keySet()));
+}
{code}

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10041) Leader Initiated Recovery happening when the leader also fails to index the content

2017-01-25 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838856#comment-15838856
 ] 

Grant Ingersoll commented on SOLR-10041:


If the leader can't index the docs, it shouldn't cause the replicas to go into 
recovery.

> Leader Initiated Recovery happening when the leader also fails to index the 
> content
> ---
>
> Key: SOLR-10041
> URL: https://issues.apache.org/jira/browse/SOLR-10041
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Grant Ingersoll
> Fix For: 6.3
>
>
> 1 shard, 3 replica setup.  Documents are being fairly rapidly sent in for 
> indexing which are being rejected (due to a too long of a string field) by 
> the leader, which is then cascading outwards to put the replicas into Leader 
> Initiated Recovery, from which they never recover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10041) Leader Initiated Recovery happening when the leader also fails to index the content

2017-01-25 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created SOLR-10041:
--

 Summary: Leader Initiated Recovery happening when the leader also 
fails to index the content
 Key: SOLR-10041
 URL: https://issues.apache.org/jira/browse/SOLR-10041
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Grant Ingersoll
 Fix For: 6.3


1 shard, 3 replica setup.  Documents are being sent in fairly rapidly for 
indexing and are being rejected (due to a string field that is too long) by the 
leader, which is then cascading outwards to put the replicas into Leader 
Initiated Recovery, from which they never recover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned LUCENE-7659:


Assignee: Ishan Chattopadhyaya

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10035) Admin UI cannot find dataimport handlers

2017-01-25 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838830#comment-15838830
 ] 

Shawn Heisey commented on SOLR-10035:
-

Something that would be good to add is a test of the dataimport tab in the 
admin UI ... I've got absolutely no idea how to write that test.  Code that can 
run JavaScript the way a browser does would be required.
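
One hedged possibility, assuming a headless-browser library such as HtmlUnit 
(purely a sketch, not an existing test; the URL and timeout are illustrative):
{code}
// Load the admin UI with JavaScript enabled and check the dataimport tab's
// rendered text for the known error message.
try (WebClient client = new WebClient()) {
  client.getOptions().setThrowExceptionOnScriptError(false);
  HtmlPage page = client.getPage("http://localhost:8983/solr/#/collection1/dataimport");
  client.waitForBackgroundJavaScript(5000); // give the Angular controller time to run
  assertFalse(page.asText().contains("Sorry, no dataimport-handler defined"));
}
{code}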

> Admin UI cannot find dataimport handlers
> 
>
> Key: SOLR-10035
> URL: https://issues.apache.org/jira/browse/SOLR-10035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.4.0
>Reporter: Shawn Heisey
>  Labels: regression
>
> The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin 
> UI.  It will say "Sorry, no dataimport-handler defined" when trying to access 
> that tab.
> The root cause of the problem is a change in the /admin/mbeans handler, by 
> SOLR-9947.  The section of the output where defined dataimport handlers are 
> listed was changed from QUERYHANDLER to QUERY.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10035) Admin UI cannot find dataimport handlers

2017-01-25 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838825#comment-15838825
 ] 

Shawn Heisey commented on SOLR-10035:
-

A binary install of Solr 6.4.0 can be fixed without a new version.  Edit the 
following file:

solr/server/solr-webapp/webapp/js/angular/controllers/dataimport.js

The string "QUERYHANDLER" will show up once in the file.  Change this text to 
"QUERY".  Be sure to only change the one that's all uppercase.


> Admin UI cannot find dataimport handlers
> 
>
> Key: SOLR-10035
> URL: https://issues.apache.org/jira/browse/SOLR-10035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.4.0
>Reporter: Shawn Heisey
>  Labels: regression
>
> The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin 
> UI.  It will say "Sorry, no dataimport-handler defined" when trying to access 
> that tab.
> The root cause of the problem is a change in the /admin/mbeans handler, by 
> SOLR-9947.  The section of the output where defined dataimport handlers are 
> listed was changed from QUERYHANDLER to QUERY.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838804#comment-15838804
 ] 

Michael McCandless commented on LUCENE-7659:


OK I see, tricky.  I think it's OK to add this (experimental) method to IW, and 
I agree it would be cleaner if IW could just bring a new DV field into 
existence on update.

Such a thing used to be terrifying, because you were in fact bringing an entire 
column into existence, but in 7.0 we've fixed sparse doc values to be written 
sparsely.

The patch wraps in {{Collections.unmodifiableSet}} twice now ... maybe remove 
the one in IW and add a comment saying {{FieldInfos}} already did so?
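
For illustration, a minimal sketch of that suggestion (the 
{{globalFieldNumberMap}} name is an assumption, not necessarily the patch's 
actual code):
{code}
// In IndexWriter: no second Collections.unmodifiableSet() wrap here; the
// FieldInfos side already returns an unmodifiable snapshot taken under its lock.
public Set<String> getFieldNames() {
  return globalFieldNumberMap.getFieldNames(); // field name assumed
}
{code}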

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838793#comment-15838793
 ] 

Michael McCandless commented on LUCENE-6959:


Thanks [~martijn.v.groningen].

I think it's dangerous that we hold onto a {{LeafReader}} in the new 
{{ParentChildrenBlockJoinQuery}}?

Can we maybe change the new query to instead hold the parent's docID in the 
top-level reader's space, and then in the {{scorer}} method, check the incoming 
reader context to see if this is the segment that holds the parent?  This would 
also simplify usage, so users wouldn't have to create their own weights?  Then 
I think you don't need the {{LeafReader}} reference.

Also, the {{TestBlockJoin}} tests got a little over decimated I think :)  Can 
we restore at least some of the places that were verifying children?  Or maybe 
we could make a simple sugar API that returns {{TopGroups}} again, and then we 
wouldn't need to change the tests (except to switch to this sugar API)?
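
A rough sketch of the docID-based alternative described above (purely 
illustrative; variable names are assumptions):
{code}
// Inside Weight#scorer(LeafReaderContext context) of the new query:
int parentDocInSegment = parentTopLevelDocId - context.docBase;
if (parentDocInSegment < 0 || parentDocInSegment >= context.reader().maxDoc()) {
  return null; // this segment does not hold the parent document
}
// ... otherwise iterate that parent's child documents as before ...
{code}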

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7652) LRUQueryCache / IndexSearcher.DEFAULT_QUERY_CACHE memory leak

2017-01-25 Thread Lae (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838787#comment-15838787
 ] 

Lae commented on LUCENE-7652:
-

I have found that our application was indeed leaking; we basically have 
something like:
{code:java}
Directory dir = FSDirectory.open(path);
DirectoryReader reader = DirectoryReader.open(dir);
{code}
{{reader}} was closed after use, but {{dir}} was never closed, which was 
causing this leak.
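
For reference, a minimal sketch of the fixed pattern, assuming try-with-resources 
is an option (not our actual code):
{code:java}
try (Directory dir = FSDirectory.open(path);
     DirectoryReader reader = DirectoryReader.open(dir)) {
  // ... use the reader for searching ...
} // the reader and then the directory are closed here, in reverse order
{code}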

I have not yet verified whether we are impacted by LUCENE-7657.


> LRUQueryCache / IndexSearcher.DEFAULT_QUERY_CACHE memory leak
> -
>
> Key: LUCENE-7652
> URL: https://issues.apache.org/jira/browse/LUCENE-7652
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.4, 5.5
>Reporter: Lae
>Priority: Critical
>
> Our {{IndexSearcher.DEFAULT_QUERY_CACHE}} is set to use 32MB of heap (the 
> default), however upon inspection of our application's heap, it's retaining 
> ~280MB of memory and increasing slowly.
> {{LRUQueryCache.cache.size}} was at 12,099, and 
> {{LRUQueryCache.cache.modCount}} was also 12,099, meaning nothing was removed 
> from {{LRUQueryCache.cache}} at all.
> The keys of {{LRUQueryCache.cache}} are instances of {{SegmentCoreReaders}}, 
> and for many of the keys I've checked, the only reference to them is 
> {{LRUQueryCache.cache}} itself. Given that {{LRUQueryCache.cache}} is an 
> {{IdentityHashMap}}, you can't even reach them from outside the cache, because 
> you can't construct a key that's equivalent to one already in the cache.
> This effectively makes {{IndexSearcher.DEFAULT_QUERY_CACHE}} a memory black 
> hole.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838776#comment-15838776
 ] 

Ishan Chattopadhyaya commented on LUCENE-7659:
--

bq. I'm confused here: doesn't Solr know, from its schema, whether a field was 
indexed as doc values or not?
Fields that have DVs enabled but have not been indexed before cannot be used 
for DV updates. Dynamic fields are an example: we know that *_l_dvo fields are 
docValues fields, but if someone tries to update a field matching that pattern, 
say price_l_dvo, for the first time, it wouldn't yet exist as a DV field in the 
index.
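
A hedged sketch of how Solr could use the proposed API for that decision 
(variable names and the example field are illustrative only):
{code}
// The schema says *_l_dvo fields have docValues, but only fields that already
// exist in the index can take an in-place DV update.
Set<String> fieldsInIndex = indexWriter.getFieldNames();
if (fieldsInIndex.contains("price_l_dvo")) {
  // safe to apply the update in place
} else {
  // fall back to a regular full document update
}
{code}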

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7656) Implement geo box and distance queries using doc values.

2017-01-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838767#comment-15838767
 ] 

Michael McCandless commented on LUCENE-7656:


I like this change!  It's nice you see a perf gain on the OSM benchmarks.  I 
suppose it would help "big" distance queries more and maybe hurt "tiny" 
distance queries, since it does the up-front work (the {{DistancePredicate}}), 
but that's the right tradeoff.

It's a bit annoying that, if you use the {{IndexOrDocValuesQuery}}, all the 
same up front work is done twice, and one of them won't be used; maybe we could 
make it lazy?  But that can wait, it's just an opto.

Since you use bit shifting, it looks like the number of effective cells may be 
anywhere between 1024 and 4096 right?  Do you think two straight integer 
divisions instead, which could get us usually to 4096 cells, is too costly per 
hit?

bq. maybe the way LatLonPointDistanceQuery computes relations between a box and 
a circle relies on assumptions that are not met in this new code

I believe you are using it in essentially the same way as before, just 
different sized cells, so this should be fine.
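
For context, the combination the issue aims to enable would look roughly like 
this (the name of the doc-values-based distance query is an assumption about 
the patch):
{code}
// The points-based query does the bulk filtering; the doc-values-based query
// is used when this clause only needs to verify a small number of hits.
Query byPoints    = LatLonPoint.newDistanceQuery("location", lat, lon, radiusMeters);
Query byDocValues = LatLonDocValuesField.newSlowDistanceQuery("location", lat, lon, radiusMeters); // name assumed
Query query       = new IndexOrDocValuesQuery(byPoints, byDocValues);
{code}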

> Implement geo box and distance queries using doc values.
> 
>
> Key: LUCENE-7656
> URL: https://issues.apache.org/jira/browse/LUCENE-7656
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7656.patch, LUCENE-7656.patch
>
>
> Having geo box and distance queries available as both point and 
> doc-values-based queries means we could use them with 
> {{IndexOrDocValuesQuery}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838757#comment-15838757
 ] 

Michael McCandless commented on LUCENE-7659:


I'm confused here: doesn't Solr know, from its schema, whether a field was 
indexed as doc values or not?

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 267 - Still unstable

2017-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/267/

14 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([F8E8309AAA84A9BD:DAC7E9CFD320685A]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries(CdcrReplicationDistributedZkTest.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-9969:

Affects Version/s: 6.4
 Priority: Minor  (was: Major)
  Component/s: UI

> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, UI
>Affects Versions: 6.4
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 6.4.1
>
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-9969.
-
   Resolution: Fixed
 Assignee: Tomás Fernández Löbbe
Fix Version/s: 6.4.1

> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
> Fix For: 6.4.1
>
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838726#comment-15838726
 ] 

ASF subversion and git services commented on SOLR-9969:
---

Commit dc9df10ad54f098892d095b2e39298eb093e6cb3 in lucene-solr's branch 
refs/heads/branch_6_4 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dc9df10 ]

SOLR-9969: Plugins/Stats section of the UI doesn't display empty metric types


> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838723#comment-15838723
 ] 

ASF subversion and git services commented on SOLR-9969:
---

Commit cd3b795b1f2945e4d9517927046a4137224d3ae1 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd3b795 ]

SOLR-9969: Plugins/Stats section of the UI doesn't display empty metric types


> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838720#comment-15838720
 ] 

ASF subversion and git services commented on SOLR-9969:
---

Commit ae269f13162119c8105020a6481b800377297764 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae269f1 ]

SOLR-9969: Plugins/Stats section of the UI doesn't display empty metric types


> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10023) Improve single unit test run time with ant.

2017-01-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838676#comment-15838676
 ] 

Mark Miller commented on SOLR-10023:


Steve is right that going into the module is light years better. In my script I 
can at least now regex out the module (or contrib) from the test file path and 
then go into the right module to run the test. It would be a cool hack if the 
top-level solr test target just did that for you. Failing that, it should 
probably fail fast and tell you how to run a single test rather than punish you 
with an extra two-plus minutes.

> Improve single unit test run time with ant.
> ---
>
> Key: SOLR-10023
> URL: https://issues.apache.org/jira/browse/SOLR-10023
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
> Attachments: stdout.tar.gz
>
>
> It seems to take 2 minutes and 45 seconds to run a single test with the 
> latest build design, even though the test itself is only 4 seconds. I've 
> noticed this for a long time, and it seems to be because ant is running 
> through a billion targets first. 
> I haven't checked yet, so maybe it's a Solr specific issue? I'll check with 
> Lucene and move this issue if necessary.
> There is hopefully something we can do to improve this though. At least we 
> should try and get some sharp minds to take first / second look. If I did not 
> use an IDE so much to run tests, this would drive me nuts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838668#comment-15838668
 ] 

David Smiley commented on SOLR-8029:


Exciting indeed :-)  Congrats Noble & everyone else for working so hard on it.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2/<collection-name>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core-name>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10040) HdfsNNFailoverTest currently wraps BasicDistributedZkTest and so runs for no reason as what it tests is currently disabled. We should ignore this test for now.

2017-01-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10040:
--

 Summary: HdfsNNFailoverTest currently wraps BasicDistributedZkTest 
and so runs for no reason as what it tests is currently disabled. We should 
ignore this test for now.
 Key: SOLR-10040
 URL: https://issues.apache.org/jira/browse/SOLR-10040
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10039) LatLonPointSpatialField

2017-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838659#comment-15838659
 ] 

David Smiley commented on SOLR-10039:
-

The patch includes some references to a HeatmapSpatialField that are erroneous 
for this patch, as that class is actually for another issue I'm working on.

> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...
> This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5170) Spatial multi-value distance sort via DocValues

2017-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838651#comment-15838651
 ] 

David Smiley commented on SOLR-5170:


Jeff, I wound up doing this today; see SOLR-10039.  I plan to close this issue 
on the completion of that issue.

> Spatial multi-value distance sort via DocValues
> ---
>
> Key: SOLR-5170
> URL: https://issues.apache.org/jira/browse/SOLR-5170
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch, 
> SOLR-5170_spatial_multi-value_sort_via_docvalues.patch.txt
>
>
> The attached patch implements spatial multi-value distance sorting.  In other 
> words, a document can have more than one point per field, and using a 
> provided function query, it will return the distance to the closest point.  
> The data goes into binary DocValues, and as-such it's pretty friendly to 
> realtime search requirements, and it only uses 8 bytes per point.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10039) LatLonPointSpatialField

2017-01-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10039:

Description: 
The fastest and most efficient spatial field for point data in Lucene/Solr is 
{{LatLonPoint}} in Lucene's sandbox module.  I'll include 
{{LatLonDocValuesField}} with this even though it's a separate class.  
LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
is also multi-valued capable (a big deal as the existing Solr ones either 
aren't or do poorly at it).  Note that this feature is limited to a 
latitude/longitude spherical world model.  And furthermore the precision is at 
about a centimeter -- less precise than the other spatial fields but 
nonetheless plenty good for most applications.  Last but not least, this 
capability natively supports polygons, albeit those that don't wrap the 
dateline or a pole.

I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
forthcoming...

This development was funded by the Harvard Center for Geographic Analysis as 
part of the HHypermap project

  was:
The fastest and most efficient spatial field for point data in Lucene/Solr is 
{{LatLonPoint}} in Lucene's sandbox module.  I'll include 
{{LatLonDocValuesField}} with this even though it's a separate class.  
LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
is also multi-valued capable (a big deal as the existing Solr ones either 
aren't or do poorly at it).  Note that this feature is limited to a 
latitude/longitude spherical world model.  And furthermore the precision is at 
about a centimeter -- less precise than the other spatial fields but 
nonetheless plenty good for most applications.  Last but not least, this 
capability natively supports polygons, albeit those that don't wrap the 
dateline or a pole.

I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
forthcoming...


> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...
> This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10039) LatLonPointSpatialField

2017-01-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10039:

Attachment: SOLR_10039_LatLonPointSpatialField.patch

The attached patch has most everything but it's not quite committable.  I'm 
internally using a 6.4 based branch so the diff includes stuff that won't be 
needed for trunk or 6.5+.

The field extends {{AbstractSpatialFieldType}} and thus inherits the 
functionality and integration with the rest of Solr spatial.  The main TODOs 
are:
* make indexed & docValues attributes configurable
* integrate Polygon support.

I would have liked to introduce this embedded {{SpatialStrategy}} implementation 
into Lucene spatial-extras, but I didn't think depending on the sandbox module 
was a good idea, at least not now, so I opted not to.
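Purely for illustration (this is not code from the attached patch), here is a rough 
sketch of the Lucene sandbox calls an embedded strategy like this would wrap; the 
field name {{geo}} and the shapes are made up:

{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonDocValuesField;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.search.Query;

public class LatLonPointSketch {

  // Indexing: one BKD-backed point field for filtering plus one
  // SortedNumeric docValues field for distance sorting/boosting.
  static void addLatLon(Document doc, double lat, double lon) {
    doc.add(new LatLonPoint("geo", lat, lon));          // indexed, multi-valued capable
    doc.add(new LatLonDocValuesField("geo", lat, lon)); // docValues, multi-valued capable
  }

  // Querying: LatLonPoint exposes factory methods for the common shapes.
  static Query within(double lat, double lon, double radiusMeters) {
    return LatLonPoint.newDistanceQuery("geo", lat, lon, radiusMeters);
  }

  static Query bbox(double minLat, double maxLat, double minLon, double maxLon) {
    return LatLonPoint.newBoxQuery("geo", minLat, maxLat, minLon, maxLon);
  }
}
{code}

The point field answers the filter queries while the docValues field supports 
sorting/boosting by distance, which is why a field type like this would want to 
write both.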

> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838632#comment-15838632
 ] 

Tomás Fernández Löbbe commented on SOLR-9969:
-

I saw your comment on some other Jira and thought you were back :). It does 
work; it just skips the empty stats.

> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10039) LatLonPointSpatialField

2017-01-25 Thread David Smiley (JIRA)
David Smiley created SOLR-10039:
---

 Summary: LatLonPointSpatialField
 Key: SOLR-10039
 URL: https://issues.apache.org/jira/browse/SOLR-10039
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley


The fastest and most efficient spatial field for point data in Lucene/Solr is 
{{LatLonPoint}} in Lucene's sandbox module.  I'll include 
{{LatLonDocValuesField}} with this even though it's a separate class.  
LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
is also multi-valued capable (a big deal as the existing Solr ones either 
aren't or do poorly at it).  Note that this feature is limited to a 
latitude/longitude spherical world model.  And furthermore the precision is at 
about a centimeter -- less precise than the other spatial fields but 
nonetheless plenty good for most applications.  Last but not least, this 
capability natively supports polygons, albeit those that don't wrap the 
dateline or a pole.

I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
forthcoming...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7

2017-01-25 Thread Anshum Gupta
+1 to early May.

We might also want to evaluate the Ver 2 APIs for Solr and default to that
if that's ready.

-Anshum

On Wed, Jan 25, 2017 at 4:51 AM Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:

> Should SOLR-8396 be a prerequisite?
>
> On 25 Jan 2017 10:38, "Christine Poerschke (BLOOMBERG/ LONDON)" <
> cpoersc...@bloomberg.net> wrote:
>
> +1 for May.
>
> I'd like to see https://issues.apache.org/jira/browse/SOLR-8668 in the
> 7.0 release (and have tagged/updated the ticket to indicate so).
>
> Christine
>
> From: dev@lucene.apache.org At: 01/24/17 17:17:27
> To: dev@lucene.apache.org
> Subject: Re: Lucene/Solr 7
>
> I would love to see SOLR-5944, SOLR-8029, SOLR-9835 in a 7.0 release. I
> think all of these are very close to landing on master.
>
> On Tue, Jan 24, 2017 at 10:22 PM, Adrien Grand  wrote:
>
> Hi all,
>
> We have accumulated some good changes in master, like point support in
> Solr or sparse norms/doc-values in Lucene. I think it would be nice to
> expose these new features to our users, so what would you think about
> starting to work on making master ready to be released?
>
> Since the question about the timeframe will be asked, I think we could
> target something like early May 2017, which is a bit more than 3 months
> away from now. What do you think?
>
> Adrien
>
>
>


[jira] [Assigned] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index

2017-01-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-10038:
---

Assignee: David Smiley

> Spatial Intersect Very Slow For Large Polygon and Large Index
> -
>
> Key: SOLR-10038
> URL: https://issues.apache.org/jira/browse/SOLR-10038
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4.0
> Environment: Linux Ubuntu + Solr 6.4.0
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatialsearch
>
> Hi all, I have indexed the entire geonames points (lat/long) with JTS 
> enabled, and I am trying to return all points (geonameids) within a certain 
> polygon (e.g. the Netherlands country polygon). This query takes 3 minutes to 
> return only 10,000 points. I am using only Solr intersect: no facets, no 
> extra filtering.
> Is there any configuration that could speed up such a query to less than 300 
> ms?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index

2017-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838609#comment-15838609
 ] 

David Smiley commented on SOLR-10038:
-

Oh and finally, before you do all this, do an optimize :-)

> Spatial Intersect Very Slow For Large Polygon and Large Index
> -
>
> Key: SOLR-10038
> URL: https://issues.apache.org/jira/browse/SOLR-10038
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4.0
> Environment: Linux Ubuntu + Solr 6.4.0
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatialsearch
>
> Hi all, I have indexed the entire geonames points (lat/long) with JTS 
> enabled, and I am trying to return all points (geonameids) within a certain 
> polygon (e.g. the Netherlands country polygon). This query takes 3 minutes to 
> return only 10,000 points. I am using only Solr intersect: no facets, no 
> extra filtering.
> Is there any configuration that could speed up such a query to less than 300 
> ms?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-10006:
-
Attachment: SOLR-10006.patch

New patch that fixes your specific issue; however, it probably still needs a 
little work.

First, we would probably want to catch EOF and FileNotFound in addition to 
NoSuchFile in IndexWriter.
Second, do we actually want to catch that at IndexWriter? There is a wide range 
of places where we could catch and rethrow, and one could reasonably make an 
argument for any of them:

{noformat}
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)
at 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
at 
org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:81)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:292)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:109)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473)
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79)
at 
org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958)
{noformat}

That might be better as a lucene discussion though?
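To make the options concrete, here is a rough, hypothetical sketch of the 
"catch and translate at the Solr end" variant; the exception and helper names 
are made up and this is not from the attached patch:

{code:java}
import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.NoSuchFileException;
import org.apache.lucene.index.DirectoryReader;

class CorruptIndexHandlingSketch {

  /** Hypothetical marker exception the recovery code could key off of. */
  static class MissingIndexFilesException extends IOException {
    MissingIndexFilesException(String msg, Throwable cause) { super(msg, cause); }
  }

  /** Wrap the reader-opening call and translate "files are missing" failures. */
  static DirectoryReader openReaderGuarded(ReaderSupplier supplier) throws IOException {
    try {
      return supplier.get();
    } catch (NoSuchFileException | FileNotFoundException | EOFException e) {
      // Missing or truncated index files: surface a type the replication /
      // recovery code can recognize and answer with a full fetchindex.
      throw new MissingIndexFilesException("Index files missing; full sync needed", e);
    }
  }

  interface ReaderSupplier {
    DirectoryReader get() throws IOException;
  }
}
{code}

Whichever frame we pick, the point is the same: turn the low-level file error into 
something the caller can act on instead of failing the sync outright.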

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but fails because the core can't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index

2017-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838607#comment-15838607
 ] 

David Smiley commented on SOLR-10038:
-

Wow; 3 minutes for 10k points.  Roughly how many vertices are on the polygon?

https://cwiki.apache.org/confluence/display/solr/Spatial+Search
There are some tricks to speed up polygonal search.  One is setting "autoIndex" 
on the field (no re-index required); just set that to true and leave it.  
Then, fiddle with either distErr or distErrPct to get the precision you want 
and no more than you need.  Lastly, fiddle with prefixGridScanLevel -- set it 
to the grid level at which the internal algorithm switches from recursive 
decomposition to scanning.  If you have a 20-level prefixTree, I recall it 
defaults to 4 off from the bottom, thus 16.
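For reference, those Solr attributes map onto the underlying Lucene spatial-extras 
strategy roughly as in this sketch (the field name and the values are illustrative 
only, not recommendations for this index):

{code:java}
import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;
import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;
import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree;
import org.locationtech.spatial4j.context.SpatialContext;

public class RptTuningSketch {
  public static void main(String[] args) {
    SpatialContext ctx = SpatialContext.GEO;
    // maxLevels controls overall grid precision (Solr derives it from maxDistErr).
    SpatialPrefixTree grid = new GeohashPrefixTree(ctx, 11);

    RecursivePrefixTreeStrategy strategy = new RecursivePrefixTreeStrategy(grid, "geo");
    // distErrPct: allowable error as a fraction of the query shape's size;
    // larger values are faster but give coarser shape edges.
    strategy.setDistErrPct(0.1);
    // prefixGridScanLevel: the level where recursive decomposition switches to
    // scanning; by default it sits a few levels off the bottom of the tree.
    strategy.setPrefixGridScanLevel(grid.getMaxLevels() - 4);
  }
}
{code}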

> Spatial Intersect Very Slow For Large Polygon and Large Index
> -
>
> Key: SOLR-10038
> URL: https://issues.apache.org/jira/browse/SOLR-10038
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4.0
> Environment: Linux Ubuntu + Solr 6.4.0
>Reporter: samur araujo
>  Labels: spatialsearch
>
> Hi all, I have indexed the entire geonames points (lat/long) with JTS 
> enabled, and I am trying to return all points (geonameids) within a certain 
> polygon (e.g. the Netherlands country polygon). This query takes 3 minutes to 
> return only 10,000 points. I am using only Solr intersect: no facets, no 
> extra filtering.
> Is there any configuration that could speed up such a query to less than 300 
> ms?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7269) ZK as truth for SolrCloud

2017-01-25 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838581#comment-15838581
 ] 

Jeff Wartes commented on SOLR-7269:
---

Any life still here? I've always thought it was strange that Solr effectively 
has two sources of truth (disk and ZK).

> ZK as truth for SolrCloud
> -
>
> Key: SOLR-7269
> URL: https://issues.apache.org/jira/browse/SOLR-7269
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>
> We have been wanting to do this for a long time. 
> Mark listed out what all should go into this here - 
> https://issues.apache.org/jira/browse/SOLR-7248?focusedCommentId=14363441=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14363441
> The best approach as Mark suggested would be to work on these under 
> legacyCloud=false and once we are confident switch over to it as default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838566#comment-15838566
 ] 

Upayavira commented on SOLR-9969:
-

[~tomasflobbe] one of the points of doing the conversion was to make the code 
more accessible to non-JS developers. I'm afraid I'm not doing Solr dev at the 
moment, but your patch looks simple - does it work?

> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 634 - Still unstable!

2017-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/634/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([4785D7EF93182F22:2F3AE2C543823DCE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10023) Improve single unit test run time with ant.

2017-01-25 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838550#comment-15838550
 ] 

Dawid Weiss commented on SOLR-10023:


I bet the timings here could be cut to a very reasonable few seconds... if the 
dependencies are scanned properly once, not over and over again. But I don't 
know how to do this in Ant, Maven has its own set of nightmares ({{-am -pl 
...}}), and a complex Gradle build is no simpler to understand than a complex 
Ant build (my personal opinion).

Back to the drawing board. Or {{Make}}...

> Improve single unit test run time with ant.
> ---
>
> Key: SOLR-10023
> URL: https://issues.apache.org/jira/browse/SOLR-10023
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
> Attachments: stdout.tar.gz
>
>
> It seems to take 2 minutes and 45 seconds to run a single test with the 
> latest build design and the test itself is only 4 seconds. I've noticed this 
> for a long time, and it seems because ant is running through a billion 
> targets first. 
> I haven't checked yet, so maybe it's a Solr specific issue? I'll check with 
> Lucene and move this issue if necessary.
> There is hopefully something we can do to improve this though. At least we 
> should try and get some sharp minds to take first / second look. If I did not 
> use an IDE so much to run tests, this would drive me nuts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+153) - Build # 18847 - Unstable!

2017-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18847/
Java: 64bit/jdk-9-ea+153 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:36906","node_name":"127.0.0.1:36906_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/31)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:44057;,   
"core":"c8n_1x3_lf_shard1_replica3",   "node_name":"127.0.0.1:44057_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:43089;,   "node_name":"127.0.0.1:43089_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:36906;,   "node_name":"127.0.0.1:36906_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:36906","node_name":"127.0.0.1:36906_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/31)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:44057;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:44057_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:43089;,
  "node_name":"127.0.0.1:43089_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:36906;,
  "node_name":"127.0.0.1:36906_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3FA369F184B18AA4:B7F7562B2A4DE75C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Commented] (SOLR-9969) Display new metrics on the UI

2017-01-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838537#comment-15838537
 ] 

Tomás Fernández Löbbe commented on SOLR-9969:
-

[~upayavira] any thoughts? Maybe we can get this in 6.4.1

> Display new metrics on the UI
> -
>
> Key: SOLR-9969
> URL: https://issues.apache.org/jira/browse/SOLR-9969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
> Attachments: mbeans_handler.png, SOLR-9969.patch
>
>
> The current Core Selector -> Core -> Plugin/Stats UI shows tabs for the new 
> metrics information we are adding but doesn't populate correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-01-25 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838528#comment-15838528
 ] 

Upayavira commented on SOLR-10037:
--

There is a ticket that takes the /solr out of URLs used in the services.js 
file, making them relative such that Solr might be deployed to a different URL. 
It looks like this might be an inadvertent consequence of that change. See 
SOLR-9584.

> (non-original) Solr Admin UI > query tab > unexpected url above results
> ---
>
> Key: SOLR-10037
> URL: https://issues.apache.org/jira/browse/SOLR-10037
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> To reproduce, in a browser run a search from the query tab and then notice 
> the url shown above the results
> {code}
> # actual:   http://localhost:8983techproducts/select?indent=on=*:*=json
> # expected: 
> http://localhost:8983/solr/techproducts/select?q=*%3A*=json=true
> {code}
> (We had noticed this when using the (master branch) Admin UI during the 
> [London Lucene Hackday for Full 
> Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
>  on Friday, I just tried to reproduce both on master (reproducible with 
> non-original version only) and on branch_6_4 (not reproducible) and search 
> for an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-01-25 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838526#comment-15838526
 ] 

Christine Poerschke commented on SOLR-10037:


Yes, the new, current UI (the UI that has a _"Use original UI"_ text in the top 
right corner).

> (non-original) Solr Admin UI > query tab > unexpected url above results
> ---
>
> Key: SOLR-10037
> URL: https://issues.apache.org/jira/browse/SOLR-10037
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> To reproduce, in a browser run a search from the query tab and then notice 
> the url shown above the results
> {code}
> # actual:   http://localhost:8983techproducts/select?indent=on=*:*=json
> # expected: 
> http://localhost:8983/solr/techproducts/select?q=*%3A*=json=true
> {code}
> (We had noticed this when using the (master branch) Admin UI during the 
> [London Lucene Hackday for Full 
> Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
>  on Friday, I just tried to reproduce both on master (reproducible with 
> non-original version only) and on branch_6_4 (not reproducible) and search 
> for an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838523#comment-15838523
 ] 

Mike Drob commented on SOLR-10006:
--

Apparently {{.doc}} files are read at a different point than the segments and 
{{.ti}} files, so that's causing your exception. I can reproduce this locally 
and will work on a fix.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but fails because the core can't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-01-25 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838520#comment-15838520
 ] 

Alexandre Rafalovitch commented on SOLR-10037:
--

What do you mean by "non-original"? Do you mean the new AngularJS-based UI?

> (non-original) Solr Admin UI > query tab > unexpected url above results
> ---
>
> Key: SOLR-10037
> URL: https://issues.apache.org/jira/browse/SOLR-10037
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> To reproduce, in a browser run a search from the query tab and then notice 
> the url shown above the results
> {code}
> # actual:   http://localhost:8983techproducts/select?indent=on=*:*=json
> # expected: 
> http://localhost:8983/solr/techproducts/select?q=*%3A*=json=true
> {code}
> (We had noticed this when using the (master branch) Admin UI during the 
> [London Lucene Hackday for Full 
> Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
>  on Friday, I just tried to reproduce both on master (reproducible with 
> non-original version only) and on branch_6_4 (not reproducible) and search 
> for an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10023) Improve single unit test run time with ant.

2017-01-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838519#comment-15838519
 ] 

Mark Miller commented on SOLR-10023:


bq.  If you want speed, you should run individual tests from the module that 
contains them

We should almost fail single-test runs from a higher level and tell the dev 
to do this instead - I would not have guessed it would make so much difference. 
I'll give that a try.

bq. Looks like the recursion is doing things over and over again.

Probably not so easy to fix, sadly. I found that if you remove the test target's 
dependency on compile and just run the precompiled tests, I could go from 2 minutes 
40 seconds to about 40 seconds of build time.

> Improve single unit test run time with ant.
> ---
>
> Key: SOLR-10023
> URL: https://issues.apache.org/jira/browse/SOLR-10023
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
> Attachments: stdout.tar.gz
>
>
> It seems to take 2 minutes and 45 seconds to run a single test with the 
> latest build design and the test itself is only 4 seconds. I've noticed this 
> for a long time, and it seems because ant is running through a billion 
> targets first. 
> I haven't checked yet, so maybe it's a Solr specific issue? I'll check with 
> Lucene and move this issue if necessary.
> There is hopefully something we can do to improve this though. At least we 
> should try and get some sharp minds to take first / second look. If I did not 
> use an IDE so much to run tests, this would drive me nuts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index

2017-01-25 Thread samur araujo (JIRA)
samur araujo created SOLR-10038:
---

 Summary: Spatial Intersect Very Slow For Large Polygon and Large 
Index
 Key: SOLR-10038
 URL: https://issues.apache.org/jira/browse/SOLR-10038
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spatial
Affects Versions: 6.4.0
 Environment: Linux Ubuntu + Solr 6.4.0
Reporter: samur araujo


Hi all, I have indexed the entire geonames points (lat/long) with JTS enabled, 
and I am trying to return all points (geonameids) within a certain polygon 
(e.g. the Netherlands country polygon). This query takes 3 minutes to return 
only 10,000 points. I am using only Solr intersect: no facets, no extra 
filtering.

Is there any configuration that could speed up such a query to less than 300 ms?
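For concreteness, the query under discussion looks roughly like this SolrJ sketch; 
the collection name, the {{geo}} field name and the WKT are placeholders, not the 
actual setup:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PolygonIntersectsQuerySketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/geonames").build()) {
      // WKT uses lon lat order; this rough box stands in for a real country polygon.
      String wkt = "POLYGON((3.3 50.7, 7.2 50.7, 7.2 53.6, 3.3 53.6, 3.3 50.7))";

      SolrQuery q = new SolrQuery("*:*");
      q.addFilterQuery("{!field f=geo}Intersects(" + wkt + ")"); // spatial filter
      q.setFields("geonameid");
      q.setRows(10000);

      QueryResponse rsp = client.query(q);
      System.out.println("matched: " + rsp.getResults().getNumFound());
    }
  }
}
{code}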



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838489#comment-15838489
 ] 

Mark Miller edited comment on SOLR-10032 at 1/25/17 8:12 PM:
-

I think there is likely too much of a test coverage problem if we take that 
approach.

I'd like to instead push gradually, though perhaps 'Apache time' quickly.

First I will create critical issues for the worst offenders, if they cannot be 
fixed pretty much right away, I will badapple or awaitsfix them.

I'll also create critical issues for other fails above a certain threshold and 
ping appropriate JIRA issues to try and bring attention to them. Over time we 
can ignore these as well if they are not addressed and someone doesn't find 
them important enough to keep coverage.

We can then tighten this net down to a certain level. 

I think if we commit to following through on some progress, we can take an 
iterative approach that gives people ample time to fix important tests and us 
time to evaluate loss of important test coverage (even flakey test coverage is 
very valuable info to us right now, and some flakey tests pass 90%+ of the time 
- we want to harden them, but they provide critical coverage in many cases).

I'll also ping the dev list with a summary occasionally to bring attention to 
this and the current state.


was (Author: markrmil...@gmail.com):
I think there is likely too much of a test coverage problem if we take that 
approach.

I'd like to instead push gradually, though perhaps 'Apache time' quickly.

First I will create critical issues for the worst offenders, if they cannot be 
fixed pretty much right away, I will badapple or awaitsfix them.

I'll also create critical issues for other fails above a certain threshold and 
ping appropriate JIRA issues to try and bring attention to them. Over time we 
can ignore these as well if they are not addressed and someone doesn't find 
them important enough to keep coverage.

We can then tighten this net down to a certain level. 

I think if we commit to following through on some progress, we can take an 
iterative approach that gives people ample time to fix important tests and us 
time to evaluate loss of important test coverage (even flakey test coverage is 
very valuable info to us right now, and some flakey tests pass 90%+ of the time 
- we want to harden them, but they provide critical coverage in many cases).

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838489#comment-15838489
 ] 

Mark Miller commented on SOLR-10032:


I think there is likely too much of a test coverage problem if we take that 
approach.

I'd like to instead push gradually, though perhaps 'Apache time' quickly.

First I will create critical issues for the worst offenders, if they cannot be 
fixed pretty much right away, I will badapple or awaitsfix them.

I'll also create critical issues for other fails above a certain threshold and 
ping appropriate JIRA issues to try and bring attention to them. Over time we 
can ignore these as well if they are not addressed and someone doesn't find 
them important enough to keep coverage.

We can then tighten this net down to a certain level. 

I think if we commit to following through on some progress, we can take an 
iterative approach that gives people ample time to fix important tests and us 
time to evaluate loss of important test coverage (even flakey test coverage is 
very valuable info to us right now, and some flakey tests pass 90%+ of the time 
- we want to harden them, but they provide critical coverage in many cases).

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #140: [SOLR-9997] Enable configuring SolrHttpClientBuilder...

2017-01-25 Thread hgadre
Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/140
  
@janhoy Now that we have proper support for configuring basic auth 
credentials, we should also consider deprecating the following logic:


https://github.com/apache/lucene-solr/blob/1b80691f28b045c7a8d9552f3c63f7bafdf52d48/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java#L50



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9997) Enable configuring SolrHttpClientBuilder via java system property

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838484#comment-15838484
 ] 

ASF GitHub Bot commented on SOLR-9997:
--

Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/140
  
@janhoy Now that we have proper support for configuring basic auth 
credentials, we should also consider deprecating the following logic:


https://github.com/apache/lucene-solr/blob/1b80691f28b045c7a8d9552f3c63f7bafdf52d48/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java#L50
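
As a rough sketch of what the system-property approach could look like -- the 
property name and the factory interface shape are assumptions here, not the 
committed API; {{HttpClientUtil#setHttpClientBuilder}} is the existing hook 
mentioned in the issue description:

{code:java}
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.SolrHttpClientBuilder;

public class HttpClientBuilderBootstrapSketch {

  /** Assumed factory contract: hand back a fully configured SolrHttpClientBuilder. */
  public interface HttpClientBuilderFactory {
    SolrHttpClientBuilder getBuilder();
  }

  /** Read a (hypothetical) system property naming the factory class and install it. */
  public static void configureFromSystemProperty() throws Exception {
    String factoryClass = System.getProperty("solr.httpclient.builder.factory");
    if (factoryClass == null) {
      return; // nothing requested; keep the default HttpClient configuration
    }
    HttpClientBuilderFactory factory =
        (HttpClientBuilderFactory) Class.forName(factoryClass).newInstance();
    HttpClientUtil.setHttpClientBuilder(factory.getBuilder());
  }
}
{code}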



> Enable configuring SolrHttpClientBuilder via java system property
> -
>
> Key: SOLR-9997
> URL: https://issues.apache.org/jira/browse/SOLR-9997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>
> Currently SolrHttpClientBuilder needs to be configured via invoking 
> HttpClientUtil#setHttpClientBuilder(...) API. On the other hand SolrCLI 
> attempts to support configuring SolrHttpClientBuilder via Java system 
> property.  
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L265
> But after changes for SOLR-4509, this is no longer working. This is because 
> we need to configure HttpClientBuilderFactory which can provide appropriate 
> SolrHttpClientBuilder instance (e.g. Krb5HttpClientBuilder). I verified that 
> SolrCLI does not work in a kerberos enabled cluster. During the testing I 
> also found that SolrCLI is hardcoded to use basic authentication,
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L156
> This jira is to add support for configuring HttpClientBuilderFactory as a 
> java system property so that SolrCLI as well as other Solr clients can also 
> benefit this. Also we should provide a HttpClientBuilderFactory which support 
> configuring preemptive basic authentication so that we can remove the 
> hardcoded basic auth usage in SolrCLI (and enable it work with kerberos). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-01-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838482#comment-15838482
 ] 

Erick Erickson edited comment on SOLR-10006 at 1/25/17 8:05 PM:


Mike:

First of all thanks for looking. This is the full log file after starting, 
fresh trunk pull this AM. Since it's pretty short I decided to upload the whole 
thing.

Here's what I did to make this happen:
1> set up a 2x2 collection
2> indexed a bunch of docs. Stupid-simple indexing, just wanted to get more 
than one segment. I'm not sure having more than one segment is relevant 
actually
3> shut down a follower
4> removed a few of the segment files. Not an entire segment, just 3 files at 
random from a single segment. 
5> removed all the logs from the log directory.
6> tried to start the replica.


was (Author: erickerickson):
Mike:

First of all thanks for looking. This is the full log file after starting, 
fresh trunk pull this AM.

Here's what I did to make this happen:
1> set up a 2x2 collection
2> indexed a bunch of docs. Stupid-simple indexing, just wanted to get more 
than one segment. I'm not sure having more than one segment is relevant 
actually
3> shut down a follower
4> removed a few of the segment files. Not an entire segment, just 3 files at 
random from a single segment. 
5> removed all the logs from the log directory.
6> tried to start the replica.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but fails because the core can't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-01-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10006:
--
Attachment: solr.log

Mike:

First of all thanks for looking. This is the full log file after starting, 
fresh trunk pull this AM.

Here's what I did to make this happen:
1> set up a 2x2 collection
2> indexed a bunch of docs. Stupid-simple indexing, just wanted to get more 
than one segment. I'm not sure having more than one segment is relevant 
actually
3> shut down a follower
4> removed a few of the segment files. Not an entire segment, just 3 files at 
random from a single segment. 
5> removed all the logs from the log directory.
6> tried to start the replica.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but fails because the core can't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838475#comment-15838475
 ] 

Ishan Chattopadhyaya commented on SOLR-10032:
-

I think a hammer approach (and probably an effective one) for now would be to 
disable all flaky tests. While someone would still need to work on them, they 
would no longer get in the way of a regular developer trying to figure out the 
basic questions Mark mentioned.

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838422#comment-15838422
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7659 at 1/25/17 7:55 PM:
---

Thanks [~jpountz] for looking into this.

bq. If I understand the Solr issue correctly, your use-case is to check whether 
an update can be applied using dv-updates only, or whether it requires a 
regular update. Do I get it right?
Yes, exactly.

bq. maybe a better way to address this use-case would be to either try the 
dv-only update and fallback to a regular update if it failed
There are a few issues with that approach:
1. When a user's command comes in, it has operations like ("set": 3) or 
("inc": 5). At the UpdateProcessor, we resolve it to a merged document (either 
a partial document or a regular full document) by pulling the last document from 
the index (or transaction log) and merging the command with it. We then send the 
"resolved" document (partial or full) to the DirectUpdateHandler, which performs 
the IW update. By this time, if the IW were to throw an exception for a partial 
update from the IW.updateDocValues() method, we would have already lost the 
information about the original operation ("set", "inc" etc.) and would be left 
with only the merged values.
2. If we wish to handle the exception from IW.updateDocValues() and fall back to 
a regular update, we could now potentially be merging against a different 
previous document than the one that was merged with in the failed attempt.
3. The performance cost of a regular update would increase, because we would 
merge twice against the previously indexed document.

bq. change the semantics of dv updates to create fields if they did not exist 
already
I agree that this is the cleanest way forward. From the IndexWriter's API 
standpoint, I think it would certainly be cleanest if the updateDocValues() 
method were to create non-existent DVs. Until we have such functionality in 
the updateDocValues() method, do you think we could expose the field names 
through a method marked as internal and/or experimental, with the intention of 
phasing it out after we have such functionality in IW's updateDocValues()? I 
think it would be a suitable (interim) workaround for applications that find 
themselves in such a situation.
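
(For illustration only, not the actual patch: a sketch of how the Solr-side decision 
could look if IndexWriter exposed its visible field names, say via a getFieldNames() 
accessor over its FieldInfos. The method name and the schema checks are assumptions, 
roughly along the lines of what in-place updates require in SOLR-5944.)

{code:java}
// Sketch only: decide whether an update to a field can be applied as an in-place
// DV update, assuming IndexWriter exposes its known field names (method name assumed).
import java.util.Set;
import org.apache.lucene.index.IndexWriter;
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.schema.SchemaField;

public class InPlaceUpdateCheck {
  public static boolean canUpdateInPlace(IndexWriter iw, IndexSchema schema, String fieldName) {
    SchemaField sf = schema.getField(fieldName);
    // Only single-valued, non-indexed, non-stored docValues fields are candidates
    // for dv-only updates (assumed constraints).
    if (!sf.hasDocValues() || sf.indexed() || sf.stored() || sf.multiValued()) {
      return false;
    }
    // The DV must already exist in the index; otherwise IW.updateDocValues() would fail.
    Set<String> visibleFields = iw.getFieldNames();  // assumed accessor over FieldInfos
    return visibleFields.contains(fieldName);
  }
}
{code}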


was (Author: ichattopadhyaya):
Thanks [~jpountz] for looking into this.

bq. If I understand the Solr issue correctly, your use-case is to check whether 
an update can be applied using dv-updates only, or whether it requires a 
regular update. Do I get it right?
Yes, exactly.

bq. maybe a better way to address this use-case would be to either try the 
dv-only update and fallback to a regular update if it failed
There are a few issues with that approach:
1. When a user's command comes in, it has operations like ("set": 3) or 
("inc": 5). At the UpdateProcessor, we resolve it to a merged document (either 
a partial document or a regular full document) by pulling the last document from 
the index (or transaction log) and merging the command with it. We then send the 
"resolved" document (partial or full) to the DirectUpdateHandler, which performs 
the IW update. By this time, if the IW were to throw an exception for a partial 
update from the IW.updateDocValues() method, we would have already lost the 
information about the original operation ("set", "inc" etc.) and would be left 
with only the merged values.
2. If we wish to handle the exception from IW.updateDocValues() and fall back to 
a regular update, we could now potentially be merging against a different 
previous document than the one that was merged with in the failed attempt.
3. The performance cost of a regular update would increase, because we would 
merge twice against the previously indexed document.

bq. change the semantics of dv updates to create fields if they did not exist 
already
I agree that this is the cleanest way forward. From the IndexWriter's API 
standpoint, I think it would certainly be cleanest if the updateDocValues() 
method were to create non-existent DVs. Until we have such functionality in 
the updateDocValues() method, do you think we could expose the field names 
through a method marked as internal and/or experimental, with the intention of 
phasing it out after we have such functionality in IW's updateDocValues()?

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> 

[jira] [Updated] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7659:
-
Attachment: LUCENE-7659.patch

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9997) Enable configuring SolrHttpClientBuilder via java system property

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838460#comment-15838460
 ] 

ASF GitHub Bot commented on SOLR-9997:
--

Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/140
  
@janhoy Done! Please take a look and let me know your feedback.


> Enable configuring SolrHttpClientBuilder via java system property
> -
>
> Key: SOLR-9997
> URL: https://issues.apache.org/jira/browse/SOLR-9997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>
> Currently SolrHttpClientBuilder needs to be configured by invoking the 
> HttpClientUtil#setHttpClientBuilder(...) API. On the other hand, SolrCLI 
> attempts to support configuring SolrHttpClientBuilder via a Java system 
> property.  
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L265
> But after the changes for SOLR-4509, this is no longer working. This is because 
> we now need to configure a HttpClientBuilderFactory which can provide the appropriate 
> SolrHttpClientBuilder instance (e.g. Krb5HttpClientBuilder). I verified that 
> SolrCLI does not work in a kerberos enabled cluster. During the testing I 
> also found that SolrCLI is hardcoded to use basic authentication,
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L156
> This jira is to add support for configuring HttpClientBuilderFactory via a 
> java system property so that SolrCLI as well as other Solr clients can also 
> benefit from this. Also we should provide a HttpClientBuilderFactory which supports 
> configuring preemptive basic authentication so that we can remove the 
> hardcoded basic auth usage in SolrCLI (and enable it to work with kerberos). 
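
(A minimal sketch of the behaviour described above, assuming an illustrative system 
property name and a no-arg getBuilder() on the factory; the real property name and 
factory contract would be defined by the patch.)

{code:java}
// Sketch: pick up a HttpClientBuilderFactory implementation named by a system property
// and register the SolrHttpClientBuilder it produces. The property name and the
// factory's getBuilder() method are assumptions, not the final API.
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.SolrHttpClientBuilder;

public class BuilderFactoryBootstrap {
  public static void configureFromSystemProperty() throws Exception {
    String factoryClass = System.getProperty("solr.httpclient.builder.factory"); // assumed name
    if (factoryClass == null) {
      return; // keep the default HttpClient builder
    }
    Object factory = Class.forName(factoryClass).newInstance();
    // Assumed contract: the factory exposes a no-arg getBuilder() returning SolrHttpClientBuilder.
    SolrHttpClientBuilder builder = (SolrHttpClientBuilder)
        factory.getClass().getMethod("getBuilder").invoke(factory);
    HttpClientUtil.setHttpClientBuilder(builder);
  }
}
{code}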



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #140: [SOLR-9997] Enable configuring SolrHttpClientBuilder...

2017-01-25 Thread hgadre
Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/140
  
@janhoy Done! Please take a look and let me know your feedback.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7659:
-
Attachment: LUCENE-7659.patch

Adding @lucene.internal and @lucene.experimental annotations to the method.

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch, LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838422#comment-15838422
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7659 at 1/25/17 7:40 PM:
---

Thanks [~jpountz] for looking into this.

bq. If I understand the Solr issue correctly, your use-case is to check whether 
an update can be applied using dv-updates only, or whether it requires a 
regular update. Do I get it right?
Yes, exactly.

bq. maybe a better way to address this use-case would be to either try the 
dv-only update and fallback to a regular update if it failed
There are a few issues with that approach:
1. When a user's command comes in, it has operations like ("set": 3) or 
("inc": 5). At the UpdateProcessor, we resolve it to a merged document (either 
a partial document or a regular full document) by pulling the last document from 
the index (or transaction log) and merging the command with it. We then send the 
"resolved" document (partial or full) to the DirectUpdateHandler, which performs 
the IW update. By this time, if the IW were to throw an exception for a partial 
update from the IW.updateDocValues() method, we would have already lost the 
information about the original operation ("set", "inc" etc.) and would be left 
with only the merged values.
2. If we wish to handle the exception from IW.updateDocValues() and fall back to 
a regular update, we could now potentially be merging against a different 
previous document than the one that was merged with in the failed attempt.
3. The performance cost of a regular update would increase, because we would 
merge twice against the previously indexed document.

bq. change the semantics of dv updates to create fields if they did not exist 
already
I agree that this is the cleanest way forward. From the IndexWriter's API 
standpoint, I think it would certainly be cleanest if the updateDocValues() 
method were to create non-existent DVs. Until we have such functionality in 
the updateDocValues() method, do you think we could expose the field names 
through a method marked as internal and/or experimental, with the intention of 
phasing it out after we have such functionality in IW's updateDocValues()?


was (Author: ichattopadhyaya):
Thanks [~jpountz] for looking into this.

bq. If I understand the Solr issue correctly, your use-case is to check whether 
an update can be applied using dv-updates only, or whether it requires a 
regular update. Do I get it right?
Yes, exactly.

bq. maybe a better way to address this use-case would be to either try the 
dv-only update and fallback to a regular update if it failed
There are a few issues with that approach:
1. When a user's command comes in, it has operations like {"set": 3} or 
{"inc": 5}. At the UpdateProcessor, we resolve it to a merged document (either 
a partial document or a regular full document) by pulling the last document from 
the index (or transaction log) and merging the command with it. We then send the 
"resolved" document (partial or full) to the DirectUpdateHandler, which performs 
the IW update. By this time, if the IW were to throw an exception for a partial 
update from the IW.updateDocValues() method, we would have already lost the 
information about the original operation ("set", "inc" etc.) and would be left 
with only the merged values.
2. If we wish to handle the exception from IW.updateDocValues() and fall back to 
a regular update, we could now potentially be merging against a different 
previous document than the one that was merged with in the failed attempt.
3. The performance cost of a regular update would increase, because we would 
merge twice against the previously indexed document.

bq. change the semantics of dv updates to create fields if they did not exist 
already
I agree that this is the cleanest way forward. From the IndexWriter's API 
standpoint, I think it would certainly be cleanest if the updateDocValues() 
method were to create non-existent DVs. Until we have such functionality in 
the updateDocValues() method, do you think we could expose the field names 
through a method marked as internal and/or experimental, with the intention of 
phasing it out after we have such functionality in IW's updateDocValues()?

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full 

[jira] [Commented] (LUCENE-7659) IndexWriter should expose field names

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838422#comment-15838422
 ] 

Ishan Chattopadhyaya commented on LUCENE-7659:
--

Thanks [~jpountz] for looking into this.

bq. If I understand the Solr issue correctly, your use-case is to check whether 
an update can be applied using dv-updates only, or whether it requires a 
regular update. Do I get it right?
Yes, exactly.

bq. maybe a better way to address this use-case would be to either try the 
dv-only update and fallback to a regular update if it failed
There are a few issues with that approach:
1. When a user's command comes in, it has operations like {"set": 3} or 
{"inc": 5}. At the UpdateProcessor, we resolve it to a merged document (either 
a partial document or a regular full document) by pulling the last document from 
the index (or transaction log) and merging the command with it. We then send the 
"resolved" document (partial or full) to the DirectUpdateHandler, which performs 
the IW update. By this time, if the IW were to throw an exception for a partial 
update from the IW.updateDocValues() method, we would have already lost the 
information about the original operation ("set", "inc" etc.) and would be left 
with only the merged values.
2. If we wish to handle the exception from IW.updateDocValues() and fall back to 
a regular update, we could now potentially be merging against a different 
previous document than the one that was merged with in the failed attempt.
3. The performance cost of a regular update would increase, because we would 
merge twice against the previously indexed document.

bq. change the semantics of dv updates to create fields if they did not exist 
already
I agree that this is the cleanest way forward. From the IndexWriter's API 
standpoint, I think it would certainly be cleanest if the updateDocValues() 
method were to create non-existent DVs. Until we have such functionality in 
the updateDocValues() method, do you think we could expose the field names 
through a method marked as internal and/or experimental, with the intention of 
phasing it out after we have such functionality in IW's updateDocValues()?

> IndexWriter should expose field names
> -
>
> Key: LUCENE-7659
> URL: https://issues.apache.org/jira/browse/LUCENE-7659
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7659.patch
>
>
> While working on SOLR-5944, I needed a way to know whether applying an update 
> to a DV is possible (i.e. the DV exists or not), while deciding upon whether 
> or not to apply the update as an in-place update or a regular full document 
> update. This information is present at the IndexWriter in a FieldInfos 
> instance, and can be exposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-01-25 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-10037:
--

 Summary: (non-original) Solr Admin UI > query tab > unexpected url 
above results
 Key: SOLR-10037
 URL: https://issues.apache.org/jira/browse/SOLR-10037
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Priority: Minor


To reproduce, in a browser run a search from the query tab and then notice the 
url shown above the results:
{code}
# actual:   http://localhost:8983techproducts/select?indent=on&q=*:*&wt=json
# expected: http://localhost:8983/solr/techproducts/select?q=*%3A*&wt=json&indent=true
{code}

(We had noticed this when using the (master branch) Admin UI during the [London 
Lucene Hackday for Full 
Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
 on Friday. I just tried to reproduce it both on master (reproducible with the 
non-original version only) and on branch_6_4 (not reproducible), and a search for 
an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838401#comment-15838401
 ] 

Ishan Chattopadhyaya commented on SOLR-8029:


bq. I'm planning to commit this to master shortly
+1. Yay, this is exciting!

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2/<collection>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1097 - Still Failing!

2017-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1097/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 27494 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/jdk/instances/jdk1.8.0/jre/bin/java -XX:+UseCompressedOops 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=95339D88742FFB44 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/temp
 -Dcommon.dir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene 
-Dclover.db.dir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build/clover/db
 
-Djava.security.policy=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[jira] [Created] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest

2017-01-25 Thread Shashank Pedamallu (JIRA)
Shashank Pedamallu created SOLR-10036:
-

 Summary: Revise jackson-core version from 2.5.4 to latest
 Key: SOLR-10036
 URL: https://issues.apache.org/jira/browse/SOLR-10036
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shashank Pedamallu


The current jackson-core dependency in Solr is not compatible with the Amazon AWS 
S3 SDK dependency. The AWS S3 SDK requires jackson-core-2.6.6 while Solr uses 
jackson-core-2.5.4. This is blocking the usage of the latest updates from S3.

It would be very helpful if someone could upgrade the jackson-core jar in 
Solr to the latest version.

Details of my Setup:
Solr Version: 6.3
AWS SDK version: 1.11.76



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10035) Admin UI cannot find dataimport handlers

2017-01-25 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-10035:
-
Labels: regression  (was: )

> Admin UI cannot find dataimport handlers
> 
>
> Key: SOLR-10035
> URL: https://issues.apache.org/jira/browse/SOLR-10035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.4.0
>Reporter: Shawn Heisey
>  Labels: regression
>
> The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin 
> UI.  It will say "Sorry, no dataimport-handler defined" when trying to access 
> that tab.
> The root cause of the problem is a change in the /admin/mbeans handler, by 
> SOLR-9947.  The section of the output where defined dataimport handlers are 
> listed was changed from QUERYHANDLER to QUERY.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838373#comment-15838373
 ] 

Noble Paul commented on SOLR-8029:
--

I'm planning to commit this to master shortly

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2/<collection>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838370#comment-15838370
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit c91b96211b9e88c6cc7a4e3aedc14e4f1375dab8 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c91b962 ]

SOLR-8029: fixing some test errors


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2/<collection>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838369#comment-15838369
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 4ddaba397d30f9b5344545d08c809488633638d1 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4ddaba3 ]

SOLR-8029: fixing some test errors


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2/<collection>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838364#comment-15838364
 ] 

Mark Miller commented on SOLR-10032:


This will also help with getting our nightly runs to a useful state. I am 
not currently running non-nightly tests with 'nightly cranked up' variants, but 
we do get a report on tests that only run nightly. Tests like that tend to get 
little to no visibility currently, and my guess is we may find many of them 
fairly failure-prone.

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10035) Admin UI cannot find dataimport handlers

2017-01-25 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838363#comment-15838363
 ] 

Shawn Heisey commented on SOLR-10035:
-

The changes to the mbean output are going to make my life interesting beyond 
the admin UI.  I have a SolrJ program that accesses this information and isn't 
going to work with newer versions of Solr.  Because it will need to deal with 
multiple versions, it's going to have to handle both the old and the new output.
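
(Not Shawn's actual program, just a rough sketch of how a SolrJ client could cope 
with both outputs; the /admin/mbeans response key names reflect my understanding of 
the format and should be treated as assumptions.)

{code:java}
// Sketch: query /admin/mbeans via SolrJ and look under both the pre-6.4 "QUERYHANDLER"
// category and the new "QUERY" category, so the same code works against old and new
// Solr versions. Assumes the client's base URL points at the Solr root (e.g. /solr).
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class MBeansLookup {
  @SuppressWarnings("unchecked")
  public static NamedList<Object> findHandlerBeans(SolrClient client, String core) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("cat", "QUERY", "QUERYHANDLER");   // ask for both categories
    GenericSolrRequest req = new GenericSolrRequest(
        SolrRequest.METHOD.GET, "/" + core + "/admin/mbeans", params);
    NamedList<Object> rsp = client.request(req);
    NamedList<Object> mbeans = (NamedList<Object>) rsp.get("solr-mbeans");
    NamedList<Object> handlers = (NamedList<Object>) mbeans.get("QUERY");      // 6.4 and later
    if (handlers == null) {
      handlers = (NamedList<Object>) mbeans.get("QUERYHANDLER");               // pre-6.4
    }
    return handlers;
  }
}
{code}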

> Admin UI cannot find dataimport handlers
> 
>
> Key: SOLR-10035
> URL: https://issues.apache.org/jira/browse/SOLR-10035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.4.0
>Reporter: Shawn Heisey
>
> The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin 
> UI.  It will say "Sorry, no dataimport-handler defined" when trying to access 
> that tab.
> The root cause of the problem is a change in the /admin/mbeans handler, by 
> SOLR-9947.  The section of the output where defined dataimport handlers are 
> listed was changed from QUERYHANDLER to QUERY.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10035) Admin UI cannot find dataimport handlers

2017-01-25 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-10035:
---

 Summary: Admin UI cannot find dataimport handlers
 Key: SOLR-10035
 URL: https://issues.apache.org/jira/browse/SOLR-10035
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: UI
Affects Versions: 6.4.0
Reporter: Shawn Heisey


The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin 
UI.  It will say "Sorry, no dataimport-handler defined" when trying to access 
that tab.

The root cause of the problem is a change in the /admin/mbeans handler, by 
SOLR-9947.  The section of the output where defined dataimport handlers are 
listed was changed from QUERYHANDLER to QUERY.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


