[jira] [Updated] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2015-11-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-6914:
-
Attachment: LUCENE-6914.patch

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -
>
> Key: LUCENE-6914
> URL: https://issues.apache.org/jira/browse/LUCENE-6914
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Hoss Man
> Attachments: LUCENE-6914.patch
>
>
> Found this while writing up the Solr Ref Guide for DecimalDigitFilter. 
> With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
> double struck 9, 8, double struck 4). Add some non-decimal characters in 
> between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output 
> ("1x9x8x4"). This doesn't affect all decimal characters though, as evidenced 
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary, 
> shrink the string" code path?
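The supplementary-character suspicion can be illustrated outside the filter. Below is a minimal standalone sketch (not the actual DecimalDigitFilter code, which works in place on a char[] term buffer): a correct fold has to step by code point, because supplementary digits such as U+1D7D9 (double-struck "1") occupy two chars in UTF-16, which is exactly where an off-by-one in buffer shrinking would bite.

```java
public class DigitFoldSketch {
    // Fold every Unicode decimal digit (category Nd) to its ASCII 0-9 form.
    static String foldDigits(String in) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < in.length()) {
            int cp = in.codePointAt(i);
            if (Character.isDigit(cp)) {
                // For category Nd, getNumericValue is always in 0..9
                out.append((char) ('0' + Character.getNumericValue(cp)));
            } else {
                out.appendCodePoint(cp);
            }
            // Advances by 2 chars for supplementary code points -- forgetting
            // this (or compacting the buffer by the wrong amount afterwards)
            // produces exactly the kind of skipped character described above.
            i += Character.charCount(cp);
        }
        return out.toString();
    }
}
```

With this stepping, double-struck "1984" folds cleanly whether or not ASCII characters sit between the digits.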



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7940) [CollectionAPI] Frequent Cluster Status timeout

2015-11-30 Thread James Hardwick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032540#comment-15032540
 ] 

James Hardwick edited comment on SOLR-7940 at 11/30/15 10:13 PM:
-

We are seeing this as well on a 3 node cluster w/ 2 collections. 

Looks like others are also, across a variety of versions: 
http://lucene.472066.n3.nabble.com/CLUSTERSTATUS-timeout-tp4173224.html
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201511.mbox/%3c5639dfcf.9020...@decalog.net%3E
http://grokbase.com/t/lucene/solr-user/154d0wjr7c/clusterstate-timeout


was (Author: hardwickj):
We are seeing this as well on a 3 node cluster w/ 2 collections. 

Looks like others are also, across a variety of versions: 
http://lucene.472066.n3.nabble.com/CLUSTERSTATUS-timeout-tp4173224.html

> [CollectionAPI] Frequent Cluster Status timeout
> ---
>
> Key: SOLR-7940
> URL: https://issues.apache.org/jira/browse/SOLR-7940
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
> Environment: Ubuntu on Azure
>Reporter: Stephan Lagraulet
>
> Very often we have a timeout when we call 
> http://server2:8080/solr/admin/collections?action=CLUSTERSTATUS&wt=json
> {code}
> {"responseHeader": 
> {"status": 500,
> "QTime": 180100},
> "error": 
> {"msg": "CLUSTERSTATUS the collection time out:180s",
> "trace": "org.apache.solr.common.SolrException: CLUSTERSTATUS the collection 
> time out:180s\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleClusterStatus(CollectionsHandler.java:640)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:220)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1338)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:350)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:630)\n\tat 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)\n\tat 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:77)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:606)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:46)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:603)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:538)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code": 500}}
> {code}
> The cluster has 3 Solr nodes with 6 small collections replicated on all nodes.
> We were using this API to monitor cluster state but it was failing every 10 
> minutes. We switched to using ZkStateReader in CloudSolrServer and it has 
> been working for a day without problems.
> Is there a kind of deadlock, as this call was being made on the three nodes 
> concurrently?


[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b93) - Build # 14789 - Failure!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14789/
Java: 32bit/jdk1.9.0-ea-b93 -client -XX:+UseConcMarkSweepGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=90, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=88, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=89, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=87, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   5) Thread[id=91, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=90, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 

[jira] [Updated] (SOLR-8351) Improve HdfsDirectory and HdfsLock toString representation

2015-11-30 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8351:

Summary: Improve HdfsDirectory and HdfsLock toString representation  (was: 
Improve HdfsDirectory toString representation)

> Improve HdfsDirectory and HdfsLock toString representation
> --
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums

2015-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032555#comment-15032555
 ] 

Mike Drob commented on SOLR-8353:
-

Is this desirable? For the same reason that I am against making license headers 
easy to skip, I can see this easily leading us down a road to developer 
sloppiness if it is allowed.

I guess this makes testing an upgrade to a dependency much easier, and there 
would still be Jenkins builds to enforce the rules.

> Support regex for skipping license checksums
> 
>
> Key: SOLR-8353
> URL: https://issues.apache.org/jira/browse/SOLR-8353
> Project: Solr
>  Issue Type: Improvement
>  Components: Build
>Reporter: Gregory Chanan
>
> It would be useful to be able to specify a regex for license checksums to 
> skip in the build.  Currently there are only two supported values:
> 1) skipChecksum (i.e. regex=*)
> 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*)
> A regex would be more flexible and allow testing the entire build while 
> skipping a more limited set of checksums, e.g.:
> a) an individual library (i.e. regex=joda-time*)
> b) a suite of libraries (i.e. regex=hadoop*)
> We could make skipChecksum and skipSnapshotsChecksum continue to work for 
> backwards compatibility reasons.
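For reference, the skip decision described above amounts to a single regex test against each artifact name. A minimal hypothetical sketch follows (the helper name and the way the property reaches the build are invented here; note also that the examples above are written glob-style, so `joda-time*` would be `joda-time.*` as a real Java regex):

```java
import java.util.regex.Pattern;

// Hypothetical helper: decide whether a jar's license checksum check is
// skipped, based on a user-supplied regex. How the regex is plumbed in from
// an Ant property is not shown and is an assumption.
public class ChecksumSkipSketch {
    static boolean shouldSkipChecksum(String jarName, String skipRegex) {
        if (skipRegex == null || skipRegex.isEmpty()) {
            return false; // no regex supplied: never skip
        }
        // find() matches anywhere in the name, mirroring glob-like usage
        return Pattern.compile(skipRegex).matcher(jarName).find();
    }
}
```

Under this scheme the legacy flags reduce to fixed regexes: skipChecksum becomes `.*` and skipSnapshotsChecksum becomes `.*-SNAPSHOT.*`.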






[jira] [Updated] (SOLR-8351) Improve HdfsDirectory toString representation

2015-11-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-8351:
-
Attachment: SOLR-8351.patch

I added a toString to HdfsLock to match what's in NativeFSLock, since that 
seems in the spirit of what this patch is doing.

I also removed the change to LockFactory -- it feels weird to clutter a simple 
interface declaration with toString implementation details.  I'm not against 
changing the log message, though.  Perhaps the correct place to do that is in a 
derivation of the LockFactories/Locks, similar to what you are suggesting with 
the Directories.  That should probably be done in a separate JIRA, though.

Let me know what you think [~mdrob].

> Improve HdfsDirectory toString representation
> -
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Commented] (LUCENE-6737) Add DecimalDigitFilter

2015-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032556#comment-15032556
 ] 

Hoss Man commented on LUCENE-6737:
--

I think there may be a bug here for some digits ... created new issue 
LUCENE-6914 in case it's non-trivial to fix and doesn't get resolved before 5.4 
is released.

> Add DecimalDigitFilter
> --
>
> Key: LUCENE-6737
> URL: https://issues.apache.org/jira/browse/LUCENE-6737
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Robert Muir
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6737.patch
>
>
> TokenFilter that folds all unicode digits 
> (http://unicode.org/cldr/utility/list-unicodeset.jsp?a=[:General_Category=Decimal_Number:])
>  to 0-9.
> Historically a lot of the impacted analyzers couldn't even tokenize numbers 
> at all, but now they use StandardTokenizer for numbers/alphanum tokens. But 
> it's usually the case that you will find e.g. a mix of both ASCII digits and 
> "native" digits, and today that makes searching difficult.
> Note this only impacts *decimal* digits, hence the name DecimalDigitFilter. 
> So no processing of Chinese numerals or anything crazy like that.






[jira] [Commented] (SOLR-8351) Improve HdfsDirectory and HdfsLock toString representation

2015-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032583#comment-15032583
 ] 

Mike Drob commented on SOLR-8351:
-

LGTM. Updated issue summary to better capture what we're doing.

> Improve HdfsDirectory and HdfsLock toString representation
> --
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Commented] (SOLR-8353) Support regex for skipping license checksums

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032600#comment-15032600
 ] 

Gregory Chanan commented on SOLR-8353:
--

I thought about that, but given that you can already skip _all_ checksums, this 
can't be worse.

> Support regex for skipping license checksums
> 
>
> Key: SOLR-8353
> URL: https://issues.apache.org/jira/browse/SOLR-8353
> Project: Solr
>  Issue Type: Improvement
>  Components: Build
>Reporter: Gregory Chanan
>
> It would be useful to be able to specify a regex for license checksums to 
> skip in the build.  Currently there are only two supported values:
> 1) skipChecksum (i.e. regex=*)
> 2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*)
> A regex would be more flexible and allow testing the entire build while 
> skipping a more limited set of checksums, e.g.:
> a) an individual library (i.e. regex=joda-time*)
> b) a suite of libraries (i.e. regex=hadoop*)
> We could make skipChecksum and skipSnapshotsChecksum continue to work for 
> backwards compatibility reasons.






LUCENE-5791 and LUCENE-6672 (BasicOperations#determinize() performance)

2015-11-30 Thread Irfan Hamid
Lucene devs,

We are hitting performance problems when our customers issue pathological
wildcard queries. Searching the Lucene JIRA, I came across these two work
items, and unfortunately it seems like there's no easy way out. However, in
LUCENE-6672 David Causse has a couple of proposed solutions. I was wondering
whether either of those, or something similar, was integrated into the
codebase down the line?

If not, would the community be interested in a pull request if/when we fix
this in our fork and bake it in production for a while?

TIA,
Irfan.


[jira] [Commented] (SOLR-7940) [CollectionAPI] Frequent Cluster Status timeout

2015-11-30 Thread James Hardwick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032540#comment-15032540
 ] 

James Hardwick commented on SOLR-7940:
--

We are seeing this as well on a 3 node cluster w/ 2 collections. 

Looks like others are also, across a variety of versions: 
http://lucene.472066.n3.nabble.com/CLUSTERSTATUS-timeout-tp4173224.html

> [CollectionAPI] Frequent Cluster Status timeout
> ---
>
> Key: SOLR-7940
> URL: https://issues.apache.org/jira/browse/SOLR-7940
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
> Environment: Ubuntu on Azure
>Reporter: Stephan Lagraulet
>
> Very often we have a timeout when we call 
> http://server2:8080/solr/admin/collections?action=CLUSTERSTATUS&wt=json
> {code}
> {"responseHeader": 
> {"status": 500,
> "QTime": 180100},
> "error": 
> {"msg": "CLUSTERSTATUS the collection time out:180s",
> "trace": "org.apache.solr.common.SolrException: CLUSTERSTATUS the collection 
> time out:180s\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleClusterStatus(CollectionsHandler.java:640)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:220)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1338)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:350)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:630)\n\tat 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)\n\tat 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:77)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:606)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:46)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:603)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:538)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code": 500}}
> {code}
> The cluster has 3 Solr nodes with 6 small collections replicated on all nodes.
> We were using this API to monitor cluster state but it was failing every 10 
> minutes. We switched to using ZkStateReader in CloudSolrServer and it has 
> been working for a day without problems.
> Is there a kind of deadlock, as this call was being made on the three nodes 
> concurrently?






[jira] [Commented] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2015-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032559#comment-15032559
 ] 

Hoss Man commented on LUCENE-6914:
--

Failure produced by the attached test patch:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestDecimalDigitFilter -Dtests.method=testDoubleStruck 
-Dtests.seed=3126DECB8CE805E -Dtests.slow=true -Dtests.locale=ga 
-Dtests.timezone=Africa/Juba -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.03s | TestDecimalDigitFilter.testDoubleStruck <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
expected:<1[984]> but was:<1[ퟡ8ퟜ]>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([3126DECB8CE805E:92961DD9D4C68E38]:0)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:186)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:301)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:309)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:359)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:368)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkOneTerm(BaseTokenStreamTestCase.java:429)
   [junit4]>at 
org.apache.lucene.analysis.core.TestDecimalDigitFilter.testDoubleStruck(TestDecimalDigitFilter.java:74)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -
>
> Key: LUCENE-6914
> URL: https://issues.apache.org/jira/browse/LUCENE-6914
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Hoss Man
> Attachments: LUCENE-6914.patch
>
>
> Found this while writing up the Solr Ref Guide for DecimalDigitFilter. 
> With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
> double struck 9, 8, double struck 4). Add some non-decimal characters in 
> between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output 
> ("1x9x8x4"). This doesn't affect all decimal characters though, as evidenced 
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary, 
> shrink the string" code path?






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032471#comment-15032471
 ] 

David Smiley commented on LUCENE-6911:
--

Since you already backported, I suggest simply removing it from trunk.  I'm not 
sure I'd bother with a CHANGES.txt entry; it was marked deprecated, so people 
can expect it to disappear, not to mention I can't imagine *anyone* called it 
before, given its obvious uselessness.

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698
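For context on what "no-op" means here: the exact body of the flagged method is not reproduced in this thread, but the classic shape Coverity reports is a method that only rebinds a local parameter, so nothing escapes to the caller. A hypothetical illustration (class and field names other than getMultiFields are invented):

```java
// Hypothetical illustration of the no-op accessor pattern a static analyzer
// flags: writing through the parameter only rebinds the local reference, so
// the caller observes no effect.
public class NoOpSketch {
    private CharSequence[] multiFields = new CharSequence[] {"title", "body"};

    // Broken: reassigning the parameter is invisible to the caller.
    public void getMultiFields(CharSequence[] fields) {
        fields = this.multiFields; // dead store; the call does nothing
    }

    // Working replacement: return the value instead of mutating an argument.
    public CharSequence[] getMultiFieldsFixed() {
        return multiFields;
    }
}
```

A caller passing an array into the broken "getter" gets it back untouched, which is why removing the method outright (as suggested above) loses nothing.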






[jira] [Created] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2015-11-30 Thread Hoss Man (JIRA)
Hoss Man created LUCENE-6914:


 Summary: DecimalDigitFilter skips characters in some cases 
(supplemental?)
 Key: LUCENE-6914
 URL: https://issues.apache.org/jira/browse/LUCENE-6914
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.4
Reporter: Hoss Man


Found this while writing up the Solr Ref Guide for DecimalDigitFilter. 

With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
double struck 9, 8, double struck 4). Add some non-decimal characters in 
between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output ("1x9x8x4"). 
This doesn't affect all decimal characters though, as evidenced by the existing 
test cases.

Perhaps this is an off-by-one bug in the "if the original was supplementary, 
shrink the string" code path?






[jira] [Commented] (SOLR-7940) [CollectionAPI] Frequent Cluster Status timeout

2015-11-30 Thread James Hardwick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032552#comment-15032552
 ] 

James Hardwick commented on SOLR-7940:
--

Actually, we are consistently seeing this on any of a variety of instances we 
have, all of which are generally uniform in their configuration. 

I'd love to help if any of the Solr devs can point me in the right direction 
for doing any sort of diagnostics. 

> [CollectionAPI] Frequent Cluster Status timeout
> ---
>
> Key: SOLR-7940
> URL: https://issues.apache.org/jira/browse/SOLR-7940
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
> Environment: Ubuntu on Azure
>Reporter: Stephan Lagraulet
>
> Very often we have a timeout when we call 
> http://server2:8080/solr/admin/collections?action=CLUSTERSTATUS&wt=json
> {code}
> {"responseHeader": 
> {"status": 500,
> "QTime": 180100},
> "error": 
> {"msg": "CLUSTERSTATUS the collection time out:180s",
> "trace": "org.apache.solr.common.SolrException: CLUSTERSTATUS the collection 
> time out:180s\n\tat 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleClusterStatus(CollectionsHandler.java:640)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:220)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1338)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:350)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:630)\n\tat 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)\n\tat 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:77)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:606)\n\tat
>  
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:46)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:603)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:538)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code": 500}}
> {code}
> The cluster has 3 SolR nodes with 6 small collections replicated on all nodes.
> We were using this API to monitor cluster state but it was failing every 10 
> minutes. We switched to using ZkStateReader in CloudSolrServer and it has 
> been working for a day without problems.
> Is there a kind of deadlock, as this call was being made on the three nodes 
> concurrently?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1032 - Still Failing

2015-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1032/

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
QUERY FAILED: 
xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt447']
  request=/schema/fields?wt=xml
  response=[garbled XML schema dump elided: responseHeader values 0 and 172, 
  then a field listing including _root_ (string), _version_ (long), 
  constantField (tdouble), id (string), and newTestFieldInt0 through 
  newTestFieldInt199 (tlong); the listing is truncated and the expected field 
  newTestFieldInt447 is absent]
 

[jira] [Commented] (LUCENE-5868) JoinUtil support for NUMERIC docValues fields

2015-11-30 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031595#comment-15031595
 ] 

Martijn van Groningen commented on LUCENE-5868:
---

+1 this looks good. One small thing, maybe rename the parameter name in the 
protected `createJoinQuery(...)` method from `termsWithScoreCollector` to just 
`collector`?

> JoinUtil support for NUMERIC docValues fields 
> --
>
> Key: LUCENE-5868
> URL: https://issues.apache.org/jira/browse/LUCENE-5868
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-5868-lambdarefactoring.patch, 
> LUCENE-5868-lambdarefactoring.patch, LUCENE-5868.patch, LUCENE-5868.patch, 
> LUCENE-5868.patch, qtj.diff
>
>
> while polishing SOLR-6234 I found that JoinUtil can't join int dv fields at 
> least. 
> I plan to provide test/patch. It might be important, because Solr's join can 
> do that. Please vote if you care! 






[jira] [Commented] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-30 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032094#comment-15032094
 ] 

Alan Woodward commented on SOLR-8336:
-

Hm, that looks as though something hasn't been rebuilt - are you sure you're 
trying this from a totally clean checkout?

> CoreDescriptor instance directory should be a Path, not a String
> 
>
> Key: SOLR-8336
> URL: https://issues.apache.org/jira/browse/SOLR-8336
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> Next step in SOLR-8282






[jira] [Commented] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-30 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032127#comment-15032127
 ] 

Alan Woodward commented on SOLR-8336:
-

No worries, thanks for checking!

> CoreDescriptor instance directory should be a Path, not a String
> 
>
> Key: SOLR-8336
> URL: https://issues.apache.org/jira/browse/SOLR-8336
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> Next step in SOLR-8282






[jira] [Commented] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-30 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032087#comment-15032087
 ] 

Dennis Gove commented on SOLR-8336:
---

This patch appears to have broken the ability to create a new collection using 
bin/solr create -c

{code}
$ bin/solr/bin/solr create -c holders -d ~/dev/solr/bbdemo/data/solr/conf

Connecting to ZooKeeper at localhost:2181 ...
Uploading /Users/dgove1/dev/solr/bbdemo/data/solr/conf/conf for config holders 
to ZooKeeper at localhost:2181

Creating new collection 'holders' using command:
http://localhost:8983/solr/admin/collections?action=CREATE=holders=1=1=1=holders


ERROR: Failed to create collection 'holders' due to: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from 
server at http://10.16.81.28:8983/solr: Expected mime type 
application/octet-stream but got text/html. 


Error 500 
{msg=org.apache.solr.core.CoreDescriptor.getInstanceDir()Ljava/lang/String;,trace=java.lang.NoSuchMethodError:
 org.apache.solr.core.CoreDescriptor.getInstanceDir()Ljava/lang/String;
at 
org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:38)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:750)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:617)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:192)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:151)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:436)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:221)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:180)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
,code=500}

HTTP ERROR 500
Problem accessing /solr/admin/cores. Reason:

{msg=org.apache.solr.core.CoreDescriptor.getInstanceDir()Ljava/lang/String;,trace=java.lang.NoSuchMethodError:
 org.apache.solr.core.CoreDescriptor.getInstanceDir()Ljava/lang/String;
at 
org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:38)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:750)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:617)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:192)
at 

[jira] [Commented] (SOLR-8184) Negative tests for JDBC Connection String

2015-11-30 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032092#comment-15032092
 ] 

Kevin Risden commented on SOLR-8184:


[~susheel2...@gmail.com] Are the tests that were added here included in 
SOLR-8179?

The tests added with your patch would mean that a whole new Solr cluster is 
stood up for each test. In SOLR-8179 I added a new class specifically for 
testing the driver that doesn't require a Solr cluster to be up and running.

> Negative tests for JDBC Connection String
> -
>
> Key: SOLR-8184
> URL: https://issues.apache.org/jira/browse/SOLR-8184
> Project: Solr
>  Issue Type: Test
> Environment: Trunk
>Reporter: Susheel Kumar
>Priority: Minor
> Attachments: SOLR-8184.patch
>
>
> Ticket to track negative tests for JDBC connection string SOLR-7986






[jira] [Commented] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-30 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032126#comment-15032126
 ] 

Dennis Gove commented on SOLR-8336:
---

You're correct - it was a failure to properly clean the build. Apparently I 
fat-fingered 
{code}ant clean{code} 
to 
{code}ant c lean{code}

After going back and properly cleaning I am now seeing expected behavior. 

> CoreDescriptor instance directory should be a Path, not a String
> 
>
> Key: SOLR-8336
> URL: https://issues.apache.org/jira/browse/SOLR-8336
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> Next step in SOLR-8282






[jira] [Commented] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-30 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032100#comment-15032100
 ] 

Dennis Gove commented on SOLR-8336:
---

I believe so. I went through a full clean/rebuild. Though maybe something 
failed and I didn't notice it. I'll double check.

> CoreDescriptor instance directory should be a Path, not a String
> 
>
> Key: SOLR-8336
> URL: https://issues.apache.org/jira/browse/SOLR-8336
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> Next step in SOLR-8282






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031953#comment-15031953
 ] 

David Smiley commented on LUCENE-6911:
--

+1 for 5.4.  I suggest not even leaving around the deprecated method -- looks 
like it's completely erroneous and nobody would be calling this silly method.

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Updated] (LUCENE-6911) StandardQueryParser's getMultiFields method (found by "Coverity scan results of Lucene")

2015-11-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6911:

Summary: StandardQueryParser's getMultiFields method (found by "Coverity 
scan results of Lucene")  (was: deprecated/replace StandardQueryParser's 
getMultiFields method (found by "Coverity scan results of Lucene"))

> StandardQueryParser's getMultiFields method (found by "Coverity scan results 
> of Lucene")
> 
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Updated] (LUCENE-6911) deprecated/replace StandardQueryParser's getMultiFields method (found by "Coverity scan results of Lucene")

2015-11-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6911:

Description: 
problem summary:
* 
{{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
 fields)}} is a no-op

details:
* https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
(http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
 in July 2015:
** coverity CID 120698


  was:
https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
(http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
 in July 2015:
* coverity CID 120698



> deprecated/replace StandardQueryParser's getMultiFields method (found by 
> "Coverity scan results of Lucene")
> ---
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Updated] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6911:

Summary: StandardQueryParser's getMultiFields(CharSequence[] fields) method 
is a no-op  (was: StandardQueryParser's getMultiFields method (found by 
"Coverity scan results of Lucene"))

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031910#comment-15031910
 ] 

Yonik Seeley commented on SOLR-8220:


bq. From a performance perspective, reading values from DocValues always (if 
they exist) can be horrible because each field access in docvalues may need a 
random disk seek, whereas, all stored fields for a document are kept together 
and need only 1 random seek and a sequential block read.

A few points:
- stored fields also require decompression (more overhead)
- use of stored fields and docvalues at the same time is less memory efficient 
- the stored fields will also take up needed disk cache (although hopefully the 
OS will figure out which it should cache more aggressively)
- presumably one has docvalues because they need to be used, and they need to 
be fast... i.e. they already need to be cached.
- if one has a small set of fields that are normally retrieved, it seems like a 
win again.
- a *very* common case these days is that the entire index fits in memory.
- we're in the SSD era, and multiple "seeks" will still be more expensive if 
not cached, but much less so (and less so over time as non-volatile storage 
keeps improving)

It seems like this should be a big win for the common case, and the ability to 
reindex your data or change config and not have to change the clients is 
important IMO.  It's like being able to reindex a date to a trie-date and have 
the clients not care.  We can already reindex a field as docValues, and sort, 
facet, do analytics, without changing client requests.  Optimizations to field 
value retrieval (or optionally removing redundantly stored data) should be the 
same.


> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






SOLR-7885

2015-11-30 Thread Aaron LaBella
Lucene Developers,

Can someone please take a look at this patch at their earliest convenience
and apply it?

https://issues.apache.org/jira/browse/SOLR-7885

It's been out there since August 6th, and I'm trying to get it applied asap
so that it's part of Solr 5.4

Thanks.

Aaron


[jira] [Created] (SOLR-8350) Filters on Aggregate Data Subfacets

2015-11-30 Thread Pablo Anzorena (JIRA)
Pablo Anzorena created SOLR-8350:


 Summary: Filters on Aggregate Data Subfacets
 Key: SOLR-8350
 URL: https://issues.apache.org/jira/browse/SOLR-8350
 Project: Solr
  Issue Type: Improvement
Reporter: Pablo Anzorena
Priority: Minor


Hey, I have an idea that I'm pretty sure is not supported.

Let's assume the schema in solr has at least the following fields:
transaction_id,
product_id,
company,
price

Let's imagine we are Amazon, and we want to analyze the top 10 companies that 
have sold more than US$ 100,000,000. 
Nowadays, the filters are only applied to each solr record and not over 
aggregated data, so there is no way to achieve this (at least natively) from 
solr.
It is more like a BI Tool capability.

It would be nice to have this feature in the subfacets module, for example:
companies: {
  type: terms,
  field: company,
  limit: 10,
  offset: 0,
  sort: "price desc",
  facet: { price: "sum(price)" },
  aggfilter: "price > 100,000,000"
}

And it would be even better to support logic expressions in the "aggfilter" 
field. For example:
companies: {
  type: terms,
  field: company,
  limit: 10,
  offset: 0,
  sort: "price desc",
  facet: { price: "sum(price)" },
  aggfilter: "price > 100,000,000 OR other_measure < 100"
}
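Until something like the proposed "aggfilter" exists, a client can approximate 
it by filtering the returned facet buckets on the aggregated value. A minimal 
Python sketch, assuming a response in the JSON Facet API bucket shape (the 
company names and numbers here are hypothetical example data):

```python
import json

def filter_buckets(facet_response, facet_name, measure, threshold):
    """Keep only buckets whose aggregated measure exceeds the threshold."""
    buckets = facet_response["facets"][facet_name]["buckets"]
    return [b for b in buckets if b.get(measure, 0) > threshold]

# Hypothetical response fragment shaped like a JSON Facet API terms facet.
response = json.loads("""
{"facets": {"companies": {"buckets": [
    {"val": "acme",    "count": 120, "price": 250000000.0},
    {"val": "initech", "count": 80,  "price": 40000000.0}
]}}}
""")

top = filter_buckets(response, "companies", "price", 100000000)
print([b["val"] for b in top])  # -> ['acme']
```

This only works within the page of buckets the server returns, which is exactly 
why a server-side filter over aggregated data would be useful.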







[jira] [Commented] (LUCENE-6833) Upgrade morfologik to version 2.0.1, simplify MorfologikFilter's dictionary lookup

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031934#comment-15031934
 ] 

ASF subversion and git services commented on LUCENE-6833:
-

Commit 1717271 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1717271 ]

LUCENE-6833: maven build: add test-files/ to morfologik pom

> Upgrade morfologik to version 2.0.1, simplify MorfologikFilter's dictionary 
> lookup
> --
>
> Key: LUCENE-6833
> URL: https://issues.apache.org/jira/browse/LUCENE-6833
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6833.patch, LUCENE-6833.patch
>
>
> This is a follow-up to Uwe's work on LUCENE-6774. 
> This patch updates the code to use Morfologik stemming version 2.0.1, which 
> removes the "automatic" lookup of classpath-relative dictionary resources in 
> favor of an explicit InputStream or URL. So the user code is explicitly 
> responsible to provide these resources, reacting to missing files, etc.
> There were no other "default" dictionaries in Morfologik other than the 
> Polish dictionary so I also cleaned up the filter code from a number of 
> attributes that were, to me, confusing. 
> * {{MorfologikFilterFactory}} now accepts an (optional) {{dictionary}} 
> attribute which contains an explicit name of the dictionary resource to load. 
> The resource is loaded with a {{ResourceLoader}} passed to the {{inform(..)}} 
> method, so the final location depends on the resource loader.
> * There is no way to load the dictionary and metadata separately (this isn't 
> at all useful).
> * If the {{dictionary}} attribute is missing, the filter loads the Polish 
> dictionary by default (since most people would be using Morfologik for 
> stemming Polish anyway).
> This patch is *not* backward compatible, but it attempts to provide useful 
> feedback on initialization: if the removed attributes were used, it points at 
> this JIRA issue, so it should be clear what to change and how.
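The described factory usage can be sketched as a field type definition in 
schema.xml; the dictionary resource path below is a hypothetical example, not 
from the patch:

```xml
<fieldType name="text_pl" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- "dictionary" is optional; if omitted, the Polish dictionary is loaded
         by default, per the issue description above -->
    <filter class="solr.MorfologikFilterFactory" dictionary="morfologik/custom.dict"/>
  </analyzer>
</fieldType>
```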






[jira] [Commented] (LUCENE-6833) Upgrade morfologik to version 2.0.1, simplify MorfologikFilter's dictionary lookup

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031943#comment-15031943
 ] 

ASF subversion and git services commented on LUCENE-6833:
-

Commit 1717272 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717272 ]

LUCENE-6833: maven build: add test-files/ to morfologik pom

> Upgrade morfologik to version 2.0.1, simplify MorfologikFilter's dictionary 
> lookup
> --
>
> Key: LUCENE-6833
> URL: https://issues.apache.org/jira/browse/LUCENE-6833
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6833.patch, LUCENE-6833.patch
>
>
> This is a follow-up to Uwe's work on LUCENE-6774. 
> This patch updates the code to use Morfologik stemming version 2.0.1, which 
> removes the "automatic" lookup of classpath-relative dictionary resources in 
> favor of an explicit InputStream or URL. So the user code is explicitly 
> responsible to provide these resources, reacting to missing files, etc.
> There were no other "default" dictionaries in Morfologik other than the 
> Polish dictionary so I also cleaned up the filter code from a number of 
> attributes that were, to me, confusing. 
> * {{MorfologikFilterFactory}} now accepts an (optional) {{dictionary}} 
> attribute which contains an explicit name of the dictionary resource to load. 
> The resource is loaded with a {{ResourceLoader}} passed to the {{inform(..)}} 
> method, so the final location depends on the resource loader.
> * There is no way to load the dictionary and metadata separately (this isn't 
> at all useful).
> * If the {{dictionary}} attribute is missing, the filter loads the Polish 
> dictionary by default (since most people would be using Morfologik for 
> stemming Polish anyway).
> This patch is *not* backward compatible, but it attempts to provide useful 
> feedback on initialization: if the removed attributes were used, it points at 
> this JIRA issue, so it should be clear what to change and how.






[jira] [Commented] (SOLR-8349) Allow sharing of large in memory data structures across cores

2015-11-30 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032007#comment-15032007
 ] 

Gus Heck commented on SOLR-8349:


Further background for this feature... a precursor version of this patch (which 
does not have the interface and thus can't fix SOLR-3443) is in use at a client 
where we had a ~900MB hash map for looking up lat/lon from custom query 
parameters. This map was needed by all our cores, so sharing it is anticipated 
to save >35GB of RAM, since 40+ cores will live on a machine. The original 
implementation of this lat/lon lookup feature for the client attempted to use a 
static field, but the independent class loaders (each core gets its own class 
loader) loaded fresh copies of the class, each with its own static map.

It's worth noting that analyzers such as the hunspell one in SOLR-3443 are not 
loaded by the cores' class loaders, and the excess memory there is held in a 
member field per instance, so a solution based on a static variable would be 
possible there. I thought it was better to provide a uniform solution.

Another possible follow-on feature (or perhaps enhancement to this one) would 
be a means of reference counting the shared resources and removing them. With 
the present (initial) patch, a long-running Solr instance where lots of cores 
are added and removed could accumulate unused container resources (though they 
would become used again, with no loading time, if a core requiring them were 
re-installed). I didn't go into the complexity of removal because I wasn't sure 
it would be deemed necessary.


> Allow sharing of large in memory data structures across cores
> -
>
> Key: SOLR-8349
> URL: https://issues.apache.org/jira/browse/SOLR-8349
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Affects Versions: 5.3
>Reporter: Gus Heck
> Attachments: SOLR-8349.patch
>
>
> In some cases search components or analysis classes may utilize a large 
> dictionary or other in-memory structure. When multiple cores are loaded with 
> identical configurations utilizing this large in memory structure, each core 
> holds its own copy in memory. This has been noted in the past and a specific 
> case reported in SOLR-3443. This patch provides a generalized capability, and 
> if accepted, this capability will then be used to fix SOLR-3443.






Re: 5.4 branch created, feature freeze in place (was Re: A 5.4 release?)

2015-11-30 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Fair point, I have re-worded the ticket title and summary. Will await further 
opinions before applying or not applying the patch to 5.4 branch (and trunk and 
branch_5x).

Christine

From: dev@lucene.apache.org At: Nov 30 2015 14:29:58
To: dev@lucene.apache.org
Subject: Re: 5.4 branch created, feature freeze in place (was Re: A 5.4 
release?)


The patch seems innocuous enough. From the ticket, though, it isn't clear to 
me what problem it solves. I'm open to the opinion of others.
 
  
Upayavira
 
  
On Mon, Nov 30, 2015, at 12:09 PM, Christine Poerschke (BLOOMBERG/ LONDON) 
wrote:
 
Any thoughts on getting the https://issues.apache.org/jira/browse/LUCENE-6911 
fix into 5.4 solr or not? Basically StandardQueryParser's existing 
getMultiFields method is a no-op.
 
  
From: dev@lucene.apache.org At: Nov 25 2015 14:11:46
 
To: dev@lucene.apache.org
 
Subject: Re:5.4 branch created, feature freeze in place (was Re: A 5.4 release?)
 

I have created the lucene_solr_5_4 branch. Please, no new features in this 
branch. 
 
  
Please update this thread with any changes you propose to make to this branch. 
Only JIRA tickets which are a blocker and have fix version 5.4 will delay a 
release candidate build.
 
  
Please do review the below - and take any action to clear up these tickets asap.
 
  
I expect to create the first RC this time next week.
 
  
Thanks!
 
  
Upayavira
 
  
On Wed, Nov 25, 2015, at 02:05 PM, Upayavira wrote:
 
I shall shortly create the 5.4 release branch. From this moment, the feature 
freeze starts.
 
  
Looking through JIRA, I see some 71 tickets assigned to fix version 5.4. I 
suspect we won't be able to fix all 71 in one week, so I expect that the 
majority will be pushed, after this release, to 5.5.
 
  
Looking for blockers or critical tickets, I see five tickets:
 
  
https://issues.apache.org/jira/browse/SOLR-8326 (Anusham, Noble) blocker
 
  "Adding read restriction to BasicAuth + RuleBased authorization causes issue 
with replication"
 
  
  Anusham/Noble - any thoughts on how to resolve this before the release?
 
  
https://issues.apache.org/jira/browse/SOLR-8035 (Erik) critical 
 
  "Move solr/webapp to solr/server/solr-webapp" 
 
  
  This one I know isn't a blocker in any sense.
 
  
https://issues.apache.org/jira/browse/SOLR-7901 (Erik) critical
 
  "Add tests for bin/post"
 
  
  Again, this one does not seem to be worthy of holding back a release.
 
  
https://issues.apache.org/jira/browse/LUCENE-6723 (Uwe) critical
 
  "Date field problems using ExtractingRequestHandler and java 9 (b71)"
 
  
  Uwe, I presume as this relates to Java 9, it isn't a blocker?
 
  
https://issues.apache.org/jira/browse/LUCENE-6722 (Shalin, others), blocker
 
  "Java 8 as the minimum supported JVM version for branch_5x"
 
  
  Looking at the discussion, there was no consensus here, so I will not 
consider this a blocker either.
 
  
  - o -
 
  
So SOLR-8326 and LUCENE-6723 seem to be the ones worthy of attention. Anyone 
have comments/observations here?
 
  
I will create the branch shortly.
 
  
Upayavira

 

   


[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031935#comment-15031935
 ] 

Shalin Shekhar Mangar commented on SOLR-8220:
-

bq. It seems like this should be a big win for the common case, and the ability 
to reindex your data or change config and not have to change the clients is 
important IMO.

It sounds like you are arguing for a common way to access docvalues and stored 
fields using the 'fl' parameter. I'm +1 to that.

But are you also arguing for always loading fields from docvalues even if they 
are stored?

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Commented] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031964#comment-15031964
 ] 

David Smiley commented on SOLR-8180:


Thanks for your input Uwe.  So this explains why the maven build always seems 
to pass on Jenkins.  Perhaps we should have some sort of annotation to flag 
that a test should be skipped during a maven build.

Anyway, I plan to commit this patch tomorrow if I don't get any further 
feedback to the contrary.

> Missing logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: David Smiley
> Fix For: 5.4
>
> Attachments: SOLR-8180.patch, SOLR_8180_jcl_over_slf4j.patch, 
> SOLR_8180_jcl_over_slf4j.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional logging dependency must be added otherwise the following 
> exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[jira] [Commented] (LUCENE-6875) New Serbian Filter

2015-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032233#comment-15032233
 ] 

Hoss Man commented on LUCENE-6875:
--

Nikola: huge thank you for creating that Solr wiki page - very helpful for 
understanding the pros/cons of the different approaches.

> New Serbian Filter
> --
>
> Key: LUCENE-6875
> URL: https://issues.apache.org/jira/browse/LUCENE-6875
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Nikola Smolenski
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6875.patch, Lucene-Serbian-Regular-1.patch
>
>
> This is a new Serbian filter that works with regular Latin text (the current 
> filter works with "bald" Latin). I described in detail what it does and 
> why it is necessary on the wiki.






[jira] [Updated] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-11-30 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8230:
--
Attachment: SOLR-8230.patch

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032250#comment-15032250
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717303 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1717303 ]

LUCENE-6911: Add correct StandardQueryParser.getMultiFields() method, deprecate 
no-op StandardQueryParser.getMultiFields(CharSequence[]) method.

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Created] (SOLR-8351) Improve HdfsDirectory toString representation

2015-11-30 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8351:
---

 Summary: Improve HdfsDirectory toString representation
 Key: SOLR-8351
 URL: https://issues.apache.org/jira/browse/SOLR-8351
 Project: Solr
  Issue Type: Improvement
Reporter: Mike Drob
 Fix For: Trunk


HdfsDirectory's toString is used in logging by the DeletionPolicy and 
SnapPuller (and probably others). It would be useful to match what FSDirectory 
does, and print the directory it refers to.






[jira] [Updated] (SOLR-8351) Improve HdfsDirectory toString representation

2015-11-30 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8351:

Attachment: SOLR-8351.patch

Patch that adds the directory path to HdfsDirectory's toString. Also changes 
the LockFactory part to use the simple class name instead of the full class 
name.
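The change described might look roughly like this standalone sketch, which mirrors what FSDirectory prints. Field names here are stand-ins for the real HdfsDirectory members; this is not the actual patch:

```java
/**
 * Standalone illustration of the toString change described above:
 * include the directory path (as FSDirectory does) and the lock
 * factory's simple class name. Field names are stand-ins for the
 * real HdfsDirectory members.
 */
public class HdfsDirectoryToStringSketch {
    private final String hdfsDirPath;
    private final Object lockFactory;

    public HdfsDirectoryToStringSketch(String hdfsDirPath, Object lockFactory) {
        this.hdfsDirPath = hdfsDirPath;
        this.lockFactory = lockFactory;
    }

    @Override
    public String toString() {
        // e.g. "HdfsDirectoryToStringSketch@hdfs://nn/solr/core1 lockFactory=..."
        return getClass().getSimpleName() + "@" + hdfsDirPath
            + " lockFactory=" + lockFactory.getClass().getSimpleName();
    }
}
```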

> Improve HdfsDirectory toString representation
> -
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032230#comment-15032230
 ] 

Shalin Shekhar Mangar commented on SOLR-8220:
-

bq. On a different note, if we are going to tackle only the non-stored 
docValues fields for now in this issue, does it now make sense to do this, 
performance wise, at the DocTransformer instead of the SolrIndexSearcher?

I don't think there's any difference performance-wise. Changes to DocStreamer 
should be enough, as it is called only when writing the response, not for the 
entire result-set.

bq. At this point the question that remains; should we move forward with these 
patches and move logic for retrieving dv fields to SolrIndexSearcher, leaving 
out *, *_foo and other optimizations for now? i.e. retrieve fields by name, if 
they exist in dv, but are not stored.

+1 let's create a patch to retrieve fields by name, if they exist in dv, but 
are not stored. I also like Yonik's idea of bumping the schema version to have 
fl=* return all fields (stored + non-stored docvalues) in 5.x and to include 
both by default in trunk (6.x). So +1 to that as well.

bq. a very common case these days is that the entire index fits in memory.

I propose a middle ground. Let's use Lucene's spinning-disk utility method: 
prefer docvalues if we detect an SSD, and fall back to reading from stored 
fields otherwise. Let's discuss this optimization in SOLR-8344 and keep the 
two issues separate.
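The per-field selection rule being discussed could be expressed as a small decision function like the sketch below. The names are hypothetical, and in practice the SSD check would come from Lucene's spinning-disk detection utility rather than a boolean flag:

```java
/**
 * Sketch of the per-field value-source selection rule discussed above.
 * Hypothetical names; the real SSD detection would use Lucene's
 * spinning-disk utility, not a caller-supplied flag.
 */
public class FieldValueSourceChooser {
    public enum Source { STORED, DOC_VALUES }

    public static Source choose(boolean stored, boolean docValues, boolean onSsd) {
        if (!stored && !docValues) {
            throw new IllegalArgumentException("field is neither stored nor docValues");
        }
        if (stored && docValues) {
            // middle ground: prefer docvalues on an SSD, stored fields on spinning disk
            return onSsd ? Source.DOC_VALUES : Source.STORED;
        }
        // only one representation exists, so use it
        return docValues ? Source.DOC_VALUES : Source.STORED;
    }
}
```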

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-11-30 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032244#comment-15032244
 ] 

Michael Sun commented on SOLR-8230:
---

Just uploaded a new patch. It builds a separate tree structure, during facet 
processing, holding only the facet telemetry information; the facet context 
itself is no longer retained after each facet is processed, as it was before.
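A telemetry tree kept separately from the facet context might look something like this minimal sketch (class and method names are hypothetical; this is not the SOLR-8230 patch itself):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of a telemetry tree built alongside facet processing,
 * so only the small debug payload outlives the facet contexts.
 * Hypothetical names; not the actual SOLR-8230 patch.
 */
public class FacetDebugNode {
    private final String facetName;
    private final Map<String, Object> info = new LinkedHashMap<>();
    private final List<FacetDebugNode> children = new ArrayList<>();

    public FacetDebugNode(String facetName) { this.facetName = facetName; }

    public void putInfo(String key, Object value) { info.put(key, value); }

    public FacetDebugNode addChild(String name) {
        FacetDebugNode child = new FacetDebugNode(name);
        children.add(child);
        return child;
    }

    /** Render the tree as a nested map suitable for a response debug field. */
    public Map<String, Object> toDebugMap() {
        Map<String, Object> out = new LinkedHashMap<>(info);
        for (FacetDebugNode c : children) {
            out.put(c.facetName, c.toDebugMap());
        }
        return out;
    }
}
```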


> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.






[jira] [Commented] (SOLR-8351) Improve HdfsDirectory toString representation

2015-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032287#comment-15032287
 ] 

Mike Drob commented on SOLR-8351:
-

It occurs to me that there might be some value in providing another abstract 
directory that both FSDirectory and HdfsDirectory could inherit from, taking 
care of the functionality they have in common. That's a more invasive fix, 
though, and I'm not sure whether it would significantly impact usage of either 
of them.

> Improve HdfsDirectory toString representation
> -
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-11-30 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032175#comment-15032175
 ] 

Michael Sun commented on SOLR-8230:
---

Thanks [~yo...@apache.org]. It's really good insight to point out the potential 
memory issue. Let me fix it. It can be fixed either by building a separate tree 
for the telemetry information only (a small amount of data) or by nulling out 
the unused fields in the sub-context once it's used.


> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032375#comment-15032375
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717314 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1717314 ]

LUCENE-6911: correcting attribution (Mikhail suggested returning getter in 
LUCENE-6910, thank you)

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Commented] (LUCENE-6910) fix 2 interesting and 2 trivial issues found by "Coverity scan results of Lucene"

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032376#comment-15032376
 ] 

ASF subversion and git services commented on LUCENE-6910:
-

Commit 1717314 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1717314 ]

LUCENE-6911: correcting attribution (Mikhail suggested returning getter in 
LUCENE-6910, thank you)

> fix 2 interesting and 2 trivial issues found by "Coverity scan results of 
> Lucene"
> -
>
> Key: LUCENE-6910
> URL: https://issues.apache.org/jira/browse/LUCENE-6910
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6910.patch, LUCENE-6910.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> * coverity CID 119973
> * coverity CID 120040
> * coverity CID 120081
> * coverity CID 120628






[jira] [Assigned] (SOLR-8351) Improve HdfsDirectory toString representation

2015-11-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan reassigned SOLR-8351:


Assignee: Gregory Chanan

> Improve HdfsDirectory toString representation
> -
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.






[jira] [Created] (SOLR-8352) randomise unload order in UnloadDistributedZkTest.testUnloadShardAndCollection

2015-11-30 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8352:
-

 Summary: randomise unload order in 
UnloadDistributedZkTest.testUnloadShardAndCollection
 Key: SOLR-8352
 URL: https://issues.apache.org/jira/browse/SOLR-8352
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke









[jira] [Updated] (SOLR-8352) randomise unload order in UnloadDistributedZkTest.testUnloadShardAndCollection

2015-11-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8352:
--
Attachment: SOLR-8352.patch

attaching proposed patch against trunk

> randomise unload order in UnloadDistributedZkTest.testUnloadShardAndCollection
> --
>
> Key: SOLR-8352
> URL: https://issues.apache.org/jira/browse/SOLR-8352
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-8352.patch
>
>







[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032394#comment-15032394
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717316 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717316 ]

LUCENE-6911: Add correct StandardQueryParser.getMultiFields() method, deprecate 
no-op StandardQueryParser.getMultiFields(CharSequence[]) method. (merge in 
revision 1717303 from trunk)
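The shape of the bug and of the fix can be shown with a minimal sketch (illustrative names, not the Lucene source): a "getter" that only reassigns its array parameter can never be observed by the caller.

```java
// Minimal illustration of the LUCENE-6911 pattern; not the Lucene
// source, just a demonstration of why the old signature is a no-op.
class MultiFieldsHolder {
    private CharSequence[] multiFields;

    void setMultiFields(CharSequence[] fields) {
        this.multiFields = fields;
    }

    // Broken pattern: reassigning a parameter is invisible to the
    // caller, so this method does nothing useful -- hence deprecation.
    @Deprecated
    void getMultiFields(CharSequence[] fields) {
        fields = this.multiFields; // no effect outside this method
    }

    // Corrected pattern: return the value instead.
    CharSequence[] getMultiFields() {
        return this.multiFields;
    }
}
```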

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032402#comment-15032402
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717317 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717317 ]

LUCENE-6911: correcting attribution (merge in revision 1717314 from trunk)

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032409#comment-15032409
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717318 from [~cpoerschke] in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1717318 ]

LUCENE-6911: Add correct StandardQueryParser.getMultiFields() method, deprecate 
no-op StandardQueryParser.getMultiFields(CharSequence[]) method. (merge in 
revision 1717316 from branch_5x)

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032412#comment-15032412
 ] 

ASF subversion and git services commented on LUCENE-6911:
-

Commit 1717319 from [~cpoerschke] in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1717319 ]

LUCENE-6911: correcting attribution (merge in revision 1717317 from branch_5x)

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Updated] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-11-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7339:

Attachment: SOLR-7339.patch

Patch to upgrade trunk to Jetty 9.3.6.v20151106

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
> Attachments: SOLR-7339.patch, SOLR-7339.patch
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?
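The multiplexing point can be made concrete without any Jetty API: HTTP/2 distinguishes concurrent requests on one TCP connection by stream ID, and per RFC 7540 client-initiated streams use odd identifiers. A toy model (not Jetty code):

```java
// Toy model of HTTP/2 stream allocation (RFC 7540, section 5.1.1),
// not Jetty's API: many logical requests share one connection, each
// with its own stream ID, so one slow response need not block others.
class Http2StreamIds {
    private int nextClientStreamId = 1; // client-initiated streams are odd

    int openStream() {
        int id = nextClientStreamId;
        nextClientStreamId += 2;
        return id;
    }
}
```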






[jira] [Created] (SOLR-8353) Support regex for skipping license checksums

2015-11-30 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8353:


 Summary: Support regex for skipping license checksums
 Key: SOLR-8353
 URL: https://issues.apache.org/jira/browse/SOLR-8353
 Project: Solr
  Issue Type: Improvement
  Components: Build
Reporter: Gregory Chanan


It would be useful to be able to specify a regex for license checksums to skip 
in the build.  Currently there are only two supported values:
1) skipChecksum (i.e. regex=*)
2) skipSnapshotsChecksum (i.e. regex=*-SNAPSHOT-*)

A regex would be more flexible and allow testing the entire build while 
skipping a more limited set of checksums, e.g.:
a) an individual library (i.e. regex=joda-time*)
b) a suite of libraries (i.e. regex=hadoop*)

We could make skipChecksum and skipSnapshotsChecksum continue to work for 
backwards compatibility reasons.
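A sketch of the matching side of the proposal (the wiring into the Ant build is omitted, and the class name and the regex translations of the old flags are assumptions, not the actual build code):

```java
// Hedged sketch of the proposed behavior, not the actual build code:
// a single regex decides which jar names skip the checksum check.
// The old flags would roughly translate to ".*" (skipChecksum) and
// ".*-SNAPSHOT.*" (skipSnapshotsChecksum).
import java.util.regex.Pattern;

class ChecksumSkipper {
    private final Pattern skip;

    ChecksumSkipper(String regex) {
        this.skip = Pattern.compile(regex);
    }

    boolean shouldSkip(String jarName) {
        return skip.matcher(jarName).matches();
    }
}
```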






[jira] [Commented] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032425#comment-15032425
 ] 

Christine Poerschke commented on LUCENE-6911:
-

Change committed, deprecating the broken no-op getter.

How would removal of the deprecated method work, just "remove ... method" 
instead of "deprecate ... method" in the API Changes section of CHANGES.txt and 
commit/merge as usual? Or should removal be done only for {{trunk}} but not 
{{branch_5x}}?

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[jira] [Updated] (LUCENE-6911) StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op

2015-11-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6911:

Fix Version/s: 5.4
   Trunk

> StandardQueryParser's getMultiFields(CharSequence[] fields) method is a no-op
> -
>
> Key: LUCENE-6911
> URL: https://issues.apache.org/jira/browse/LUCENE-6911
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6911.patch
>
>
> problem summary:
> * 
> {{lucene.queryparser.flexible.standard.StandardQueryParser.getMultiFields(CharSequence[]
>  fields)}} is a no-op
> details:
> * https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> ** coverity CID 120698






[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b93) - Build # 14790 - Still Failing!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14790/
Java: 32bit/jdk1.9.0-ea-b93 -server -XX:+UseG1GC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 53298 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:648: The following 
files are missing svn:eol-style (or binary svn:mime-type):
* 
./lucene/test-framework/src/java/org/apache/lucene/index/BaseTestCheckIndex.java
* ./solr/core/src/java/org/apache/solr/index/hdfs/CheckHdfsIndex.java
* ./solr/core/src/java/org/apache/solr/index/hdfs/package-info.java
* ./solr/core/src/test/org/apache/solr/index/hdfs/CheckHdfsIndexTest.java

Total time: 62 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Assigned] (SOLR-8175) Wordbreak spellchecker throws IOOBE with Occur.MUST term

2015-11-30 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reassigned SOLR-8175:


Assignee: James Dyer

> Wordbreak spellchecker throws IOOBE with Occur.MUST term
> 
>
> Key: SOLR-8175
> URL: https://issues.apache.org/jira/browse/SOLR-8175
> Project: Solr
>  Issue Type: Bug
>Reporter: Ryan Josal
>Assignee: James Dyer
> Attachments: solr8175.patch
>
>
> Using the WordBreakSolrSpellChecker, if a user queries for "+foo barbaz" and 
> "bar baz" is a suggestion for "barbaz", Solr will throw an 
> IndexOutOfBoundsException.  As a result, a server driven by user queries 
> might throw a certain percentage of HTTP 500 responses as users hit this.






[jira] [Commented] (SOLR-8175) Wordbreak spellchecker throws IOOBE with Occur.MUST term

2015-11-30 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032616#comment-15032616
 ] 

James Dyer commented on SOLR-8175:
--

[~rjosal] Thanks for the bug report and the patch, especially for the unit 
test.  It's been a while since I've committed anything, but I'll work on it 
tomorrow.

> Wordbreak spellchecker throws IOOBE with Occur.MUST term
> 
>
> Key: SOLR-8175
> URL: https://issues.apache.org/jira/browse/SOLR-8175
> Project: Solr
>  Issue Type: Bug
>Reporter: Ryan Josal
> Attachments: solr8175.patch
>
>
> Using the WordBreakSolrSpellChecker, if a user queries for "+foo barbaz" and 
> "bar baz" is a suggestion for "barbaz", Solr will throw an 
> IndexOutOfBoundsException.  As a result, a server driven by user queries 
> might throw a certain percentage of HTTP 500 responses as users hit this.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2850 - Failure!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2850/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.processor.TestNamedUpdateProcessors.test

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([581EDB9176B90721:D04AE44BD8456AD9]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.update.processor.TestNamedUpdateProcessors.test(TestNamedUpdateProcessors.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032898#comment-15032898
 ] 

ASF subversion and git services commented on SOLR-7928:
---

Commit 1717367 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717367 ]

SOLR-7928: Add eol-style native

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.
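The design point — checking logic programmed against a directory abstraction rather than a local filesystem path — can be sketched with stdlib-only stand-ins; these types are illustrative, not Lucene's Directory API:

```java
// Stdlib-only sketch of the idea behind SOLR-7928: let the checking
// entry point accept any directory abstraction, so an HDFS-backed
// implementation plugs in without copying the index to local disk.
// These types are illustrative stand-ins, not Lucene's Directory API.
import java.util.List;

interface IndexDirectory {
    List<String> listFiles();
}

class IndexChecker {
    // Works for any IndexDirectory: local FS, HDFS, in-memory, ...
    static boolean looksLikeIndex(IndexDirectory dir) {
        return dir.listFiles().stream().anyMatch(f -> f.startsWith("segments"));
    }
}
```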






[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032917#comment-15032917
 ] 

ASF subversion and git services commented on SOLR-7928:
---

Commit 1717368 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717368 ]

SOLR-7928: Add eol-style native

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.






Re: [JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b93) - Build # 14790 - Still Failing!

2015-11-30 Thread Greg Chanan
Will fix in a few minutes.

Greg

On Mon, Nov 30, 2015 at 5:32 PM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14790/
> Java: 32bit/jdk1.9.0-ea-b93 -server -XX:+UseG1GC -XX:-CompactStrings
>
> All tests passed
>
> Build Log:
> [...truncated 53298 lines...]
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:648: The following
> files are missing svn:eol-style (or binary svn:mime-type):
> *
> ./lucene/test-framework/src/java/org/apache/lucene/index/BaseTestCheckIndex.java
> * ./solr/core/src/java/org/apache/solr/index/hdfs/CheckHdfsIndex.java
> * ./solr/core/src/java/org/apache/solr/index/hdfs/package-info.java
> * ./solr/core/src/test/org/apache/solr/index/hdfs/CheckHdfsIndexTest.java
>
> Total time: 62 minutes 56 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>


[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2904 - Failure!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2904/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62017 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:775: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:655: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:638: The following 
files are missing svn:eol-style (or binary svn:mime-type):
* 
./lucene/test-framework/src/java/org/apache/lucene/index/BaseTestCheckIndex.java
* ./solr/core/src/java/org/apache/solr/index/hdfs/CheckHdfsIndex.java
* ./solr/core/src/java/org/apache/solr/index/hdfs/package-info.java
* ./solr/core/src/test/org/apache/solr/index/hdfs/CheckHdfsIndexTest.java

Total time: 99 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b93) - Build # 15084 - Still Failing!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15084/
Java: 32bit/jdk1.9.0-ea-b93 -server -XX:+UseSerialGC -XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=5713, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[7A0262B23FD3A113]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
2) Thread[id=5968, name=zkCallback-1208-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=5714, 
name=zkCallback-1208-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=5969, 
name=zkCallback-1208-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=5712, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[7A0262B23FD3A113]-SendThread(127.0.0.1:58937),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=5713, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[7A0262B23FD3A113]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   2) Thread[id=5968, name=zkCallback-1208-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 

[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032641#comment-15032641
 ] 

ASF subversion and git services commented on SOLR-7928:
---

Commit 1717340 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717340 ]

SOLR-7928: Improve CheckIndex to work against HdfsDirectory

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032652#comment-15032652
 ] 

Gregory Chanan commented on SOLR-7928:
--

Committed to 5.5 and trunk, thanks Mike!

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6737) Add DecimalDigitFilter

2015-11-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032673#comment-15032673
 ] 

Uwe Schindler commented on LUCENE-6737:
---

Just an idea: we could expand the autogenerated UnicodeData.java to ICU-extracted 
digits, like UnicodeWhitespaceTokenizer? Just in case the Java data is 
strange (although I think it is a bug in the filter, as Hoss said).

> Add DecimalDigitFilter
> --
>
> Key: LUCENE-6737
> URL: https://issues.apache.org/jira/browse/LUCENE-6737
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Robert Muir
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6737.patch
>
>
> TokenFilter that folds all unicode digits 
> (http://unicode.org/cldr/utility/list-unicodeset.jsp?a=[:General_Category=Decimal_Number:])
>  to 0-9.
> Historically a lot of the impacted analyzers couldn't even tokenize numbers 
> at all, but now they use standardtokenizer for numbers/alphanum tokens. But 
> its usually the case you will find e.g. a mix of both ascii digits and 
> "native" digits, and today that makes searching difficult.
> Note this only impacts *decimal* digits, hence the name DecimalDigitFilter. 
> So no processing of chinese numerals or anything crazy like that.
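The folding described above can be sketched with nothing but the JDK; this is a stdlib-only illustration of the idea, not the actual Lucene filter code. It maps every Unicode decimal digit (General_Category=Nd) to its ASCII 0-9 equivalent and passes everything else through, iterating by code point so supplementary digits (which occupy two chars) are handled:

```java
// Stdlib-only sketch of decimal-digit folding (not the Lucene implementation).
class FoldDigits {
    static String fold(String in) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < in.length()) {
            int cp = in.codePointAt(i);
            int v = Character.digit(cp, 10);  // -1 when cp has no digit value
            if (v >= 0 && Character.getType(cp) == Character.DECIMAL_DIGIT_NUMBER) {
                out.append((char) ('0' + v)); // decimal digit: emit its ASCII form
            } else {
                out.appendCodePoint(cp);      // non-digit: pass through unchanged
            }
            i += Character.charCount(cp);     // 2 for supplementary code points
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Arabic-Indic seven and eight (U+0667, U+0668) mixed with ASCII
        System.out.println(fold("\u0667x\u0668"));  // prints "7x8"
    }
}
```

As the description says, only Nd characters are touched; Chinese numerals and other No/Nl characters fall into the pass-through branch.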



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8351) Improve HdfsDirectory and HdfsLock toString representation

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032668#comment-15032668
 ] 

ASF subversion and git services commented on SOLR-8351:
---

Commit 1717344 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717344 ]

SOLR-8351: Improve HdfsDirectory toString representation

> Improve HdfsDirectory and HdfsLock toString representation
> --
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.
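A hypothetical, stdlib-only sketch of the kind of toString the issue asks for, loosely modeled on FSDirectory's style (the class and field names here are made up for illustration): name the concrete class and the path the directory points at, so log lines from the deletion policy and replication identify which directory they refer to.

```java
// Hypothetical sketch: a directory wrapper whose toString names the class
// and the path it points at, so log output is self-identifying.
class HdfsDirToString {
    private final String hdfsDirPath;  // assumed field; not the real HdfsDirectory API

    HdfsDirToString(String hdfsDirPath) {
        this.hdfsDirPath = hdfsDirPath;
    }

    @Override
    public String toString() {
        return getClass().getSimpleName() + "@" + hdfsDirPath;
    }

    public static void main(String[] args) {
        System.out.println(new HdfsDirToString("hdfs://nn:8020/solr/collection1/index"));
    }
}
```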



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8351) Improve HdfsDirectory and HdfsLock toString representation

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032670#comment-15032670
 ] 

ASF subversion and git services commented on SOLR-8351:
---

Commit 1717345 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717345 ]

SOLR-8351: Improve HdfsDirectory toString representation

> Improve HdfsDirectory and HdfsLock toString representation
> --
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8354) RecoveryStrategy retry timing is innaccurate

2015-11-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032748#comment-15032748
 ] 

Ishan Chattopadhyaya commented on SOLR-8354:


Should we not, instead, change the comment to reflect what the current behavior 
is? If the existing timeouts are working well, I don't think we should change 
the behavior without a real reason.

> RecoveryStrategy retry timing is innaccurate
> 
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.
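The behavior the comment promises can be sketched as a simple doubling backoff; this is an illustrative, stdlib-only sketch of "start at 1 sec and work up to a min", not the actual RecoveryStrategy code (which, per the report, instead starts near 10 seconds and grows to 5 minutes):

```java
// Sketch of the backoff schedule the comment describes: start at 1 second,
// double on each retry, cap at 60 seconds.
class Backoff {
    static long[] waits(int retries) {
        long[] out = new long[retries];
        long waitSec = 1;                         // "start at 1 sec"
        for (int i = 0; i < retries; i++) {
            out[i] = waitSec;
            waitSec = Math.min(waitSec * 2, 60);  // "work up to a min"
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(waits(8)));
        // prints [1, 2, 4, 8, 16, 32, 60, 60]
    }
}
```

Logging `out[i]` before each sleep would also keep the log statement accurate, which is the other half of the complaint.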



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-5494) ArrayIndexOutOfBounds - WordBreakSolrSpellChecker.java:266

2015-11-30 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer closed LUCENE-5494.
--
Resolution: Duplicate

This is a solr-only problem, and will be addressed with SOLR-8175.

> ArrayIndexOutOfBounds - WordBreakSolrSpellChecker.java:266
> --
>
> Key: LUCENE-5494
> URL: https://issues.apache.org/jira/browse/LUCENE-5494
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: SOlrNet, Uri interface
>Reporter: Mark Peck 
>Priority: Minor
>
> When running the following query:
> {code}
> http://localhost:8983/solr/search/select?q=(%22active%2Bhuman%2Bcox-2%22+OR+(%22active%22+AND+%22human%22+AND+%22cox-2%22))=true
> {code}
> We get the following error output:
> {code:xml}
> 
> 9
> 
> java.lang.ArrayIndexOutOfBoundsException: 9 at 
> org.apache.solr.spelling.WordBreakSolrSpellChecker.getSuggestions(WordBreakSolrSpellChecker.java:266)
>  at 
> org.apache.solr.spelling.ConjunctionSolrSpellChecker.getSuggestions(ConjunctionSolrSpellChecker.java:120)
>  at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:172)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>  at org.eclipse.jetty.server.Server.handle(Server.java:365) at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
>  at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
>  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635) at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>  at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>  at java.lang.Thread.run(Unknown Source)
> 
> 500
> 
> {code}
> (!) We have ascertained this only happens when the '-2' is added to the 
> search term.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8354) RecoveryStrategy retry timing is innaccurate

2015-11-30 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8354:

Summary: RecoveryStrategy retry timing is innaccurate  (was: 
RecoveryStrategy retry logic is innaccurate)

> RecoveryStrategy retry timing is innaccurate
> 
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032649#comment-15032649
 ] 

ASF subversion and git services commented on SOLR-7928:
---

Commit 1717342 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717342 ]

SOLR-7928: Improve CheckIndex to work against HdfsDirectory

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-7928.
--
   Resolution: Fixed
Fix Version/s: (was: 5.4)
   5.5

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8175) Wordbreak spellchecker throws IOOBE with Occur.MUST term

2015-11-30 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032697#comment-15032697
 ] 

Ryan Josal commented on SOLR-8175:
--

Thanks for picking it up!

> Wordbreak spellchecker throws IOOBE with Occur.MUST term
> 
>
> Key: SOLR-8175
> URL: https://issues.apache.org/jira/browse/SOLR-8175
> Project: Solr
>  Issue Type: Bug
>Reporter: Ryan Josal
>Assignee: James Dyer
> Attachments: solr8175.patch
>
>
> Using the WordBreakSolrSpellChecker, if a user queries for "+foo barbaz" and 
> "bar baz" is a suggestion for "barbaz", Solr will throw an 
> IndexOutOfBoundsException.  As a result, a server driven by user queries 
> might throw a certain percentage of HTTP 500 responses as users hit this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8354) RecoveryStrategy retry timing is innaccurate

2015-11-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032778#comment-15032778
 ] 

Mark Miller commented on SOLR-8354:
---

Autocomplete for names not working on my phone. Ram had a good point about 
simplifying these retries and not backing off at all. Instead just retry every N 
seconds or something. 

> RecoveryStrategy retry timing is innaccurate
> 
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2015-11-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-6914:
-
Attachment: LUCENE-6914.patch

the redundancy in the two randomized tests (and the wasteful int[] I introduced) 
was bugging me, so I refactored some logic into determining the set of all 
decimal digits for reuse in both tests.

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -
>
> Key: LUCENE-6914
> URL: https://issues.apache.org/jira/browse/LUCENE-6914
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Hoss Man
> Attachments: LUCENE-6914.patch, LUCENE-6914.patch, LUCENE-6914.patch
>
>
> Found this while writing up the Solr ref guide for DecimalDigitFilter. 
> With input like "ퟙퟡퟠퟜ" ("double-struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
> double-struck 9, 8, double-struck 4). Add some non-decimal characters in 
> between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output 
> ("1x9x8x4").  This doesn't affect all decimal characters though, as evidenced 
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary, 
> shrink the string" code path?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8354) RecoveryStrategy retry logic is innaccurate

2015-11-30 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8354:

Attachment: SOLR-8354.patch

Patch attached that changes the sleep to match what is in the comment.

> RecoveryStrategy retry logic is innaccurate
> ---
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8351) Improve HdfsDirectory and HdfsLock toString representation

2015-11-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-8351.
--
   Resolution: Fixed
Fix Version/s: 5.5

> Improve HdfsDirectory and HdfsLock toString representation
> --
>
> Key: SOLR-8351
> URL: https://issues.apache.org/jira/browse/SOLR-8351
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8351.patch, SOLR-8351.patch
>
>
> HdfsDirectory's toString is used in logging by the DeletionPolicy and 
> SnapPuller (and probably others). It would be useful to match what 
> FSDirectory does, and print the directory it refers to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032698#comment-15032698
 ] 

Gregory Chanan commented on SOLR-7183:
--

[~anshumg] should this be marked resolved?

> SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
> ---
>
> Key: SOLR-7183
> URL: https://issues.apache.org/jira/browse/SOLR-7183
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Gregory Chanan
> Fix For: 5.2
>
> Attachments: SOLR-7183.patch
>
>
> SaslZkACLProviderTest has this blacklist of locales...
> {code}
>   // These Locales don't generate dates that are compatibile with Hadoop 
> MiniKdc.
>   protected final static List brokenLocales =
> Arrays.asList(
>   "th_TH_TH_#u-nu-thai",
>   "ja_JP_JP_#u-ca-japanese",
>   "hi_IN");
> {code}
> ..but this list is incomplete -- notably because it only focuses on one 
> specific Thai variant, and then does a string Locale.toString() comparison.  
> so at a minimum {{-Dtests.locale=th_TH}} also fails - i suspect there are 
> other variants that will fail as well
> * if there is a bug in "Hadoop MiniKdc" then that bug should be filed in 
> jira, and there should be a Solr jira that refers to it -- the Solr jira URL 
> needs to be included here in the test case so developers in the future can 
> understand the context and have some idea of if/when the third-party lib bug 
> is fixed
> * if we need to work around some Locales because of this bug, then Locale 
> comparisons need be based on whatever aspects of the Locale are actually 
> problematic
> see for example SOLR-6387 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676=1618675=1618676
> Or SOLR-6991 + TIKA-1526 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708=1653707=1653708
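The second bullet above can be sketched with the JDK alone; this is an illustrative sketch only, and it assumes (as the blacklist suggests but does not prove) that the language is the problematic aspect of each Locale. It compares `Locale.getLanguage()` rather than the full `Locale.toString()`, so variants like `th_TH` and `th_TH_TH_#u-nu-thai` are both caught:

```java
import java.util.Locale;

// Sketch: blacklist by the problematic aspect of the Locale (assumed here to
// be the language) instead of exact Locale.toString() matches.
class LocaleCheck {
    static boolean isBrokenLocale(Locale l) {
        String lang = l.getLanguage();
        return lang.equals("th") || lang.equals("ja") || lang.equals("hi");
    }

    public static void main(String[] args) {
        // th_TH was missed by the toString() blacklist but is caught here
        System.out.println(isBrokenLocale(Locale.forLanguageTag("th-TH")));         // true
        System.out.println(isBrokenLocale(Locale.forLanguageTag("th-TH-u-nu-thai"))); // true
        System.out.println(isBrokenLocale(Locale.GERMANY));                         // false
    }
}
```

If the real trigger turns out to be a non-Gregorian calendar or numbering system rather than the language, the predicate should test that attribute instead.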



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032731#comment-15032731
 ] 

Gregory Chanan commented on SOLR-8330:
--

RequestLoggingTest is still relevant -- it checks the output of specific 
loggers.  I don't quite understand why it doesn't work though -- does the 
MethodHandles change the name of the logger?
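On the MethodHandles question, a stdlib-only sketch (without pulling in the logging framework): `MethodHandles.lookup()` is caller-sensitive, so `lookupClass()` returns the class containing the call site. A logger declared as `LoggerFactory.getLogger(MethodHandles.lookup().lookupClass())` should therefore keep the declaring class's name, which is the usual motivation for the idiom:

```java
import java.lang.invoke.MethodHandles;

// Sketch: lookup() is resolved at the call site, so lookupClass() names the
// class that declares the field -- the same name a logger would receive.
class LoggerNameDemo {
    static final String LOGGER_NAME = MethodHandles.lookup().lookupClass().getName();

    public static void main(String[] args) {
        System.out.println(LOGGER_NAME);  // prints this class's own name
    }
}
```

If RequestLoggingTest captures output by logger name, any mismatch would more likely come from a copy-pasted logger that previously carried a *different* class's name than from MethodHandles itself.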

> Restrict logger visibility throughout the codebase to private so that only 
> the file that declares it can use it
> ---
>
> Key: SOLR-8330
> URL: https://issues.apache.org/jira/browse/SOLR-8330
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>  Labels: logging
> Fix For: Trunk
>
> Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, 
> SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, 
> SOLR-8330.patch, SOLR-8330.patch
>
>
> As Mike Drob pointed out in Solr-8324, many loggers in Solr are 
> unintentionally shared between classes.  Many instances of this are caused by 
> overzealous copy-paste.  This can make debugging tougher, as messages appear 
> to come from an incorrect location.
> As discussed in the comments on SOLR-8324, there also might be legitimate 
> reasons for sharing loggers between classes.  Where any ambiguity exists, 
> these instances shouldn't be touched.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2015-11-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-6914:
-
Attachment: LUCENE-6914.patch

Updated patch with beefed-up randomized testing to reproduce the problem that 
way (either that or some other problem that looks similar to my naked eye).

Then I took a shot in the dark at a fix to the call to StemmerUtil.delete, and 
that seems to make the beast happy.

Since I'm way out of my depth here, I don't intend to commit without explicit 
feedback from someone who understands this code. (I'm mainly worried I may 
have introduced some other equally bad bug without realizing it.)

Anybody who understands this and thinks my patch looks good is welcome to run 
with it for 5.4, no need to wait for me.
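The supplementary-digit bookkeeping the report describes can be illustrated with a stdlib-only sketch (this is not the Lucene/StemmerUtil code, just the shape of the bug): a supplementary decimal digit occupies two chars (a surrogate pair), so after folding it to one ASCII char, exactly one char must be deleted and the scan must resume at the position right after the written digit. An off-by-one there would skip the following character, matching the "1ퟡ8ퟜ" symptom.

```java
// Sketch of the "shrink the string" path for supplementary decimal digits.
class FoldSupplementary {
    static String fold(String in) {
        StringBuilder b = new StringBuilder(in);
        int i = 0;
        while (i < b.length()) {
            int cp = b.codePointAt(i);
            int v = Character.digit(cp, 10);
            if (v >= 0 && Character.getType(cp) == Character.DECIMAL_DIGIT_NUMBER) {
                b.setCharAt(i, (char) ('0' + v));   // overwrite with ASCII digit
                if (Character.charCount(cp) == 2) {
                    b.deleteCharAt(i + 1);          // shrink: drop the low surrogate
                }
            }
            i++;  // resume at the very next char; advancing further skips one
        }
        return b.toString();
    }

    public static void main(String[] args) {
        // U+1D7D9 U+1D7E1 U+1D7E0 U+1D7DC = double-struck 1 9 8 4
        System.out.println(fold("\uD835\uDFD9\uD835\uDFE1\uD835\uDFE0\uD835\uDFDC"));
        // prints "1984"
    }
}
```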

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -
>
> Key: LUCENE-6914
> URL: https://issues.apache.org/jira/browse/LUCENE-6914
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Hoss Man
> Attachments: LUCENE-6914.patch, LUCENE-6914.patch
>
>
> Found this while writing up the Solr ref guide for DecimalDigitFilter. 
> With input like "ퟙퟡퟠퟜ" ("double-struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
> double-struck 9, 8, double-struck 4). Add some non-decimal characters in 
> between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output 
> ("1x9x8x4").  This doesn't affect all decimal characters though, as evidenced 
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary, 
> shrink the string" code path?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6737) Add DecimalDigitFilter

2015-11-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032677#comment-15032677
 ] 

Uwe Schindler commented on LUCENE-6737:
---

Ignore my last comment: The filter needs more Unicode info than 
Character#isDigit().

> Add DecimalDigitFilter
> --
>
> Key: LUCENE-6737
> URL: https://issues.apache.org/jira/browse/LUCENE-6737
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Robert Muir
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6737.patch
>
>
> TokenFilter that folds all unicode digits 
> (http://unicode.org/cldr/utility/list-unicodeset.jsp?a=[:General_Category=Decimal_Number:])
>  to 0-9.
> Historically a lot of the impacted analyzers couldn't even tokenize numbers 
> at all, but now they use standardtokenizer for numbers/alphanum tokens. But 
> its usually the case you will find e.g. a mix of both ascii digits and 
> "native" digits, and today that makes searching difficult.
> Note this only impacts *decimal* digits, hence the name DecimalDigitFilter. 
> So no processing of chinese numerals or anything crazy like that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15083 - Failure!

2015-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15083/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:53705/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:53705/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([196DE52EA3B07F75:9139DAF40D4C128D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032710#comment-15032710
 ] 

Gregory Chanan commented on SOLR-6915:
--

bq. This is still failing fairly frequently on Jenkins runs, particularly on 
Java 9 (eg http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14737/). Maybe 
the thing to do is to wrap the MiniKDC startup method in an assumeTrue(), if we 
know there are certain locales that break this?

I think that's more or less what was done in SOLR-7183.  I think the issue is 
that it just maintains a list of known bad locales instead of running checks on 
the locales to programmatically figure out what was wrong.  And there are new 
locales in JDK9.  So the easiest thing to do is to add more to the list, the 
medium solution is to run checks on the locale, and the best solution is to fix 
MiniKDC.

Just a note: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14789/ fails 
with ar_TD
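A minimal sketch of the "known bad locales" approach described above (the actual list and check in SOLR-7183 may differ; the class name and the entries other than ar_TD are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Hypothetical sketch: skip MiniKDC-based tests when the default locale is on
// a known-bad list, as an assumeTrue() guard would. Entries besides "ar_TD"
// (the locale from the Jenkins failure above) are made up for illustration.
public class LocaleCheck {
    static final List<String> KNOWN_BAD = Arrays.asList("th_TH_TH", "ar_TD");

    static boolean miniKdcCompatible(Locale locale) {
        return !KNOWN_BAD.contains(locale.toString());
    }

    public static void main(String[] args) {
        // In a test this result would feed assumeTrue(...) to skip the suite.
        System.out.println(miniKdcCompatible(new Locale("ar", "TD"))); // false
        System.out.println(miniKdcCompatible(Locale.US));              // true
    }
}
```

The drawback, as noted above, is that every new JDK locale that breaks MiniKDC means another manual addition to the list.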

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.1, Trunk
>
> Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
> tests-failures.txt
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a kerberos environment.   In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with zookeeper 
> via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is one where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8354) RecoveryStrategy retry timing is innaccurate

2015-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032759#comment-15032759
 ] 

Mike Drob commented on SOLR-8354:
-

Looks like this was originally added by [~markrmil...@gmail.com] - 
https://github.com/apache/lucene-solr/commit/9cb587e216283275ddcd8161a6306daf7b924cfc
 with the comment and the logic coming in at the same time. [~shalinmangar] 
later changed the 600 to 60.

Do you guys know what the intent here is? Happy to adjust the comment and 
logging instead of the logic, depending on which one can be considered the 
source of truth.
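For reference, a sketch of the timing the issue description implies, assuming a linear backoff of the form base * min(retries, cap) with a 10-second base plateauing at 5 minutes (the constants and structure here are illustrative, not copied from RecoveryStrategy):

```java
// Hypothetical reconstruction of the backoff described in SOLR-8354: the code
// reportedly waits 10s on the first retry and works up to 5 minutes, rather
// than "1 sec up to a min" as the comment claims. Constants are illustrative.
public class RecoveryBackoff {
    static final int BASE_DELAY_SECONDS = 10;
    static final int MAX_MULTIPLIER = 30; // 10s * 30 = 300s = 5 minutes

    static int waitSeconds(int retries) {
        return BASE_DELAY_SECONDS * Math.min(retries, MAX_MULTIPLIER);
    }

    public static void main(String[] args) {
        System.out.println("retry 1:  wait " + waitSeconds(1) + "s");   // 10s
        System.out.println("retry 6:  wait " + waitSeconds(6) + "s");   // 60s
        System.out.println("retry 99: wait " + waitSeconds(99) + "s");  // 300s (capped)
    }
}
```

Under this reading, matching the comment would mean BASE_DELAY_SECONDS = 1 and a cap of 60, so fixing either the constants or the comment resolves the mismatch.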

> RecoveryStrategy retry timing is innaccurate
> 
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it

2015-11-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032760#comment-15032760
 ] 

Gregory Chanan commented on SOLR-8330:
--

BTW changing the requestLog in SolrCore.java to:
{code}
requestLog = LoggerFactory.getLogger(
    MethodHandles.lookup().lookupClass().getName() + ".Request");
{code}
gets the test to pass for me.
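A self-contained sketch of why MethodHandles.lookup() helps here: lookupClass() resolves to the class in which the call appears, so a copy-pasted declaration still produces a logger named after its own class. (A plain String stands in for an SLF4J Logger so the example has no dependencies; the class name is invented.)

```java
import java.lang.invoke.MethodHandles;

// Illustrative only: shows the logger *name* that the SolrCore change above
// would produce, using the enclosing class plus a ".Request" suffix.
public class LoggerNames {
    static final String REQUEST_LOGGER_NAME =
        MethodHandles.lookup().lookupClass().getName() + ".Request";

    public static void main(String[] args) {
        // e.g. "LoggerNames.Request" (qualified by package, if any)
        System.out.println(REQUEST_LOGGER_NAME);
    }
}
```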

> Restrict logger visibility throughout the codebase to private so that only 
> the file that declares it can use it
> ---
>
> Key: SOLR-8330
> URL: https://issues.apache.org/jira/browse/SOLR-8330
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>  Labels: logging
> Fix For: Trunk
>
> Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, 
> SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, 
> SOLR-8330.patch, SOLR-8330.patch
>
>
> As Mike Drob pointed out in Solr-8324, many loggers in Solr are 
> unintentionally shared between classes.  Many instances of this are caused by 
> overzealous copy-paste.  This can make debugging tougher, as messages appear 
> to come from an incorrect location.
> As discussed in the comments on SOLR-8324, there also might be legitimate 
> reasons for sharing loggers between classes.  Where any ambiguity exists, 
> these instances shouldn't be touched.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it

2015-11-30 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032798#comment-15032798
 ] 

Anshum Gupta commented on SOLR-8330:


Thanks Greg and Jason.

I'll take a look at this patch and commit this after the tests pass, along with 
Uwe's awesome validator :)

> Restrict logger visibility throughout the codebase to private so that only 
> the file that declares it can use it
> ---
>
> Key: SOLR-8330
> URL: https://issues.apache.org/jira/browse/SOLR-8330
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>  Labels: logging
> Fix For: Trunk
>
> Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, 
> SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, 
> SOLR-8330.patch, SOLR-8330.patch
>
>
> As Mike Drob pointed out in Solr-8324, many loggers in Solr are 
> unintentionally shared between classes.  Many instances of this are caused by 
> overzealous copy-paste.  This can make debugging tougher, as messages appear 
> to come from an incorrect location.
> As discussed in the comments on SOLR-8324, there also might be legitimate 
> reasons for sharing loggers between classes.  Where any ambiguity exists, 
> these instances shouldn't be touched.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8354) RecoveryStrategy retry logic is innaccurate

2015-11-30 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8354:
---

 Summary: RecoveryStrategy retry logic is innaccurate
 Key: SOLR-8354
 URL: https://issues.apache.org/jira/browse/SOLR-8354
 Project: Solr
  Issue Type: Improvement
Reporter: Mike Drob


At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with a 
comment that suggests the code will {{// start at 1 sec and work up to a min}}. 
The code will actually start at 10 seconds, and work up to 5 minutes. 
Additionally, the log statement incorrectly reports how long the next wait will 
be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 868 - Still Failing

2015-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/868/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SyncSliceTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([64967A30CCFDD1B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:224)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10342 lines...]
   [junit4] Suite: org.apache.solr.cloud.SyncSliceTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.cloud.SyncSliceTest_64967A30CCFDD1B0-001/init-core-data-001
   [junit4]   2> 858726 INFO  
(SUITE-SyncSliceTest-seed#[64967A30CCFDD1B0]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 858726 INFO  
(SUITE-SyncSliceTest-seed#[64967A30CCFDD1B0]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /h_/hb
   [junit4]   2> 858744 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 858744 INFO  (Thread-11899) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 858744 INFO  (Thread-11899) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 858844 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.ZkTestServer start zk server on port:43685
   [junit4]   2> 858844 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 858863 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 858875 INFO  (zkCallback-525-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@6b2aa8fc 
name:ZooKeeperConnection Watcher:127.0.0.1:43685 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 858875 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 858875 INFO  (TEST-SyncSliceTest.test-seed#[64967A30CCFDD1B0]) 
[] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 858875 INFO  

[jira] [Commented] (SOLR-8354) RecoveryStrategy retry timing is innaccurate

2015-11-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032837#comment-15032837
 ] 

Mark Miller commented on SOLR-8354:
---

The problem with the retry backoff is that it doesn't work so well. Eventually 
it takes too long between tries for something that should be fairly cheap to 
attempt and fail. 

> RecoveryStrategy retry timing is innaccurate
> 
>
> Key: SOLR-8354
> URL: https://issues.apache.org/jira/browse/SOLR-8354
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mike Drob
> Attachments: SOLR-8354.patch
>
>
> At the end of {{RecoveryStrategy::doRecovery}} there is a retry segment, with 
> a comment that suggests the code will {{// start at 1 sec and work up to a 
> min}}. The code will actually start at 10 seconds, and work up to 5 minutes. 
> Additionally, the log statement incorrectly reports how long the next wait 
> will be. Either the comment and log should be corrected or the logic adjusted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033045#comment-15033045
 ] 

ASF subversion and git services commented on SOLR-7339:
---

Commit 1717377 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717377 ]

SOLR-7339: Upgrade Jetty to v9.3.6.v20151106

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
> Attachments: SOLR-7339.patch, SOLR-7339.patch
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6744) equals methods should compare classes directly, not use instanceof

2015-11-30 Thread Sachin Rajendra (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033065#comment-15033065
 ] 

Sachin Rajendra commented on LUCENE-6744:
-

Hi [~erickerickson],

Thanks for the detailed analysis. Good points. I will wait for the 
authors/maintainers of the respective classes to weigh in then.

Thanks!

> equals methods should compare classes directly, not use instanceof
> --
>
> Key: LUCENE-6744
> URL: https://issues.apache.org/jira/browse/LUCENE-6744
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>  Labels: newdev
> Attachments: LUCENE-6744.patch
>
>
> from a 2015-07-12 email to the dev list from Fuxiang Chen...
> {noformat}
> We have found some inconsistencies in the overriding of the equals() method
> in some files with respect to the conforming to the contract structure
> based on the Java Specification.
> Affected files:
> 1) ConstValueSource.java
> 2) DoubleConstValueSource.java
> 3) FixedBitSet.java
> 4) GeohashFunction.java
> 5) LongBitSet.java
> 6) SpanNearQuery.java
> 7) StringDistanceFunction.java
> 8) ValueSourceRangeFilter.java
> 9) VectorDistanceFunction.java
> The above files all use instanceof in the overridden equals() method in
> comparing two objects.
> According to the Java Specification, the equals() method must be reflexive,
> symmetric, transitive and consistent. In the case of symmetric, it is
> stated that x.equals(y) should return true if and only if y.equals(x)
> returns true. Using instanceof is asymmetric and is not a valid symmetric
> contract.
> A more preferred way will be to compare the classes instead. i.e. if
> (this.getClass() != o.getClass()).
> However, if compiling the source code using JDK 7 and above, and if
> developers still prefer to use instanceof, you can make use of the static
> methods of Objects such as Objects.equals(this.id, that.id). (Making use of
> the static methods of Objects is currently absent in the methods.) It will
> be easier to override the equals() method and will ensure that the
> overridden equals() method will fulfill the contract rules.
> {noformat}
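The symmetry problem described in the quoted report can be demonstrated in a few lines (class names are invented for illustration; this is not code from the affected files):

```java
// With instanceof-based equals, a Point equals a ColoredPoint but not vice
// versa, violating the symmetry requirement of the Object.equals contract.
public class EqualsSymmetry {
    static class Point {
        final int x;
        Point(int x) { this.x = x; }
        @Override public boolean equals(Object o) {
            // instanceof accepts subclasses, which is where symmetry breaks
            return o instanceof Point && ((Point) o).x == x;
        }
        @Override public int hashCode() { return x; }
    }

    static class ColoredPoint extends Point {
        final String color;
        ColoredPoint(int x, String color) { super(x); this.color = color; }
        @Override public boolean equals(Object o) {
            // getClass() comparison keeps equals symmetric across the hierarchy
            if (o == null || getClass() != o.getClass()) return false;
            ColoredPoint p = (ColoredPoint) o;
            return p.x == x && p.color.equals(color);
        }
        @Override public int hashCode() { return 31 * x + color.hashCode(); }
    }

    public static void main(String[] args) {
        Point p = new Point(1);
        ColoredPoint cp = new ColoredPoint(1, "red");
        System.out.println(p.equals(cp));  // true  -- instanceof accepts the subclass
        System.out.println(cp.equals(p));  // false -- symmetry is broken
    }
}
```

Switching Point's equals to the same getClass() test would restore symmetry, which is the change the report proposes for the nine listed files.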



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8355) RuleBasedAuthenticationPlugin doesn't work with update permission enabled

2015-11-30 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-8355:
--

 Summary: RuleBasedAuthenticationPlugin doesn't work with update 
permission enabled
 Key: SOLR-8355
 URL: https://issues.apache.org/jira/browse/SOLR-8355
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
Priority: Blocker
 Fix For: 5.4


Here are the steps that recreate this issue. I tried this on Solr 5.4 and I had 
the following stack trace when I issued an ADDREPLICA. This seems pretty 
similar to what we saw on SOLR-8326 so it might be just something we missed but 
we should make sure that we ship 5.4 with this fixed.

#Upload Security Conf
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile 
/security.json ~/security.json

#Start Solr
bin/solr start -e cloud -z localhost:2181


#Collection Admin Edit Command:
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{"set-permission" : 
{"name":"collection-admin-edit", "role":"admin"}}'

#Read User and permission:
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{"set-permission" : {"name":"read", 
"role":"read"}}'

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{"set-permission" : {"name":"update", 
"role":"update"]}}'

#Add Users
#Read User
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
'Content-type:application/json' -d '{"set-user" : {"solrread":"solrRocks"}}'

#Update user
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
'Content-type:application/json' -d '{"set-user" : {"solrupdate":"solrRocks"}}'

#Set user roles
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{"set-user-role" : 
{"solrupdate":["read","update"]}}'

#Read User
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{"set-user-role" : {"solrread":["read"]}}'

#Create collection
curl --user solr:SolrRocks 
'http://localhost:8983/solr/admin/collections?action=CREATE=a=1=1=gettingstarted=json'

#Add Replica
curl --user solr:SolrRocks 
'http://localhost:8983/solr/admin/collections?action=ADDREPLICA=a=shard1=json'


Exception log:

INFO  - 2015-12-01 04:57:47.022; [c:a s:shard1 r:core_node2 
x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Starting 
Replication Recovery.
INFO  - 2015-12-01 04:57:47.023; [c:a s:shard1 r:core_node2 
x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Attempting to 
replicate from http://172.20.10.4:7574/solr/a_shard1_replica1/.
ERROR - 2015-12-01 04:57:47.027; [c:a s:shard1 r:core_node2 
x:a_shard1_replica2] org.apache.solr.common.SolrException; Error while trying 
to 
recover:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
Error from server at http://172.20.10.4:7574/solr/a_shard1_replica1: Expected 
mime type application/octet-stream but got text/html. 


Error 401 Unauthorized request, Response code: 401

HTTP ERROR 401
Problem accessing /solr/a_shard1_replica1/update. Reason:
Unauthorized request, Response code: 
401Powered by Jetty://




at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.RecoveryStrategy.commitOnLeader(RecoveryStrategy.java:205)
at 
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:145)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:436)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)

INFO  - 2015-12-01 04:57:47.028; [c:a s:shard1 r:core_node2 
x:a_shard1_replica2] org.apache.solr.update.UpdateLog; Dropping buffered 
updates FSUpdateLog{state=BUFFERING, tlog=null}
ERROR - 2015-12-01 04:57:47.028; [c:a s:shard1 r:core_node2 
x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Recovery failed - 
trying again... (4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



FOSDEM 2016 - take action by 4th of December 2015

2015-11-30 Thread Roman Shaposhnik
As most of you probably know, FOSDEM 2016 (the biggest,
100% free open source developer conference) is right 
around the corner:
   https://fosdem.org/2016/

We hope to have an ASF booth and we would love to see as
many ASF projects as possible present at various tracks
(AKA Developer rooms):
   https://fosdem.org/2016/schedule/#devrooms

This year, for the first time, we are running a dedicated
Big Data and HPC Developer Room and given how much of that
open source development is done at ASF it would be great
to have folks submit talks to:
   https://hpc-bigdata-fosdem16.github.io

The CFPs for different Developer Rooms follow slightly 
different schedules, but if you submit by the end of this week 
you should be fine.

Finally, if you don't want to fish for the CFP submission URL,
here it is:
   https://fosdem.org/submit

If you have any questions -- please email me *directly* and
hope to see as many of you as possible in two months! 

Thanks,
Roman.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8355) RuleBasedAuthenticationPlugin doesn't work with update permission enabled

2015-11-30 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8355:
---
Component/s: security

> RuleBasedAuthenticationPlugin doesn't work with update permission enabled
> -
>
> Key: SOLR-8355
> URL: https://issues.apache.org/jira/browse/SOLR-8355
> Project: Solr
>  Issue Type: Bug
>  Components: security
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Priority: Blocker
>  Labels: authorization-plugin
> Fix For: 5.4
>
>
> Here are the steps that recreate this issue. I tried this on Solr 5.4 and I 
> had the following stack trace when I issued an ADDREPLICA. This seems pretty 
> similar to what we saw on SOLR-8326 so it might be just something we missed 
> but we should make sure that we ship 5.4 with this fixed.
> #Upload Security Conf
> server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile 
> /security.json ~/security.json
> #Start Solr
> bin/solr start -e cloud -z localhost:2181
> #Collection Admin Edit Command:
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : 
> {"name":"collection-admin-edit", "role":"admin"}}'
> #Read User and permission:
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : {"name":"read", 
> "role":"read"}}'
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : {"name":"update", 
> "role":"update"]}}'
> #Add Users
> #Read User
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-user" : {"solrread":"solrRocks"}}'
> #Update user
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-user" : {"solrupdate":"solrRocks"}}'
> #Set user roles
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-user-role" : 
> {"solrupdate":["read","update"]}}'
> #Read User
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-user-role" : {"solrread":["read"]}}'
> #Create collection
> curl --user solr:SolrRocks 
> 'http://localhost:8983/solr/admin/collections?action=CREATE=a=1=1=gettingstarted=json'
[jira] [Updated] (SOLR-8355) RuleBasedAuthenticationPlugin doesn't work with update permission enabled

2015-11-30 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8355:
---
Affects Version/s: 5.3.1
   5.3

> RuleBasedAuthenticationPlugin doesn't work with update permission enabled
> -
>
> Key: SOLR-8355
> URL: https://issues.apache.org/jira/browse/SOLR-8355
> Project: Solr
>  Issue Type: Bug
>  Components: security
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Priority: Blocker
>  Labels: authorization-plugin
> Fix For: 5.4
>
>
> Here are the steps that recreate this issue. I tried this on Solr 5.4 and I 
> had the following stack trace when I issued an ADDREPLICA. This seems pretty 
> similar to what we saw on SOLR-8326 so it might be just something we missed 
> but we should make sure that we ship 5.4 with this fixed.
> #Upload Security Conf
> server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile 
> /security.json ~/security.json
> #Start Solr
> bin/solr start -e cloud -z localhost:2181
> #Collection Admin Edit Command:
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : 
> {"name":"collection-admin-edit", "role":"admin"}}'
> #Read User and permission:
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : {"name":"read", 
> "role":"read"}}'
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-permission" : {"name":"update", 
> "role":["update"]}}'
> #Add Users
> #Read User
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-user" : {"solrread":"solrRocks"}}'
> #Update user
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{"set-user" : {"solrupdate":"solrRocks"}}'
> #Set user roles
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-user-role" : 
> {"solrupdate":["read","update"]}}'
> #Read User
> curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{"set-user-role" : {"solrread":["read"]}}'
> #Create collection
> curl --user solr:SolrRocks 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=a&numShards=1&replicationFactor=1&collection.configName=gettingstarted&wt=json'
> #Add Replica
> curl --user solr:SolrRocks 
> 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=a&shard=shard1&wt=json'
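The `~/security.json` uploaded in the first step above is not shown in the report. A minimal sketch that would match these commands, modeled on the stock Basic Auth example from the Solr Reference Guide (the credentials hash is the documented one for the `solr`/`SolrRocks` pair; treat the exact value as illustrative):

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{"name": "security-edit", "role": "admin"}],
    "user-role": {"solr": "admin"}
  }
}
```

The later `set-permission`, `set-user`, and `set-user-role` curl commands then mutate this document in ZooKeeper via the authentication/authorization APIs.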
> Exception log:
> INFO  - 2015-12-01 04:57:47.022; [c:a s:shard1 r:core_node2 
> x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Starting 
> Replication Recovery.
> INFO  - 2015-12-01 04:57:47.023; [c:a s:shard1 r:core_node2 
> x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Attempting to 
> replicate from http://172.20.10.4:7574/solr/a_shard1_replica1/.
> ERROR - 2015-12-01 04:57:47.027; [c:a s:shard1 r:core_node2 
> x:a_shard1_replica2] org.apache.solr.common.SolrException; Error while trying 
> to 
> recover:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Error from server at http://172.20.10.4:7574/solr/a_shard1_replica1: Expected 
> mime type application/octet-stream but got text/html. 
> 
> 
> Error 401 Unauthorized request, Response code: 401
> 
> HTTP ERROR 401
> Problem accessing /solr/a_shard1_replica1/update. Reason:
> Unauthorized request, Response code: 
> 401Powered by Jetty://
> 
> 
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.commitOnLeader(RecoveryStrategy.java:205)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:145)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:436)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)
> INFO  - 2015-12-01 04:57:47.028; [c:a s:shard1 r:core_node2 
> x:a_shard1_replica2] org.apache.solr.update.UpdateLog; Dropping buffered 
> updates FSUpdateLog{state=BUFFERING, tlog=null}
> ERROR - 2015-12-01 04:57:47.028; [c:a s:shard1 r:core_node2 
> x:a_shard1_replica2] org.apache.solr.cloud.RecoveryStrategy; Recovery failed 
> - trying again... (4)
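The 401 in the log above comes from inter-node traffic: the recovering replica hits the leader's `/update` handler without the end user's credentials, so a rule guarding the `update` permission rejects the request. A toy sketch of that failure mode (not Solr's actual code; `authorize`, `permissions`, and `user_roles` are invented names for illustration):

```python
# Hypothetical model of a rule-based authorization check: the first rule
# whose path matches the request decides; a request with no matching rule
# passes through unchecked.

def authorize(permissions, user_roles, path, user):
    """Return 200 if the request is allowed, 401 if a matching rule denies it."""
    for perm in permissions:
        if path.startswith(perm["path"]):
            # Rule matched: the (possibly anonymous) user must hold the role.
            if perm["role"] in user_roles.get(user, []):
                return 200
            return 401
    return 200  # no rule matched this path

# Mirrors the repro: the "update" permission is mapped to the "update" role,
# which only the end user "solrupdate" holds.
permissions = [{"path": "/update", "role": "update"}]
user_roles = {"solrupdate": ["read", "update"]}

# The authenticated end user is allowed...
assert authorize(permissions, user_roles, "/update", "solrupdate") == 200
# ...but the internal recovery request arrives unauthenticated, hence the 401.
assert authorize(permissions, user_roles, "/update", None) == 401
```

Under this model, paths with no guarding rule (e.g. `/select` here) are unaffected, which is consistent with only the `update`-permission setup triggering the recovery failure.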
