[jira] [Comment Edited] (SOLR-9525) split() function for streaming

2016-10-14 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15577338#comment-15577338
 ] 

Dennis Gove edited comment on SOLR-9525 at 10/15/16 4:27 AM:
-

Full implementation and tests for a split operation. Because it's implemented 
as an operation, this will work as part of a select() stream.

Valid expression forms:

{code}
split(fieldA, on=",")              // replaces the value of fieldA with a List of the split values
split(fieldA, on=",", as="fieldB") // splits the value of fieldA into a List and puts it into fieldB
{code}


was (Author: dpgove):
Full implementation and tests for a split operation. Because it's implementing 
as an operation this will work as part of a select() stream.

Valid expression forms:

{code}
split(fieldA, on=",")              // replaces the value of fieldA with a List of the split values
split(fieldA, on=",", as="fieldB") // splits the value of fieldA into a List and puts it into fieldB
{code}

> split() function for streaming
> --
>
> Key: SOLR-9525
> URL: https://issues.apache.org/jira/browse/SOLR-9525
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Thomsen
> Attachments: SOLR-9525.patch
>
>
> This is the original description I posted on solr-user:
> Read this article and thought it could be interesting as a way to do 
> ingestion:
> https://dzone.com/articles/solr-streaming-expressions-for-collection-auto-upd-1
> Example from the article:
> daemon(id="12345",
>  runInterval="6",
>  update(users,
>  batchSize=10,
>  jdbc(connection="jdbc:mysql://localhost/users?user=root&password=solr", 
> sql="SELECT id, name FROM users", sort="id asc", 
> driver="com.mysql.jdbc.Driver")))
> What's the best way to handle a multivalue field using this API? Is there a 
> way to tokenize something returned in a database field?
> Joel Bernstein responded with this:
> Unfortunately there currently isn't a way to split a field. But this would
> be nice functionality to add.
> The approach would be to add a split operation that would be used by the
> select() function. It would look like this:
> select(jdbc(...), split(fieldA, delim=","), ...)
> This would make a good jira issue.
> So the TL;DR version is that I need the ability to specify, in such a 
> streaming operation, certain fields to tokenize into multivalue fields. In one 
> schema I may have to support, there are probably half a dozen such fields.
> Perhaps I am missing a feature here, but it looks like this new capability 
> cannot handle multivalue fields until something like this is in place.






[jira] [Updated] (SOLR-9525) split() function for streaming

2016-10-14 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9525:
--
Attachment: SOLR-9525.patch

Full implementation and tests for a split operation. Because it's implemented 
as an operation, this will work as part of a select() stream.

Valid expression forms:

{code}
split(fieldA, on=",")              // replaces the value of fieldA with a List of the split values
split(fieldA, on=",", as="fieldB") // splits the value of fieldA into a List and puts it into fieldB
{code}
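
To make the select() usage concrete, here is a sketch of how the operation 
could be combined with a stream source (a hypothetical expression; the 
collection, field names, and query are illustrative, not taken from the patch):

{code}
select(
  search(users, q="*:*", fl="id,hobbies", sort="id asc"),
  id,
  split(hobbies, on=",", as="hobbies_ss")
)
{code}

Each tuple emitted by search() would leave select() with hobbies_ss holding the 
list of comma-separated values from hobbies.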

> split() function for streaming
> --
>
> Key: SOLR-9525
> URL: https://issues.apache.org/jira/browse/SOLR-9525
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Thomsen
> Attachments: SOLR-9525.patch
>
>
> This is the original description I posted on solr-user:
> Read this article and thought it could be interesting as a way to do 
> ingestion:
> https://dzone.com/articles/solr-streaming-expressions-for-collection-auto-upd-1
> Example from the article:
> daemon(id="12345",
>  runInterval="6",
>  update(users,
>  batchSize=10,
>  jdbc(connection="jdbc:mysql://localhost/users?user=root&password=solr", 
> sql="SELECT id, name FROM users", sort="id asc", 
> driver="com.mysql.jdbc.Driver")))
> What's the best way to handle a multivalue field using this API? Is there a 
> way to tokenize something returned in a database field?
> Joel Bernstein responded with this:
> Unfortunately there currently isn't a way to split a field. But this would
> be nice functionality to add.
> The approach would be to add a split operation that would be used by the
> select() function. It would look like this:
> select(jdbc(...), split(fieldA, delim=","), ...)
> This would make a good jira issue.
> So the TL;DR version is that I need the ability to specify, in such a 
> streaming operation, certain fields to tokenize into multivalue fields. In one 
> schema I may have to support, there are probably half a dozen such fields.
> Perhaps I am missing a feature here, but it looks like this new capability 
> cannot handle multivalue fields until something like this is in place.






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1949 - Unstable!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1949/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([48D5291744F332C4:750D873B7C1D6CB4]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12292 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithKerberosAlt
   [junit4]   2> 1960265 WARN  
(TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[48D5291744F332C4]) [] 
o.a.d.s.c.DefaultDirectoryService You didn't change the admin password of 
directory service instance 'DefaultKrbServer'.  Please update the admin 
password as soon as possible to prevent a possible security breach.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSolrCloudWithKerberosAlt -Dtests.method=testBasics 
-Dtests.seed=48D5291744F332C4 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=it-IT -Dtests.timezone=Asia/Anadyr -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   10.7s J2 | TestSolrCloudWithKerberosAlt.testBasics <<<
   [junit4]> Throwable #1: java.net.BindException: Address already in use
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([48D5291744F332C4:750D873B7C1D6CB4]:0)
   [junit4]>at sun.nio.ch.Net.bind0(Native Method)
   [junit4]>at sun.nio.ch.Net.bind(Net.java:433)
   [junit4]>at sun.nio.ch.Net.bind(Net.java:425)
   [junit4]>at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   [junit4]>at 
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   [junit4]>at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
   [junit4]>at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
   [junit4]>at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
   [junit4]>at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithKerberosAlt_48D5291744F332C4-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=771, maxMBSortInHeap=7.871693961883822, 
sim=ClassicSimilarity, locale=it-IT, timezone=Asia/Anadyr
   [junit4]   2> NOTE: Linux 4.4.0-36-generic i386/Oracle Corporation 1.8.0_102 
(32-bit)/cpus=12,threads=1,free=187295792,total=395575296
   [junit4]   2> NOTE: All tests run in this JVM: 
[TestReversedWildcardFilterFactory, CdcrRequestHandlerTest, 
TestFieldCacheReopen, LukeRequestHandlerTest, TestRecovery, JSONWriterTest, 
TestGraphTermsQParserPlugin, CoreSorterTest, TestBulkSchemaAPI, 
TestBinaryResponseWriter, TestTolerantSearch, TestDFRSimilarityFactory, 
PreAnalyzedFieldTest, CloneFieldUpdateProcessorFactoryTest, DOMUtilTest, 
TestMergePolicyConfig, TestStandardQParsers, TestCloudSchemaless, 
DocValuesMissingTest, 

[jira] [Resolved] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9325.
---
Resolution: Fixed

Pushed. Thanks to Tim for reporting and testing!
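
With the fix in place, relocating the logs should only require setting the 
environment variable; a minimal sketch (the path is illustrative):

{code}
# in solr.in.sh, or exported before running bin/solr start
SOLR_LOGS_DIR=/var/log/solr
{code}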

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)






[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576536#comment-15576536
 ] 

ASF subversion and git services commented on SOLR-9325:
---

Commit bc5e06e34cd6c6d45668fff9969305b1ae8e1ce1 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc5e06e ]

SOLR-9325: solr.log is now written to $SOLR_LOGS_DIR without changing 
log4j.properties

(cherry picked from commit 33db4de)


> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)






[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576517#comment-15576517
 ] 

ASF subversion and git services commented on SOLR-9325:
---

Commit 33db4de4d7d5e325f8bfd886d3957735b33310a8 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=33db4de ]

SOLR-9325: solr.log is now written to $SOLR_LOGS_DIR without changing 
log4j.properties


> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)






[jira] [Commented] (SOLR-9646) Error reporting & upgrade Doc could be much more helpful

2016-10-14 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576485#comment-15576485
 ] 

Tim Parker commented on SOLR-9646:
--

I'll get over to the list for more specifics of the problems. My intent here 
was the enhancement request for better logging - clearly some of the issues I'm 
seeing are related to this being a development build and will be resolved 
before 6.3 goes gold, and I don't expect this issue to serve as adequate 
reporting for the specifics of each of them. The point is that the information 
logged is too minimal for me to even guess whether the problems relate to my 
configuration or to some coding problem that I shouldn't worry about given the 
pre-release state of the build.

[By the way: no chance of multiple versions in the classpath, and the indexes 
are created from scratch - only the config files and schema are hold-overs from 
previous versions. I'll take this up on the list.]

> Error reporting & upgrade Doc could be much more helpful
> 
>
> Key: SOLR-9646
> URL: https://issues.apache.org/jira/browse/SOLR-9646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.3
> Environment: CentOS 7 VM (VirtualBox VM hosted on Windows).  
> ColdFusion application talking to Solr via a mix of SolrJ and direct HTTP: 
> calls.  Using 6.3 snapshot linked to from SOLR-9325
>Reporter: Tim Parker
>
> Our Solr interface was originally built with Solr 4.10, and included some 
> additional schema fields and some minor configuration updates (default of 10 
> rows returned is too small for us, and we're doing all communication with 
> JSON).  This configuration works well with all versions through 6.2.1 (after 
> updating our custom ClassLoader to work around SOLR-9231).  However... when 
> trying to run with a 6.3.0 snapshot... we get errors which are far from easy 
> to decipher.
> 1) tons of warnings about deprecated 5.2 emulation - after some digging, 
> traced these to our failing to update luceneMatchVersion in solrconfig.xml
> >>  The warning should, at a minimum, point to the luceneMatchVersion setting 
> >> - current log entry is:
> 2016-10-14 17:13:36.131 WARN  (coreLoadExecutor-6-thread-1) [   ] 
> o.a.s.s.FieldTypePluginLoader TokenizerFactory is using deprecated 5.2.0 
> emulation. You should at some point declare and reindex to at least 6.0, 
> because 5.x emulation is deprecated and will be removed in 7.0
> 2) [seen before updating luceneMatchVersion]
> 2016-10-14 17:31:15.978 ERROR (qtp2080166188-15) [   x:issues-1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception 
> writing document id {some document name} to the index; possible analysis 
> error.
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:178)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
> .
>  =  I assume that this is something triggered by a change in 6.3, but it 
> would be nice to have more of a clue about what it's complaining about
> 3) [after updating luceneMatchVersion to 6.2.0]
> 2016-10-14 18:20:02.847 ERROR (qtp2080166188-16) [   x:issues-1] 
> o.a.s.h.RequestHandlerBase java.lang.UnsupportedOperationException: This 
> format can only be used for reading
>   at 
> org.apache.lucene.codecs.lucene53.Lucene53NormsFormat.normsConsumer(Lucene53NormsFormat.java:77)
>   at 
> org.apache.lucene.index.DefaultIndexingChain.writeNorms(DefaultIndexingChain.java:266)
>   at 
> org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:95)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:443)
> .
>  what format is this referring to?  why is lucene53 in play? is there 
> another setting I need to update?  If this is a configuration problem on my 
> end, it would be more than nice to have some pointers here






[jira] [Commented] (SOLR-9231) SolrInputDocument no-args constructor removed without notice

2016-10-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576436#comment-15576436
 ] 

Jan Høydahl commented on SOLR-9231:
---

Marked as duplicate of SOLR-9373

> SolrInputDocument no-args constructor removed without notice
> 
>
> Key: SOLR-9231
> URL: https://issues.apache.org/jira/browse/SOLR-9231
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 6.1
> Environment: Lucee (or ColdFusion) loading SolrJ using separate 
> URLClassLoader instance)
>Reporter: Tim Parker
>
> In 6.0.1 and previous, SolrInputDocument provided two constructors - one with 
> no arguments, the other accepting a Map object.  As of 6.1.0, the 
> no-arguments constructor is replaced with one that accepts zero or more 
> strings.
> With 6.0.1, this worked:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Constructor foo = cls.getConstructor();
> This fails with Solr 6.1.0
> We get the same error after updating the code to this:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Class[] argArray = new Class[0];
> Constructor foo = cls.getConstructor(argArray);
> Are we missing something?  If not, please restore the missing no-arguments 
> constructor.






[jira] [Closed] (SOLR-9373) Add the constructor with no argument to SolrInputDocument

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-9373.
-
Resolution: Won't Fix

Closing. Please adapt the reflection code.

> Add the constructor with no argument to SolrInputDocument
> -
>
> Key: SOLR-9373
> URL: https://issues.apache.org/jira/browse/SOLR-9373
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.1
>Reporter: Minoru Osuka
> Attachments: SOLR-9373.patch
>
>
> I'm working on creating a patch to upgrade Solr to 6.0.1 for Flume.
> Issue : https://issues.apache.org/jira/browse/FLUME-2919
> Since Solr 6.1.0 has been released, I'm now trying to create a patch to 
> upgrade to 6.1.0 instead.
> But java.lang.NoSuchMethodError occurs at compile time because the 
> constructor with no arguments has been removed from SolrInputDocument by 
> SOLR-9065.
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.solr.common.SolrInputDocument: method ()V not found
>   at 
> org.apache.solr.handler.extraction.SolrContentHandler.(SolrContentHandler.java:90)
>   at 
> org.apache.solr.morphlines.cell.TrimSolrContentHandlerFactory$TrimSolrContentHandler.(TrimSolrContentHandlerFactory.java:50)
>   at 
> org.apache.solr.morphlines.cell.TrimSolrContentHandlerFactory.createSolrContentHandler(TrimSolrContentHandlerFactory.java:40)
>   at 
> org.apache.solr.morphlines.cell.SolrCellBuilder$SolrCell.doProcess(SolrCellBuilder.java:232)
>   at 
> org.kitesdk.morphline.stdio.AbstractParser.doProcess(AbstractParser.java:96)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.process(AbstractCommand.java:156)
>   at org.kitesdk.morphline.base.Connector.process(Connector.java:64)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.doProcess(AbstractCommand.java:181)
>   at org.kitesdk.morphline.stdlib.LogCommand.doProcess(LogCommand.java:56)
>   at 
> org.kitesdk.morphline.stdlib.LogDebugBuilder$LogDebug.doProcess(LogDebugBuilder.java:56)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.process(AbstractCommand.java:156)
>   at 
> org.kitesdk.morphline.stdlib.TryRulesBuilder$TryRules.doProcess(TryRulesBuilder.java:115)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.process(AbstractCommand.java:156)
>   at org.kitesdk.morphline.base.Connector.process(Connector.java:64)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.doProcess(AbstractCommand.java:181)
>   at 
> org.kitesdk.morphline.tika.DetectMimeTypeBuilder$DetectMimeType.doProcess(DetectMimeTypeBuilder.java:166)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.process(AbstractCommand.java:156)
>   at org.kitesdk.morphline.base.Connector.process(Connector.java:64)
>   at 
> org.kitesdk.morphline.scriptengine.java.scripts.MyJavaClass6.eval(MyJavaClass6.java:15)
>   ... 69 more
> {noformat}
> The constructor with variable arguments could not be invoked using Java 
> reflection. Please restore the no-arguments constructor to SolrInputDocument.






[jira] [Commented] (SOLR-9231) SolrInputDocument no-args constructor removed without notice

2016-10-14 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576414#comment-15576414
 ] 

Tim Parker commented on SOLR-9231:
--

Use of SolrJ with ColdFusion requires a separate classloader because ColdFusion 
is bundled with an ancient Solr release and some other libraries which aren't 
compatible with current Solr releases.  We do not bundle Solr with our product, 
so we're trying to maximize the range of Solr releases which can be used with 
our product - with this in mind, we needed a solution which is able to load a 
'SolrInputDocument' object with no constructor arguments.  With a no-args 
constructor, this is easy - just call newInstance() with no arguments.  After 
the API change, however, this failed - updating our logic to change the 
newInstance() call to pass an (empty) array of strings would have cut off our 
ability to work with older Solr releases - and we also don't want to add 
conditional logic based on the Solr release we're using.

The work-around is to do some gymnastics with reflection if the argument-free 
newInstance() throws an exception - it's not optimal, but it does get the job 
done.
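
For readers hitting the same wall, a minimal sketch of the fallback described 
above (the method and loader names are illustrative, not our actual code):

{code}
// Try the no-args constructor first (SolrJ <= 6.0.x), then fall back to the
// String... constructor that replaced it in 6.1 (varargs compile to String[]).
static Object newSolrInputDocument(ClassLoader loader) throws Exception {
  Class<?> cls = loader.loadClass("org.apache.solr.common.SolrInputDocument");
  try {
    return cls.getConstructor().newInstance();
  } catch (NoSuchMethodException e) {
    return cls.getConstructor(String[].class).newInstance((Object) new String[0]);
  }
}
{code}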

> SolrInputDocument no-args constructor removed without notice
> 
>
> Key: SOLR-9231
> URL: https://issues.apache.org/jira/browse/SOLR-9231
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 6.1
> Environment: Lucee (or ColdFusion) loading SolrJ using separate 
> URLClassLoader instance)
>Reporter: Tim Parker
>
> In 6.0.1 and previous, SolrInputDocument provided two constructors - one with 
> no arguments, the other accepting a Map object.  As of 6.1.0, the 
> no-arguments constructor is replaced with one that accepts zero or more 
> strings.
> With 6.0.1, this worked:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Constructor foo = cls.getConstructor();
> This fails with Solr 6.1.0
> We get the same error after updating the code to this:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Class[] argArray = new Class[0];
> Constructor foo = cls.getConstructor(argArray);
> Are we missing something?  If not, please restore the missing no-arguments 
> constructor.






[jira] [Comment Edited] (SOLR-9231) SolrInputDocument no-args constructor removed without notice

2016-10-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576356#comment-15576356
 ] 

Shawn Heisey edited comment on SOLR-9231 at 10/14/16 8:38 PM:
--

Interesting problem.  I guess the underlying question is ... what sort of API 
guarantees do we intend to honor for SolrJ?

I think the removal of the no-arg constructor fits with a policy of "typical 
API usage will continue to compile with a minor-version update."  IMHO, your 
usage is not typical.  Classloaders and other things that utilize method 
signatures are an advanced kind of Java programming.

A more strict guarantee of "code compiled against SolrJ X.Y will work if the 
jar is upgraded in place to version X.Z" would be required for your usage.  It 
may be reasonable for users to expect this kind of guarantee, but apparently 
whoever removed the constructor did not share this opinion, or was not aware 
that it could break existing binaries.


was (Author: elyograg):
Interesting problem.  I guess the underlying question is ... what sort of API 
guarantees do we intend to honor for SolrJ?

I think the removal of the no-arg constructor fits with a policy of "typical 
API usage will continue to compile with a minor-version update."  IMHO, your 
usage is not typical.  Classloaders and other things that utilize method 
signatures are an advanced kind of Java programming.

A more strict guarantee of "code compiled against SolrJ X.Y will work if the 
jar is upgraded in place to version X.Z" would be required for your usage.  It 
may be reasonable for users to expect this kind of guarantee, but apparently 
whoever removed the constructor did not share this opinion.

> SolrInputDocument no-args constructor removed without notice
> 
>
> Key: SOLR-9231
> URL: https://issues.apache.org/jira/browse/SOLR-9231
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 6.1
> Environment: Lucee (or ColdFusion) loading SolrJ using separate 
> URLClassLoader instance)
>Reporter: Tim Parker
>
> In 6.0.1 and previous, SolrInputDocument provided two constructors - one with 
> no arguments, the other accepting a Map object.  As of 6.1.0, the 
> no-arguments constructor is replaced with one that accepts zero or more 
> strings.
> With 6.0.1, this worked:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Constructor foo = cls.getConstructor();
> This fails with Solr 6.1.0
> We get the same error after updating the code to this:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Class[] argArray = new Class[0];
> Constructor foo = cls.getConstructor(argArray);
> Are we missing something?  If not, please restore the missing no-arguments 
> constructor.






[jira] [Commented] (SOLR-9646) Error reporting & upgrade Doc could be much more helpful

2016-10-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576407#comment-15576407
 ] 

Shawn Heisey commented on SOLR-9646:


Please use the mailing list for discussion of problems before coming to Jira, 
or join the #solr IRC channel on freenode.  We intend this tracker for bugs, 
enhancement requests, and development tasks that have been confirmed and 
discussed, so they are well-focused and well-documented.  Jira is not the first 
step for support requests.

On the mailing list or IRC channel, we could have at least informed you that 
this requires at least two, and possibly three separate issues:
 * Enhancing the log message about deprecated version emulation so that it 
correctly mentions luceneMatchVersion.
 * Figuring out why updating the version emulation to 6.2 caused an error 
message related to the 5.3 codec.
 ** This looks like it might be a bug.  You're running a development version, 
where bugs are relatively common.  If this *IS* a bug, it would typically be 
detected and fixed before release.
 ** My initial guess is that you have at least one index segment that was 
written by the 5.3 version, and the newer version is having some trouble 
dealing with that fact.  It should be able to handle a 5.x index segment.
 ** luceneMatchVersion is not intended to influence the on-disk index format.  
That should always be dependent on the actual version of the software.
 ** If you have multiple versions of Lucene/Solr jars on your classpath, it 
MIGHT explain this problem.
 * Figuring out why you got "Exception writing document id XXX".

Before creating additional issues or going any further on this one, please use 
the mailing list or IRC channel.  The mailing list has a VERY large audience.

When an error occurs, we need the *full* error from the log, all sections of 
the stacktrace that go with it, and the version of Solr.  With development 
versions, the version info needs to be EXTREMELY precise -- including the 
branch name, the git hash, and a complete description of any manual code 
changes that are in place.  Without that information, errors are very difficult 
to track down.


> Error reporting & upgrade Doc could be much more helpful
> 
>
> Key: SOLR-9646
> URL: https://issues.apache.org/jira/browse/SOLR-9646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.3
> Environment: CentOS 7 VM (VirtualBox VM hosted on Windows).  
> ColdFusion application talking to Solr via a mix of SolrJ and direct HTTP: 
> calls.  Using 6.3 snapshot linked to from SOLR-9325
>Reporter: Tim Parker
>
> Our Solr interface was originally built with Solr 4.10, and included some 
> additional schema fields and some minor configuration updates (default of 10 
> rows returned is too small for us, and we're doing all communication with 
> JSON).  This configuration works well with all versions through 6.2.1 (after 
> updating our custom ClassLoader to work around SOLR-9231).  However... when 
> trying to run with a 6.3.0 snapshot... we get errors which are far from easy 
> to decipher.
> 1) tons of warnings about deprecated 5.2 emulation - after some digging, 
> traced these to our failing to update luceneMatchVersion in solrconfig.xml
> >>  The warning should, at a minimum, point to the luceneMatchVersion setting 
> >> - current log entry is:
> 2016-10-14 17:13:36.131 WARN  (coreLoadExecutor-6-thread-1) [   ] 
> o.a.s.s.FieldTypePluginLoader TokenizerFactory is using deprecated 5.2.0 
> emulation. You should at some point declare and reindex to at least 6.0, 
> because 5.x emulation is deprecated and will be removed in 7.0
> 2) [seen before updating luceneMatchVersion]
> 2016-10-14 17:31:15.978 ERROR (qtp2080166188-15) [   x:issues-1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception 
> writing document id {some document name} to the index; possible analysis 
> error.
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:178)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
> .
>  =  I assume that this is something triggered by a change in 6.3, but it 
> would be nice to have more of a clue about what it's complaining about
> 3) [after updating luceneMatchVersion to 6.2.0]
> 2016-10-14 18:20:02.847 ERROR (qtp2080166188-16) [   x:issues-1] 
> o.a.s.h.RequestHandlerBase java.lang.UnsupportedOperationException: This 
> format can only be used for reading
>   at 
> org.apache.lucene.codecs.lucene53.Lucene53NormsFormat.normsConsumer(Lucene53NormsFormat.java:77)
>   at 
> 

[jira] [Commented] (SOLR-9231) SolrInputDocument no-args constructor removed without notice

2016-10-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576356#comment-15576356
 ] 

Shawn Heisey commented on SOLR-9231:


Interesting problem.  I guess the underlying question is ... what sort of API 
guarantees do we intend to honor for SolrJ?

I think the removal of the no-arg constructor fits with a policy of "typical 
API usage will continue to compile with a minor-version update."  IMHO, your 
usage is not typical.  Classloaders and other things that utilize method 
signatures are an advanced kind of Java programming.

A more strict guarantee of "code compiled against SolrJ X.Y will work if the 
jar is upgraded in place to version X.Z" would be required for your usage.  It 
may be reasonable for users to expect this kind of guarantee, but apparently 
whoever removed the constructor did not share this opinion.

> SolrInputDocument no-args constructor removed without notice
> 
>
> Key: SOLR-9231
> URL: https://issues.apache.org/jira/browse/SOLR-9231
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 6.1
> Environment: Lucee (or ColdFusion) loading SolrJ using separate 
> URLClassLoader instance)
>Reporter: Tim Parker
>
> In 6.0.1 and previous, SolrInputDocument provided two constructors - one with 
> no arguments, the other accepting a Map object.  As of 6.1.0, the 
> no-arguments constructor is replaced with one that accepts zero or more 
> strings.
> With 6.0.1, this worked:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Constructor foo = cls.getConstructor();
> This fails with Solr 6.1.0
> We get the same error after updating the code to this:
> cls = LoadClass("org.apache.solr.common.SolrInputDocument");
> Class[] argArray = new Class[0];
> Constructor foo = cls.getConstructor(argArray);
> Are we missing something?  If not, please restore the missing no-arguments 
> constructor.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 452 - Unstable!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/452/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testThresholdTokenFrequency

Error Message:
Path not found: /spellcheck/suggestions/[1]/suggestion

Stack Trace:
java.lang.RuntimeException: Path not found: 
/spellcheck/suggestions/[1]/suggestion
at 
__randomizedtesting.SeedInfo.seed([73440E54CDBEC307:F9E381A54255FA7C]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:901)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:848)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testThresholdTokenFrequency(SpellCheckComponentTest.java:277)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11694 lines...]
   [junit4] Suite: org.apache.solr.handler.component.SpellCheckComponentTest
   [junit4]   2> Creating 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-10-14 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576189#comment-15576189
 ] 

Shalin Shekhar Mangar commented on SOLR-9512:
-

Noble and I discussed this offline. Here is a summary of the problem and the 
solution:

There are six cases that we need to tackle. Assuming replica x is leader:
# Case 1: x is disconnected from zk, y becomes leader
** currently -- x throws an error on indexing; the client fails and keeps 
trying to send requests to x, which fail. This continues until x re-connects 
and the client gets a stale state flag in the response.
# Case 2: x is dead, y becomes leader
** currently -- the client gets a connect exception or NoResponseException (for 
in-flight requests) and keeps retrying the request to x. This continues until x 
comes back online.
# Case 3: x is disconnected from zk, no one is leader
** currently -- the client keeps sending requests to x, which fail because x is 
disconnected from zk. This continues until x re-connects and the client gets a 
stale state flag in the response.
# Case 4: x is dead, no one is leader yet
** currently -- the client gets a connect exception or NoResponseException (for 
in-flight requests) and keeps retrying the request to x. This continues until x 
comes back online.
# Case 5: x is alive but now y is leader
** currently -- the client gets a stale state flag from x and refreshes cluster 
state to see y as the new leader. All further indexing requests are sent to y.
# Case 6: the client is disconnected from zk
** currently -- the client keeps indexing to x. If it receives a stale state 
error, it tries to refresh cluster state, fails, continues to send further 
requests to x, keeps failing, keeps trying to read from zk, and is stuck in a 
cycle.

Cases 1-5 are solved by a single solution -- on ConnectException, 
NoHttpResponseException, or a leader-disconnected-from-zk error, the client 
should fetch state from zk again. If the client fetches from zk and does not 
get a new version, this should be marked in a flag, and subsequent retries 
should only happen after N seconds have elapsed or if we know for a fact that 
the version has changed since the last zk fetch. N could be as small as 2 
seconds or so.

Case 6 is more difficult. Either we can keep failing the indexing requests or 
we can ask a random Solr instance to return the latest cluster state. This is 
kinda dangerous because it can open us up to bugs that are very difficult to 
debug, so I am inclined to punt on this for now.
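
A rough sketch of the throttling half of that solution, just to pin down the 
intended semantics (class and method names are illustrative, not the actual 
patch):

{code}
// Decide whether a failed request should trigger a fresh cluster state fetch
// from ZK: refetch immediately if the cached version is known to be stale,
// otherwise wait at least N seconds between fetches.
class StateRefetchThrottle {
  private static final long MIN_INTERVAL_NANOS = 2_000_000_000L; // N = ~2s
  private long lastFetchNanos = Long.MIN_VALUE;
  private int lastFetchedVersion = -1;

  synchronized boolean shouldRefetch(int knownVersion) {
    if (knownVersion != lastFetchedVersion) return true; // version changed
    return System.nanoTime() - lastFetchNanos >= MIN_INTERVAL_NANOS;
  }

  synchronized void onFetched(int fetchedVersion) {
    lastFetchNanos = System.nanoTime();
    lastFetchedVersion = fetchedVersion;
  }
}
{code}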

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.






[jira] [Created] (SOLR-9646) Error reporting & upgrade Doc could be much more helpful

2016-10-14 Thread Tim Parker (JIRA)
Tim Parker created SOLR-9646:


 Summary: Error reporting & upgrade Doc could be much more helpful
 Key: SOLR-9646
 URL: https://issues.apache.org/jira/browse/SOLR-9646
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server, SolrJ
Affects Versions: 6.3
 Environment: CentOS 7 VM (VirtualBox VM hosted on Windows).  
ColdFusion application talking to Solr via a mix of SolrJ and direct HTTP: 
calls.  Using 6.3 snapshot linked to from SOLR-9325
Reporter: Tim Parker


Our Solr interface was originally built with Solr 4.10, and included some 
additional schema fields and some minor configuration updates (default of 10 
rows returned is too small for us, and we're doing all communication with 
JSON).  This configuration works well with all versions through 6.2.1 (after 
updating our custom ClassLoader to work around SOLR-9231).  However... when 
trying to run with a 6.3.0 snapshot... we get errors which are far from easy to 
decipher.

1) tons of warnings about deprecated 5.2 emulation - after some digging, traced 
these to our failing to update luceneMatchVersion in solrconfig.xml
>>  The warning should, at a minimum, point to the luceneMatchVersion setting - 
>> current log entry is:
2016-10-14 17:13:36.131 WARN  (coreLoadExecutor-6-thread-1) [   ] 
o.a.s.s.FieldTypePluginLoader TokenizerFactory is using deprecated 5.2.0 
emulation. You should at some point declare and reindex to at least 6.0, 
because 5.x emulation is deprecated and will be removed in 7.0

2) [seen before updating luceneMatchVersion]
2016-10-14 17:31:15.978 ERROR (qtp2080166188-15) [   x:issues-1] 
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception 
writing document id {some document name} to the index; possible analysis error.
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:178)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
.
 =  I assume that this is something triggered by a change in 6.3, but it 
would be nice to have more of a clue about what it's complaining about

3) [after updating luceneMatchVersion to 6.2.0]
2016-10-14 18:20:02.847 ERROR (qtp2080166188-16) [   x:issues-1] 
o.a.s.h.RequestHandlerBase java.lang.UnsupportedOperationException: This format 
can only be used for reading
at 
org.apache.lucene.codecs.lucene53.Lucene53NormsFormat.normsConsumer(Lucene53NormsFormat.java:77)
at 
org.apache.lucene.index.DefaultIndexingChain.writeNorms(DefaultIndexingChain.java:266)
at 
org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:95)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:443)
.
 what format is this referring to?  why is lucene53 in play? is there 
another setting I need to update?  If this is a configuration problem on my 
end, it would be more than nice to have some pointers here
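
For reference, the setting discussed in items 1 and 3 above is a single element 
in solrconfig.xml; after the update described, it would read:

{code}
<luceneMatchVersion>6.2.0</luceneMatchVersion>
{code}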








[jira] [Commented] (SOLR-9633) Limit FastLRUCache by RAM Usage

2016-10-14 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15576099#comment-15576099
 ] 

Shalin Shekhar Mangar commented on SOLR-9633:
-

FYI [~ysee...@gmail.com]

> Limit FastLRUCache by RAM Usage
> ---
>
> Key: SOLR-9633
> URL: https://issues.apache.org/jira/browse/SOLR-9633
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9633.patch
>
>
> SOLR-7372 added a maxRamMB parameter to LRUCache to evict items based on 
> memory usage. That helps with the query result cache but not with the filter 
> cache which defaults to FastLRUCache. This issue intends to add the same 
> feature to FastLRUCache.
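
If the new parameter mirrors the LRUCache one from SOLR-7372, configuring a 
RAM-bounded filter cache might look like this (a sketch of the proposed option, 
not a released one; the sizes are illustrative):

{code}
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"
             maxRamMB="256"/>
{code}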






[jira] [Resolved] (SOLR-9639) CdcrVersionReplicationTest failure

2016-10-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-9639.

Resolution: Fixed

follow up  SOLR-9645

> CdcrVersionReplicationTest failure
> --
>
> Key: SOLR-9639
> URL: https://issues.apache.org/jira/browse/SOLR-9639
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 6.3, master (7.0)
>
> Attachments: CDcr failure.txt, SOLR-9639.patch, SOLR-9639.patch, 
> cdcr-stack.txt, cdcr-success.txt
>
>
> h3. it fails.
> The problem is [over 
> there|https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/BaseCdcrDistributedZkTest.java#L597]:
> when the test deletes that temporary collection (which is a tricky thing per 
> se) while it's still in recovery, SolrCloud goes crazy: it closes the core 
> and has almost finished doing so, but the core can't be unloaded because 
> PeerSync (remember, it's recovering) opens it once more, and the logs are 
> flooded with 
> bq. 105902 INFO  (qtp3284815-656) [n:127.0.0.1:41440_ia%2Fd] 
> o.a.s.c.SolrCore Core collection1 is not yet closed, waiting 100 ms before 
> checking again.
> But then something spawns too many {{/get}} requests, which deadlock until 
> the heap is exceeded and the test dies. The fix is obvious: just wait until 
> the recoveries finish before removing tmp_collection. 
> Beside this particular fix, are there any ideas about the deadlock caused by 
> deleting a recovering collection?






[jira] [Updated] (SOLR-9639) CdcrVersionReplicationTest failure

2016-10-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9639:
---
Fix Version/s: master (7.0)

> CdcrVersionReplicationTest failure
> --
>
> Key: SOLR-9639
> URL: https://issues.apache.org/jira/browse/SOLR-9639
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 6.3, master (7.0)
>
> Attachments: CDcr failure.txt, SOLR-9639.patch, SOLR-9639.patch, 
> cdcr-stack.txt, cdcr-success.txt
>
>
> h3. it fails.
> The problem is [over 
> there|https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/BaseCdcrDistributedZkTest.java#L597]:
> when the test deletes that temporary collection (which is a tricky thing per 
> se) while it's still in recovery, SolrCloud goes crazy: it closes the core 
> and has almost finished doing so, but the core can't be unloaded because 
> PeerSync (remember, it's recovering) opens it once more, and the logs are 
> flooded with 
> bq. 105902 INFO  (qtp3284815-656) [n:127.0.0.1:41440_ia%2Fd] 
> o.a.s.c.SolrCore Core collection1 is not yet closed, waiting 100 ms before 
> checking again.
> But then something spawns too many {{/get}} requests, which deadlock until 
> the heap is exceeded and the test dies. The fix is obvious: just wait until 
> the recoveries finish before removing tmp_collection. 
> Beside this particular fix, are there any ideas about the deadlock caused by 
> deleting a recovering collection?






[jira] [Created] (SOLR-9645) removing collection while it's recovering blows up cluster

2016-10-14 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9645:
--

 Summary: removing collection while it's recovering blows up cluster
 Key: SOLR-9645
 URL: https://issues.apache.org/jira/browse/SOLR-9645
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.2, master (7.0)
Reporter: Mikhail Khludnev


The case is described at SOLR-9639. It's clear why core removal gets stuck, but 
it's not clear what causes the flood of requests, which look like /get, that 
exhausts the heap. To reproduce, just roll back the SOLR-9639 commit and run 
the ant test line from there. 






[jira] [Commented] (LUCENE-6914) DecimalDigitFilter skips characters in some cases (supplemental?)

2016-10-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15575706#comment-15575706
 ] 

Adrien Grand commented on LUCENE-6914:
--

[~hossman] Let's commit this patch?

> DecimalDigitFilter skips characters in some cases (supplemental?)
> -
>
> Key: LUCENE-6914
> URL: https://issues.apache.org/jira/browse/LUCENE-6914
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Hoss Man
> Attachments: LUCENE-6914.patch, LUCENE-6914.patch, LUCENE-6914.patch
>
>
> Found this while writing up the solr ref guide for DecimalDigitFilter. 
> With input like "ퟙퟡퟠퟜ" ("Double Struck" 1984) the filter produces "1ퟡ8ퟜ" (1, 
> double struck 9, 8, double struck 4). Add some non-decimal characters in 
> between the digits (i.e. "ퟙxퟡxퟠxퟜ") and you get the expected output 
> ("1x9x8x4"). This doesn't affect all decimal characters though, as evidenced 
> by the existing test cases.
> Perhaps this is an off-by-one bug in the "if the original was supplementary, 
> shrink the string" code path?
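
A minimal repro sketch of the behavior described above (assembled for 
illustration; not taken from the issue or its patches):

{code}
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.DecimalDigitFilter;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class DecimalDigitRepro {
  public static void main(String[] args) throws Exception {
    // U+1D7D9, U+1D7E1, U+1D7E0, U+1D7DC: double-struck 1, 9, 8, 4
    String input = "\uD835\uDFD9\uD835\uDFE1\uD835\uDFE0\uD835\uDFDC";
    KeywordTokenizer tokenizer = new KeywordTokenizer();
    tokenizer.setReader(new StringReader(input));
    TokenStream ts = new DecimalDigitFilter(tokenizer);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // On affected versions the 9 and 4 come out still double-struck,
      // rather than the expected "1984".
      System.out.println(term);
    }
    ts.end();
    ts.close();
  }
}
{code}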






[jira] [Commented] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575681#comment-15575681
 ] 

ASF subversion and git services commented on SOLR-8487:
---

Commit edde433594c104668137350d9db640180b04f648 in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=edde433 ]

SOLR-8487: Adds CommitStream to support sending commits to a collection being 
updated


> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
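With the commit, the expression form looks roughly like this (the parameter 
names such as {{batchSize}} reflect my reading of the feature and should be 
treated as illustrative):

{code}
commit(collection1,
       batchSize=500,
       update(collection1,
              batchSize=500,
              search(collection2, q="*:*", fl="id,name", sort="id asc", qt="/export")))
{code}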



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9103) Restore ability for users to add custom Streaming Expressions

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575668#comment-15575668
 ] 

ASF subversion and git services commented on SOLR-9103:
---

Commit 5e6a8e7c7537bfe1d2fdabf3b1bdd9fe825c5996 in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e6a8e7 ]

SOLR-9103: Restore ability for users to add custom Streaming Expressions


> Restore ability for users to add custom Streaming Expressions
> -
>
> Key: SOLR-9103
> URL: https://issues.apache.org/jira/browse/SOLR-9103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Dennis Gove
> Fix For: 6.3
>
> Attachments: HelloStream.class, SOLR-9103.PATCH, SOLR-9103.PATCH, 
> SOLR-9103.patch, SOLR-9103.patch
>
>
> StreamHandler is an implicit handler. So to make it extensible, we can 
> introduce the below syntax in solrconfig.xml. 
> {code}
> 
> {code}
> This will add a hello function to the streamFactory of StreamHandler.
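The element inside the {code} block above was eaten by the mail archiver; 
based on the surrounding text it was presumably a single registration along 
these lines (element and class names are reconstructions, not the verbatim 
snippet):

{code}
<expressible name="hello" class="org.example.HelloStream"/>
{code}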



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-14 Thread Tim Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575632#comment-15575632
 ] 

Tim Parker commented on SOLR-9325:
--

will do - we're calling Tika ourselves, so this probably isn't a Tika issue, 
but... I know where to find you if it is - thank you!

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> * burn the Solr directory tree onto a CD-ROM
> * mount this CD as /solr
> * run Solr from there (with appropriate environment variables set, of course)
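For reference, the expectation is that pointing the standard 
{{SOLR_LOGS_DIR}} variable at a writable location (e.g. in {{solr.in.sh}}) is 
all that should be needed; the path below is illustrative:

{code}
# solr.in.sh -- send all log output to a writable location off the install media
SOLR_LOGS_DIR=/var/solr/logs
{code}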



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-14 Thread loushang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575623#comment-15575623
 ] 

loushang commented on SOLR-9516:


so you confirmed that this bug was not fixed in 6.2?

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter a 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2016-10-14 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575614#comment-15575614
 ] 

Cassandra Targett commented on SOLR-9516:
-

I believe [~ichattopadhyaya] was using 6.2 when he found the problem.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI, web gui
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter a 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #96: SOLR-6246 - Fix core reload if suggester has b...

2016-10-14 Thread Peter-LaComb
Github user Peter-LaComb closed the pull request at:

https://github.com/apache/lucene-solr/pull/96


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-10-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575582#comment-15575582
 ] 

ASF GitHub Bot commented on SOLR-6246:
--

Github user Peter-LaComb commented on the issue:

https://github.com/apache/lucene-solr/pull/96
  
I broke three or more tests - need to fix that.


> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?
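The failure mode reduces to the classic two-writers-on-one-directory case; a 
standalone sketch (not the Solr code path; the path and analyzer are 
illustrative):

{code}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class TwoWritersRepro {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(Paths.get("suggester-index"));
    // The suggester created at core load holds its writer open:
    IndexWriter first = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    // A core reload builds a second suggester over the same directory; its
    // writer cannot obtain write.lock while the first writer is still open:
    IndexWriter second = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    // -> LockObtainFailedException, and the core reload fails
  }
}
{code}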



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-10-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575583#comment-15575583
 ] 

ASF GitHub Bot commented on SOLR-6246:
--

Github user Peter-LaComb closed the pull request at:

https://github.com/apache/lucene-solr/pull/96


> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #96: SOLR-6246 - Fix core reload if suggester has been bui...

2016-10-14 Thread Peter-LaComb
Github user Peter-LaComb commented on the issue:

https://github.com/apache/lucene-solr/pull/96
  
I broke three or more tests - need to fix that.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18046 - Unstable!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18046/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([8A26770661A56CCB:33A7A1D94D4F6841]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:813)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: (XML markup stripped by the mail archiver; only the text 
"00" survived)
request was: q=id:14&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:806)
... 40 more




Build Log:
[...truncated 10682 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-3419) XSS vulnerability in the json.wrf parameter

2016-10-14 Thread Shayne Urbanowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575505#comment-15575505
 ] 

Shayne Urbanowski commented on SOLR-3419:
-

I'm not sure that this is only related to the admin UI.

My security scanning tool is detecting a vulnerability related to embedding a 
script tag in the json.wrf, callback, group, facet or _ parameters in Solr API 
requests.

> XSS vulnerability in the json.wrf parameter
> ---
>
> Key: SOLR-3419
> URL: https://issues.apache.org/jira/browse/SOLR-3419
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Affects Versions: 3.5
>Reporter: Prafulla Kiran
>Priority: Minor
> Attachments: SOLR-3419-escape.patch
>
>
> There's no filtering of the wrapper function name passed to the solr search 
> service
> If the name of the wrapper function passed to the solr query service is the 
> following string - 
> %3C!doctype%20html%3E%3Chtml%3E%3Cbody%3E%3Cimg%20src=%22x%22%20onerror=%22alert%281%29%22%3E%3C/body%3E%3C/html%3E
> solr passes the string back as-is which results in an XSS attack in browsers 
> like IE-7 which perform mime-sniffing. In any case, the callback function in 
> a jsonp response should always be sanitized - 
> http://stackoverflow.com/questions/2777021/do-i-need-to-sanitize-the-callback-parameter-from-a-jsonp-call
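A minimal sanitization sketch (illustrative, not Solr's eventual fix) would be 
to whitelist identifier-style callback names and reject everything else:

{code}
import java.util.regex.Pattern;

final class JsonpCallback {
  // Accept only dotted JavaScript-identifier-style names, e.g. "foo.bar".
  private static final Pattern SAFE =
      Pattern.compile("^[A-Za-z_$][A-Za-z0-9_$]*(\\.[A-Za-z_$][A-Za-z0-9_$]*)*$");

  static String sanitize(String wrf) {
    if (wrf == null || !SAFE.matcher(wrf).matches()) {
      throw new IllegalArgumentException("invalid json.wrf callback name");
    }
    return wrf;
  }
}
{code}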



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-14 Thread Mahesh (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575498#comment-15575498
 ] 

Mahesh commented on LUCENE-7493:


Hey Michael McCandless, thanks for pointing that out. 

With the new code there is weird behavior. The code fix and test execute as 
expected (before the fix the assertion fails, and after the fix the test 
passes), but the problem I had is that the test sometimes fails with the error 
'java.lang.IllegalArgumentException: dimension "b" was not indexed into field 
"$facets"'. This happens randomly, with no fixed steps to reproduce it, and I 
am not sure why :(.

> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
> Attachments: LUCENE-7493-Fail-TestCase.patch, 
> LUCENE-7493-Fail-V.20.patch, LUCENE-7493-Pass-TestCase.patch, 
> LUCENE-7493-Pass-V.20.patch
>
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code: 
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I just want the facet information, so I pass the limit as zero, but I get 
> the error "numHits must be > 0; please use TotalHitCountCollector if you 
> just need the total hit count".
> With FacetsCollector there is no way to plug in a 'TotalHitCountCollector'. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way to have a 
> 'TotalHitCountCollector' used instead. 
> A better way would be to provide an API which takes a query and a collector 
> as inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.
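A workaround sketch under the current API (assuming an open {{IndexSearcher}} 
{{searcher}} and a {{Query}} {{filterQuery}} as in the snippet above): run the 
facets collector and a {{TotalHitCountCollector}} together in a single pass 
instead of asking FacetsCollector.search for zero top docs.

{code}
import org.apache.lucene.facet.FacetsCollector;
import org.apache.lucene.search.MultiCollector;
import org.apache.lucene.search.TotalHitCountCollector;

FacetsCollector facetsCollector = new FacetsCollector();
TotalHitCountCollector countCollector = new TotalHitCountCollector();
// One pass: facet data goes to facetsCollector, the hit count to countCollector.
searcher.search(filterQuery, MultiCollector.wrap(countCollector, facetsCollector));
int totalHits = countCollector.getTotalHits();
{code}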



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-14 Thread Mahesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahesh updated LUCENE-7493:
---
Attachment: LUCENE-7493-Pass-V.20.patch
LUCENE-7493-Fail-V.20.patch

> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
> Attachments: LUCENE-7493-Fail-TestCase.patch, 
> LUCENE-7493-Fail-V.20.patch, LUCENE-7493-Pass-TestCase.patch, 
> LUCENE-7493-Pass-V.20.patch
>
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code: 
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I just want the facet information, so I pass the limit as zero, but I get 
> the error "numHits must be > 0; please use TotalHitCountCollector if you 
> just need the total hit count".
> With FacetsCollector there is no way to plug in a 'TotalHitCountCollector'. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way to have a 
> 'TotalHitCountCollector' used instead. 
> A better way would be to provide an API which takes a query and a collector 
> as inputs, just like 'drillSideways.search(filterQuery, totalHitCountCollector)'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Adrien Grand
No worries, it was easy to think about when I saw a DMQ with negative
scores. :)

On Fri, Oct 14, 2016 at 4:07 PM, Uwe Schindler wrote:

> Hi,
>
> Many thanks. I was not aware that the f*cking explain() does not use the
> scorer!
> I was about to look into this when I saw your question, Mike, because I
> had seen negative scores :-)
>
> Sorry,
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Michael McCandless [mailto:luc...@mikemccandless.com]
> > Sent: Friday, October 14, 2016 3:37 PM
> > To: Lucene/Solr dev 
> > Subject: Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) -
> Build #
> > 18039 - Still Unstable!
> >
> > Ahh thanks Adrien!
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> >
> > On Fri, Oct 14, 2016 at 9:33 AM, Adrien Grand  wrote:
> > > This was caused by https://issues.apache.org/jira/browse/LUCENE-7486:
> > the
> > > dismax scorer had been updated to initialize maxScore to
> > NEGATIVE_INFINITY,
> > > but not its explain() method. Should be fixed now.
> > >
> > > On Fri, Oct 14, 2016 at 12:26 PM, Michael McCandless wrote:
> > >>
> > >> Thanks Adrien.
> > >>
> > >> Mike McCandless
> > >>
> > >> http://blog.mikemccandless.com
> > >>
> > >>
> > >> On Fri, Oct 14, 2016 at 6:21 AM, Adrien Grand 
> > wrote:
> > >> > I will look into it later today.
> > >> >
> > >> > On Fri, Oct 14, 2016 at 12:08 PM, Michael McCandless wrote:
> > >> >>
> > >> >> Does anyone have any idea on this one :)
> > >> >>
> > >> >> Mike McCandless
> > >> >>
> > >> >> http://blog.mikemccandless.com
> > >> >>
> > >> >>
> > >> >> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
> > >> >>  wrote:
> > >> >> > Build:
> > >> >> > https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
> > >> >> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
> > >> >> >
> > >> >> > 2 tests failed.
> > >> >> > FAILED:
> > >> >> >
> > >> >> >
> > org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
> > >> >> >
> > >> >> > Error Message:
> > >> >> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5
> -extra:extra)
> > >> >> > NEVER:MATCH: score(doc=433)=-3.9678193E-6 !=
> > >> >> > explanationScore=-1.9839097E-6
> > >> >> > Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
> > >> >> > -1.9839097E-6 = max plus 0.5 times others of:
>  -3.9678193E-6 =
> > >> >> > sum of:
> > >> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
> result
> > >> >> > of:
> > >> >> > -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed
> > >> >> > from:
> > >> >> > 100.0 = boost 9.741334E-20 = NormalizationH1,
> computed
> > >> >> > from:
> > >> >> > 1.0 = tf   5.5031867 = avgFieldLength
> > >> >> > 5.6493154E19
> > >> >> > = len 0.24889919 = LambdaTTF, computed from:
> > >> >> > 2147.0 = totalTermFreq   8629.0 = numberOfDocuments
> > >> >> > -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
> > >> >> > was:<-1.9839097E-6>
> > >> >> >
> > >> >> > Stack Trace:
> > >> >> > junit.framework.AssertionFailedError: (+((field:yy
> (field:w5)^100.0)
> > >> >> > |
> > >> >> > (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH:
> > >> >> > score(doc=433)=-3.9678193E-6
> > >> >> > != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 =
> > sum
> > >> >> > of:
> > >> >> >   -1.9839097E-6 = sum of:
> > >> >> > -1.9839097E-6 = max plus 0.5 times others of:
> > >> >> >   -3.9678193E-6 = sum of:
> > >> >> > -3.9678193E-6 = weight(field:w5 in 433)
> [RandomSimilarity],
> > >> >> > result of:
> > >> >> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
> > >> >> > computed from:
> > >> >> > 100.0 = boost
> > >> >> > 9.741334E-20 = NormalizationH1, computed from:
> > >> >> >   1.0 = tf
> > >> >> >   5.5031867 = avgFieldLength
> > >> >> >   5.6493154E19 = len
> > >> >> > 0.24889919 = LambdaTTF, computed from:
> > >> >> >   2147.0 = totalTermFreq
> > >> >> >   8629.0 = numberOfDocuments
> > >> >> > -3.9678195E-8 = DistributionSPL
> > >> >> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
> > >> >> > at
> > >> >> >
> > >> >> >
> > __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE2
> > 9]:0)
> > >> >> > at junit.framework.Assert.fail(Assert.java:50)
> > >> >> > at junit.framework.Assert.failNotEquals(Assert.java:287)
> > >> >> > at junit.framework.Assert.assertEquals(Assert.java:120)
> > >> >> > at
> > >> >> >
> > >> >> >
> > org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
> > >> >> > at
> > 

RE: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Uwe Schindler
Hi,

Many thanks. I was not aware that the f*cking explain() does not use the scorer!
I was about to look into this when I saw your question, Mike, because I had 
seen negative scores :-)

Sorry,
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Friday, October 14, 2016 3:37 PM
> To: Lucene/Solr dev 
> Subject: Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build #
> 18039 - Still Unstable!
> 
> Ahh thanks Adrien!
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> 
> On Fri, Oct 14, 2016 at 9:33 AM, Adrien Grand  wrote:
> > This was caused by https://issues.apache.org/jira/browse/LUCENE-7486:
> the
> > dismax scorer had been updated to initialize maxScore to
> NEGATIVE_INFINITY,
> > but not its explain() method. Should be fixed now.
> >
> > On Fri, Oct 14, 2016 at 12:26 PM, Michael McCandless wrote:
> >>
> >> Thanks Adrien.
> >>
> >> Mike McCandless
> >>
> >> http://blog.mikemccandless.com
> >>
> >>
> >> On Fri, Oct 14, 2016 at 6:21 AM, Adrien Grand 
> wrote:
> >> > I will look into it later today.
> >> >
> >> > On Fri, Oct 14, 2016 at 12:08 PM, Michael McCandless wrote:
> >> >>
> >> >> Does anyone have any idea on this one :)
> >> >>
> >> >> Mike McCandless
> >> >>
> >> >> http://blog.mikemccandless.com
> >> >>
> >> >>
> >> >> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
> >> >>  wrote:
> >> >> > Build:
> >> >> > https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
> >> >> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
> >> >> >
> >> >> > 2 tests failed.
> >> >> > FAILED:
> >> >> >
> >> >> >
> org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
> >> >> >
> >> >> > Error Message:
> >> >> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra)
> >> >> > NEVER:MATCH: score(doc=433)=-3.9678193E-6 !=
> >> >> > explanationScore=-1.9839097E-6
> >> >> > Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
> >> >> > -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 =
> >> >> > sum of:
> >> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result
> >> >> > of:
> >> >> > -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed
> >> >> > from:
> >> >> > 100.0 = boost 9.741334E-20 = NormalizationH1, computed
> >> >> > from:
> >> >> > 1.0 = tf   5.5031867 = avgFieldLength
> >> >> > 5.6493154E19
> >> >> > = len 0.24889919 = LambdaTTF, computed from:
> >> >> > 2147.0 = totalTermFreq   8629.0 = numberOfDocuments
> >> >> > -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
> >> >> > was:<-1.9839097E-6>
> >> >> >
> >> >> > Stack Trace:
> >> >> > junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0)
> >> >> > |
> >> >> > (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH:
> >> >> > score(doc=433)=-3.9678193E-6
> >> >> > != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 =
> sum
> >> >> > of:
> >> >> >   -1.9839097E-6 = sum of:
> >> >> > -1.9839097E-6 = max plus 0.5 times others of:
> >> >> >   -3.9678193E-6 = sum of:
> >> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
> >> >> > result of:
> >> >> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
> >> >> > computed from:
> >> >> > 100.0 = boost
> >> >> > 9.741334E-20 = NormalizationH1, computed from:
> >> >> >   1.0 = tf
> >> >> >   5.5031867 = avgFieldLength
> >> >> >   5.6493154E19 = len
> >> >> > 0.24889919 = LambdaTTF, computed from:
> >> >> >   2147.0 = totalTermFreq
> >> >> >   8629.0 = numberOfDocuments
> >> >> > -3.9678195E-8 = DistributionSPL
> >> >> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
> >> >> > at
> >> >> >
> >> >> >
> __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE2
> 9]:0)
> >> >> > at junit.framework.Assert.fail(Assert.java:50)
> >> >> > at junit.framework.Assert.failNotEquals(Assert.java:287)
> >> >> > at junit.framework.Assert.assertEquals(Assert.java:120)
> >> >> > at
> >> >> >
> >> >> >
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
> >> >> > at
> >> >> >
> >> >> >
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.j
> ava:505)
> >> >> > at
> >> >> >
> >> >> >
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollect
> or.java:52)
> >> >> > at
> >> >> >
> >> >> >
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.jav
> a:56)
> >> >> > at
> >> >> >
> >> >> >
> 

[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575437#comment-15575437
 ] 

Uwe Schindler commented on LUCENE-7486:
---

Thanks Adrien for fixing this bug!

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.3
>
> Attachments: LUCENE-7486.patch, LUCENE-7486.patch
>
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;
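Schematically, the fix that went in seeds the running maximum with negative 
infinity instead of zero, in both score() and explain(); a sketch of the idea, 
not the verbatim patch:

{code}
// Seeding the max with 0 silently clamps all-negative scores; seed with
// negative infinity so the true maximum survives.
static float maxSubScore(float[] subScores) {
  float scoreMax = Float.NEGATIVE_INFINITY;  // previously 0.0f
  for (float s : subScores) {
    scoreMax = Math.max(scoreMax, s);
  }
  return scoreMax;
}
{code}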



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7115) UpdateLog can miss closing transaction log objects.

2016-10-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575372#comment-15575372
 ] 

Mikhail Khludnev commented on SOLR-7115:


No, no, no. Thorough beasting hasn't reproduced the failure. Please consider 
it for review.  

> UpdateLog can miss closing transaction log objects.
> ---
>
> Key: SOLR-7115
> URL: https://issues.apache.org/jira/browse/SOLR-7115
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: 6.3
>
> Attachments: SOLR-7115-LargeVolumeEmbeddedTest-fail.txt, 
> SOLR-7115.patch, SOLR-7115.patch, tests-failures-7115.txt
>
>
> I've seen this happen on YourKit and in various tests - especially since 
> adding resource release tracking to the log objects. Now I've got a test that 
> catches it in SOLR-7113.
> It seems that in precommit, if prevTlog is not null, we need to close it 
> because we are going to overwrite prevTlog with a new log.
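The shape of the described fix is roughly the following (a sketch; the names 
follow UpdateLog as I read it, not the committed patch):

{code}
// In UpdateLog#preCommit (sketch): release the previous tlog before the
// reference is overwritten, otherwise it is never closed.
if (prevTlog != null) {
  prevTlog.decref();
}
prevTlog = tlog;
{code}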



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Michael McCandless
Ahh thanks Adrien!

Mike McCandless

http://blog.mikemccandless.com


On Fri, Oct 14, 2016 at 9:33 AM, Adrien Grand  wrote:
> This was caused by https://issues.apache.org/jira/browse/LUCENE-7486: the
> dismax scorer had been updated to initialize maxScore to NEGATIVE_INFINITY,
> but not its explain() method. Should be fixed now.
>
> On Fri, Oct 14, 2016 at 12:26 PM, Michael McCandless wrote:
>>
>> Thanks Adrien.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Fri, Oct 14, 2016 at 6:21 AM, Adrien Grand  wrote:
>> > I will look into it later today.
>> >
>> > On Fri, Oct 14, 2016 at 12:08 PM, Michael McCandless wrote:
>> >>
>> >> Does anyone have any idea on this one :)
>> >>
>> >> Mike McCandless
>> >>
>> >> http://blog.mikemccandless.com
>> >>
>> >>
>> >> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
>> >>  wrote:
>> >> > Build:
>> >> > https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
>> >> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
>> >> >
>> >> > 2 tests failed.
>> >> > FAILED:
>> >> >
>> >> > org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
>> >> >
>> >> > Error Message:
>> >> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra)
>> >> > NEVER:MATCH: score(doc=433)=-3.9678193E-6 !=
>> >> > explanationScore=-1.9839097E-6
>> >> > Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
>> >> > -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 =
>> >> > sum of:
>> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result
>> >> > of:
>> >> > -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed
>> >> > from:
>> >> > 100.0 = boost 9.741334E-20 = NormalizationH1, computed
>> >> > from:
>> >> > 1.0 = tf   5.5031867 = avgFieldLength
>> >> > 5.6493154E19
>> >> > = len 0.24889919 = LambdaTTF, computed from:
>> >> > 2147.0 = totalTermFreq   8629.0 = numberOfDocuments
>> >> > -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
>> >> > was:<-1.9839097E-6>
>> >> >
>> >> > Stack Trace:
>> >> > junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0)
>> >> > |
>> >> > (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH:
>> >> > score(doc=433)=-3.9678193E-6
>> >> > != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 = sum
>> >> > of:
>> >> >   -1.9839097E-6 = sum of:
>> >> > -1.9839097E-6 = max plus 0.5 times others of:
>> >> >   -3.9678193E-6 = sum of:
>> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
>> >> > result of:
>> >> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
>> >> > computed from:
>> >> > 100.0 = boost
>> >> > 9.741334E-20 = NormalizationH1, computed from:
>> >> >   1.0 = tf
>> >> >   5.5031867 = avgFieldLength
>> >> >   5.6493154E19 = len
>> >> > 0.24889919 = LambdaTTF, computed from:
>> >> >   2147.0 = totalTermFreq
>> >> >   8629.0 = numberOfDocuments
>> >> > -3.9678195E-8 = DistributionSPL
>> >> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
>> >> > at
>> >> >
>> >> > __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE29]:0)
>> >> > at junit.framework.Assert.fail(Assert.java:50)
>> >> > at junit.framework.Assert.failNotEquals(Assert.java:287)
>> >> > at junit.framework.Assert.assertEquals(Assert.java:120)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:505)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:183)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:170)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>> >> > at
>> >> > org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>> >> > at
>> >> >
>> >> > org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>> >> 

Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Adrien Grand
This was caused by https://issues.apache.org/jira/browse/LUCENE-7486: the
dismax scorer had been updated to initialize maxScore to NEGATIVE_INFINITY,
but not its explain() method. Should be fixed now.

On Fri, Oct 14, 2016 at 12:26 PM, Michael McCandless wrote:

> Thanks Adrien.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Oct 14, 2016 at 6:21 AM, Adrien Grand  wrote:
> > I will look into it later today.
> >
> > On Fri, Oct 14, 2016 at 12:08 PM, Michael McCandless <luc...@mikemccandless.com> wrote:
> >>
> >> Does anyone have any idea on this one :)
> >>
> >> Mike McCandless
> >>
> >> http://blog.mikemccandless.com
> >>
> >>
> >> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
> >>  wrote:
> >> > Build:
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
> >> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
> >> >
> >> > 2 tests failed.
> >> > FAILED:
> >> > org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
> >> >
> >> > Error Message:
> >> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra)
> >> > NEVER:MATCH: score(doc=433)=-3.9678193E-6 !=
> explanationScore=-1.9839097E-6
> >> > Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
> >> > -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 =
> sum of:
> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result of:
> >> > -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed from:
> >> > 100.0 = boost 9.741334E-20 = NormalizationH1, computed
> from:
> >> > 1.0 = tf   5.5031867 = avgFieldLength
>  5.6493154E19
> >> > = len 0.24889919 = LambdaTTF, computed from:
> >> > 2147.0 = totalTermFreq   8629.0 = numberOfDocuments
> >> > -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
> >> > was:<-1.9839097E-6>
> >> >
> >> > Stack Trace:
> >> > junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0) |
> >> > (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH:
> score(doc=433)=-3.9678193E-6
> >> > != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 = sum of:
> >> >   -1.9839097E-6 = sum of:
> >> > -1.9839097E-6 = max plus 0.5 times others of:
> >> >   -3.9678193E-6 = sum of:
> >> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
> >> > result of:
> >> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
> >> > computed from:
> >> > 100.0 = boost
> >> > 9.741334E-20 = NormalizationH1, computed from:
> >> >   1.0 = tf
> >> >   5.5031867 = avgFieldLength
> >> >   5.6493154E19 = len
> >> > 0.24889919 = LambdaTTF, computed from:
> >> >   2147.0 = totalTermFreq
> >> >   8629.0 = numberOfDocuments
> >> > -3.9678195E-8 = DistributionSPL
> >> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
> >> > at
> >> >
> __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE29]:0)
> >> > at junit.framework.Assert.fail(Assert.java:50)
> >> > at junit.framework.Assert.failNotEquals(Assert.java:287)
> >> > at junit.framework.Assert.assertEquals(Assert.java:120)
> >> > at
> >> >
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
> >> > at
> >> >
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:505)
> >> > at
> >> >
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> >> > at
> >> >
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
> >> > at
> >> >
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> >> > at
> >> >
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> >> > at
> >> >
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:183)
> >> > at
> >> >
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:170)
> >> > at
> >> >
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
> >> > at
> >> >
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
> >> > at
> org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> >> > at
> >> >
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
> >> > at
> >> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
> >> > at
> >> >
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
> >> > at
> >> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> >> > at
> >> >
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
> >> 

[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575340#comment-15575340
 ] 

ASF subversion and git services commented on LUCENE-7486:
-

Commit 9304ef9f118d24f76b280299706310ca8a0d40e6 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9304ef9 ]

LUCENE-7486: Explain() should initialize maxScore to NEGATIVE_INFINITY too.


> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.3
>
> Attachments: LUCENE-7486.patch, LUCENE-7486.patch
>
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575339#comment-15575339
 ] 

ASF subversion and git services commented on LUCENE-7486:
-

Commit b5a98cc52c29dbcb79299396d44db6b604c2c422 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b5a98cc ]

LUCENE-7486: Explain() should initialize maxScore to NEGATIVE_INFINITY too.


> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.3
>
> Attachments: LUCENE-7486.patch, LUCENE-7486.patch
>
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7496) Better toString for SweetSpotSimilarity

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned LUCENE-7496:
---

Assignee: Jan Høydahl

> Better toString for SweetSpotSimilarity
> ---
>
> Key: LUCENE-7496
> URL: https://issues.apache.org/jira/browse/LUCENE-7496
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7496.patch
>
>
> Spinoff from SOLR-8370 where we display Similarity class in use in the Admin 
> UI.
> SweetSpotSimilarity does not provide a {{toString}} method, so it will 
> incorrectly print {{ClassicSimilarity}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7496) Better toString for SweetSpotSimilarity

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-7496:

Fix Version/s: 6.3
   master (7.0)

> Better toString for SweetSpotSimilarity
> ---
>
> Key: LUCENE-7496
> URL: https://issues.apache.org/jira/browse/LUCENE-7496
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Jan Høydahl
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7496.patch
>
>
> Spinoff from SOLR-8370 where we display Similarity class in use in the Admin 
> UI.
> SweetSpotSimilarity does not provide a {{toString}} method, so it will 
> incorrectly print {{ClassicSimilarity}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8370:
--
Attachment: SOLR-8370.patch

New patch
* Moved change of {{SweetSpotSimilarity}} to LUCENE-7496 for separate discussion
* Not changing {{PerFieldSimilarityWrapper}} as it is a Lucene class, and it 
was wrong to assume {{SchemaSimilarity}} and that {{get("")}} returns the 
default impl. Instead, added the toString to the anonymous inner class of 
{{SchemaSimilarity}}
* Skipped the toString for {{TFIDFSimilarity}} since I believe it will never be 
in use - we don't have a factory for it, so it will be ClassicSimilarity...

Think this should be good to go in now? Or do you see more changes needed?

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, 
> SOLR-8370.patch, SOLR-8370.patch, SOLR-8370.patch, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{}} that is in use in schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7496) Better toString for SweetSpotSimilarity

2016-10-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575299#comment-15575299
 ] 

Adrien Grand commented on LUCENE-7496:
--

+1 to the patch. I think being exhaustive is fine.

> Better toString for SweetSpotSimilarity
> ---
>
> Key: LUCENE-7496
> URL: https://issues.apache.org/jira/browse/LUCENE-7496
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Jan Høydahl
> Attachments: LUCENE-7496.patch
>
>
> Spinoff from SOLR-8370 where we display Similarity class in use in the Admin 
> UI.
> SweetSpotSimilarity does not provide a {{toString}} method, so it will 
> incorrectly print {{ClassicSimilarity}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7496) Better toString for SweetSpotSimilarity

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-7496:

Attachment: LUCENE-7496.patch

Patch.

There are many config variables. Could perhaps be smarter in detecting whether 
baseline or hyperbolic tf is in use and print only the active set of params?
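For illustration, one possible shape (the field names here are guesses, not 
the attached patch):

{code}
@Override
public String toString() {
  // A smarter version would print only the baseline *or* the hyperbolic tf
  // params, depending on which mode is configured.
  return "SweetSpotSimilarity" +
      "(baselineTfMin=" + tf_min + ", baselineTfBase=" + tf_base +
      ", hyperbolicTfMin=" + tf_hyper_min + ", hyperbolicTfMax=" + tf_hyper_max +
      ", lengthNormMin=" + ln_min + ", lengthNormMax=" + ln_max +
      ", lengthNormSteepness=" + ln_steep + ")";
}
{code}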

> Better toString for SweetSpotSimilarity
> ---
>
> Key: LUCENE-7496
> URL: https://issues.apache.org/jira/browse/LUCENE-7496
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Jan Høydahl
> Attachments: LUCENE-7496.patch
>
>
> Spinoff from SOLR-8370 where we display Similarity class in use in the Admin 
> UI.
> SweetSpotSimilarity does not provide a {{toString}} method, so it will 
> incorrectly print {{ClassicSimilarity}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9644) MoreLikeThis parser doesn't handle boosts properly

2016-10-14 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-9644:
--
Remaining Estimate: (was: 4h)
 Original Estimate: (was: 4h)

> MoreLikeThis parser doesn't handle boosts properly
> --
>
> Key: SOLR-9644
> URL: https://issues.apache.org/jira/browse/SOLR-9644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 6.2.1
>Reporter: Ere Maijala
>
> It seems SimpleMLTQParser should be able to handle boost parameters, but it's 
> not working properly. I've added a proposed patch to fix a similar issue in 
> CloudMLTQParser in https://issues.apache.org/jira/browse/SOLR-9267 and 
> will attach a patch here too.
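For context, the failing case is a boosted field in the parser's local params, 
along these lines (field names and values are illustrative):

{code}
q={!mlt qf=title^5,description mintf=1 mindf=1}1234
{code}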



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7496) Better toString for SweetSpotSimilarity

2016-10-14 Thread JIRA
Jan Høydahl created LUCENE-7496:
---

 Summary: Better toString for SweetSpotSimilarity
 Key: LUCENE-7496
 URL: https://issues.apache.org/jira/browse/LUCENE-7496
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Jan Høydahl


Spinoff from SOLR-8370 where we display Similarity class in use in the Admin UI.

SweetSpotSimilarity does not provide a {{toString}} method, so it will 
incorrectly print {{ClassicSimilarity}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2585) DirectoryReader.isCurrent might fail to see the segments file during concurrent index changes

2016-10-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575275#comment-15575275
 ] 

Michael McCandless commented on LUCENE-2585:


Thanks [~hanwen], I think you are correct.  How would you suggest we fix it?  
Could you open a new issue to track this?  Thanks.
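Until the race is fixed, one application-side mitigation (purely a sketch, not 
a project-endorsed fix) is to retry the check when the segments file 
disappears between the directory listing and the open:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.NoSuchFileException;
import org.apache.lucene.index.DirectoryReader;

static boolean isCurrentWithRetry(DirectoryReader reader) throws IOException {
  while (true) {
    try {
      return reader.isCurrent();
    } catch (FileNotFoundException | NoSuchFileException e) {
      // segments_N rotated away between listing and open; retry the check
    }
  }
}
{code}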

> DirectoryReader.isCurrent might fail to see the segments file during 
> concurrent index changes
> -
>
> Key: LUCENE-2585
> URL: https://issues.apache.org/jira/browse/LUCENE-2585
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Sanne Grinovero
> Fix For: 4.9, 6.0
>
>
> I could reproduce the issue several times but only by running long and 
> stressfull benchmarks, the high number of files is likely part of the 
> scenario.
> All tests run on local disk, using ext3.
> Sample stacktrace:
> {noformat}java.io.FileNotFoundException: no segments* file found in 
> org.apache.lucene.store.NIOFSDirectory@/home/sanne/infinispan-41/lucene-directory/tempIndexName:
>  files:
> _2l3.frq _uz.fdt _1q4.fnm _1q0.fdx _4bc.fdt _v2.tis _4ll.fdx _2l8.tii _ux.fnm 
> _3g7.fdx _4bb.tii _4bj.prx _uy.fdx _3g7.prx _2l7.frq _2la.fdt _3ge.nrm 
> _2l6.prx 
> _1py.fdx _3g6.nrm _v0.prx _4bi.tii _2l2.tis _v2.fdx _2l3.nrm _2l8.fnm 
> _4bg.tis _2la.tis _uu.fdx _3g6.fdx _1q3.frq _2la.frq _4bb.tis _3gb.tii 
> _1pz.tis 
> _2lb.nrm _4lm.nrm _3g9.tii _v0.fdt _2l5.fnm _v2.prx _4ll.tii _4bd.nrm 
> _2l7.fnm _2l4.nrm _1q2.tis _3gb.fdx _4bh.fdx _1pz.nrm _ux.fdx _ux.tii 
> _1q6.nrm 
> _3gf.fdx _4lk.fdt _3gd.nrm _v3.fnm _3g8.prx _1q2.nrm _4bh.prx _1q0.frq 
> _ux.fdt _1q7.fdt _4bb.fnm _4bf.nrm _4bc.nrm _3gb.fdt _4bh.fnm _2l5.tis 
> _1pz.fnm _1py.fnm _3gc.fnm _2l2.prx _2l4.frq _3gc.fdt _ux.tis _1q3.prx 
> _2l7.fdx _4bj.nrm _4bj.fdx _4bi.tis _3g9.prx _1q4.prx _v3.fdt _1q3.fdx 
> _2l9.fdt 
> _4bh.tis _3gb.nrm _v2.nrm _3gd.tii _2l7.nrm _2lb.tii _4lm.tis _3ga.fdx 
> _1pz.fdt _3g7.fnm _2l3.fnm _4lk.fnm _uz.fnm _2l2.frq _4bd.fdx _1q2.fdt 
> _3g7.tis 
> _4bi.frq _4bj.frq _2l7.prx _ux.prx _3gd.fnm _1q4.fdt _1q1.fdt _v1.fnm 
> _1py.nrm _3gf.nrm _4be.fdt _1q3.tii _1q1.prx _2l3.fdt _4lk.frq _2l4.fdx 
> _4bd.fnm 
> _uw.frq _3g8.fdx _2l6.tii _1q5.frq _1q5.tis _3g8.nrm _uw.nrm _v0.tii _v2.fdt 
> _2l7.fdt _v0.tis _uy.tii _3ge.tii _v1.tii _3gb.tis _4lm.fdx _4bc.fnm _2lb.frq 
> _2l6.fnm _3g6.tii _3ge.prx _uu.frq _1pz.fdx _1q2.fnm _4bi.prx _3gc.frq 
> _2l9.tis _3ge.fdt _uy.fdt _4ll.fnm _3gc.prx _1q7.tii _2l5.nrm _uy.nrm _uv.frq 
> _1q6.frq _4ba.tis _3g9.tis _4be.nrm _4bi.fnm _ux.frq _1q1.fnm _v0.fnm 
> _2l4.fnm _4ba.fnm _4be.tis _uz.prx _1q6.fdx _uw.tii _2l6.nrm _1pz.prx 
> _2l7.tis 
> _1q7.fdx _2l9.tii _4lk.tii _uz.frq _3g8.frq _4bb.prx _1q5.tii _1q5.prx 
> _v2.frq _4bc.tii _1q7.prx _v2.tii _2lb.tis _4bi.fdt _uv.nrm _2l2.fnm _4bd.tii 
> _1q7.tis 
> _4bg.fnm _3ga.frq _uu.fnm _2l9.fnm _3ga.fnm _uw.fnm _1pz.frq _1q1.fdx 
> _3ge.fdx _2l3.prx _3ga.nrm _uv.fdt _4bb.nrm _1q7.fnm _uv.tis _3gb.fnm 
> _2l6.tis _1pz.tii _uy.fnm _3gf.fdt _3gc.nrm _4bf.tis _1q5.fnm _uu.tis 
> _4bh.tii _2l5.fdt _1q6.tii _4bc.tis _3gc.tii _3g9.fnm _2l6.fdt _4bj.fnm 
> _uu.tii _v3.frq 
> _3g9.fdx _v0.nrm _2l7.tii _1q0.fdt _3ge.fnm _4bf.fdt _1q6.prx _uz.nrm 
> _4bi.fdx _3gf.fnm _4lm.frq _v0.fdx _4ba.fdt _1py.tii _4bf.tii _uw.fdx 
> _2l5.frq 
> _3g9.nrm _v1.fdt _uw.fdt _4bd.frq _4bg.prx _3gd.tis _1q4.tis _2l9.nrm 
> _2la.nrm _v3.tii _4bf.prx _1q1.nrm _4ba.tii _3gd.fdx _1q4.tii _4lm.tii 
> _3ga.tis 
> _4bf.fnm write.lock _2l8.prx _2l8.fdt segments.gen _2lb.fnm _2l4.fdt _1q2.prx 
> _4be.fnm _3gf.prx _2l6.fdx _3g6.fnm _4bb.fdt _4bd.tis _4lk.nrm _2l5.fdx 
> _2la.tii _4bd.prx _4ln.fnm _3gf.tis _4ba.nrm _v3.prx _uv.prx _1q3.fnm 
> _3ga.tii _uz.tii _3g9.frq _v0.frq _3ge.tis _3g6.tis _4ln.prx _3g7.tii 
> _3g8.fdt 
> _3g7.nrm _3ga.prx _2l2.fdx _2l8.fdx _4ba.prx _1py.frq _uz.fdx _2l3.tii 
> _3g6.prx _v3.fdx _1q6.fdt _v1.nrm _2l2.tii _1q0.tis _4ba.fdx _4be.tii 
> _4ba.frq 
> _4ll.fdt _4bh.nrm _4lm.fdt _1q7.frq _4lk.tis _4bc.frq _1q6.fnm _3g7.frq 
> _uw.tis _3g8.tis _2l9.fdx _2l4.tii _1q4.fdx _4be.prx _1q3.nrm _1q0.tii 
> _1q0.fnm 
> _v3.nrm _1py.tis _3g9.fdt _4bh.fdt _4ll.nrm _4lk.prx _3gd.prx _1q3.tis 
> _1q2.tii _2l2.nrm _3gd.fdt _2l3.fdx _3g6.fdt _3gd.frq _1q1.tis _4bb.fdx 
> _1q2.frq 
> _1q3.fdt _v1.tis _2l8.frq _3gc.fdx _1q1.frq _4bg.frq _4bb.frq _2la.fdx 
> _2l9.frq _uy.tis _uy.prx _4bg.fdx _3gb.prx _uy.frq _1q2.fdx _4lm.prx _2la.prx 
> _2l4.prx _4bg.fdt _4be.frq _1q7.nrm _2l5.prx _4bf.frq _v1.prx _4bd.fdt 
> _2l9.prx _1q6.tis _3g8.fnm _4ln.tis _2l3.tis _4bc.fdx _2lb.prx _3gb.frq 
> _3gf.frq 
> _2la.fnm _3ga.fdt _uz.tis _4bg.nrm _uv.tii _4bg.tii _3g8.tii _4ll.frq _uv.fnm 
> _2l8.tis _2l8.nrm _2l2.fdt _4bj.tis _4lk.fdx _uw.prx _4bc.prx _4bj.fdt 
> _4be.fdx 
> _1q4.frq _uu.fdt _1q1.tii _2l5.tii _2lb.fdt _4bh.frq _3ge.frq _1py.prx 
> 

[jira] [Commented] (LUCENE-7488) Consider tracking modification time of external file fields for faster reloading

2016-10-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575263#comment-15575263
 ] 

Michael McCandless commented on LUCENE-7488:


Isn't this a Solr issue, not a Lucene issue?

> Consider tracking modification time of external file fields for faster 
> reloading
> 
>
> Key: LUCENE-7488
> URL: https://issues.apache.org/jira/browse/LUCENE-7488
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 4.10.4
> Environment: Linux
>Reporter: Mike
>
> I have an index of about 4M legal documents that has pagerank boosting 
> configured as an external file field. The external file is about 100MB in 
> size and has one row per document in the index. Each row indicates the 
> pagerank score of a document. When we open new searchers, this file has to 
> get reloaded, and it creates a noticeable delay for our users -- takes 
> several seconds to reload. 
> An idea to fix this came up in [a recent discussion in the Solr mailing 
> list|https://www.mail-archive.com/solr-user@lucene.apache.org/msg125521.html]:
>  Could the file only be reloaded if it has changed on disk? In other words, 
> when new searchers are opened, could they check the modtime of the file, and 
> avoid reloading it if the file hasn't changed? 
> In our configuration, this would be a big improvement. We only change the 
> pagerank file once/week because computing it is intensive and new documents 
> don't tend to have a big impact. At the same time, because we're regularly 
> adding new documents, we do hundreds of commits per day, all of which have a 
> delay as the (largish) external file field is reloaded. 
> Is this a reasonable improvement to request? 
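
A minimal sketch of the proposed mod-time guard, with illustrative names (not 
the actual ExternalFileField code):

{code}
import java.io.File;
import java.io.IOException;

// Illustrative cache: reparse the external file only when its mtime changes.
public class ExternalScoreCache {
  private long lastModified = -1L;
  private float[] scores; // one score per document

  public synchronized float[] get(File f) throws IOException {
    long mtime = f.lastModified();
    if (scores == null || mtime != lastModified) {
      scores = parse(f);   // the expensive part, now skipped when unchanged
      lastModified = mtime;
    }
    return scores;
  }

  private float[] parse(File f) throws IOException {
    // ... read "docKey=score" lines the way the real implementation does ...
    return new float[0];
  }
}
{code}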



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2016-10-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575257#comment-15575257
 ] 

Michael McCandless commented on LUCENE-5168:


OK, indeed the link works once I logged into FB, something I try not to do very 
often ;)  [~rcmuir], you should make this important FB post world-readable!

> ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
> ---
>
> Key: LUCENE-5168
> URL: https://issues.apache.org/jira/browse/LUCENE-5168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
> log.0078, log.0086, log.0100
>
>
> This assertion trips (sometimes from different tests), if you run the 
> highlighting tests on branch_4x with r1512807.
> It reproduces about half the time, always only with 32-bit + G1GC (other 
> combinations do not seem to trip it; I didn't try looping or anything really, 
> though).
> {noformat}
> rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
> rmuir@beast:~/workspace/branch_4x$ ant clean
> rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
> otherwise master seed does not work!
> rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
> -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
> -XX:+UseG1GC"
> {noformat}
> Originally showed up like this:
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
> Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
> Error Message:
> Stack Trace:
> java.lang.AssertionError
> at 
> __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
> at 
> org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
> at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
> at 
> org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
> at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
> at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
> at 
> org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7489) Improve sparsity support of Lucene70DocValuesFormat

2016-10-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575247#comment-15575247
 ] 

Adrien Grand commented on LUCENE-7489:
--

I just ran the reproduction line again now that LUCENE-7495 is fixed, and the 
test passed. Things should be good now.

> Improve sparsity support of Lucene70DocValuesFormat
> ---
>
> Key: LUCENE-7489
> URL: https://issues.apache.org/jira/browse/LUCENE-7489
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7489.patch, LUCENE-7489.patch
>
>
> Like Lucene70NormsFormat, it should be able to only encode actual values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7495) Asserting*DocValues are too lenient when checking the target in advance

2016-10-14 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7495.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

> Asserting*DocValues are too lenient when checking the target in advance
> ---
>
> Key: LUCENE-7495
> URL: https://issues.apache.org/jira/browse/LUCENE-7495
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Fix For: master (7.0)
>
> Attachments: LUCENE-7495.patch
>
>
> They only check {{target >= docID()}} while the actual check should be 
> {{target > docID()}}.
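
A sketch of the tightened check, in the shape an asserting wrapper takes 
(illustrative, not the committed patch):

{code}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;

// Illustrative asserting wrapper around any DocIdSetIterator:
class AssertingIterator extends DocIdSetIterator {
  private final DocIdSetIterator in;
  AssertingIterator(DocIdSetIterator in) { this.in = in; }

  @Override public int docID() { return in.docID(); }
  @Override public long cost() { return in.cost(); }
  @Override public int nextDoc() throws IOException { return in.nextDoc(); }

  @Override
  public int advance(int target) throws IOException {
    // The lenient check accepted target == docID(); DISI semantics require
    // the target to be beyond the current document.
    assert target > docID() : "target=" + target + " docID=" + docID();
    return in.advance(target);
  }
}
{code}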



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7495) Asserting*DocValues are too lenient when checking the target in advance

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575221#comment-15575221
 ] 

ASF subversion and git services commented on LUCENE-7495:
-

Commit ea1212232d2b95e53bfa6ad3ce7700d3cff4 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ea12122 ]

LUCENE-7495: Fix doc values iterators' assertions in advance().


> Asserting*DocValues are too lenient when checking the target in advance
> ---
>
> Key: LUCENE-7495
> URL: https://issues.apache.org/jira/browse/LUCENE-7495
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-7495.patch
>
>
> They only check {{target >= docID()}} while the actual check should be 
> {{target > docID()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6595) Improve error response in case distributed collection cmd fails

2016-10-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575214#comment-15575214
 ] 

Jan Høydahl commented on SOLR-6595:
---

I wonder if the error reporting might have been solved by all the refactoring 
of the Overseer, async operations, etc.? Anyone?

> Improve error response in case distributed collection cmd fails
> ---
>
> Key: SOLR-6595
> URL: https://issues.apache.org/jira/browse/SOLR-6595
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: SolrCloud with Client SSL
>Reporter: Sindre Fiskaa
>Priority: Minor
>
> Followed the description at 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL and generated a 
> self-signed key pair. Configured a few Solr nodes and used the collection API 
> to create a new collection. -I get an error message when specifying the nodes 
> with the createNodeSet param. When I don't use the createNodeSet param the 
> collection gets created without error on random nodes. Could this be a bug 
> related to the createNodeSet param?- *Update: It failed due to what turned 
> out to be an invalid client certificate on the overseer, and returned the 
> following response:*
> {code:xml}
> <response>
>   <lst name="responseHeader">
>     <int name="status">0</int>
>     <int name="QTime">185</int>
>   </lst>
>   <lst name="error">
>     <str name="msg">org.apache.solr.client.solrj.SolrServerException:IOException occured when talking to server at: https://vt-searchln04:443/solr</str>
>   </lst>
> </response>
> {code}
> *Update: Three problems:*
> # Status=0 when the cmd did not succeed (only ZK was updated; cores were not 
> created due to failing to connect to shard nodes to talk to the core admin 
> API).
> # The error printed does not tell which action failed. It would be helpful to 
> either get the msg from the original exception or at least some message 
> saying "Failed to create core, see log on Overseer".
> # The state of the collection is not clean, since it exists as far as ZK is 
> concerned but the cores were not created. Thus retrying the CREATECOLLECTION 
> cmd would fail. Should the Overseer detect errors in distributed cmds and 
> roll back changes already made in ZK?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9644) MoreLikeThis parser doesn't handle boosts properly

2016-10-14 Thread Ere Maijala (JIRA)
Ere Maijala created SOLR-9644:
-

 Summary: MoreLikeThis parser doesn't handle boosts properly
 Key: SOLR-9644
 URL: https://issues.apache.org/jira/browse/SOLR-9644
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: MoreLikeThis
Affects Versions: 6.2.1
Reporter: Ere Maijala


It seems SimpleMLTQParser should be able to handle boost parameters, but it's 
not working properly. I've added a proposed patch to fix a similar issue in 
CloudMLTQParser in issue https://issues.apache.org/jira/browse/SOLR-9267 and 
will attach a patch here too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8897) SSL-related passwords in solr.in.sh are in plain text

2016-10-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575180#comment-15575180
 ] 

Jan Høydahl commented on SOLR-8897:
---

For the problem of revealing passwords in solr.in.sh, would it help to point to 
an external file for retrieving the SSL passwords? E.g. 
{{SOLR_SSL_CONFIGFILE=/var/secret/ssl-passwords.txt}}?

I'm not sure if we can avoid passing the passwords to Jetty using sysprops. 
However, we could avoid passwords being exposed in the Admin UI "Args" section 
by showing {{*}} instead of the password. That would probably need to be done 
at the REST API level.
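
A rough sketch of the masking idea, assuming a hypothetical redaction step in 
whatever layer serves the args (none of these names are real Solr APIs):

{code}
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch: redact password-bearing JVM args before the REST
// layer hands them to the Admin UI.
public class ArgRedactor {
  public static Map<String, String> redact(Map<String, String> args) {
    Map<String, String> safe = new LinkedHashMap<>(args);
    for (Map.Entry<String, String> e : safe.entrySet()) {
      if (e.getKey().toLowerCase(Locale.ROOT).contains("password")) {
        e.setValue("*****"); // mask the value, keep the key visible
      }
    }
    return safe;
  }
}
{code}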

> SSL-related passwords in solr.in.sh are in plain text
> -
>
> Key: SOLR-8897
> URL: https://issues.apache.org/jira/browse/SOLR-8897
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, security
>Reporter: Esther Quansah
>
> As per the steps mentioned at the following URL, one needs to store the plain 
> text password for the keystore to configure SSL for Solr, which is not a good 
> idea from a security perspective.
> URL: 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL#EnablingSSL-SetcommonSSLrelatedsystemproperties
> Is there any way so that an encrypted password can be stored (instead of the 
> plain password) in solr.in.cmd/solr.in.sh to configure SSL?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7617) Smoke test SSL support in Solr standalone and cloud

2016-10-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575147#comment-15575147
 ] 

Jan Høydahl commented on SOLR-7617:
---

Shalin, can you provide more details on what you had in mind here?

> Smoke test SSL support in Solr standalone and cloud
> ---
>
> Key: SOLR-7617
> URL: https://issues.apache.org/jira/browse/SOLR-7617
> Project: Solr
>  Issue Type: Task
>  Components: Build, scripts and tools
>Reporter: Shalin Shekhar Mangar
>
> We need a script for testing an SSL standalone and cloud Solr before release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9267) Cloud MLT field boost not working

2016-10-14 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575129#comment-15575129
 ] 

Ere Maijala edited comment on SOLR-9267 at 10/14/16 12:02 PM:
--

This patch fixes the handling of qf so that fields and any boosts are always 
extracted. It also fixes the filteredDocument creation so that an 
IndexableField is not cast directly to a string, as that would include tokens 
like "indexed" and "stored" and throw the results off if those appear in the 
indexed records.

As far as I can see this should work in the 6_2 branch too (apart from 
CHANGES.txt, obviously).
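
For reference, a small sketch of what the existing helper returns, which is 
what the patch leans on (field names are illustrative):

{code}
import java.util.Map;
import org.apache.solr.util.SolrPluginUtils;

// "title^4.0 body" parses to {title=4.0, body=null}; a null boost means
// "no boost given", so callers can default it to 1.0f.
public class BoostDemo {
  public static void main(String[] args) {
    Map<String, Float> boosts = SolrPluginUtils.parseFieldBoosts("title^4.0 body");
    for (Map.Entry<String, Float> e : boosts.entrySet()) {
      float boost = e.getValue() == null ? 1.0f : e.getValue();
      System.out.println(e.getKey() + " -> " + boost); // bare name, boost split off
    }
  }
}
{code}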


was (Author: emaijala):
This patch fixes the handling of qf so that fields and any boosts are always 
extracted. It also fixes the filteredDocument creation so that an 
IndexableField is not cast directly to a string, as that would include tokens 
like "indexed" and "stored" and throw the results off if those appear in the 
indexed records.

> Cloud MLT field boost not working
> -
>
> Key: SOLR-9267
> URL: https://issues.apache.org/jira/browse/SOLR-9267
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1, 5.3.2, 5.4, 5.4.1, 
> 5.5, 5.5.1, 5.5.2, 5.5.3, 5.6, 6.0, 6.0.1, 6.0.2, 6.1, 6.1.1, 6.2
>Reporter: Brian Feldman
> Attachments: SOLR-9267.patch
>
>
> When boosting by field "fieldname otherFieldName^4.0", the boost is not 
> stripped from the field name when adding to the fieldNames ArrayList. So on 
> line 133 of CloudMLTQParser, when adding field content to the 
> filteredDocument, the field is not found (it incorrectly tries to find 
> 'otherFieldName^4.0').
> The easiest but perhaps hackiest solution is to overwrite qf:
> {code}
> if (localParams.get("boost") != null) {
>   mlt.setBoost(localParams.getBool("boost"));
>   boostFields = SolrPluginUtils.parseFieldBoosts(qf);
>   qf = boostFields.keySet().toArray(qf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18045 - Failure!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18045/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrVersionReplicationTest.testCdcrDocVersions

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:42982/solr within 1 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:42982/solr within 1 ms
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:106)
at 
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:226)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:563)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForCollectionToDisappear(BaseCdcrDistributedZkTest.java:496)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.startServers(BaseCdcrDistributedZkTest.java:598)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.createSourceCollection(BaseCdcrDistributedZkTest.java:348)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.baseBefore(BaseCdcrDistributedZkTest.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9267) Cloud MLT field boost not working

2016-10-14 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-9267:
--
Attachment: SOLR-9267.patch

This patch fixes the handling of qf so that fields and any boosts are always 
extracted. It also fixes the filteredDocument creation so that an 
IndexableField is not cast directly to a string, as that would include tokens 
like "indexed" and "stored" and throw the results off if those appear in the 
indexed records.

> Cloud MLT field boost not working
> -
>
> Key: SOLR-9267
> URL: https://issues.apache.org/jira/browse/SOLR-9267
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1, 5.3.2, 5.4, 5.4.1, 
> 5.5, 5.5.1, 5.5.2, 5.5.3, 5.6, 6.0, 6.0.1, 6.0.2, 6.1, 6.1.1, 6.2
>Reporter: Brian Feldman
> Attachments: SOLR-9267.patch
>
>
> When boosting by field "fieldname otherFieldName^4.0", the boost is not 
> stripped from the field name when adding to the fieldNames ArrayList. So on 
> line 133 of CloudMLTQParser, when adding field content to the 
> filteredDocument, the field is not found (it incorrectly tries to find 
> 'otherFieldName^4.0').
> The easiest but perhaps hackiest solution is to overwrite qf:
> {code}
> if (localParams.get("boost") != null) {
>   mlt.setBoost(localParams.getBool("boost"));
>   boostFields = SolrPluginUtils.parseFieldBoosts(qf);
>   qf = boostFields.keySet().toArray(qf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9639) CdcrVersionReplicationTest failure

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575105#comment-15575105
 ] 

ASF subversion and git services commented on SOLR-9639:
---

Commit 96e0c2ff48cf70f9c376760e50b78281699d0e53 in lucene-solr's branch 
refs/heads/branch_6x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96e0c2f ]

SOLR-9639: CDCR Tests only fix. Wait until recovery is over before
remove the tmp_colletion.

> CdcrVersionReplicationTest failure
> --
>
> Key: SOLR-9639
> URL: https://issues.apache.org/jira/browse/SOLR-9639
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 6.3
>
> Attachments: CDcr failure.txt, SOLR-9639.patch, SOLR-9639.patch, 
> cdcr-stack.txt, cdcr-success.txt
>
>
> h3. it fails.
> The problem is [over 
> there|https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/BaseCdcrDistributedZkTest.java#L597]:
>  when the test deletes that temporary collection (which is a tricky thing per 
> se) while it's still in recovery, SolrCloud goes haywire: it closes the core 
> and has almost done so, but the core can't be unloaded because PeerSync 
> (remember, it's recovering) opens it once more, and it bloats the logs with 
> bq.105902 INFO  (qtp3284815-656) [n:127.0.0.1:41440_ia%2Fd] 
> o.a.s.c.SolrCore Core collection1 is not yet closed, waiting 100 ms before 
> checking again.
> But then something spawns too many {{/get}} requests, which deadlock until 
> the heap is exceeded and the node dies. The fix is obvious: just wait until 
> the recoveries finish before removing tmp_collection. 
> Besides this particular fix, are there any ideas about the deadlock caused by 
> deleting a recovering collection?
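
A sketch of that guard against the 6.x cluster-state API (a test would 
normally use the test framework's existing waitForRecoveriesToFinish helper; 
this just shows the idea):

{code}
import java.util.concurrent.TimeoutException;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.ZkStateReader;

public class RecoveryGuard {
  // Poll until every replica of the collection reports ACTIVE, or time out.
  public static void waitForRecoveries(ZkStateReader reader, String collection,
                                       long timeoutMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      DocCollection coll = reader.getClusterState().getCollection(collection);
      boolean allActive = coll.getReplicas().stream()
          .allMatch(r -> r.getState() == Replica.State.ACTIVE);
      if (allActive) {
        return; // safe to delete tmp_collection now
      }
      Thread.sleep(500);
    }
    throw new TimeoutException("Replicas still recovering: " + collection);
  }
}
{code}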



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575091#comment-15575091
 ] 

Mikhail Khludnev commented on SOLR-9182:


[~romseygeek], what about leaving TestSolrJErrorHandling unchanged for a while, 
and removing SuppressSSL from the other tests?

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182-solrj-supprel.patch, SOLR-9182.patch, SOLR-9182.patch, 
> SOLR-9182.patch, SOLR-9182.patch, TestSolrJErrorHandling-SOLR-8192.txt
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-14 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575093#comment-15575093
 ] 

Tim Allison commented on SOLR-9325:
---

And for Tika-specific errors, please ping us on our JIRA.  Thank you!

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that Solr is installed on a writable device, which 
> leads to two problems:
> 1) the Solr installation can't live on a shared device (a single copy shared 
> by two or more VMs)
> 2) the Solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM,
> mount this CD as /solr,
> run Solr from there (with appropriate environment variables set, of course).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9639) CdcrVersionReplicationTest failure

2016-10-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575084#comment-15575084
 ] 

ASF subversion and git services commented on SOLR-9639:
---

Commit 47446733884e030feaecac355c01c58f9e5e3169 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4744673 ]

SOLR-9639: CDCR Tests only fix. Wait until recovery is over before
remove the tmp_colletion.

> CdcrVersionReplicationTest failure
> --
>
> Key: SOLR-9639
> URL: https://issues.apache.org/jira/browse/SOLR-9639
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 6.3
>
> Attachments: CDcr failure.txt, SOLR-9639.patch, SOLR-9639.patch, 
> cdcr-stack.txt, cdcr-success.txt
>
>
> h3. it fails.
> The problem is [over 
> there|https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/BaseCdcrDistributedZkTest.java#L597]:
>  when the test deletes that temporary collection (which is a tricky thing per 
> se) while it's still in recovery, SolrCloud goes haywire: it closes the core 
> and has almost done so, but the core can't be unloaded because PeerSync 
> (remember, it's recovering) opens it once more, and it bloats the logs with 
> bq.105902 INFO  (qtp3284815-656) [n:127.0.0.1:41440_ia%2Fd] 
> o.a.s.c.SolrCore Core collection1 is not yet closed, waiting 100 ms before 
> checking again.
> But then something spawns too many {{/get}} requests, which deadlock until 
> the heap is exceeded and the node dies. The fix is obvious: just wait until 
> the recoveries finish before removing tmp_collection. 
> Besides this particular fix, are there any ideas about the deadlock caused by 
> deleting a recovering collection?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2869) IllegalStateException when requesting multiple pages.

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2869.
-
Resolution: Cannot Reproduce

Closing old issue. We no longer support Tomcat; this could be a deployment 
issue. Too little information to reproduce.

> IllegalStateException when requesting multiple pages. 
> --
>
> Key: SOLR-2869
> URL: https://issues.apache.org/jira/browse/SOLR-2869
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.4
> Environment: Tomcat 5.5.15 on ubuntu linux server, Solr 3.4
>Reporter: Phil Scadden
>Priority: Minor
>
> IllegalStateException
> Seems to happen when I ask for more pages of results, but Solr 
> essentially stops working. Half an hour later it was working okay. Solr 
> 3.4 on Tomcat 5.5.15.
> Logs look like: (example of one of many...)
> Any ideas very welcome. However, the bug is intermittent. I can't find a way 
> to reliably reproduce the problem.
> 1/11/2011 12:00:14 org.apache.catalina.core.StandardWrapperValve invoke
> SEVERE: Servlet.service() for servlet SolrServer threw exception
> java.lang.IllegalStateException
>  at 
> org.apache.catalina.connector.ResponseFacade.sendError(ResponseFacade.java:404)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:380)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:283)
>  at sun.reflect.GeneratedMethodAccessor101.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at 
> org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:243)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAsPrivileged(Subject.java:517)
>  at 
> org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:275)
>  at 
> org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:217)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:197)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.access$000(ApplicationFilterChain.java:50)
>  at 
> org.apache.catalina.core.ApplicationFilterChain$1.run(ApplicationFilterChain.java:156)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:152)
>  at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
>  at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
>  at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
>  at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
>  at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:541)
>  at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
>  at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
>  at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
>  at 
> org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:667)
>  at 
> org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
>  at 
> org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
>  at 
> org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
>  at java.lang.Thread.run(Thread.java:619)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2973) SynonymFilterFactory causing Highlighter to highlight terms which are not part of the search

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2973.
-
Resolution: Incomplete

Closing old issue. Too little information to reproduce. Please re-open if you 
still have this problem with a recent version of Solr.

> SynonymFilterFactory causing Highlighter to highlight terms which are not 
> part of the search
> 
>
> Key: SOLR-2973
> URL: https://issues.apache.org/jira/browse/SOLR-2973
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.0-ALPHA
> Environment: java Tomcat Solaris
>Reporter: Shyam Bhaskaran
>  Labels: SynonymFilterFactory, highlighting, lucene, solr, synonym
>
> SynonymFilterFactory causing Highlighter to highlight terms which are not 
> part of the search - We recently applied the latest Solr 4.0 trunk code, and 
> after this change we found that the highlighter wrongly highlights terms; 
> when we remove the SynonymFilterFactory filter, everything works fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3019) Solr Search : SEVERE: java.lang.ArrayIndexOutOfBoundsException: -1

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3019.
-
Resolution: Incomplete

Closing this old issue. Way too little information to reproduce! Please re-open 
if you manage to reproduce on a recent Solr version.

> Solr Search : SEVERE: java.lang.ArrayIndexOutOfBoundsException: -1
> --
>
> Key: SOLR-3019
> URL: https://issues.apache.org/jira/browse/SOLR-3019
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.1, 3.5
> Environment: Solr Version 3.1
> Solr Version 3.5
> Tomcat 6.0
> Window 2008 Server R2
>Reporter: Rohit Gupta
>
> I am getting the following error when I try a search query in my Solr. I am 
> not sure what might be causing this, but it has brought a halt to all our 
> work.
> This query works:
>  http://10.0.0.13:8080/solr/cmn/select/?q=*:*=searchText=4
> But this doesn't, and gives the exception below,
>  http://10.0.0.13:8080/solr/cmn/select/?q=solr
> SEVERE: java.lang.ArrayIndexOutOfBoundsException: -1
> at org.apache.lucene.util.packed.Packed64.get(Packed64.java:186)
> at 
> org.apache.lucene.index.TermInfosReaderIndex.seekEnum(TermInfosReaderIndex.java:118)
> at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:235)
> at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
> at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:509)
> at 
> org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:309)
> at org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
> at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:77)
> at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:82)
> at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:66)
> at org.apache.lucene.search.TermQuery$TermWeight.(TermQuery.java:53)
> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
> at 
> org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
> at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3476) Create a Solr Core with a given commit point

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3476.
-
Resolution: Won't Fix

Closing as won't fix.
If you want to stage content for controlled publishing at a certain time, 
please create a separate SolrCloud collection, and then flip aliases to put it 
into production.
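
A sketch of that alias flip with SolrJ (collection and alias names are made 
up):

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class PublishByAlias {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("localhost:9983").build()) {
      // Queries go to the alias "products"; repointing it at the staged
      // collection publishes the new index atomically, no restart needed.
      CollectionAdminRequest.createAlias("products", "products_v2").process(client);
    }
  }
}
{code}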

> Create a Solr Core with a given commit point
> 
>
> Key: SOLR-3476
> URL: https://issues.apache.org/jira/browse/SOLR-3476
> Project: Solr
>  Issue Type: Improvement
>  Components: multicore
>Affects Versions: 3.6
>Reporter: Ludovic Boutros
>  Labels: patch
> Attachments: commitPoint.patch
>
>
> In some configurations, we need to open new cores with a given commit point.
> For instance, when the publication of new documents must be controlled (legal 
> obligations) in a master-slave configuration there are two cores on the same 
> instanceDir and dataDir which are using two "versions" of the index.
> The switch of the two cores is done manually.
> The problem is that when the replication is done one day before the switch, 
> if any problem occurs, and we need to restart tomcat, the new documents are 
> published.
> With this functionality, we could ensure that the index generation used by 
> the core used for querying is always the good one. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3544) Under heavy load json response is cut at some arbitrary position

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3544.
-
Resolution: Not A Bug

Not a bug. If you want to spool 30k docs, please use the /export handler, 
cursorMark, or streaming expressions!
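
For example, a cursorMark loop with SolrJ looks roughly like this (query and 
page size are illustrative):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorSpool {
  static void spool(SolrClient client) throws Exception {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(500);
    q.setSort(SolrQuery.SortClause.asc("id")); // cursorMark needs a uniqueKey sort
    String cursor = CursorMarkParams.CURSOR_MARK_START;
    while (true) {
      q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
      QueryResponse rsp = client.query(q);
      // ... consume rsp.getResults() here ...
      String next = rsp.getNextCursorMark();
      if (cursor.equals(next)) break; // unchanged cursor means no more results
      cursor = next;
    }
  }
}
{code}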

> Under heavy load json response is cut at some arbitrary position
> 
>
> Key: SOLR-3544
> URL: https://issues.apache.org/jira/browse/SOLR-3544
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.1
> Environment: Linux version 2.6.32-5-amd64 (Debian 2.6.32-38) 
> (b...@decadent.org.uk) (gcc version 4.3.5 (Debian 4.3.5-4) )
>Reporter: Dušan Omerčević
>
> We query Solr for 30K documents using JSON as the response format. Normally 
> this works perfectly fine. But when the machine comes under heavy load (all 
> cores utilized) the response gets interrupted at an arbitrary position. We 
> circumvented the problem by switching to the XML response format.
> I've written the full description here: 
> http://restreaming.wordpress.com/2012/06/14/the-curious-case-of-solr-malfunction/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3792) NullPointerException in highlighter under certain conditions

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3792.
-
Resolution: Information Provided

Closing ooold issue from 4.0-beta. If this were a problem simply due to an 
empty field, we'd see this bug again and again. If anyone still sees this NPE 
in their recent Solr installs, please speak up!

> NullPointerException in highlighter under certain conditions
> 
>
> Key: SOLR-3792
> URL: https://issues.apache.org/jira/browse/SOLR-3792
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.0-BETA
> Environment: Windows 7, Java 6 or 7
>Reporter: Yoni Amir
>
> Under certain conditions, there is an exception in highlighter component. The 
> stacktrace is shown below.
> The conditions are:
> 1) Add a document with a field that has an empty value. E.g., with the solrj 
> API:
> document.addField ("field1", "");
> (Maybe this is not a valid value or use-case, but Solr still allows it!!)
> 2) have solr.WhitespaceTokenizerFactory as the tokenizer of the analyzer 
> chain for that field.
> 3) make sure that this field is highlighted. In my case, I was using 
> hl.requireFieldMatch=false, and actually I was searching on another field.
> 4) Using edismax, search for a phrase, e.g., "foo bar" (including the 
> quotation marks)
> 5) The document mentioned before should be part of the search results.
> 6) This exception occurs:
> INFO  (SolrCore.java:1670) - [rcmCore] webapp=/solr path=/select 
> params={qf=all_text=2=20=javabin=0="foo bar"} 
> hits=103 status=500 QTime=38 ERROR (SolrException.java:104) - 
> null:java.lang.NullPointerException
>at 
> org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:191)
>at 
> org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:152)
>at 
> org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.incrementToken(WordDelimiterFilter.java:209)
>at 
> org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:50)
>at 
> org.apache.lucene.analysis.miscellaneous.RemoveDuplicatesTokenFilter.incrementToken(RemoveDuplicatesTokenFilter.java:54)
>at 
> org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken(LowerCaseFilter.java:54)
>at 
> org.apache.solr.highlight.TokenOrderingFilter.incrementToken(DefaultSolrHighlighter.java:629)
>at 
> org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:78)
>at 
> org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:50)
>at 
> org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:225)
>at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:510)
>at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:401)
>at 
> org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:136)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:1656)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:454)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:275)
>at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
>at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
>at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
>at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
>at java.lang.Thread.run(Thread.java:736)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (SOLR-9643) Pagination issue occurs in solr cloud when results are grouped on a field

2016-10-14 Thread Paras Diwan (JIRA)
Paras Diwan created SOLR-9643:
-

 Summary: Pagination issue occurs in solr cloud when results are 
grouped on a field
 Key: SOLR-9643
 URL: https://issues.apache.org/jira/browse/SOLR-9643
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.1
 Environment: Solr cloud is deployed on AWS linux server. 4 Solr 
servers and apache zookeeper is setup
Reporter: Paras Diwan
Priority: Critical
 Fix For: 6.1.1


Either the value of ngroups in a grouped query is inaccurate, or there is some 
issue in returning documents of later pages. 

select?q=*:*&group=true&group.field=family&group.ngroups=true&start=0&rows=1

For the above-mentioned query I get ngroups = 396324, but for the same query, 
when I modify start to 396320, it returns 0 docs, an empty page.
Instead, the last result is at 386887.
Please look into this issue or offer some solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2643) Allow multiple field aliases in ExtendedDisMaxQParser

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2643.
-
Resolution: Implemented

Closing this. Per-field aliasing for edismax is already implemented; see 
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser

{code}
f.name.qf=name last_name first_name
{code}

> Allow multiple field aliases in ExtendedDisMaxQParser
> -
>
> Key: SOLR-2643
> URL: https://issues.apache.org/jira/browse/SOLR-2643
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.0-ALPHA
>Reporter: Jamie Johnson
>
> The original DisMaxQParser seems to have support for handling multiple 
> aliases so someone could do query rewrite on more than just the default 
> field.  If the ExtendedDisMaxQParser supported this and exposed this 
> capability we'd be able to build more powerful rewrite capabilities such that 
> could reduce the size of an index.  For instance say we have a scenario where 
> we have 3 fields first_name, last_name and name.  In this situation we don't 
> completely control the input, we may have first_name and last_name or just 
> name.  In this case given 2 documents as follows:
> Doc 1
> first_name: John
> last_name: Doe
> Doc 2
> name: Jane Doe
> if the user did a query on name:Doe we would be able to rewrite the query to 
> return both documents such that the query would be name:Doe OR first_name:Doe 
> OR last_name:Doe



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7115) UpdateLog can miss closing transaction log objects.

2016-10-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-7115:
---
Attachment: tests-failures-7115.txt

Damn. The test is too brutal to be fixed; it still leaks 
[^tests-failures-7115.txt]. There is a small probability of a mistake, but I 
think the fix was applied at the time of that run... Will run it more.

> UpdateLog can miss closing transaction log objects.
> ---
>
> Key: SOLR-7115
> URL: https://issues.apache.org/jira/browse/SOLR-7115
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: 6.3
>
> Attachments: SOLR-7115-LargeVolumeEmbeddedTest-fail.txt, 
> SOLR-7115.patch, SOLR-7115.patch, tests-failures-7115.txt
>
>
> I've seen this happen on YourKit and in various tests - especially since 
> adding resource release tracking to the log objects. Now I've got a test that 
> catches it in SOLR-7113.
> It seems that in precommit, if prevTlog is not null, we need to close it 
> because we are going to overwrite prevTlog with a new log.
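
A sketch of the described fix, using the issue's own field names (the real 
UpdateLog releases logs via reference counting, so "close" here is a decref; 
this is not the actual source):

{code}
// Illustrative shape of the precommit fix described above.
class UpdateLogSketch {
  private TransactionLogSketch prevTlog;
  private TransactionLogSketch tlog;

  synchronized void preCommit() {
    if (prevTlog != null) {
      prevTlog.decref(); // release the old log instead of leaking it...
    }
    prevTlog = tlog;     // ...before overwriting prevTlog with the current log
    tlog = null;         // a fresh log gets created on the next write
  }

  static class TransactionLogSketch {
    private int refCount = 1;
    synchronized void decref() {
      if (--refCount == 0) {
        // close the underlying file channel here
      }
    }
  }
}
{code}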



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9639) CdcrVersionReplicationTest failure

2016-10-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9639:
---
Attachment: SOLR-9639.patch

After applying this, the Solr test runs much more smoothly on my machine. 
Launching precommit.

> CdcrVersionReplicationTest failure
> --
>
> Key: SOLR-9639
> URL: https://issues.apache.org/jira/browse/SOLR-9639
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 6.3
>
> Attachments: CDcr failure.txt, SOLR-9639.patch, SOLR-9639.patch, 
> cdcr-stack.txt, cdcr-success.txt
>
>
> h3. it fails.
> The problem is [over 
> there|https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/BaseCdcrDistributedZkTest.java#L597]:
>  when the test deletes that temporary collection (which is a tricky thing per 
> se) while it's still in recovery, SolrCloud goes haywire: it closes the core 
> and has almost done so, but the core can't be unloaded because PeerSync 
> (remember, it's recovering) opens it once more, and it bloats the logs with 
> bq.105902 INFO  (qtp3284815-656) [n:127.0.0.1:41440_ia%2Fd] 
> o.a.s.c.SolrCore Core collection1 is not yet closed, waiting 100 ms before 
> checking again.
> But then something spawns too many {{/get}} requests, which deadlock until 
> the heap is exceeded and the node dies. The fix is obvious: just wait until 
> the recoveries finish before removing tmp_collection. 
> Besides this particular fix, are there any ideas about the deadlock caused by 
> deleting a recovering collection?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2678) confFiles replication: Flags for "No-reloadCore" and "No-Backup" for specific files

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2678.
-
Resolution: Won't Fix

Closing as won't fix since this has not seen attention in 5 years, and focus 
has shifted to Cloud. If anyone disagrees, feel free to discuss further :)

> confFiles replication: Flags for "No-reloadCore" and "No-Backup" for specific 
> files
> ---
>
> Key: SOLR-2678
> URL: https://issues.apache.org/jira/browse/SOLR-2678
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Affects Versions: 3.3
>Reporter: Mark Plomer
>Priority: Minor
>
> It would be nice to have the possibility to specify, for some confFiles, 
> whether the core should be reloaded when they have changed, and whether they 
> should be backed up when replicating.
> Background: I set up a failover Solr server as a slave. To have the 
> possibility to switch it to a master server manually, I replicate the 
> dataimport.properties file so that it always has the corresponding 
> last_index_time for the delta-import in DIH.
> But as this file changes on every new index, of course, the complete core on 
> the slave is reloaded every time, which is not necessary. Also, the 
> conf directory is cluttered up with a lot of unneeded backups of 
> dataimport.properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2732) solr tests hang indefinitely

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2732.
-
Resolution: Works for Me

Closing this old test issue from 2011; I have not seen it with today's test suite.

> solr tests hang indefinitely
> 
>
> Key: SOLR-2732
> URL: https://issues.apache.org/jira/browse/SOLR-2732
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Charlie cron has been hung for hours... thread dump attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2737) I get ClassCastException when starting tomcat server (Solr configured with Suggester option enabled)

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2737.
-
Resolution: Cannot Reproduce

Closing this ancient issue. It was likely caused by a bad deploy to Tomcat, 
and we don't support Tomcat anymore.

> I get ClassCastException when starting tomcat server (Solr configured with 
> Suggester option enabled)
> ---
>
> Key: SOLR-2737
> URL: https://issues.apache.org/jira/browse/SOLR-2737
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 3.3
> Environment: Windows 7/Apache Tomcat 7/Solr 3.3/JDK 1.6
>Reporter: Saurabh Kumar Singh
>
> I get the following stack trace on server startup:
> Aug 30, 2011 6:37:44 PM org.apache.solr.common.SolrException log
> SEVERE: java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Float
>   at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:84)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:597)
>   at 
> org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:522)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:594)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:463)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
>   at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:133)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
>   at 
> org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:273)
>   at 
> org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:254)
>   at 
> org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:372)
>   at 
> org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:98)
>   at 
> org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4584)
>   at 
> org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5262)
>   at 
> org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5257)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Below is the patch for the fix (I made it locally; please let me know how I 
> can check in the fix :) )
>  ### Eclipse Workspace Patch 1.0
> #P lucene_solr_3_3
> Index: solr/src/java/org/apache/solr/spelling/suggest/Suggester.java
> ===
> --- solr/src/java/org/apache/solr/spelling/suggest/Suggester.java 
> (revision 1163940)
> +++ solr/src/java/org/apache/solr/spelling/suggest/Suggester.java 
> (working copy)
> @@ -81,8 +81,17 @@
>public String init(NamedList config, SolrCore core) {
>  LOG.info("init: " + config);
>  String name = super.init(config, core);
> -threshold = config.get(THRESHOLD_TOKEN_FREQUENCY) == null ? 0.0f
> -: (Float)config.get(THRESHOLD_TOKEN_FREQUENCY);
> +Object tokenFrequency = config.get(THRESHOLD_TOKEN_FREQUENCY);
> +if (tokenFrequency == null) {
> +  threshold = 0.0f;
> +} else if (tokenFrequency instanceof Number) {
> +  // floatValue() also covers Integer/Double, where a direct (Float)
> +  // cast would throw the very ClassCastException this patch fixes
> +  threshold = ((Number) tokenFrequency).floatValue();
> +} else {
> +  // e.g. a String such as "0.005" coming straight from the XML config
> +  threshold = Float.parseFloat(tokenFrequency.toString());
> +}
>  sourceLocation = (String) config.get(LOCATION);
>  field = (String)config.get(FIELD);
>  lookupImpl = (String)config.get(LOOKUP_IMPL);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5055) Using solr.SearchHandler with Apache Tomcat

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-5055.
-

Closing as won't fix - we no longer support deploying to Tomcat.

> Using solr.SearchHandler with Apache Tomcat
> ---
>
> Key: SOLR-5055
> URL: https://issues.apache.org/jira/browse/SOLR-5055
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.3.1
> Environment: RedHat Linux 6.4 x64_86, Ubuntu 10.04.2 LTS i686
>Reporter: Christian Lutz
>Priority: Minor
>
> I'm trying to deploy Solr with Tomcat (Version 7.0.27 and 6.0.37) as 
> described in http://wiki.apache.org/solr/SolrTomcat and 
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+Tomcat
> I deployed only one instance (referred to in the wiki documentation as "Single 
> Solr Instance"). In solrconfig.xml the library configuration is changed to 
> absolute paths for the jar files.
> When I try to use the /browse request handler it reports the following error:
> ERROR - 2013-07-22 13:04:20.279; org.apache.solr.common.SolrException; 
> null:java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
> org/apache/commons/lang/StringUtils
> at 
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:670)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
> at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:200)
> at 
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
> at 
> org.apache.velocity.runtime.resource.ResourceManagerImpl.initialize(ResourceManagerImpl.java:161)
> at 
> org.apache.velocity.runtime.RuntimeInstance.initializeResourceManager(RuntimeInstance.java:730)
> at 
> org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:263)
> at org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:93)
> at 
> org.apache.solr.response.VelocityResponseWriter.getEngine(VelocityResponseWriter.java:147)
> at 
> org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:42)
> at 
> org.apache.solr.core.SolrCore$LazyQueryResponseWriterWrapper.write(SolrCore.java:2278)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:644)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:372)
> ... 15 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.lang.StringUtils
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
> ... 24 more
> ERROR - 2013-07-22 13:04:20.282; org.apache.solr.common.SolrException; 
> null:java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
> at 
> org.apache.velocity.runtime.resource.ResourceManagerImpl.initialize(ResourceManagerImpl.java:161)
> at 
> org.apache.velocity.runtime.RuntimeInstance.initializeResourceManager(RuntimeInstance.java:730)
> at 
> org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:263)
> at 

[jira] [Commented] (LUCENE-7493) Support of TotalHitCountCollector for FacetCollector.search api if numdocs passed as zero.

2016-10-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574923#comment-15574923
 ] 

Michael McCandless commented on LUCENE-7493:


Thank you [~maahi333].

Hmm, but there is a problem with your change inside {{FacetsCollector}}: I think 
in the limit=0 case you will get no facet results, because you make a new 
{{FacetsCollector}} rather than using the {{Collector}} passed in by the user?

Can you improve the test to confirm that you do get facet results with limit=0 
(it should currently fail), and then fix your changes in {{FacetsCollector}} so 
that the test passes?
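
As an illustration only (not the patch under review), one way the limit=0 case 
could avoid the {{TopDocs}} collectors entirely is Lucene's {{MultiCollector}} 
combined with a {{TotalHitCountCollector}}:

{code}
// Collect facet hits without gathering any top documents: wrap the
// user's FacetsCollector together with a TotalHitCountCollector so the
// total hit count that TopDocs would have carried is still available.
FacetsCollector facetsCollector = new FacetsCollector();
TotalHitCountCollector hitCount = new TotalHitCountCollector();
searcher.search(query, MultiCollector.wrap(hitCount, facetsCollector));

int totalHits = hitCount.getTotalHits(); // stands in for TopDocs.totalHits
// facetsCollector can now be passed to the Facets implementations as usual
{code}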

> Support of TotalHitCountCollector for FacetCollector.search api if numdocs 
> passed as zero.
> --
>
> Key: LUCENE-7493
> URL: https://issues.apache.org/jira/browse/LUCENE-7493
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mahesh
> Attachments: LUCENE-7493-Fail-TestCase.patch, 
> LUCENE-7493-Pass-TestCase.patch
>
>
> Hi, 
> I want to do a drill-down search using FacetsCollector; below is the code:
> FacetsCollector facetCollector = new FacetsCollector();
> TopDocs topDocs = FacetsCollector.search(st.searcher, filterQuery, limit, 
> facetCollector);
> I just want the facet information, so I pass the limit as zero, but I get 
> the error "numHits must be > 0; please use TotalHitCountCollector if you 
> just need the total hit count".
> With FacetsCollector there is no way to use a 'TotalHitCountCollector'. 
> Internally it always creates either a 'TopFieldCollector' or a 
> 'TopScoreDocCollector', neither of which allows a limit of 0. 
> So if the limit is zero, there should be a way for a 
> 'TotalHitCountCollector' to be used. 
> A better approach would be to provide an API that takes a query and a 
> collector as inputs, just like 'drillSideways.search(filterQuery, 
> totalHitCountCollector)'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Michael McCandless
Thanks Adrien.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Oct 14, 2016 at 6:21 AM, Adrien Grand  wrote:
> I will look into it later today.
>
> Le ven. 14 oct. 2016 à 12:08, Michael McCandless 
> a écrit :
>>
>> Does anyone have any idea on this one :)
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
>>  wrote:
>> > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
>> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
>> >
>> > 2 tests failed.
>> > FAILED:
>> > org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
>> >
>> > Error Message:
>> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra)
>> > NEVER:MATCH: score(doc=433)=-3.9678193E-6 != explanationScore=-1.9839097E-6
>> > Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
>> > -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 = sum of:
>> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result of:
>> > -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed from:
>> > 100.0 = boost 9.741334E-20 = NormalizationH1, computed from:
>> > 1.0 = tf   5.5031867 = avgFieldLength   
>> > 5.6493154E19
>> > = len 0.24889919 = LambdaTTF, computed from:
>> > 2147.0 = totalTermFreq   8629.0 = numberOfDocuments
>> > -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
>> > was:<-1.9839097E-6>
>> >
>> > Stack Trace:
>> > junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0) |
>> > (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH: score(doc=433)=-3.9678193E-6
>> > != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 = sum of:
>> >   -1.9839097E-6 = sum of:
>> > -1.9839097E-6 = max plus 0.5 times others of:
>> >   -3.9678193E-6 = sum of:
>> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
>> > result of:
>> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
>> > computed from:
>> > 100.0 = boost
>> > 9.741334E-20 = NormalizationH1, computed from:
>> >   1.0 = tf
>> >   5.5031867 = avgFieldLength
>> >   5.6493154E19 = len
>> > 0.24889919 = LambdaTTF, computed from:
>> >   2147.0 = totalTermFreq
>> >   8629.0 = numberOfDocuments
>> > -3.9678195E-8 = DistributionSPL
>> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
>> > at
>> > __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE29]:0)
>> > at junit.framework.Assert.fail(Assert.java:50)
>> > at junit.framework.Assert.failNotEquals(Assert.java:287)
>> > at junit.framework.Assert.assertEquals(Assert.java:120)
>> > at
>> > org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>> > at
>> > org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:505)
>> > at
>> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> > at
>> > org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>> > at
>> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> > at
>> > org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>> > at
>> > org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:183)
>> > at
>> > org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:170)
>> > at
>> > org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>> > at
>> > org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>> > at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>> > at
>> > org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>> > at
>> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
>> > at
>> > org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
>> > at
>> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
>> > at
>> > org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
>> > at
>> > org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
>> > at
>> > org.apache.lucene.search.QueryUtils.check(QueryUtils.java:132)
>> > at
>> > org.apache.lucene.search.QueryUtils.check(QueryUtils.java:128)
>> > at
>> > org.apache.lucene.search.QueryUtils.check(QueryUtils.java:118)
>> > at
>> > org.apache.lucene.search.CheckHits.checkHitCollector(CheckHits.java:98)
>> > at
>> > 

Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Adrien Grand
I will look into it later today.

Le ven. 14 oct. 2016 à 12:08, Michael McCandless 
a écrit :

> Does anyone have any idea on this one :)
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
>  wrote:
> > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
> > Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
> >
> > 2 tests failed.
> > FAILED:
> org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
> >
> > Error Message:
> > (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra)
> NEVER:MATCH: score(doc=433)=-3.9678193E-6 != explanationScore=-1.9839097E-6
> Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of:
>  -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 = sum
> of: -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
> result of:   -3.9678193E-6 = score(IBSimilarity, doc=433,
> freq=1.0), computed from: 100.0 = boost
>  9.741334E-20 = NormalizationH1, computed from:1.0 = tf
>5.5031867 = avgFieldLength   5.6493154E19 = len
>0.24889919 = LambdaTTF, computed from:2147.0 =
> totalTermFreq   8629.0 = numberOfDocuments
>  -3.9678195E-8 = DistributionSPL  expected:<-3.9678193E-6> but
> was:<-1.9839097E-6>
> >
> > Stack Trace:
> > junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0) |
> (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH: score(doc=433)=-3.9678193E-6
> != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 = sum of:
> >   -1.9839097E-6 = sum of:
> > -1.9839097E-6 = max plus 0.5 times others of:
> >   -3.9678193E-6 = sum of:
> > -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity],
> result of:
> >   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0),
> computed from:
> > 100.0 = boost
> > 9.741334E-20 = NormalizationH1, computed from:
> >   1.0 = tf
> >   5.5031867 = avgFieldLength
> >   5.6493154E19 = len
> > 0.24889919 = LambdaTTF, computed from:
> >   2147.0 = totalTermFreq
> >   8629.0 = numberOfDocuments
> > -3.9678195E-8 = DistributionSPL
> >  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
> > at
> __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE29]:0)
> > at junit.framework.Assert.fail(Assert.java:50)
> > at junit.framework.Assert.failNotEquals(Assert.java:287)
> > at junit.framework.Assert.assertEquals(Assert.java:120)
> > at
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
> > at
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:505)
> > at
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> > at
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
> > at
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> > at
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> > at
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:183)
> > at
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:170)
> > at
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
> > at
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
> > at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> > at
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
> > at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
> > at
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
> > at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> > at
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
> > at
> org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
> > at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:132)
> > at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:128)
> > at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:118)
> > at
> org.apache.lucene.search.CheckHits.checkHitCollector(CheckHits.java:98)
> > at
> org.apache.lucene.search.BaseExplanationTestCase.qtest(BaseExplanationTestCase.java:112)
> > at
> org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.qtest(TestSimpleExplanationsWithFillerDocs.java:115)
> > at
> org.apache.lucene.search.TestSimpleExplanations.testDMQ9(TestSimpleExplanations.java:198)
> > at 

[jira] [Assigned] (SOLR-9640) Support PKI authentication in standalone mode

2016-10-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-9640:
-

Assignee: Jan Høydahl
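
For context, a hypothetical SolrJ repro of the failing request from the 
description below (the URLs and collection name are taken from the 
description; the builder-style client assumes Solr 6.x):

{code}
// Fan a query out across two standalone nodes with an explicit shards
// parameter; in the scenario below, the second node answers with a 401
// because PKI auth is not yet wired up for standalone mode.
HttpSolrClient client =
    new HttpSolrClient.Builder("http://localhost:8081/solr/foo").build();
SolrQuery query = new SolrQuery("*:*");
query.set("shards", "localhost:8081/solr/foo,localhost:8082/solr/foo");
QueryResponse rsp = client.query(query);
{code}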

> Support PKI authentication in standalone mode
> -
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Attachments: SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work in standalone mode, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18039 - Still Unstable!

2016-10-14 Thread Michael McCandless
Does anyone have any idea on this one :)

Mike McCandless

http://blog.mikemccandless.com


On Thu, Oct 13, 2016 at 11:42 AM, Policeman Jenkins Server
 wrote:
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18039/
> Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC
>
> 2 tests failed.
> FAILED:  
> org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.testDMQ9
>
> Error Message:
> (+((field:yy (field:w5)^100.0) | (field:xx)^0.0)~0.5 -extra:extra) 
> NEVER:MATCH: score(doc=433)=-3.9678193E-6 != explanationScore=-1.9839097E-6 
> Explanation: -1.9839097E-6 = sum of:   -1.9839097E-6 = sum of: 
> -1.9839097E-6 = max plus 0.5 times others of:   -3.9678193E-6 = sum of:   
>   -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result of:  
>  -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed 
> from: 100.0 = boost 9.741334E-20 = NormalizationH1, 
> computed from:1.0 = tf   5.5031867 = 
> avgFieldLength   5.6493154E19 = len 0.24889919 = 
> LambdaTTF, computed from:2147.0 = totalTermFreq   
> 8629.0 = numberOfDocuments -3.9678195E-8 = DistributionSPL  
> expected:<-3.9678193E-6> but was:<-1.9839097E-6>
>
> Stack Trace:
> junit.framework.AssertionFailedError: (+((field:yy (field:w5)^100.0) | 
> (field:xx)^0.0)~0.5 -extra:extra) NEVER:MATCH: score(doc=433)=-3.9678193E-6 
> != explanationScore=-1.9839097E-6 Explanation: -1.9839097E-6 = sum of:
>   -1.9839097E-6 = sum of:
> -1.9839097E-6 = max plus 0.5 times others of:
>   -3.9678193E-6 = sum of:
> -3.9678193E-6 = weight(field:w5 in 433) [RandomSimilarity], result of:
>   -3.9678193E-6 = score(IBSimilarity, doc=433, freq=1.0), computed 
> from:
> 100.0 = boost
> 9.741334E-20 = NormalizationH1, computed from:
>   1.0 = tf
>   5.5031867 = avgFieldLength
>   5.6493154E19 = len
> 0.24889919 = LambdaTTF, computed from:
>   2147.0 = totalTermFreq
>   8629.0 = numberOfDocuments
> -3.9678195E-8 = DistributionSPL
>  expected:<-3.9678193E-6> but was:<-1.9839097E-6>
> at 
> __randomizedtesting.SeedInfo.seed([4458FBCA19109BFC:D954023F895AFE29]:0)
> at junit.framework.Assert.fail(Assert.java:50)
> at junit.framework.Assert.failNotEquals(Assert.java:287)
> at junit.framework.Assert.assertEquals(Assert.java:120)
> at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
> at 
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:505)
> at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> at 
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
> at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
> at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:183)
> at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:170)
> at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
> at 
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
> at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
> at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at 
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
> at 
> org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
> at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:132)
> at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:128)
> at org.apache.lucene.search.QueryUtils.check(QueryUtils.java:118)
> at 
> org.apache.lucene.search.CheckHits.checkHitCollector(CheckHits.java:98)
> at 
> org.apache.lucene.search.BaseExplanationTestCase.qtest(BaseExplanationTestCase.java:112)
> at 
> org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs.qtest(TestSimpleExplanationsWithFillerDocs.java:115)
> at 
> org.apache.lucene.search.TestSimpleExplanations.testDMQ9(TestSimpleExplanations.java:198)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Commented] (LUCENE-7495) Asserting*DocValues are too lenient when checking the target in advance

2016-10-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574847#comment-15574847
 ] 

Michael McCandless commented on LUCENE-7495:


+1, nice catch!
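
For illustration only (this is not the actual Asserting* code), a wrapper that 
enforces the stricter contract:

{code}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;

// Enforces the documented advance() precondition: the target must be
// strictly beyond the current doc. The lenient `target >= docID()` check
// would let an illegal same-doc advance slip through unnoticed.
final class StrictAdvanceIterator extends DocIdSetIterator {
  private final DocIdSetIterator in;

  StrictAdvanceIterator(DocIdSetIterator in) { this.in = in; }

  @Override public int docID() { return in.docID(); }
  @Override public long cost() { return in.cost(); }
  @Override public int nextDoc() throws IOException { return in.nextDoc(); }

  @Override
  public int advance(int target) throws IOException {
    assert target > in.docID() : "advance target " + target
        + " must be > current docID " + in.docID();
    return in.advance(target);
  }
}
{code}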

> Asserting*DocValues are too lenient when checking the target in advance
> ---
>
> Key: LUCENE-7495
> URL: https://issues.apache.org/jira/browse/LUCENE-7495
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-7495.patch
>
>
> They only check {{target >= docID()}} while the actual check should be 
> {{target > docID()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1944 - Unstable!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1944/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Doc with id=4 not found in 
http://127.0.0.1:46650/_/a/forceleader_test_collection due to: Path not found: 
/id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=4 not found in 
http://127.0.0.1:46650/_/a/forceleader_test_collection due to: Path not found: 
/id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([5E52FBEE75DDB978:B8C5CF2E4C5F4019]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:620)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:575)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Lucene 3.0 Problems with term vectors for large text

2016-10-14 Thread badr
Background :

I am using Lucene to index the text of files. In my scenario a single
document can contain multiple files. As a Lucene document is a flat document
without hierarchy, I have stored the text of each page of each file in a
file_text property. So the document structure is like

document -
  |- other properties
  |- file_text
  |- file_text
  |- and so on: one file_text per page, for 'n' pages of files

I have indexed file ids and page numbers with each file_text property so
that I can find the corresponding file and page when there is a match in a
file_text property.

Solution :

I have indexed the file_text property with term vectors, so while searching
I use the term vectors to find the index of the file_text that matched the
term, and from that I can get the file id and page number of the file. The
solution works perfectly: I get all the required information, that is, the
file with the match, the page on which the match exists, and also the number
of word occurrences.

Current Problem :

The problem with this solution is that for large files Lucene is unable to
create the term vectors for the whole file text. For example, I have a file
with 222 pages and Lucene indexes term vectors for only the first 127 pages;
matches on page 128 are never found for this file. (The end offset of the
last term vector was 63122, but the actual last index of the file text is
140743.)

I am wondering if there is a limitation on term vectors in Lucene that I am
missing at the moment.

So the solution never works for big files.

Workarounds :

I can find the matching document with a Lucene search by indexing the
file_text without term vectors and simply storing the text as a whole. Once
the matching document is found, I can use regex/String methods to find the
number of matches, the file id, the page number, etc.

But this will be very slow, as the string operations will need to run on the
whole file text.

Looking for :

Is there any way to get the index of the matching file_text field in a
document? I know Explain can find the matching field, but in my case there
are multiple fields with the same name in a document, so I need the index
along with the field name. That would let me run the string methods on a
single page of text, which would improve performance.

Is there any way to make this work with term vectors?
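
A side note on one possible cause, offered as an assumption to rule out
rather than a diagnosis: in Lucene 3.x the IndexWriter's MaxFieldLength
defaults to LIMITED, i.e. 10,000 terms per field, which silently truncates
long texts (and their term vectors) - and a 10,000-term cap is roughly
consistent with the ~63k end offset reported above. A minimal sketch of
indexing the per-page fields with that cap lifted:

{code}
// Lucene 3.x sketch: one stored, analyzed file_text field per page, with
// full term vectors, and an unlimited field length so long files are not
// silently truncated at 10,000 terms. `directory` and `pageTexts` (one
// String per page) are assumed to be in scope.
IndexWriter writer = new IndexWriter(directory,
    new StandardAnalyzer(Version.LUCENE_30),
    IndexWriter.MaxFieldLength.UNLIMITED);

Document doc = new Document();
for (String pageText : pageTexts) {
  doc.add(new Field("file_text", pageText, Field.Store.YES,
      Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
}
writer.addDocument(doc);
writer.close();
{code}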





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Lucene-3-0-Problems-with-term-vectors-for-large-text-tp4301073.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 907 - Unstable!

2016-10-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/907/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([F1AC99FA72622CC9:9913ACD0A2F83E25]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 


[jira] [Commented] (SOLR-7115) UpdateLog can miss closing transaction log objects.

2016-10-14 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574375#comment-15574375
 ] 

Mikhail Khludnev commented on SOLR-7115:


{code}
-synchronized (solrCoreState.getUpdateLock()) {
-  if (ulog != null) ulog.preSoftCommit(cmd);
-  if (cmd.openSearcher) {
-    core.getSearcher(true, false, waitSearcher);
-  } else {
-    // force open a new realtime searcher so realtime-get and versioning code can see the latest
-    RefCounted<SolrIndexSearcher> searchHolder = core.openNewSearcher(true, true);
-    searchHolder.decref();
+try {
+  synchronized (solrCoreState.getUpdateLock()) {
+    if (ulog != null) ulog.preSoftCommit(cmd);
+    if (cmd.openSearcher) {
+      core.getSearcher(true, false, waitSearcher);
+    } else {
+      // force open a new realtime searcher so realtime-get and versioning code can see the latest
+      RefCounted<SolrIndexSearcher> searchHolder = core.openNewSearcher(true, true);
+      searchHolder.decref();
+    }
+    if (ulog != null) {
+      ulog.postSoftCommit(cmd);
+    }
   }
-  if (ulog != null) ulog.postSoftCommit(cmd);
 }
-if (ulog != null) ulog.postCommit(cmd); // postCommit currently means new searcher has
-                                        // also been opened
+finally {
+  if (ulog != null) {
+    ulog.postCommit(cmd); // postCommit currently means new searcher has
+                          // also been opened
+  }
+}
   }
{code}
Please have a look...

> UpdateLog can miss closing transaction log objects.
> ---
>
> Key: SOLR-7115
> URL: https://issues.apache.org/jira/browse/SOLR-7115
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: 6.3
>
> Attachments: SOLR-7115-LargeVolumeEmbeddedTest-fail.txt, 
> SOLR-7115.patch, SOLR-7115.patch
>
>
> I've seen this happen on YourKit and in various tests - especially since 
> adding resource release tracking to the log objects. Now I've got a test that 
> catches it in SOLR-7113.
> It seems that in precommit, if prevTlog is not null, we need to close it 
> because we are going to overwrite prevTlog with a new log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org