Fwd: Ask Question

2014-11-12 Thread duan duan
Solr :use DIH to update indexes to SolrCloud in the Specified shard?


Case: I want to use DIH to index documents into SolrCloud in a specified 
shard, specifically a shard that I added dynamically. Can Solr do this?

Thanks.


[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #757: POMs out of sync

2014-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/757/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([83DE04E939FAA581]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([E7FAE7BA6DA27762]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([E7FAE7BA6DA27762]:0)




Build Log:
[...truncated 53840 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:548: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:200: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 404 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Created] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2014-11-12 Thread Varun Rajput (JIRA)
Varun Rajput created SOLR-6736:
--

 Summary: A collections-like request handler to manage solr 
configurations on zookeeper
 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Priority: Minor


Managing Solr configuration files on ZooKeeper becomes cumbersome when running 
Solr in cloud mode, especially while trying out changes to the configurations. 

It would be great to have a request handler that provides an API for managing 
configurations, similar to the collections handler, allowing actions such as 
uploading new configurations, linking them to a collection, deleting 
configurations, etc.
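
As a purely hypothetical illustration (these endpoints and action names do not 
exist; they only sketch the kind of API described above), such a handler might 
be driven like this:

{code}
# Hypothetical sketch only: upload a configset, link it to a collection, delete an old one.
curl -X POST "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myconf" --data-binary @myconf.zip
curl "http://localhost:8983/solr/admin/configs?action=LINK&name=myconf&collection=collection1"
curl "http://localhost:8983/solr/admin/configs?action=DELETE&name=oldconf"
{code}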






[jira] [Commented] (SOLR-6734) Standalone solr as *two* applications -- Solr and a controlling agent

2014-11-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209177#comment-14209177
 ] 

Shawn Heisey commented on SOLR-6734:


The "chicken and egg" repercussions of that last bullet point might make your 
head spin.


> Standalone solr as *two* applications -- Solr and a controlling agent
> -
>
> Key: SOLR-6734
> URL: https://issues.apache.org/jira/browse/SOLR-6734
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Shawn Heisey
>
> In a message to the dev list outlining reasons to switch from a webapp to a 
> standalone app, Mark Miller included the idea of making Solr into two 
> applications, rather than just one.  There would be Solr itself, and an agent 
> to control Solr.
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/%3C807476C6-E4C3-4E7E-9F67-2BECB63990DE%40gmail.com%3E






[jira] [Commented] (SOLR-6734) Standalone solr as *two* applications -- Solr and a controlling agent

2014-11-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209174#comment-14209174
 ] 

Shawn Heisey commented on SOLR-6734:


Some additional detail I thought of for this idea:

 * The agent could listen on a TCP port for connections from agents running on 
other servers.
 ** This would be primarily for SolrCloud, but there might be non-cloud uses 
too.  This would enable stopping and starting of Solr instances across an 
entire cluster.
 ** Not entirely sure about how to configure an entire cluster of agents ... 
perhaps like zookeeper, where all servers contain the entire list of host:port 
pairs.  A centralized config in Zookeeper would not be a bad idea either, as 
long as we have some way of altering or resetting that config.
 ** A shared authentication key and a cluster name in the config would be a 
good idea.  The info in an incoming request would need to validate against 
both, and perhaps even against the host:port list.  This data might also be 
used for TLS.
 * The heap set by the script that starts the agent would be very small.  Even 
with hundreds of servers in the config, I would imagine that the memory 
requirements would be minimal.
 * Most startup options for Solr would be configurable and used by the agent 
when starting Solr.  Thinking about a cluster, we probably would want to have a 
common set of options as well as per-server options that can supplement or 
override common options.
 * We might even be able to control standalone zookeeper processes with this 
agent.
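
Purely as an illustration of the pieces listed above (none of these property 
names exist anywhere; this is a hypothetical sketch, not a proposal of actual 
syntax), a cluster-wide agent config might look roughly like:

{code}
{
  "clusterName": "prod-search",
  "sharedAuthKey": "secret-used-to-validate-incoming-agent-requests",
  "agents": ["host1:9001", "host2:9001", "host3:9001"],
  "commonSolrOptions": {"heap": "8g", "port": 8983},
  "perServerOptions": {"host3:9001": {"heap": "16g"}}
}
{code}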


> Standalone solr as *two* applications -- Solr and a controlling agent
> -
>
> Key: SOLR-6734
> URL: https://issues.apache.org/jira/browse/SOLR-6734
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Shawn Heisey
>
> In a message to the dev list outlining reasons to switch from a webapp to a 
> standalone app, Mark Miller included the idea of making Solr into two 
> applications, rather than just one.  There would be Solr itself, and an agent 
> to control Solr.
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/%3C807476C6-E4C3-4E7E-9F67-2BECB63990DE%40gmail.com%3E






Re: LuceneTestCase static method usage

2014-11-12 Thread Jason Gerlowski
To add some additional information, the stack trace I'm seeing is:

java.lang.NullPointerException
at
org.apache.lucene.util.LuceneTestCase.maybeChangeLiveIndexWriterConfig(LuceneTestCase.java:1080)
at
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:103)
at
org.apache.lucene.index.LuceneIndexer.indexOneDocument(LuceneIndexer.java:55)
at
org.apache.lucene.index.TestSample.indexAnimalTestData(TestSample.java:45)
at org.apache.lucene.index.TestSample.setUp(TestSample.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:401)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:642)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:129)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:559)


This is the indexing @Rule fixture I've tried putting together:

public class LuceneIndexer extends ExternalResource {
    public static final String TEST_FIELD_NAME = "TEST_FIELD_NAME";

    private final Analyzer analyzer;
    private final Directory directory;
    private RandomIndexWriter writer;

    public LuceneIndexer(Directory directory, Analyzer analyzer) {
        this.directory = directory;
        this.analyzer = analyzer;
    }

    public Directory getDirectory() {
        return directory;
    }

    @Override
    public void before() throws IOException {
        writer = new RandomIndexWriter(LuceneTestCase.random(), directory,
                new IndexWriterConfig(analyzer));
    }

    @Override
    public void after() {
        try {
            IOUtils.close(directory, analyzer);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void close() throws IOException {
        IOUtils.close(writer);
    }

    public void indexOneDocument(String text) throws IOException {
        Document document = createDocumentFromText(text);
        writer.addDocument(document);
    }

    private Document createDocumentFromText(String documentText) {
        Document document = new Document();
        FieldType defaultFieldConfig = new FieldType();
        defaultFieldConfig.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
        Field field = new Field(TEST_FIELD_NAME, documentText, defaultFieldConfig);
        document.add(field);

        return document;
    }
}

I'm reproducing the issue with the following test case:

@RunWith(com.carrotsearch.randomizedtesting.RandomizedRunner.class)
public class TestSample {

    @Rule
    public LuceneIndexer indexerFixture = new LuceneIndexer(
            LuceneTestCase.newDirectory(), new MockAnalyzer(LuceneTestCase.random()));

    @Before
    public void setUp() throws IOException {
        indexerFixture.indexOneDocument("document");
        indexerFixture.indexOneDocument("another document");
        indexerFixture.indexOneDocument("third document");
        // NPE occurs when this last call gets down to
        // LuceneTestCase.maybeChangeLiveIndexWriterConfig()
        indexerFixture.indexOneDocument("last document");

        indexerFixture.close();
    }

    @Test
    public void first_reader() {
        System.out.println("In first test");
    }

    @Test
    public void s

[jira] [Updated] (SOLR-6735) CloneFieldUpdateProcessorFactory should be null safe

2014-11-12 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6735:
---
Attachment: SOLR-6735.patch

Attached a trivial patch.

> CloneFieldUpdateProcessorFactory should be null safe
> 
>
> Key: SOLR-6735
> URL: https://issues.apache.org/jira/browse/SOLR-6735
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Davids
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6735.patch
>
>
> If a source field value is null the CloneFieldUpdateProcessor throws a null 
> pointer exception.






[jira] [Created] (SOLR-6735) CloneFieldUpdateProcessorFactory should be null safe

2014-11-12 Thread Steve Davids (JIRA)
Steve Davids created SOLR-6735:
--

 Summary: CloneFieldUpdateProcessorFactory should be null safe
 Key: SOLR-6735
 URL: https://issues.apache.org/jira/browse/SOLR-6735
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.0, Trunk


If a source field value is null the CloneFieldUpdateProcessor throws a null 
pointer exception.






[jira] [Created] (SOLR-6734) Standalone solr as *two* applications -- Solr and a controlling agent

2014-11-12 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-6734:
--

 Summary: Standalone solr as *two* applications -- Solr and a 
controlling agent
 Key: SOLR-6734
 URL: https://issues.apache.org/jira/browse/SOLR-6734
 Project: Solr
  Issue Type: Sub-task
Reporter: Shawn Heisey


In a message to the dev list outlining reasons to switch from a webapp to a 
standalone app, Mark Miller included the idea of making Solr into two 
applications, rather than just one.  There would be Solr itself, and an agent 
to control Solr.

http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/%3C807476C6-E4C3-4E7E-9F67-2BECB63990DE%40gmail.com%3E







[jira] [Commented] (SOLR-4792) stop shipping a war in trunk (6.0)

2014-11-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209135#comment-14209135
 ] 

Shawn Heisey commented on SOLR-4792:


I've been looking for another issue to record some ideas, since this issue has 
a very narrow focus, and it's resolved.  [~noble.paul] mentioned that he would 
open an issue to create a standalone app, but I can't seem to find one.  I'm 
willing to create the issue if it doesn't exist.

> stop shipping a war in trunk (6.0)
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2014-11-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209145#comment-14209145
 ] 

Shawn Heisey commented on SOLR-6733:


SOLR-4792 was the first salvo.  5.x versions will still need to retain the .war 
target, probably as the default, with standalone as an alternate.


> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>
> Umbrella issue, for gathering issues relating to smaller pieces required to 
> implement the larger feature where Solr can be run as a completely standalone 
> application, without a servlet container.






[jira] [Created] (SOLR-6733) Umbrella issue - Solr as a standalone application

2014-11-12 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-6733:
--

 Summary: Umbrella issue - Solr as a standalone application
 Key: SOLR-6733
 URL: https://issues.apache.org/jira/browse/SOLR-6733
 Project: Solr
  Issue Type: New Feature
Reporter: Shawn Heisey


Umbrella issue, for gathering issues relating to smaller pieces required to 
implement the larger feature where Solr can be run as a completely standalone 
application, without a servlet container.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40-ea-b09) - Build # 4429 - Still Failing!

2014-11-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4429/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseSerialGC 
(asserts: true)

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([6FE9CF4DEF7EDC3E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([6FE9CF4DEF7EDC3E]:0)




Build Log:
[...truncated 10964 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerBackup
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\init-core-data-001
   [junit4]   2> 2305781 T5989 oas.SolrTestCaseJ4.setUp ###Starting doTestBackup
   [junit4]   2> 2305793 T5989 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 2305799 T5989 oejs.AbstractConnector.doStart Started 
SelectChannelConnector@127.0.0.1:50404
   [junit4]   2> 2305802 T5989 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()
   [junit4]   2> 2305803 T5989 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 2305803 T5989 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-instance-001
   [junit4]   2> 2305803 T5989 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-instance-001\'
   [junit4]   2> 2305813 T5989 oasc.ConfigSolr.fromFile Loading container 
configuration from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-instance-001\solr.xml
   [junit4]   2> 2305819 T5989 oasc.CoreContainer. New CoreContainer 
1006464323
   [junit4]   2> 2305819 T5989 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-instance-001\]
   [junit4]   2> 2305820 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 9
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: 
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 15000
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnections to: 1
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
   [junit4]   2> 2305821 T5989 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
   [junit4]   2> 2305821 T5989 oasu.UpdateShardHandler. Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=34&connTimeout=45000&retry=false
   [junit4]   2> 2305821 T5989 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 2305822 T5989 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 2305822 T5989 oasc.CoreContainer.load Host Name: 127.0.0.1
   [junit4]   2> 2305824 T5999 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-instance-001\collection1\'
   [junit4]   2> 2305845 T5999 oasc.SolrConfig. Using Lucene 
MatchVersion: 6.0.0
   [junit4]   2> 2305849 T5999 oasc.SolrConfig. Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2> 2305849 T5999 oass.IndexSchema.readSchema Reading Solr Schema 
from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup-6FE9CF4DEF7EDC3E-001\solr-

[jira] [Commented] (LUCENE-6060) Remove IndexWriter.unLock

2014-11-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208921#comment-14208921
 ] 

Uwe Schindler commented on LUCENE-6060:
---

Maybe just open a new issue in SOLR!

> Remove IndexWriter.unLock
> -
>
> Key: LUCENE-6060
> URL: https://issues.apache.org/jira/browse/LUCENE-6060
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6060.patch
>
>
> This method used to be necessary, when our locking impls were buggy, but it's 
> a godawful dangerous method: it invites index corruption.
> I think we should remove it.
> Apps that for some scary reason really need it can do their own thing...






[JENKINS] Lucene-Solr-4.10-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 81 - Still Failing!

2014-11-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/81/
Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseSerialGC (asserts: false)

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:34568/solr within 1 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:34568/solr within 1 ms
at 
__randomizedtesting.SeedInfo.seed([4F036507A9A44F7D:CEE5EB1FDEFB2F41]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:163)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:114)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:104)
at 
org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:212)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:241)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:524)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1625)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1651)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:251)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:871)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Statem

[jira] [Commented] (SOLR-6732) Back-compat break for LIR state in 4.10.2

2014-11-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208844#comment-14208844
 ] 

Anshum Gupta commented on SOLR-6732:


Wouldn't this cause an issue for people who have already moved to 10.2? That 
is, it reverts things and makes things fine for people who never noticed or 
never moved to 10.2, but not for others. We should be injecting back-compat 
handling for 10.2.
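
Purely as an illustration of what such back-compat handling could look like 
(this is not the attached patch, and the naive string scanning below stands in 
for a real JSON parser), a reader could accept both the legacy plain-string 
encoding and the newer map encoding:

{code}
import java.nio.charset.StandardCharsets;

public class LirStateCompat {

  /** Returns the LIR state name from raw znode bytes, whichever format was written. */
  static String readState(byte[] znodeData) {
    String raw = new String(znodeData, StandardCharsets.UTF_8).trim();
    if (!raw.startsWith("{")) {
      return raw;                       // pre-4.10.2 plain-string format, e.g. "down"
    }
    // 4.10.2 map format, e.g. {"state":"down", ...}
    int key = raw.indexOf("\"state\"");
    if (key < 0) return null;
    int colon = raw.indexOf(':', key);
    int firstQuote = raw.indexOf('"', colon + 1);
    int secondQuote = raw.indexOf('"', firstQuote + 1);
    return raw.substring(firstQuote + 1, secondQuote);
  }

  public static void main(String[] args) {
    System.out.println(readState("down".getBytes(StandardCharsets.UTF_8)));
    System.out.println(readState(
        "{\"state\":\"down\",\"createdByNodeName\":\"node1\"}".getBytes(StandardCharsets.UTF_8)));
  }
}
{code}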

> Back-compat break for LIR state in 4.10.2
> -
>
> Key: SOLR-6732
> URL: https://issues.apache.org/jira/browse/SOLR-6732
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 4.10.3
>
> Attachments: SOLR-6732.patch
>
>
> We changed the LIR state to be kept as a map but it is not back-compatible. 
> The problem is that we're checking for map or string after parsing JSON but 
> if the key has "down" as a string then json parsing will fail.
> This was introduced in SOLR-6511. This error will prevent anyone from 
> upgrading to 4.10.2
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201411.mbox/%3c54636ed2.8040...@cytainment.de%3E






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208780#comment-14208780
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1639107 from [~sar...@syr.edu] in branch 'cms/trunk'
[ https://svn.apache.org/r1639107 ]

SOLR-6058: add files for a new 'Logos and Assets' page

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Updated] (SOLR-6732) Back-compat break for LIR state in 4.10.2

2014-11-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6732:

Attachment: SOLR-6732.patch

I have reverted the change that stored the LIR state as JSON and switched it 
back to a string.

> Back-compat break for LIR state in 4.10.2
> -
>
> Key: SOLR-6732
> URL: https://issues.apache.org/jira/browse/SOLR-6732
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 4.10.3
>
> Attachments: SOLR-6732.patch
>
>
> We changed the LIR state to be kept as a map but it is not back-compatible. 
> The problem is that we're checking for map or string after parsing JSON but 
> if the key has "down" as a string then json parsing will fail.
> This was introduced in SOLR-6511. This error will prevent anyone from 
> upgrading to 4.10.2
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201411.mbox/%3c54636ed2.8040...@cytainment.de%3E






[jira] [Updated] (SOLR-6607) Registering pluggable components through API

2014-11-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6607:
-
Description: 
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored as a binary field in a special collection called \_system_ 
(or, in standalone Solr, in a core called \_system_).
 * A 'set-configuration' command which can set the configuration of a 
component. This configuration will be saved inside configoverlay.json.
 * A 'remove-configuration' command which can remove a plugin configuration 
from configoverlay.json (not from solrconfig.xml).

The components can be registered from a jar file that is available on the 
classpath of all nodes. Registering components from uploaded jars will only be 
possible if the system is started with the option -DloadRuntimeLibs (please 
suggest a better name). The objective is to have this feature completely 
disabled by default, so that it can only be enabled by a user with file system 
access. Any system that can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

Example of registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{
  "create-request-handler": {
    "name": "/mypath",
    "class": "com.mycomponent.ClassName",
    "location": "index:mycomponent",
    "version": 2,
    "defaults": {"x": "y", "a": "b"}
  }
}'
{code}

Loading the binary into Solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
"@myselfcontainingcomponent.jar"
{code}

  was:
The concept of solrconfig editing is split into multiple pieces . This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ or ( in  a core 
called \_system_ in a standalone solr) as a binary field .
 * command  'set-configuration'  which can set the configuration of a component 
. This configuration will be saved inside the configoverlay.json
* command  "remove-configuration" . which can remove a plugin configuration 
from the configoverlay.json and not from solrconfig.xml


The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering of components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (Please 
suggest a better name) . The objective is to be able to completely disable this 
feature by default and but can only be enabled by a user with file system 
access. Any system which can load remote libraries are a security hole and a 
lot of organizations would want to disable this 

example for registering a component
{code}
curl http://localhost:8983/solr/collection1/config -H  -d '{
"create-request-handler" : {"name": "/mypath" , 
class="com.mycomponent.ClassName" location="index:mycomponent" version=2, 
"defaults":{"x":"y"
"a":"b"}
}'
{code}


> Registering pluggable components through API
> 
>
> Key: SOLR-6607
> URL: https://issues.apache.org/jira/browse/SOLR-6607
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> The concept of solrconfig editing is split into multiple pieces . This issue 
> is about registering components and uploading binaries through an API.
> This supports multiple operations
>  * Upload a jar file which can be used later in a plugin configuration. The 
> jar file will be stored in a special collection called \_system_ or ( in  a 
> core called \_system_ in a standalone solr) as a binary field .
>  * command  'set-configuration'  which can set the configuration of a 
> component . This configuration will be saved inside the configoverlay.json
> * command  "remove-configuration" . which can remove a plugin configuration 
> from the configoverlay.json and not from solrconfig.xml
> The components can be registered from a jar file that is available in the 
> classpath of all nodes. Registering of components from uploaded jars will 
> only be possible if systems are started with an option -DloadRuntimeLibs 
> (Please suggest a better name) . The objective is to be able to completely 
> disable this feature by default and but can only be enabled by a user with 
> file system access. Any system which can load remote libraries are a security 
> hole and a lot of organizations would want to disable this 
> example for registering a component
> {code}
> curl http://loca

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 682 - Still Failing

2014-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/682/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:10392/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:10392/collection1
at 
__randomizedtesting.SeedInfo.seed([AFEBFE90FF5F026C:2E0D70006250]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:583)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:144)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedte

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208751#comment-14208751
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1639079 from [~sar...@syr.edu] in branch 'cms/trunk'
[ https://svn.apache.org/r1639079 ]

SOLR-6058: remove obsolete files

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208686#comment-14208686
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1639057 from hoss...@apache.org in branch 'cms/trunk'
[ https://svn.apache.org/r1639057 ]

SOLR-6058: some RewriteRules that should hopefully catch the urls that no 
longer exist and send them someplace useful

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Updated] (SOLR-6607) Registering pluggable components through API

2014-11-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6607:
-
Description: 
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored as a binary field in a special collection called \_system_ 
(or, in standalone Solr, in a core called \_system_).
 * A 'set-configuration' command which can set the configuration of a 
component. This configuration will be saved inside configoverlay.json.
 * A 'remove-configuration' command which can remove a plugin configuration 
from configoverlay.json (not from solrconfig.xml).

The components can be registered from a jar file that is available on the 
classpath of all nodes. Registering components from uploaded jars will only be 
possible if the system is started with the option -DloadRuntimeLibs (please 
suggest a better name). The objective is to have this feature completely 
disabled by default, so that it can only be enabled by a user with file system 
access. Any system that can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

Example of registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{
  "create-request-handler": {
    "name": "/mypath",
    "class": "com.mycomponent.ClassName",
    "location": "index:mycomponent",
    "version": 2,
    "defaults": {"x": "y", "a": "b"}
  }
}'
{code}

  was:
The concept of solrconfig editing is split into multiple pieces . This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ or ( in  a core 
called \_system_ in a standalone solr) as a binary field .
 * command  'set-configuration'  which can set the configuration of a component 
. This configuration will be saved inside the configoverlay.json
* command  "remove-configuration" . which can remove a plugin configuration 
from the configoverlay.json and not from solrconfig.xml


The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering of components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (Please 
suggest a better name) . The objective is to be able to completely disable this 
feature by default and but can only be enabled by a user with file system 
access. Any system which can load remote libraries are a security hole and a 
lot of organizations would want to disable this 


> Registering pluggable components through API
> 
>
> Key: SOLR-6607
> URL: https://issues.apache.org/jira/browse/SOLR-6607
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> The concept of solrconfig editing is split into multiple pieces . This issue 
> is about registering components and uploading binaries through an API.
> This supports multiple operations
>  * Upload a jar file which can be used later in a plugin configuration. The 
> jar file will be stored in a special collection called \_system_ or ( in  a 
> core called \_system_ in a standalone solr) as a binary field .
>  * command  'set-configuration'  which can set the configuration of a 
> component . This configuration will be saved inside the configoverlay.json
> * command  "remove-configuration" . which can remove a plugin configuration 
> from the configoverlay.json and not from solrconfig.xml
> The components can be registered from a jar file that is available in the 
> classpath of all nodes. Registering of components from uploaded jars will 
> only be possible if systems are started with an option -DloadRuntimeLibs 
> (Please suggest a better name) . The objective is to be able to completely 
> disable this feature by default and but can only be enabled by a user with 
> file system access. Any system which can load remote libraries are a security 
> hole and a lot of organizations would want to disable this 
> example for registering a component
> {code}
> curl http://localhost:8983/solr/collection1/config -H  -d '{
> "create-request-handler" : {"name": "/mypath" , 
> class="com.mycomponent.ClassName" location="index:mycomponent" version=2, 
> "defaults":{"x":"y"
> "a":"b"}
> }'
> {code}




[jira] [Updated] (SOLR-6533) Support editing common solrconfig.xml values

2014-11-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6533:
-
Attachment: SOLR-6533.patch

Added a test case for config reload and refactored the listening logic.

> Support editing common solrconfig.xml values
> 
>
> Key: SOLR-6533
> URL: https://issues.apache.org/jira/browse/SOLR-6533
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
> Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
> SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
> SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch
>
>
> There are a bunch of properties in solrconfig.xml which users want to edit. 
> We will attack them first
> These properties will be persisted to a separate file called config.json (or 
> whatever file). Instead of saving in the same format we will have well known 
> properties which users can directly edit
> {code}
> updateHandler.autoCommit.maxDocs
> query.filterCache.initialSize
> {code}   
> The api will be modeled around the bulk schema API
> {code:javascript}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "set-property" : {"updateHandler.autoCommit.maxDocs":5},
> "unset-property": "updateHandler.autoCommit.maxDocs"
> }'
> {code}
> {code:javascript}
> //or use this to set ${mypropname} values
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "set-user-property" : {"mypropname":"my_prop_val"},
> "unset-user-property":{"mypropname"}
> }'
> {code}
> The values stored in the config.json will always take precedence and will be 
> applied after loading solrconfig.xml. 
> An http GET on the /config path will give the real config that is applied.
> An http GET of /config/overlay gives out the content of the configOverlay.json
> /config/ gives only the child of the same name from /config






[jira] [Commented] (LUCENE-6060) Remove IndexWriter.unLock

2014-11-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208592#comment-14208592
 ] 

Michael McCandless commented on LUCENE-6060:


I was nervous about changing Solr's behavior here; maybe we can pursue that in 
a different issue ...

> Remove IndexWriter.unLock
> -
>
> Key: LUCENE-6060
> URL: https://issues.apache.org/jira/browse/LUCENE-6060
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6060.patch
>
>
> This method used to be necessary, when our locking impls were buggy, but it's 
> a godawful dangerous method: it invites index corruption.
> I think we should remove it.
> Apps that for some scary reason really need it can do their own thing...






Re: LuceneTestCase static method usage

2014-11-12 Thread Dawid Weiss
> it looks like some static fields in LTC aren't being initialized.

If your code is executed within a static initializer then these fields
won't be initialized. Like Chris said -- post the full stack trace
and, ideally, a short snippet of code demonstrating what you're doing
(or trying to do).

> I can't extend LTC because my fixtures already need to extend JUnit's 'Rule' 
> class for JUnit to know how to use them

This isn't true. Your suite class can extend LTC and you can add
additional rules within that class. You probably mean something else
-- be specific, provide code examples.
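
A minimal sketch of that approach (not from the original thread; the class and 
field names below are made up): a suite that extends LuceneTestCase and still 
registers its own rule.

import java.io.IOException;

import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.RandomIndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.LuceneTestCase;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class TestWithSharedFixture extends LuceneTestCase {

    private Directory directory;
    private RandomIndexWriter writer;

    // An instance rule runs inside the randomized context that LTC's own rules
    // set up, so random() and newDirectory() should already be usable here.
    @Rule
    public final ExternalResource indexerFixture = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            directory = newDirectory();
            writer = new RandomIndexWriter(random(), directory, new MockAnalyzer(random()));
            Document doc = new Document();
            doc.add(new Field("body", "document", TextField.TYPE_STORED));
            writer.addDocument(doc);
        }

        @Override
        protected void after() {
            try {
                IOUtils.close(writer, directory);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    };

    @Test
    public void testFixtureIsUsable() throws IOException {
        try (DirectoryReader reader = writer.getReader()) {
            assertEquals(1, reader.numDocs());
        }
    }
}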

Dawid

On Wed, Nov 12, 2014 at 4:06 PM, Jason Gerlowski  wrote:
> Hi all,
>
> I'm seeing NPE's when calling static methods on LuceneTestCase (without
> extending it).
>
> I'm trying to write tests for a few classes that interact with Lucene.  To
> do that, I was trying to create JUnit @Rule
> (https://github.com/junit-team/junit/wiki/Rules) fixtures that I can share
> across different test classes.  I want these fixtures to use the
> randomness/extra-checks found in LuceneTestCase, but I can't extend LTC
> because my fixtures already need to extend JUnit's 'Rule' class for JUnit to
> know how to use them.
>
> So instead I wrote the fixtures to access LTC functionality through the
> class's static methods (LTC.newDirectory(), LTC.newSearcher(), etc.)
>
> Is this a valid way to access LTC methods?  Is there any special
> initialization I need to do before using the class in this way?
>
> I ask because I've started to see a few occasional NPE's in LTC during tests
> that use these fixtures.  To an amateur, it looks like some static fields in
> LTC aren't being initialized.  It's hard to tell whether I should consider
> this a bug in the class, or whether I'm using it incorrectly.
>
> Thanks for any help/insight you can offer!
>
> Best,
>
> Jason Gerlowski
>
>
> (I can follow-up with code-snippets that reproduce this issue.  I didn't
> post them in this email because I thought I might just be misusing
> LuceneTestCase).




[jira] [Commented] (SOLR-5635) Payloads malfunctioning in basic use case

2014-11-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208537#comment-14208537
 ] 

Hoss Man commented on SOLR-5635:


i haven't dug into this, but it's possible that LUCENE-6055 is the root cause 
here?

> Payloads malfunctioning in basic use case
> -
>
> Key: SOLR-5635
> URL: https://issues.apache.org/jira/browse/SOLR-5635
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6
>Reporter: michael boom
>
> This issue is also discussed on the mailing list:
> http://lucene.472066.n3.nabble.com/Simple-payloads-example-not-working-td4110998.html
> It proved that for a term search, all documents would have the same score, 
> equal to the payload value of the first document:
> - 
> http://localhost:8983/solr/collection1/pds-search?q=payloads:testone&wt=json&indent=true&debugQuery=true
>  with result: https://gist.github.com/maephisto/8433641
> I tried building a simple payloads example using the stock Solr/Lucene 4.6.0. 
> I created a custom similarity and a custom query parser - built my plugin and 
> tested it out.
> collection1 schema.xml changes: https://gist.github.com/maephisto/8433537
> collection1 sorlconfig.xml changes: https://gist.github.com/maephisto/8433550
> custom similarity: https://gist.github.com/maephisto/8433263
> custom query parser: https://gist.github.com/maephisto/8433217
> documents added: https://gist.github.com/maephisto/8433719
> I tested it with the stock Solr/Lucene 4.6.0 example. The plugin was built using 
> NetBeans.
> I used gists inside the ticket in order to keep the description shorter.
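
For readers following along, a minimal payload-decoding similarity in the spirit of
the gists above might look like the sketch below; it assumes payloads were encoded
as floats and is purely illustrative, not maephisto's actual code.

{code:java}
import org.apache.lucene.analysis.payloads.PayloadHelper;
import org.apache.lucene.search.similarities.DefaultSimilarity;
import org.apache.lucene.util.BytesRef;

// Returns the float encoded in the payload as the payload score, so documents
// with different payload values get different scores (assumes float payloads).
public class PayloadValueSimilarity extends DefaultSimilarity {
  @Override
  public float scorePayload(int doc, int start, int end, BytesRef payload) {
    if (payload == null) {
      return 1.0f;
    }
    return PayloadHelper.decodeFloat(payload.bytes, payload.offset);
  }
}
{code}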



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: LuceneTestCase static method usage

2014-11-12 Thread Chris Hostetter

Posting the specific details of the NPEs you are seeing (ie: stack 
traces) would help to answer your question.

Some of the functionality in LTC is definitely tied to the JUnit test case 
lifecycle, so it's certainly possible that some static objects used by 
static methods aren't being initialized yet when you make your calls from 
rules -- but whether that's a bug or a necessity (ie: maybe you can't 
do "newSearcher" until the random seed is set, and that's set by 
another Rule?) remains to be seen.



: Date: Wed, 12 Nov 2014 10:06:36 -0500
: From: Jason Gerlowski 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: LuceneTestCase static method usage
: 
: Hi all,
: 
: I'm seeing NPE's when calling static methods on LuceneTestCase (without
: extending it).
: 
: I'm trying to write tests for a few classes that interact with Lucene.  To
: do that, I was trying to create JUnit @Rule (
: https://github.com/junit-team/junit/wiki/Rules) fixtures that I can share
: across different test classes.  I want these fixtures to use the
: randomness/extra-checks found in LuceneTestCase, but I can't extend LTC
: because my fixtures already need to extend JUnit's 'Rule' class for JUnit
: to know how to use them.
: 
: So instead I wrote the fixtures to access LTC functionality through the
: class's static methods (LTC.newDirectory(), LTC.newSearcher(), etc.)
: 
: Is this a valid way to access LTC methods?  Is there any special
: initialization I need to do before using the class in this way?
: 
: I ask because I've started to see a few occasional NPE's in LTC during
: tests that use these fixtures.  To an amateur, it looks like some static
: fields in LTC aren't being initialized.  It's hard to tell whether I should
: consider this a bug in the class, or whether I'm using it incorrectly.
: 
: Thanks for any help/insight you can offer!
: 
: Best,
: 
: Jason Gerlowski
: 
: 
: (I can follow-up with code-snippets that reproduce this issue.  I didn't
: post them in this email because I thought I might just be misusing
: LuceneTestCase).
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6637) Solr should have a way to restore a core

2014-11-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208458#comment-14208458
 ] 

Noble Paul commented on SOLR-6637:
--

Please post the patch anyway




> Solr should have a way to restore a core
> 
>
> Key: SOLR-6637
> URL: https://issues.apache.org/jira/browse/SOLR-6637
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Attachments: SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, 
> SOLR-6637.patch, SOLR-6637.patch
>
>
> We have a core backup command which backs up the index. We should have a 
> restore command too. 
> This would restore any named snapshots created by the replication handlers 
> backup command.
> While working on this patch right now I realized that during backup we only 
> back up the index. Should we back up the conf files also? Any thoughts? I could 
> open a separate Jira for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6637) Solr should have a way to restore a core

2014-11-12 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208425#comment-14208425
 ] 

Greg Solovyev commented on SOLR-6637:
-

I have been looking for this functionality, but with a slight twist where index 
files for the core need to be shipped over the network rather than provided on 
storage local to the Solr instance where the core is being restored. I have an 
implementation of CoreRestoreHandler that takes index files over HTTP as 
ContentStreams. Should I submit it to this ticket as a patch?
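
For context, the ContentStream plumbing Greg mentions boils down to something like
the sketch below; the helper name and staging-path handling are hypothetical, not
the actual implementation being offered as a patch.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import org.apache.solr.common.util.ContentStream;
import org.apache.solr.request.SolrQueryRequest;

public class RestoreStreamReader {
  /**
   * Copies each uploaded index file (shipped as a ContentStream over HTTP)
   * into a local staging directory, keyed by the stream's name.
   */
  public static void copyStreamsTo(SolrQueryRequest req, Path stagingDir) throws IOException {
    Iterable<ContentStream> streams = req.getContentStreams();
    if (streams == null) {
      return; // nothing was uploaded with this request
    }
    for (ContentStream stream : streams) {
      try (InputStream in = stream.getStream()) {
        Files.copy(in, stagingDir.resolve(stream.getName()),
            StandardCopyOption.REPLACE_EXISTING);
      }
    }
  }
}
{code}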

> Solr should have a way to restore a core
> 
>
> Key: SOLR-6637
> URL: https://issues.apache.org/jira/browse/SOLR-6637
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Attachments: SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, 
> SOLR-6637.patch, SOLR-6637.patch
>
>
> We have a core backup command which backs up the index. We should have a 
> restore command too. 
> This would restore any named snapshots created by the replication handlers 
> backup command.
> While working on this patch right now I realized that during backup we only 
> back up the index. Should we back up the conf files also? Any thoughts? I could 
> open a separate Jira for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 210 - Still Failing

2014-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/210/

No tests ran.

Build Log:
[...truncated 51670 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
"file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (15.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.8 MB in 0.04 sec (666.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 63.7 MB in 0.10 sec (654.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 73.2 MB in 0.12 sec (604.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 206 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (103.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 34.1 MB in 0.09 sec (365.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 146.4 MB in 1.07 sec (137.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 152.5 MB in 0.64 sec (236.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
   [smoker] Startup failed; see log 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log
   [smoker] 
   [smoker] Starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unp

[JENKINS] Lucene-Solr-4.10-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 80 - Failure!

2014-11-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/80/
Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseConcMarkSweepGC (asserts: false)

1 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration 
node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because 
creating the ephemeral registration node in ZooKeeper failed
at 
__randomizedtesting.SeedInfo.seed([2209E006EC17A244:26016FF5FEB24D65]:0)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:150)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
at 
org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:157)
at 
org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:662)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lu

[jira] [Created] (SOLR-6732) Back-compat break for LIR state in 4.10.2

2014-11-12 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6732:
---

 Summary: Back-compat break for LIR state in 4.10.2
 Key: SOLR-6732
 URL: https://issues.apache.org/jira/browse/SOLR-6732
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Shalin Shekhar Mangar
Priority: Blocker
 Fix For: 4.10.3


We changed the LIR state to be kept as a map, but the change is not 
back-compatible. The problem is that we check for a map or a string after 
parsing the JSON, but if the znode still holds the plain string "down" (the 
pre-4.10.2 format) then the JSON parsing itself fails.

This was introduced in SOLR-6511. This error will prevent anyone from upgrading 
to 4.10.2.

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201411.mbox/%3c54636ed2.8040...@cytainment.de%3E
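
To make the failure mode concrete, a back-compatible read would have to check for
the legacy plain-string payload before attempting JSON parsing, roughly as in the
sketch below (illustrative only, not the committed fix; the map key name is an
assumption).

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.noggit.ObjectBuilder;

public class LirStateReader {
  /** Reads the LIR state from znode bytes, accepting both the legacy plain
   *  string (e.g. "down") and a newer JSON map such as {"state":"down", ...}. */
  @SuppressWarnings("unchecked")
  public static String readState(byte[] znodeData) throws Exception {
    String raw = new String(znodeData, StandardCharsets.UTF_8).trim();
    if (!raw.startsWith("{")) {
      return raw; // legacy pre-4.10.2 format: the bare state string
    }
    Map<String, Object> map = (Map<String, Object>) ObjectBuilder.fromJSON(raw);
    return (String) map.get("state");
  }
}
{code}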



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208282#comment-14208282
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1638868 from [~sar...@syr.edu] in branch 'cms/trunk'
[ https://svn.apache.org/r1638868 ]

SOLR-6058: footer link: tutorials.html->quickstart.html

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6060) Remove IndexWriter.unLock

2014-11-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208217#comment-14208217
 ] 

Uwe Schindler commented on LUCENE-6060:
---

+1 - throw it away!

We should also fix the corresponding Solr part! Forcefully unlocking in Solr 
should also go away - in fact a leftover lock can no longer happen with 
NativeFSLockFactory, because the lock is gone once all writers have finished or 
their JVM has crashed!

> Remove IndexWriter.unLock
> -
>
> Key: LUCENE-6060
> URL: https://issues.apache.org/jira/browse/LUCENE-6060
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6060.patch
>
>
> This method used to be necessary, when our locking impls were buggy, but it's 
> a godawful dangerous method: it invites index corruption.
> I think we should remove it.
> Apps that for some scary reason really need it can do their own thing...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6684) Fix-up /export JSON

2014-11-12 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6684:
-
Description: 
This ticket does a couple of things. 

1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
records. 

2) Changes the JSON format to match-up with the normal JSON result set.

 

  was:
This ticket does a couple of things. 

1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
records. 

2) Changes the JSON format to match-up with the normal JSON result set.

Both changes will go in trunk and 5x. Only the bug fix will go in the 4.10 
branch.

 


> Fix-up /export JSON
> ---
>
> Key: SOLR-6684
> URL: https://issues.apache.org/jira/browse/SOLR-6684
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Fix For: 4.10.3, 5.0
>
> Attachments: SOLR-6684.patch
>
>
> This ticket does a couple of things. 
> 1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
> records. 
> 2) Changes the JSON format to match-up with the normal JSON result set.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6684) Fix-up /export JSON

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208181#comment-14208181
 ] 

ASF subversion and git services commented on SOLR-6684:
---

Commit 1638821 from [~joel.bernstein] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1638821 ]

SOLR-6684 Fix-up /export JSON

> Fix-up /export JSON
> ---
>
> Key: SOLR-6684
> URL: https://issues.apache.org/jira/browse/SOLR-6684
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Fix For: 4.10.3, 5.0
>
> Attachments: SOLR-6684.patch
>
>
> This ticket does a couple of things. 
> 1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
> records. 
> 2) Changes the JSON format to match-up with the normal JSON result set.
> Both changes will go in trunk and 5x. Only the bug fix will go in the 4.10 
> branch.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



LuceneTestCase static method usage

2014-11-12 Thread Jason Gerlowski
Hi all,

I'm seeing NPE's when calling static methods on LuceneTestCase (without
extending it).

I'm trying to write tests for a few classes that interact with Lucene.  To
do that, I was trying to create JUnit @Rule (
https://github.com/junit-team/junit/wiki/Rules) fixtures that I can share
across different test classes.  I want these fixtures to use the
randomness/extra-checks found in LuceneTestCase, but I can't extend LTC
because my fixtures already need to extend JUnit's 'Rule' class for JUnit
to know how to use them.

So instead I wrote the fixtures to access LTC functionality through the
class's static methods (LTC.newDirectory(), LTC.newSearcher(), etc.)

Is this a valid way to access LTC methods?  Is there any special
initialization I need to do before using the class in this way?

I ask because I've started to see a few occasional NPE's in LTC during
tests that use these fixtures.  To an amateur, it looks like some static
fields in LTC aren't being initialized.  It's hard to tell whether I should
consider this a bug in the class, or whether I'm using it incorrectly.

Thanks for any help/insight you can offer!

Best,

Jason Gerlowski


(I can follow-up with code-snippets that reproduce this issue.  I didn't
post them in this email because I thought I might just be misusing
LuceneTestCase).


[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208082#comment-14208082
 ] 

ASF subversion and git services commented on LUCENE-2878:
-

Commit 1638800 from [~romseygeek]
[ https://svn.apache.org/r1638800 ]

Branch for LUCENE-2878

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Robert Muir
>  Labels: gsoc2014
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, 
> PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries, the ones which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do and at the end 
> of the day they are duplicating a lot of code all over lucene. Span*Queries are 
> also limited to other Span*Query instances such that you can not use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they can not score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while 
> now, so I started working on that using the bulkpostings API. I would have done 
> that first cut on trunk, but TermScorer there works on a BlockReader that does not 
> expose positions, while the one in this branch does. I started adding a new 
> Positions class which users can pull from a scorer; to prevent unnecessary 
> positions enums I added ScorerContext#needsPositions and eventually 
> Scorer#needsPayloads to create the corresponding enum on demand. Yet, 
> currently only TermQuery / TermScorer implements this API and others simply 
> return null instead. 
> To show that the API really works and our BulkPostings work fine too with 
> positions I cut over TermSpanQuery to use a TermScorer under the hood and 
> nuked TermSpans entirely. A nice side effect of this was that the Position 
> BulkReading implementation got some exercise, which now all works :) with 
> positions, while Payloads for bulkreading are kind of experimental in the 
> patch and those only work with Standard codec. 
> So all spans now work on top of TermScorer ( I truly hate spans since today ) 
> including the ones that need Payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext) which I should probably do on trunk 
> first but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't 
> look into the MemoryIndex BulkPostings API yet)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208075#comment-14208075
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1638799 from [~sar...@syr.edu] in branch 'cms/trunk'
[ https://svn.apache.org/r1638799 ]

SOLR-6058: fix quickstart URL typo

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6033) Add CachingTokenFilter.isCached and switch LinkedList to ArrayList

2014-11-12 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-6033.
--
   Resolution: Fixed
Fix Version/s: Trunk

> Add CachingTokenFilter.isCached and switch LinkedList to ArrayList
> --
>
> Key: LUCENE-6033
> URL: https://issues.apache.org/jira/browse/LUCENE-6033
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6033.patch
>
>
> CachingTokenFilter could use a simple boolean isCached() method implemented 
> as-such:
> {code:java}
>   /** If the underlying token stream was consumed and cached */
>   public boolean isCached() {
> return cache != null;
>   }
> {code}
> It's useful for the highlighting code to remove its wrapping of 
> CachingTokenFilter if after handing-off to parts of its framework it turns 
> out that it wasn't used.
> Furthermore, use an ArrayList, not a LinkedList.  ArrayList is leaner when 
> the token count is high, and this class doesn't manipulate the list in a way 
> that might favor LL.
> A separate patch will come that actually uses this method.
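
A tiny usage sketch of the new method (hypothetical helper, not part of the patch):
the caller can fall back to the original stream when nothing was ever pulled
through the cache.

{code:java}
import org.apache.lucene.analysis.CachingTokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class CachingTokenFilterUsage {
  /** If the framework never consumed the cached wrapper, keep using the
   *  original stream instead of the (empty) cache. */
  public static TokenStream pickStream(CachingTokenFilter cached, TokenStream original) {
    return cached.isCached() ? cached : original;
  }
}
{code}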



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6033) Add CachingTokenFilter.isCached and switch LinkedList to ArrayList

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208066#comment-14208066
 ] 

ASF subversion and git services commented on LUCENE-6033:
-

Commit 1638796 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1638796 ]

LUCENE-6033: CachingTokenFilter now uses ArrayList not LinkedList, and has new 
isCached() method

> Add CachingTokenFilter.isCached and switch LinkedList to ArrayList
> --
>
> Key: LUCENE-6033
> URL: https://issues.apache.org/jira/browse/LUCENE-6033
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.0
>
> Attachments: LUCENE-6033.patch
>
>
> CachingTokenFilter could use a simple boolean isCached() method implemented 
> as-such:
> {code:java}
>   /** If the underlying token stream was consumed and cached */
>   public boolean isCached() {
> return cache != null;
>   }
> {code}
> It's useful for the highlighting code to remove its wrapping of 
> CachingTokenFilter if after handing-off to parts of its framework it turns 
> out that it wasn't used.
> Furthermore, use an ArrayList, not a LinkedList.  ArrayList is leaner when 
> the token count is high, and this class doesn't manipulate the list in a way 
> that might favor LL.
> A separate patch will come that actually uses this method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6033) Add CachingTokenFilter.isCached and switch LinkedList to ArrayList

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208036#comment-14208036
 ] 

ASF subversion and git services commented on LUCENE-6033:
-

Commit 1638794 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1638794 ]

LUCENE-6033: CachingTokenFilter now uses ArrayList not LinkedList, and has new 
isCached() method

> Add CachingTokenFilter.isCached and switch LinkedList to ArrayList
> --
>
> Key: LUCENE-6033
> URL: https://issues.apache.org/jira/browse/LUCENE-6033
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.0
>
> Attachments: LUCENE-6033.patch
>
>
> CachingTokenFilter could use a simple boolean isCached() method implemented 
> as-such:
> {code:java}
>   /** If the underlying token stream was consumed and cached */
>   public boolean isCached() {
> return cache != null;
>   }
> {code}
> It's useful for the highlighting code to remove its wrapping of 
> CachingTokenFilter if after handing-off to parts of its framework it turns 
> out that it wasn't used.
> Furthermore, use an ArrayList, not a LinkedList.  ArrayList is leaner when 
> the token count is high, and this class doesn't manipulate the list in a way 
> that might favor LL.
> A separate patch will come that actually uses this method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1258: POMs out of sync

2014-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1258/

5 tests failed.
FAILED:  
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([3FD178B1286BEC36]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=1496, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@c30f7c9[State = -1, empty 
queue], state=WAITING, group=TGRP-MorphlineBasicMiniMRTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=1496, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@c30f7c9[State = -1, empty 
queue], state=WAITING, group=TGRP-MorphlineBasicMiniMRTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([EC72FC441256CF3A]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=1496, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@c30f7c9[State = -1, empty 
queue], state=WAITING, group=TGRP-MorphlineBasicMiniMRTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1496, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@c30f7c9[State = -1, empty 
queue], state=WAITING, group=TGRP-MorphlineBasicMiniMRTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4323 - Failure!

2014-11-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4323/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseSerialGC (asserts: false)

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([1F54EE25ED2E866F:9EB2603D9A71E653]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgn

[jira] [Updated] (LUCENE-6057) Clarify the Sort(SortField...) constructor

2014-11-12 Thread Martin Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Braun updated LUCENE-6057:
-
Lucene Fields: New  (was: New,Patch Available)

> Clarify the Sort(SortField...) constructor
> ---
>
> Key: LUCENE-6057
> URL: https://issues.apache.org/jira/browse/LUCENE-6057
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 4.10.2, Trunk
>Reporter: Martin Braun
>Priority: Minor
>  Labels: Clarification, Documentation, New_Users, Sort
> Fix For: 4.10.2
>
>
> I don't really know which version this affects, but I clarified the 
> documentation of the Sort(SortField...) constructor to ease the understanding 
> for new users.
> Pull Request:
> https://github.com/apache/lucene-solr/pull/20



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6057) Clarify the Sort(SortField...) constructor

2014-11-12 Thread Martin Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Braun updated LUCENE-6057:
-
Labels: Clarification Documentation New_Users Sort  (was: Clarification 
Sort)

> Clarify the Sort(SortField...) constructor
> ---
>
> Key: LUCENE-6057
> URL: https://issues.apache.org/jira/browse/LUCENE-6057
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 4.10.2, Trunk
>Reporter: Martin Braun
>Priority: Minor
>  Labels: Clarification, Documentation, New_Users, Sort
> Fix For: 4.10.2
>
>
> I don't really know which version this affects, but I clarified the 
> documentation of the Sort(SortField...) constructor to ease the understanding 
> for new users.
> Pull Request:
> https://github.com/apache/lucene-solr/pull/20



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6050) Add possibility to specify SHOULD or MUST for each context for AnalyzingInfixSuggester.lookup()

2014-11-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207914#comment-14207914
 ] 

Michael McCandless commented on LUCENE-6050:


Thank you [~arcadius] for opening the issue in the first place: this is the 
hardest part ;)

> Add possibility to specify SHOULD or MUST for each context for 
> AnalyzingInfixSuggester.lookup()
> ---
>
> Key: LUCENE-6050
> URL: https://issues.apache.org/jira/browse/LUCENE-6050
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10.2
>Reporter: Arcadius Ahouansou
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6050.patch, LUCENE-6050.patch, LUCENE-6050.patch, 
> LUCENE-6050.patch
>
>
> Currently as shown at 
> https://github.com/apache/lucene-solr/blob/lucene_solr_4_9_0/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingInfixSuggester.java#L362
>   , we have:
> {code}
> lookup(CharSequence key, Set<BytesRef> contexts, int num, boolean 
> allTermsRequired, boolean doHighlight)
> {code}
> and *SHOULD* is being applied to all contexts.
> We need the ability to specify whether it's a *SHOULD* or a *MUST* on each 
> individual context.
> Thanks.
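
To illustrate the request, a per-context occurrence could be expressed with a map
from context to BooleanClause.Occur; the overload used in the sketch below is an
assumption about the eventual API, not the signature currently in the codebase.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;
import org.apache.lucene.util.BytesRef;

public class PerContextLookupSketch {
  public static List<LookupResult> lookup(AnalyzingInfixSuggester suggester) throws Exception {
    Map<BytesRef, BooleanClause.Occur> contexts = new HashMap<>();
    contexts.put(new BytesRef("books"), BooleanClause.Occur.MUST);   // required context
    contexts.put(new BytesRef("music"), BooleanClause.Occur.SHOULD); // optional context
    // hypothetical overload taking per-context occurrences instead of a Set
    return suggester.lookup("luc", contexts, 10, true, true);
  }
}
{code}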



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6060) Remove IndexWriter.unLock

2014-11-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6060:
---
Attachment: LUCENE-6060.patch

Simple patch...

> Remove IndexWriter.unLock
> -
>
> Key: LUCENE-6060
> URL: https://issues.apache.org/jira/browse/LUCENE-6060
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6060.patch
>
>
> This method used to be necessary, when our locking impls were buggy, but it's 
> a godawful dangerous method: it invites index corruption.
> I think we should remove it.
> Apps that for some scary reason really need it can do their own thing...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6060) Remove IndexWriter.unLock

2014-11-12 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6060:
--

 Summary: Remove IndexWriter.unLock
 Key: LUCENE-6060
 URL: https://issues.apache.org/jira/browse/LUCENE-6060
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk


This method used to be necessary, when our locking impls were buggy, but it's a 
godawful dangerous method: it invites index corruption.

I think we should remove it.

Apps that for some scary reason really need it can do their own thing...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1387) Add more search options for filtering field facets.

2014-11-12 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207873#comment-14207873
 ] 

Alan Woodward commented on SOLR-1387:
-

This looks great.

Rather than using BytesRef.utf8ToString() in StringUtils.contains() (which can 
be expensive), can we use CharacterUtils.toLowerCase() instead? Have a look at 
LowercaseFilterFactory to see how that works.

It would be nice to make ignoreCase more general, rather than only applying to 
facet.contains, but I guess it won't really apply cleanly to things like 
facet.prefix.
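
A rough self-contained sketch of the idea, using plain JDK calls to stay compact;
Alan's actual suggestion is to lower-case the term's char buffer with Lucene's
CharacterUtils (as LowerCaseFilterFactory does) rather than materialising a String
via BytesRef.utf8ToString().

{code:java}
public class FacetContainsCheck {
  /**
   * Case-insensitive facet.contains check over the term's char buffer.
   * needleLower is assumed to be lower-cased once up front.
   */
  public static boolean containsIgnoreCase(char[] termChars, int termLen, String needleLower) {
    char[] lowered = new char[termLen];
    for (int i = 0; i < termLen; i++) {
      // CharacterUtils.toLowerCase would handle supplementary characters correctly
      lowered[i] = Character.toLowerCase(termChars[i]);
    }
    return new String(lowered).contains(needleLower);
  }
}
{code}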

> Add more search options for filtering field facets.
> ---
>
> Key: SOLR-1387
> URL: https://issues.apache.org/jira/browse/SOLR-1387
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Anil Khadka
>Assignee: Alan Woodward
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-1387.patch
>
>
> Currently for filtering the facets, we have to use prefix (which uses 
> String.startsWith() in java). 
> We can add some parameters like
> * facet.iPrefix : this would act like case-insensitive search. (or --->  
> facet.prefix=a&facet.caseinsense=on)
> * facet.regex : this is pure regular expression search (which obviously would 
> be expensive if issued).
> Moreover, allowing multiple filtering for same field would be great like
> facet.prefix=a OR facet.prefix=A ... sth like this.
> All above concepts could be equally applicable to TermsComponent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-1387) Add more search options for filtering field facets.

2014-11-12 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned SOLR-1387:
---

Assignee: Alan Woodward

> Add more search options for filtering field facets.
> ---
>
> Key: SOLR-1387
> URL: https://issues.apache.org/jira/browse/SOLR-1387
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Anil Khadka
>Assignee: Alan Woodward
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-1387.patch
>
>
> Currently for filtering the facets, we have to use prefix (which uses 
> String.startsWith() in java). 
> We can add some parameters like
> * facet.iPrefix : this would act like case-insensitive search. (or --->  
> facet.prefix=a&facet.caseinsense=on)
> * facet.regex : this is pure regular expression search (which obviously would 
> be expensive if issued).
> Moreover, allowing multiple filtering for same field would be great like
> facet.prefix=a OR facet.prefix=A ... sth like this.
> All above concepts could be equally applicable to TermsComponent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4428 - Failure!

2014-11-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4428/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([76D30CF49A66D538:F73582ECED39B504]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRun

[jira] [Commented] (SOLR-6643) Core load silently aborted if missing dependencies

2014-11-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207830#comment-14207830
 ] 

Jan Høydahl commented on SOLR-6643:
---

Any ideas for how to solve this?

> Core load silently aborted if missing dependencies
> -
>
> Key: SOLR-6643
> URL: https://issues.apache.org/jira/browse/SOLR-6643
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 4.10.1
>Reporter: Jan Høydahl
>Priority: Minor
>  Labels: logging
>
> *How to reproduce*
> # Start with standard collection1 config
> # Add a field type to schema using the ICU contrib, no need for a field
> {code:XML}
> 
>   
> 
> {code}
> # {{cd example}}
> # {{mkdir solr/lib}}
> # {{cp ../contrib/analysis-extras/lucene-libs/lucene-analyzers-icu-4.10.1.jar 
> solr/lib/}}
> # {{bin/solr -f}}
> # The core is not loaded, and no messages appear in the log after this line
> {code}
> ... INFO  org.apache.solr.schema.IndexSchema  – [collection1] Schema 
> name=example
> {code}
> Note that we did *not* add the dependency libs from {{analysis-extras/lib}}, 
> so we'd expect a {{ClassNotFoundException}}, but somehow the schema 
> initialization aborts silently. The ICUTokenizerFactory is instantiated by 
> reflection and I suspect that some exception is swallowed in 
> {{AbstractPluginLoader#create()}}
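
For illustration, a minimal sketch of the kind of defensive handling that would make such a failure visible; the method below is hypothetical, not the actual {{AbstractPluginLoader}} code, and only shows why catching {{Exception}} alone can hide a missing-dependency error:

{code:java}
// Hypothetical sketch: reflective plugin loading with explicit error reporting.
// A missing dependency typically surfaces as NoClassDefFoundError, which is an
// Error, so a catch (Exception e) block never sees it and the core just stops
// loading without any log output.
public static Object createPlugin(ClassLoader loader, String className) {
  try {
    Class<?> clazz = Class.forName(className, true, loader);
    return clazz.getConstructor().newInstance();
  } catch (Throwable t) {            // also catches NoClassDefFoundError
    // log before rethrowing so the failure shows up in solr.log
    System.err.println("Could not load plugin " + className + ": " + t);
    throw new RuntimeException("Plugin load failed: " + className, t);
  }
}
{code}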



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5736) Separate the classifiers to online and caching where possible

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207835#comment-14207835
 ] 

ASF subversion and git services commented on LUCENE-5736:
-

Commit 1638724 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1638724 ]

LUCENE-5736 - fixed test javadoc

> Separate the classifiers to online and caching where possible
> -
>
> Key: LUCENE-5736
> URL: https://issues.apache.org/jira/browse/LUCENE-5736
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/classification
>Reporter: Gergő Törcsvári
>Assignee: Tommaso Teofili
>  Labels: gsoc2014
> Fix For: 5.0
>
> Attachments: 0803-caching.patch, 0810-caching.patch, 
> CachingNaiveBayesClassifier.java
>
>
> The Lucene classifier implementations are now near-online if they are given a 
> near-real-time reader. That is good for users who have a continuously changing 
> dataset, but slow for datasets that do not change.
> The idea is: what if we implement a cache and speed up the results where 
> possible?
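
A minimal sketch of the caching idea, assuming a simple text-in/class-out call; the wrapper below is illustrative only and is not the CachingNaiveBayesClassifier from the attached patch:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative memoizing wrapper: reuse results for repeated input text while the
// underlying index (and therefore the trained model) is unchanged.
public class CachingClassifier {
  private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

  public String assignClass(String text) {
    String cached = cache.get(text);
    if (cached != null) {
      return cached;                    // cache hit: skip the expensive classification
    }
    String result = classifyUncached(text);
    cache.put(text, result);
    return result;
  }

  public void invalidate() {
    cache.clear();                      // call whenever the index reader is reopened
  }

  private String classifyUncached(String text) {
    // placeholder for the real classification against the index
    return "unknown";
  }
}
{code}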



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6671) Introduce a solr.data.root as root dir for all data

2014-11-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-6671:
--
Issue Type: New Feature  (was: Bug)

> Introduce a solr.data.root as root dir for all data
> ---
>
> Key: SOLR-6671
> URL: https://issues.apache.org/jira/browse/SOLR-6671
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6671.patch
>
>
> Many users prefer to deploy code, config and data in separate disk locations, 
> so the default of placing the indexes under 
> {{$\{solr.solr.home\}/$\{solr.core.name\}/data}} is not always wanted.
> In a multi-core/collection system the {{solr.data.dir}} option is of little 
> help, as it would set the {{dataDir}} to the same folder for all collections. 
> One workaround, if you don't want to hardcode paths in your 
> {{solrconfig.xml}}, is to specify the {{dataDir}} property in each 
> {{solr.properties}} file.
> A more elegant solution would be to introduce a new Java option 
> {{solr.data.root}}, which would be for data what {{solr.solr.home}} is for 
> config. If set, all collections would default their {{dataDir}} to 
> {{$\{solr.data.root\}/$\{solr.core.name\}/data}}
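
A short sketch of the intended fallback order, using the property names from the description; this is a hypothetical helper, not the attached patch:

{code:java}
import java.io.File;

// Illustrative resolution order: an explicit dataDir wins, then solr.data.root,
// and finally the traditional location under solr.solr.home.
public class DataDirResolver {
  public static String resolveDataDir(String explicitDataDir, String coreName) {
    if (explicitDataDir != null) {
      return explicitDataDir;                                   // per-core override
    }
    String dataRoot = System.getProperty("solr.data.root");
    if (dataRoot != null) {
      return new File(new File(dataRoot, coreName), "data").getPath();
    }
    String solrHome = System.getProperty("solr.solr.home", ".");
    return new File(new File(solrHome, coreName), "data").getPath();
  }
}
{code}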



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6085) Suggester crashes when prefixToken is longer than surface form

2014-11-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-6085.
---
Resolution: Fixed

Fixed. [~jferrandez], you may now build the lucene_solr_4_10 branch if you would 
like to test it

> Suggester crashes when prefixToken is longer than surface form
> --
>
> Key: SOLR-6085
> URL: https://issues.apache.org/jira/browse/SOLR-6085
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.7.1, 4.8
>Reporter: Jorge Ferrández
>Assignee: Jan Høydahl
>  Labels: suggester
> Fix For: 4.10.3, 5.0, Trunk, 4.7.3
>
> Attachments: SOLR-6085.patch
>
>
> The AnalyzingInfixSuggester class fails when it is queried with a ß character 
> (eszett), used in German, but this does not happen for all data or for all 
> words containing this character. The reported exception is the following: 
> {code:java}
> 
> 
> 500
> 18
> 
> 
> String index out of range: 5
> 
> java.lang.StringIndexOutOfBoundsException: String index out of range: 5 at 
> java.lang.String.substring(String.java:1907) at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.addPrefixMatch(AnalyzingInfixSuggester.java:575)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.highlight(AnalyzingInfixSuggester.java:525)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.createResults(AnalyzingInfixSuggester.java:479)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:437)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:338)
>  at 
> org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:181)
>  at 
> org.apache.solr.handler.component.SuggestComponent.process(SuggestComponent.java:232)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:217)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>  at org.eclipse.jetty.server.Server.handle(Server.java:368) at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>  at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>  at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>  at java.lang.Thread.run(Thread.java:744)
> 
> 500
> 
> 
> {code}
> With this query

[jira] [Commented] (LUCENE-5548) Improve flexibility and testability of the classification module

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207817#comment-14207817
 ] 

ASF subversion and git services commented on LUCENE-5548:
-

Commit 1638718 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1638718 ]

LUCENE-5548 - minor fixes (imports, comments, method names)

> Improve flexibility and testability of the classification module
> 
>
> Key: LUCENE-5548
> URL: https://issues.apache.org/jira/browse/LUCENE-5548
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>  Labels: gsoc2014, mentor
>
> The Lucene classification module's flexibility and capabilities may be 
> improved with the following:
> - make it possible to use the classifiers "online" (or provide an online 
> version of them) so that if the underlying index (reader) is updated, the 
> classifier doesn't need to be trained again to take newly added docs into 
> account
> - optionally pass a different Analyzer together with the text to be 
> classified (or directly a TokenStream) to specify custom 
> tokenization/filtering (see the sketch below)
> - normalize the score calculations of the existing classifiers
> - provide publicly available dataset-based accuracy and speed tests
> - more Lucene-based classification algorithms
> Specific subtasks for each of the above topics should be created to discuss 
> each of them in depth.
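
As an illustration of the Analyzer point above, a hypothetical interface sketch; these are not the module's actual signatures:

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;

// Hypothetical classifier interface: the caller can supply its own Analyzer so
// tokenization/filtering of the text to classify is not hard-wired.
public interface TextClassifier<T> {
  T assignClass(String text) throws IOException;                    // default analysis
  T assignClass(String text, Analyzer analyzer) throws IOException; // custom analysis
}
{code}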



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5736) Separate the classifiers to online and caching where possible

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207815#comment-14207815
 ] 

ASF subversion and git services commented on LUCENE-5736:
-

Commit 1638717 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1638717 ]

LUCENE-5736 - adding test for caching nb classifier

> Separate the classifiers to online and caching where possible
> -
>
> Key: LUCENE-5736
> URL: https://issues.apache.org/jira/browse/LUCENE-5736
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/classification
>Reporter: Gergő Törcsvári
>Assignee: Tommaso Teofili
>  Labels: gsoc2014
> Fix For: 5.0
>
> Attachments: 0803-caching.patch, 0810-caching.patch, 
> CachingNaiveBayesClassifier.java
>
>
> The Lucene classifier implementations are now near-online if they are given a 
> near-real-time reader. That is good for users who have a continuously changing 
> dataset, but slow for datasets that do not change.
> The idea is: what if we implement a cache and speed up the results where 
> possible?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6085) Suggester crashes when prefixToken is longer than surface form

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207814#comment-14207814
 ] 

ASF subversion and git services commented on SOLR-6085:
---

Commit 1638716 from jan...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1638716 ]

SOLR-6085: Suggester crashes when prefixToken is longer than surface form 
(backport)

> Suggester crashes when prefixToken is longer than surface form
> --
>
> Key: SOLR-6085
> URL: https://issues.apache.org/jira/browse/SOLR-6085
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.7.1, 4.8
>Reporter: Jorge Ferrández
>Assignee: Jan Høydahl
>  Labels: suggester
> Fix For: 4.7.3, 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6085.patch
>
>
> The AnalyzingInfixSuggester class fails when it is queried with a ß character 
> (eszett), used in German, but this does not happen for all data or for all 
> words containing this character. The reported exception is the following: 
> {code:java}
> 
> 
> 500
> 18
> 
> 
> String index out of range: 5
> 
> java.lang.StringIndexOutOfBoundsException: String index out of range: 5 at 
> java.lang.String.substring(String.java:1907) at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.addPrefixMatch(AnalyzingInfixSuggester.java:575)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.highlight(AnalyzingInfixSuggester.java:525)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.createResults(AnalyzingInfixSuggester.java:479)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:437)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:338)
>  at 
> org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:181)
>  at 
> org.apache.solr.handler.component.SuggestComponent.process(SuggestComponent.java:232)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:217)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>  at org.eclipse.jetty.server.Server.handle(Server.java:368) at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>  at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>  at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool

[jira] [Commented] (LUCENE-5699) Lucene classification score calculation normalize and return lists

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207813#comment-14207813
 ] 

ASF subversion and git services commented on LUCENE-5699:
-

Commit 1638715 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1638715 ]

LUCENE-5699 - normalized score for boolean perceptron classifier

> Lucene classification score calculation normalize and return lists
> --
>
> Key: LUCENE-5699
> URL: https://issues.apache.org/jira/browse/LUCENE-5699
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/classification
>Reporter: Gergő Törcsvári
>Assignee: Tommaso Teofili
>  Labels: gsoc2014
> Fix For: 5.0, Trunk
>
> Attachments: 06-06-5699.patch, 0730.patch, 0803-base.patch, 
> 0810-base.patch
>
>
> Currently the classifiers can return only the "best matching" class. If 
> somebody wants to use them for more complex tasks, they need to modify these 
> classes to also get the second and third results. If it is possible to return 
> a list and it does not cost much, why don't we do that? (We iterate over a 
> list anyway.)
> The Bayes classifier returns very small values, and there was a bug with the 
> zero floats; it was fixed by using logarithms. It would be nice to scale the 
> class scores so that they sum to one; then we could compare the returned 
> scores and relevance of two documents. (If we don't do this, the word count 
> of the test documents affects the result score.)
> In bullet points (see the sketch below):
> * In the Bayes classifier, normalize the score values and return result lists.
> * In the KNN classifier, add the possibility to return a result list.
> * Make ClassificationResult Comparable for list sorting.
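
A small sketch of the normalization and ranked-list idea, assuming a simple result object holding a class name and a score; hypothetical code, not the module's ClassificationResult:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative: scale class scores so they sum to one and return a ranked list,
// instead of only the single best class.
public class RankedClass implements Comparable<RankedClass> {
  final String className;
  final double score;

  RankedClass(String className, double score) {
    this.className = className;
    this.score = score;
  }

  @Override
  public int compareTo(RankedClass other) {
    return Double.compare(other.score, this.score);   // highest score first
  }

  static List<RankedClass> normalize(List<RankedClass> raw) {
    double sum = 0;
    for (RankedClass r : raw) {
      sum += r.score;
    }
    List<RankedClass> normalized = new ArrayList<RankedClass>();
    for (RankedClass r : raw) {
      normalized.add(new RankedClass(r.className, sum == 0 ? 0 : r.score / sum));
    }
    Collections.sort(normalized);                      // ranked list, best first
    return normalized;
  }
}
{code}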



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6085) Suggester crashes when prefixToken is longer than surface form

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207810#comment-14207810
 ] 

ASF subversion and git services commented on SOLR-6085:
---

Commit 1638712 from jan...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1638712 ]

SOLR-6085: Suggester crashes when prefixToken is longer than surface form 
(merge)

> Suggester crashes when prefixToken is longer than surface form
> --
>
> Key: SOLR-6085
> URL: https://issues.apache.org/jira/browse/SOLR-6085
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.7.1, 4.8
>Reporter: Jorge Ferrández
>Assignee: Jan Høydahl
>  Labels: suggester
> Fix For: 4.7.3, 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6085.patch
>
>
> The AnalyzingInfixSuggester class fails when it is queried with a ß character 
> (eszett), used in German, but this does not happen for all data or for all 
> words containing this character. The reported exception is the following: 
> {code:java}
> 
> 
> 500
> 18
> 
> 
> String index out of range: 5
> 
> java.lang.StringIndexOutOfBoundsException: String index out of range: 5 at 
> java.lang.String.substring(String.java:1907) at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.addPrefixMatch(AnalyzingInfixSuggester.java:575)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.highlight(AnalyzingInfixSuggester.java:525)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.createResults(AnalyzingInfixSuggester.java:479)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:437)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:338)
>  at 
> org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:181)
>  at 
> org.apache.solr.handler.component.SuggestComponent.process(SuggestComponent.java:232)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:217)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>  at org.eclipse.jetty.server.Server.handle(Server.java:368) at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>  at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>  at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)

[jira] [Commented] (SOLR-6085) Suggester crashes when prefixToken is longer than surface form

2014-11-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207807#comment-14207807
 ] 

ASF subversion and git services commented on SOLR-6085:
---

Commit 1638711 from jan...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1638711 ]

SOLR-6085: Suggester crashes when prefixToken is longer than surface form

> Suggester crashes when prefixToken is longer than surface form
> --
>
> Key: SOLR-6085
> URL: https://issues.apache.org/jira/browse/SOLR-6085
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.7.1, 4.8
>Reporter: Jorge Ferrández
>Assignee: Jan Høydahl
>  Labels: suggester
> Fix For: 4.7.3, 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6085.patch
>
>
> The AnalyzingInfixSuggester class fails when it is queried with a ß character 
> (eszett), used in German, but this does not happen for all data or for all 
> words containing this character. The reported exception is the following: 
> {code:java}
> 
> 
> 500
> 18
> 
> 
> String index out of range: 5
> 
> java.lang.StringIndexOutOfBoundsException: String index out of range: 5 at 
> java.lang.String.substring(String.java:1907) at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.addPrefixMatch(AnalyzingInfixSuggester.java:575)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.highlight(AnalyzingInfixSuggester.java:525)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.createResults(AnalyzingInfixSuggester.java:479)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:437)
>  at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:338)
>  at 
> org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:181)
>  at 
> org.apache.solr.handler.component.SuggestComponent.process(SuggestComponent.java:232)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:217)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>  at org.eclipse.jetty.server.Server.handle(Server.java:368) at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>  at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>  at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>  at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>  at 
> org.eclipse.