Re: Processor support for select operations?

2014-11-06 Thread Dawid Weiss
And there's always hacker's delight if you're sure popcnt is efficient:

ctz(x) = pop((x & (-x)) - 1)
ffs(x) = pop(x ^ (~(-x)))
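
These identities can be sanity-checked in Java with Long.bitCount(), the
popcount mentioned below in the thread. The class and method names here are
mine, purely for illustration:

```java
// Sketch of the two Hacker's Delight identities above, expressed via
// Long.bitCount (popcount). Names are illustrative, not from the thread.
public class BitIdentities {
    // ctz(x) = pop((x & -x) - 1): (x & -x) isolates the lowest set bit,
    // and subtracting 1 leaves a mask with exactly ctz(x) ones.
    // For x == 0 this yields 64, matching Long.numberOfTrailingZeros(0).
    static int ctz(long x) {
        return Long.bitCount((x & -x) - 1);
    }

    // ffs(x) = pop(x ^ ~(-x)): since ~(-x) == x - 1, the XOR keeps the
    // lowest set bit and everything below it, so the popcount is the
    // 1-based index of the lowest set bit. Note this variant returns
    // 64 (not the conventional 0) when x == 0.
    static int ffs(long x) {
        return Long.bitCount(x ^ ~(-x));
    }

    public static void main(String[] args) {
        System.out.println(ctz(0b1000L)); // 3
        System.out.println(ffs(0b1000L)); // 4
    }
}
```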

D.

On Fri, Nov 7, 2014 at 8:41 AM, Dawid Weiss
 wrote:
> Hi Paul,
>
> I think compiler-dev hotspot list would be adequate. Coincidentally,
> John Rose mentioned ffs (find first bit set) ops just recently and
> gave an interesting link to all sorts of implementations:
>
> http://markmail.org/message/siucbahtw2jzblzl
>
> Dawid
>
> On Thu, Nov 6, 2014 at 5:18 PM, Paul Elschot  wrote:
>> Dear all,
>>
>> For LUCENE-6040 it would be good to have better processor support for
>> selecting the i-th set bit from a 64-bit integer.
>>
>> Not too long ago, Long.bitCount() was intrinsified in JVMs.
>>
>> I hope something similar will happen to a select(long x, int i)
>> method. However, better processor support is needed first.
>>
>> This is somewhat off topic here, but does anyone know how to request
>> better processor support for select operations?
>>
>>
>> Regards,
>> Paul Elschot
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
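
Absent hardware support, a portable fallback for the select(long x, int i)
method Paul describes might look like the loop below. This is only a baseline
sketch with an assumed contract (0-based i, return 64 when out of range), not
what an intrinsic would emit; on x86 with BMI2, select is reportedly done as
PDEP of (1 << i) into x followed by TZCNT.

```java
// Hypothetical portable select(long x, int i): position of the i-th
// (0-based) set bit of x, or 64 if x has fewer than i + 1 set bits.
// A simple baseline; hardware support is what the thread asks about.
public class Select {
    static int select(long x, int i) {
        // Clear the lowest set bit i times, then the answer is the
        // trailing-zero count of what remains.
        for (int k = 0; k < i; k++) {
            x &= x - 1; // clears the lowest set bit
        }
        return x == 0 ? 64 : Long.numberOfTrailingZeros(x);
    }

    public static void main(String[] args) {
        long x = 0b10110L;                // set bits at positions 1, 2, 4
        System.out.println(select(x, 2)); // 4
    }
}
```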




Re: Processor support for select operations?

2014-11-06 Thread Dawid Weiss
Hi Paul,

I think compiler-dev hotspot list would be adequate. Coincidentally,
John Rose mentioned ffs (find first bit set) ops just recently and
gave an interesting link to all sorts of implementations:

http://markmail.org/message/siucbahtw2jzblzl

Dawid

On Thu, Nov 6, 2014 at 5:18 PM, Paul Elschot  wrote:
> Dear all,
>
> For LUCENE-6040 it would be good to have better processor support for
> selecting the i-th set bit from a 64-bit integer.
>
> Not too long ago, Long.bitCount() was intrinsified in JVMs.
>
> I hope something similar will happen to a select(long x, int i)
> method. However, better processor support is needed first.
>
> This is somewhat off topic here, but does anyone know how to request
> better processor support for select operations?
>
>
> Regards,
> Paul Elschot
>
>




[jira] [Commented] (SOLR-6715) ZkSolrResourceLoader constructors accept a parameter called "collection" but it should be "configName"

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201701#comment-14201701
 ] 

ASF subversion and git services commented on SOLR-6715:
---

Commit 1637299 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1637299 ]

SOLR-6715: Fix javadoc warning

> ZkSolrResourceLoader constructors accept a parameter called "collection" but 
> it should be "configName"
> --
>
> Key: SOLR-6715
> URL: https://issues.apache.org/jira/browse/SOLR-6715
> Project: Solr
>  Issue Type: Bug
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Attachments: SOLR-6715.patch
>
>
> {code}
> public ZkSolrResourceLoader(String instanceDir, String collection,
>   ZkController zooKeeperController);
> public ZkSolrResourceLoader(String instanceDir, String collection, 
> ClassLoader parent,
>   Properties coreProperties, ZkController zooKeeperController);
> {code}
> The CloudConfigSetService created ZkSolrResourceLoader using the configName 
> (which is correct).
> We should rename the param in ZkSolrResourceLoader to be configSetName and 
> also rename the "collectionZkPath" member to be "configSetZkPath".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-6715) ZkSolrResourceLoader constructors accept a parameter called "collection" but it should be "configName"

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201690#comment-14201690
 ] 

ASF subversion and git services commented on SOLR-6715:
---

Commit 1637296 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1637296 ]

SOLR-6715: ZkSolrResourceLoader constructors accept a parameter called 
'collection' but it should be 'configName'

> ZkSolrResourceLoader constructors accept a parameter called "collection" but 
> it should be "configName"
> --
>
> Key: SOLR-6715
> URL: https://issues.apache.org/jira/browse/SOLR-6715
> Project: Solr
>  Issue Type: Bug
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Attachments: SOLR-6715.patch
>
>
> {code}
> public ZkSolrResourceLoader(String instanceDir, String collection,
>   ZkController zooKeeperController);
> public ZkSolrResourceLoader(String instanceDir, String collection, 
> ClassLoader parent,
>   Properties coreProperties, ZkController zooKeeperController);
> {code}
> The CloudConfigSetService created ZkSolrResourceLoader using the configName 
> (which is correct).
> We should rename the param in ZkSolrResourceLoader to be configSetName and 
> also rename the "collectionZkPath" member to be "configSetZkPath".







[jira] [Updated] (SOLR-6715) ZkSolrResourceLoader constructors accept a parameter called "collection" but it should be "configName"

2014-11-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6715:

Summary: ZkSolrResourceLoader constructors accept a parameter called 
"collection" but it should be "configName"  (was: ZkResourceLoader constructors 
accept a parameter called "collection" but it should be "configName")

> ZkSolrResourceLoader constructors accept a parameter called "collection" but 
> it should be "configName"
> --
>
> Key: SOLR-6715
> URL: https://issues.apache.org/jira/browse/SOLR-6715
> Project: Solr
>  Issue Type: Bug
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Attachments: SOLR-6715.patch
>
>
> {code}
> public ZkSolrResourceLoader(String instanceDir, String collection,
>   ZkController zooKeeperController);
> public ZkSolrResourceLoader(String instanceDir, String collection, 
> ClassLoader parent,
>   Properties coreProperties, ZkController zooKeeperController);
> {code}
> The CloudConfigSetService created ZkSolrResourceLoader using the configName 
> (which is correct).
> We should rename the param in ZkSolrResourceLoader to be configSetName and 
> also rename the "collectionZkPath" member to be "configSetZkPath".







[jira] [Updated] (SOLR-6715) ZkResourceLoader constructors accept a parameter called "collection" but it should be "configName"

2014-11-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6715:

Attachment: SOLR-6715.patch

Patch which renames collectionZkPath to configSetZkPath everywhere.

> ZkResourceLoader constructors accept a parameter called "collection" but it 
> should be "configName"
> --
>
> Key: SOLR-6715
> URL: https://issues.apache.org/jira/browse/SOLR-6715
> Project: Solr
>  Issue Type: Bug
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Attachments: SOLR-6715.patch
>
>
> {code}
> public ZkSolrResourceLoader(String instanceDir, String collection,
>   ZkController zooKeeperController);
> public ZkSolrResourceLoader(String instanceDir, String collection, 
> ClassLoader parent,
>   Properties coreProperties, ZkController zooKeeperController);
> {code}
> The CloudConfigSetService created ZkSolrResourceLoader using the configName 
> (which is correct).
> We should rename the param in ZkSolrResourceLoader to be configSetName and 
> also rename the "collectionZkPath" member to be "configSetZkPath".







[jira] [Created] (SOLR-6715) ZkResourceLoader constructors accept a parameter called "collection" but it should be "configName"

2014-11-06 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6715:
---

 Summary: ZkResourceLoader constructors accept a parameter called 
"collection" but it should be "configName"
 Key: SOLR-6715
 URL: https://issues.apache.org/jira/browse/SOLR-6715
 Project: Solr
  Issue Type: Bug
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Trivial


{code}
public ZkSolrResourceLoader(String instanceDir, String collection,
  ZkController zooKeeperController);
public ZkSolrResourceLoader(String instanceDir, String collection, ClassLoader 
parent,
  Properties coreProperties, ZkController zooKeeperController);
{code}

The CloudConfigSetService created ZkSolrResourceLoader using the configName 
(which is correct).

We should rename the param in ZkSolrResourceLoader to be configSetName and 
also rename the "collectionZkPath" member to be "configSetZkPath".
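
After the proposed rename, the constructor signatures from the description
would read roughly as follows (illustrative only; the actual change is in the
attached SOLR-6715.patch):

```java
public ZkSolrResourceLoader(String instanceDir, String configSetName,
    ZkController zooKeeperController);
public ZkSolrResourceLoader(String instanceDir, String configSetName,
    ClassLoader parent, Properties coreProperties,
    ZkController zooKeeperController);
```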







[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 669 - Still Failing

2014-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/669/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([3B3CA9597CD84E7B:BADA27410B872E47]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$S

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_67) - Build # 11417 - Failure!

2014-11-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11417/
Java: 32bit/jdk1.7.0_67 -server -XX:+UseParallelGC (asserts: false)

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MultiThreadedOCPTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.MultiThreadedOCPTest: 
1) Thread[id=7594, name=OverseerThreadFactory-4096-thread-5, 
state=TIMED_WAITING, group=Overseer collection creation process.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1847)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1729)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:615)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2856)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.MultiThreadedOCPTest: 
   1) Thread[id=7594, name=OverseerThreadFactory-4096-thread-5, 
state=TIMED_WAITING, group=Overseer collection creation process.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1847)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1729)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:615)
at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2856)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([CE6D0732FFA7EB0D]:0)




Build Log:
[...truncated 12197 lines...]
   [junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.MultiThreadedOCPTest-CE6D0732FFA7EB0D-001/init-core-data-001
   [junit4]   2> 2501531 T7405 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2> 2501531 T7405 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2> 2501534 T7405 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 2501535 T7405 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2501535 T7406 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 2501635 T7405 oasc.ZkTestServer.run start zk server on 
port:42482
   [junit4]   2> 2501636 T7405 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2501636 T7405 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2501638 T7412 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@ff4994 name:ZooKeeperConnection 
Watcher:127.0.0.1:42482 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 2501638 T7405 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2501639 T7405 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2501639 T7405 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2501641 T7405 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2501642 T7405 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2501643 T7414 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@27c6fe name:ZooKeeperConnection 
Watcher:127.0.0.1:42482/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 2501643 T7405 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2501643 T7405 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2501644 T7405 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 2501645 T7405 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 2501647 T7405 oascc.SolrZkClient.makePath makePath: 
/collections/control_col

[jira] [Commented] (SOLR-6654) add a standard way to listen to config changes in cloud mode

2014-11-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201575#comment-14201575
 ] 

Noble Paul commented on SOLR-6654:
--

OK peace. 
I agree with all of you :)

> add a standard way to listen to config changes in cloud mode
> 
>
> Key: SOLR-6654
> URL: https://issues.apache.org/jira/browse/SOLR-6654
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>








[jira] [Commented] (SOLR-6654) add a standard way to listen to config changes in cloud mode

2014-11-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201529#comment-14201529
 ] 

Shawn Heisey commented on SOLR-6654:


bq. If I am fixing an issue, i.e. making the first commit, what should the 
message be? Isn't it the same as the issue description? OTOH, if I am doing an 
extra commit to make some enhancements to the same issue, I can explicitly 
specify the enhancement

I would say that the minimum message should include the issue summary (title).  
When the issue number is the only thing in the message, most people will not 
remember what that issue is for, so if they want to know, they have to go look 
it up.  That might be hard to do in some circumstances ... for instance, you 
might be skimming email on a phone or other mobile device, or you might be 
reviewing "svn log" output.  If there's a meaningful message, I can decide 
immediately on reading the commit email whether I need/want to look at the 
issue for more detail.

Commit messages are by developers, for developers ... so it's helpful to have a 
message that describes the *commit* and includes a little more detail.  Other 
summaries (like CHANGES.txt) are for users ... developer detail there tends to 
just cause confusion.

I don't know if you know this ... but there is a comm...@lucene.apache.org 
mailing list.  Each commit will result in at least one email (revision 1633340 
was 19 messages, and I've seen larger ones), showing the revision number, 
commit message, and the full diff.  Unlike the Commit Bot messages that get 
added to the Jira issue, you cannot see any other details, like the issue 
summary.  I rely on the commits list more than the Jira updates for staying 
aware of code updates.  I use the messages in my Jira folder to stay informed 
of Jira discussions, and mostly skip over the Commit Bot messages, because they 
do not include the diff.


> add a standard way to listen to config changes in cloud mode
> 
>
> Key: SOLR-6654
> URL: https://issues.apache.org/jira/browse/SOLR-6654
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>








[jira] [Commented] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-11-06 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201523#comment-14201523
 ] 

Julie Tibshirani commented on LUCENE-5929:
--

Pinging this ticket since I haven't heard back in a while.

> Standard highlighting doesn't work for ToParentBlockJoinQuery
> -
>
> Key: LUCENE-5929
> URL: https://issues.apache.org/jira/browse/LUCENE-5929
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Julie Tibshirani
>Priority: Critical
> Attachments: HighligherTest.patch, LUCENE-5929.patch
>
>
> Because WeightedSpanTermExtractor#extract doesn't check for 
> ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
> this type of query.
> At first it may seem like there's no issue, because ToParentBlockJoinQuery 
> only returns parent documents, while the highlighting applies to children. 
> But if a client can directly supply the text from child documents (as 
> elasticsearch does if _source is enabled), then highlighting will 
> unexpectedly fail.
> A test case that triggers the bug is attached. The same issue exists for 
> ToChildBlockJoinQuery.







[jira] [Commented] (SOLR-6707) Recovery/election for invalid core results in rapid-fire re-attempts until /overseer/queue is clogged

2014-11-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201501#comment-14201501
 ] 

Mark Miller commented on SOLR-6707:
---

bq. Solr then began rapid-fire reattempting recovery of said node, trying maybe 
20-30 times per second.

Do you have logs or something for that? We should not be doing that.

> Recovery/election for invalid core results in rapid-fire re-attempts until 
> /overseer/queue is clogged
> -
>
> Key: SOLR-6707
> URL: https://issues.apache.org/jira/browse/SOLR-6707
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: James Hardwick
>
> We experienced an issue the other day that brought a production solr server 
> down, and this is what we found after investigating:
> - Running a Solr instance with two separate cores, one of which is perpetually 
> down because its configs are not yet completely updated for SolrCloud. This 
> was thought to be harmless since it's not currently in use. 
> - Solr experienced an "internal server error" supposedly because of "No space 
> left on device" even though we appeared to have ~10GB free. 
> - Solr immediately went into recovery, and subsequent leader election for 
> each shard of each core. 
> - Our primary core recovered immediately. Our additional core which was never 
> active in the first place, attempted to recover but of course couldn't due to 
> the improper configs. 
> - Solr then began rapid-fire reattempting recovery of said node, trying maybe 
> 20-30 times per second.
> - This in turn bombarded ZooKeeper's /overseer/queue into oblivion.
> - At some point /overseer/queue becomes so backed up that normal cluster 
> coordination can no longer play out, and Solr topples over. 
> I know this is a bit of an unusual circumstance due to us keeping the dead 
> core around, and our quick solution has been to remove said core. However, I 
> can see other potential scenarios that might cause the same issue to arise. 







[jira] [Commented] (SOLR-6654) add a standard way to listen to config changes in cloud mode

2014-11-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201478#comment-14201478
 ] 

Noble Paul commented on SOLR-6654:
--

bq.I also wish you would return to using commit messages beyond just the issue 
number as the rest of the project does.

If I am fixing an issue, i.e. making the first commit, what should the 
message be? Isn't it the same as the issue description? OTOH, if I am doing an 
extra commit to make some enhancements to the same issue, I can explicitly 
specify the enhancement

> add a standard way to listen to config changes in cloud mode
> 
>
> Key: SOLR-6654
> URL: https://issues.apache.org/jira/browse/SOLR-6654
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>








[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201456#comment-14201456
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1637290 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1637290 ]

SOLR-6058: Fix the top nav bar menu so the items have the right padding and 
font weight between 40.063em (the old menu toggle breakpoint) and 47.5em (the 
new menu toggle breakpoint)

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.







[JENKINS] Lucene-Solr-4.10-Linux (32bit/jdk1.9.0-ea-b34) - Build # 59 - Failure!

2014-11-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/59/
Java: 32bit/jdk1.9.0-ea-b34 -server -XX:+UseG1GC (asserts: true)

1 tests failed.
REGRESSION:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:42179/_/sp/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:42179/_/sp/collection1
	at __randomizedtesting.SeedInfo.seed([1B6104EF76E47C87:9A878AF701BB1CBB]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:564)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
	at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
	at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
	at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
	at org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:871)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201418#comment-14201418
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1637286 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1637286 ]

SOLR-6058: Fix the top nav bar menu so that it expands when it's visible at 
widths between 40.063em (the old menu toggle breakpoint) and 47.5em (the new 
menu toggle breakpoint)

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6051) IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element of the path

2014-11-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6051.
-
Resolution: Fixed

Thank you again Simon!

> IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable&lt;? extends Path&gt; 
> files);
>  public static void deleteFilesIfExist(Iterable&lt;? extends Path&gt; files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable&lt;Path&gt;, since Path implements Iterable&lt;Path&gt;, and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 
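The trap described above can be reproduced without any Lucene dependency: a parameter of type Iterable&lt;? extends Path&gt; silently accepts a single Path, which then iterates over its own name elements. In this sketch, `collect` is a hypothetical stand-in for the delete helpers (it records what would be deleted instead of deleting):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class PathIterableTrap {
    // Hypothetical helper mirroring the trappy IOUtils signature: an
    // Iterable<? extends Path> parameter happily accepts a single Path,
    // because Path itself implements Iterable<Path> over its name elements.
    static List<Path> collect(Iterable<? extends Path> files) {
        List<Path> seen = new ArrayList<>();
        for (Path p : files) {
            seen.add(p);  // a real delete helper would delete p here
        }
        return seen;
    }

    public static void main(String[] args) {
        Path single = Paths.get("index", "segments_1");
        // Intended: treat 'single' as one file to delete. Actual: the Path is
        // iterated, yielding each name element ("index", then "segments_1") --
        // i.e. the helper tries to process every element of the path.
        List<Path> elements = collect(single);
        System.out.println(elements); // prints [index, segments_1]
    }
}
```

The fix chosen for LUCENE-6051 avoids the ambiguity by not exposing the Iterable overload in a way a bare Path can satisfy unnoticed.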






[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element of the path

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201367#comment-14201367
 ] 

ASF subversion and git services commented on LUCENE-6051:
-

Commit 1637284 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1637284 ]

LUCENE-6051: don't use trappy Iterable in helper functions

> IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable&lt;? extends Path&gt; 
> files);
>  public static void deleteFilesIfExist(Iterable&lt;? extends Path&gt; files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable&lt;Path&gt;, since Path implements Iterable&lt;Path&gt;, and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 






Re: Multi-valued fields and TokenStream

2014-11-06 Thread Robert Muir
On Thu, Nov 6, 2014 at 3:41 PM, david.w.smi...@gmail.com
 wrote:
> On Thu, Nov 6, 2014 at 3:19 PM, Robert Muir  wrote:
>>
>> Do the concatenation yourself with your own TokenStream. You can index
>> a field with a tokenstream for expert cases (the individual stored
>> values can be added separately)
>
>
> Yes, but that’s quite awkward and a fair amount of surrounding code when, in
> the end, it could be so much simpler if somehow the TokenStream could be
> notified.  I’d feel a little better about it if Lucene included the
> tokenStream concatenating code (I’ve done a prototype for this, I could work
> on it more and contribute) and if the Solr layer had a nice way of
> presenting all the values to the Solr FieldType at once instead of
> separately — SOLR-4329.
>
>>
>> No need to make the tokenstream API more complicated: its already very
>> complicated.
>
>
> Ehh, that’s arguable.  Steve’s suggestion amounts to one line of production
> code (javadoc & test is separate).  If that’s too much then adding a boolean
> argument to reset() would feel cleaner, be 0 lines of new code, but would be
> backwards-incompatible.  Shrug.

That's just not true, and that's why I am against such a change. It is not
one line: it makes the "protocol" of TokenStream a lot more complex, and
we have to ensure the correct values are passed by all consumers
(including IndexWriter), etc. The same goes regardless of whether it is
extra parameters or strange values.

It's also bogus to add such stuff when it's specific to IndexWriter
concatenating multiple fields, which anyone can do themselves with a
TokenStream. It's unnecessary.

Instead, we already provide an expert API (index a TokenStream) for you
to do whatever it is you want, without dirtying up Lucene's API.

Sorry, I don't think we should hack booleans into IndexWriter or
TokenStream or Analyzer for this expert use case, when we already
supply you an API to do it, and just "don't wanna".
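[Editorial note: the expert route discussed here is to build the concatenated stream yourself and index it as a field. The concatenation logic itself can be illustrated without a Lucene dependency; in this simplified model a token is just a (term, positionIncrement) pair rather than real TokenStream attributes, and `posIncGap` stands in for what Analyzer.getPositionIncrementGap would supply.]

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified model of concatenating per-value token streams for a
 *  multi-valued field, inserting a position-increment gap between values
 *  so that phrase queries cannot match across value boundaries. */
public class ConcatDemo {
    record Token(String term, int posInc) {}

    static List<Token> concat(List<List<String>> values, int posIncGap) {
        List<Token> out = new ArrayList<>();
        boolean first = true;
        for (List<String> value : values) {
            // The gap is applied once before each value after the first:
            // the first token of a later value jumps by gap + 1 positions.
            int inc = first ? 1 : posIncGap + 1;
            for (String term : value) {
                out.add(new Token(term, inc));
                inc = 1;  // subsequent tokens advance by 1 as usual
            }
            first = false;
        }
        return out;
    }

    public static void main(String[] args) {
        var tokens = concat(List.of(List.of("red", "fox"),
                                    List.of("blue", "sky")), 100);
        int pos = -1;
        for (Token t : tokens) {
            pos += t.posInc();
            System.out.println(t.term() + " @ " + pos);
            // absolute positions: red @ 0, fox @ 1, blue @ 102, sky @ 103
        }
    }
}
```

A real implementation would subclass TokenStream and drive delegate streams through their reset/incrementToken/end lifecycle; this sketch only shows the position bookkeeping that makes concatenation behave like IndexWriter's own multi-value handling.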




[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201338#comment-14201338
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1637282 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1637282 ]

SOLR-6058: Enlarge the top nav bar toggle width to accomodate the new Lucene 
TLP item - otherwise the top nav bar overflows, wraps, and looks bad in a 
medium range of window widths before the menu is toggled

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element of the path

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201329#comment-14201329
 ] 

ASF subversion and git services commented on LUCENE-6051:
-

Commit 1637278 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1637278 ]

LUCENE-6051: don't use trappy Iterable in helper functions

> IOUtils methods taking Iterable&lt;? extends Path&gt; try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable&lt;? extends Path&gt; 
> files);
>  public static void deleteFilesIfExist(Iterable&lt;? extends Path&gt; files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable&lt;Path&gt;, since Path implements Iterable&lt;Path&gt;, and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 






Re: Time for a Lucene/Solr 5.0

2014-11-06 Thread Robert Muir
+1

Thank you for signing up to be RM, Anshum

On Thu, Nov 6, 2014 at 3:33 PM, Anshum Gupta  wrote:

> Hi,
>
> I think there are quite a few things in the change list for 5.0 already
> and I propose creating an RC in early December. I'll be the RM for the
> release, and I think that early December should give people reasonable time
> to think about outstanding API/concept issues, especially considering
> there's Lucene/Solr Revolution next week and the US Thanksgiving around the
> end of the month.
>
> --
>
> Anshum Gupta
> about.me/anshumgupta
>
>


[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201246#comment-14201246
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1637271 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1637271 ]

SOLR-6058: fix link to Tutorials section from the resources page sub-nav bar

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






Re: Multi-valued fields and TokenStream

2014-11-06 Thread Steve Rowe

> On Nov 6, 2014, at 3:13 PM, david.w.smi...@gmail.com wrote:
> 
> Are you suggesting that DefaultIndexingChain.PerField.invert(boolean 
> firstValue) would, prior to calling reset(), call 
> setPositionIncrement(Integer.MAX_VALUE), but only when ‘firstValue’ is false? 
>  H.  I guess that would work, although it seems a bit hacky and it’s 
> tying this to a specific attribute when ideally we notify the chain as a 
> whole what’s going on.  But it doesn’t require any new API, save for some 
> javadocs.  And it’s extremely unlikely there would be a 
> backwards-incompatible problem, so that’s good.  And I find this use is 
> related to positions so it’s not so bad to abuse the position increment for 
> this.  Nice idea Steve; this works for me.

Um, I meant something much simpler (but wrong): use the existing 
Analyzer.getPositionIncrementGap() to allow analysis components to infer 
whether a value was first.  I can see now from 
DefaultIndexingChain.PerField.invert(), though, that this info isn’t available 
to analysis components, but is only used to adjust the FieldInvertState’s 
position.  Sorry for the noise.

Steve



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201222#comment-14201222
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1637270 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1637270 ]

SOLR-6058: fix links from the 4-column row of feature box near the top of the 
home page by having the 'learn-more' anchor be on an empty section above the 
target section

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






Re: Time for a Lucene/Solr 5.0

2014-11-06 Thread Joel Bernstein
+1, works for me too.

Joel Bernstein
Search Engineer at Heliosearch

On Thu, Nov 6, 2014 at 4:06 PM, Jack Krupansky 
wrote:

>   Sounds like a good idea, the sooner the better. I mean, I don’t expect
> that many people will want to go into production with 5.0 – not that there
> is any specific technical reason they couldn’t other than simply wanting to
> give it time to bake – but for longer-lead apps and experimental
> development it would be great to actually have a “release” to work off of.
>
> That said, I think I would advise people to think of 5.0 as more of a
> “Beta” release until it gets a lot more usage.
>
> -- Jack Krupansky
>
>  *From:* Anshum Gupta 
> *Sent:* Thursday, November 6, 2014 3:33 PM
> *To:* dev@lucene.apache.org
> *Subject:* Time for a Lucene/Solr 5.0
>
>  Hi,
>
> I think there are quite a few things in the change list for 5.0 already
> and I propose creating an RC in early December. I'll be the RM for the
> release, and I think that early December should give people reasonable time
> to think about outstanding API/concept issues, especially considering
> there's Lucene/Solr Revolution next week and the US Thanksgiving around the
> end of the month.
>
> --
>
>   Anshum Gupta
>  about.me/anshumgupta
>
>


[jira] [Resolved] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6351.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0
 Assignee: Hoss Man

backported to 5x earlier today, and updated the ref guide...

https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-CombiningStatsComponentWithPivots
https://cwiki.apache.org/confluence/display/solr/The+Stats+Component#TheStatsComponent-TheStatsComponentandFaceting

...calling this done.

Big thanks to Vitaliy & Steve for their contributions on this.

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"






[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201087#comment-14201087
 ] 

ASF subversion and git services commented on SOLR-6351:
---

Commit 1637249 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1637249 ]

SOLR-6351: fix CHANGES, forgot to credit Steve Molloy (merge r1637248)

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"






[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201078#comment-14201078
 ] 

ASF subversion and git services commented on SOLR-6351:
---

Commit 1637248 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1637248 ]

SOLR-6351: fix CHANGES, forgot to credit Steve Molloy

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"






[jira] [Commented] (SOLR-6707) Recovery/election for invalid core results in rapid-fire re-attempts until /overseer/queue is clogged

2014-11-06 Thread James Hardwick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201049#comment-14201049
 ] 

James Hardwick commented on SOLR-6707:
--

My assumption was wrong about the feature. Here is the initial error that 
kicked off the sequence:

{noformat}
2014-11-03 11:13:37,734 [updateExecutor-1-thread-4] ERROR update.StreamingSolrServers  - error
org.apache.solr.common.SolrException: Internal Server Error



request: http://xxx.xxx.xxx.xxx:8081/app-search/appindex/update?update.chain=updateRequestProcessorChain&update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fxxx.xxx.xxx.xxx%3A8081%2Fapp-search%2Fappindex%2F&wt=javabin&version=2
	at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:240)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
2014-11-03 11:13:38,056 [http-bio-8081-exec-336] WARN  processor.DistributedUpdateProcessor  - Error sending update
org.apache.solr.common.SolrException: Internal Server Error



request: http://xxx.xxx.xxx.xxx:8081/app-search/appindex/update?update.chain=updateRequestProcessorChain&update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fxxx.xxx.xxx.xxx%3A8081%2Fapp-search%2Fappindex%2F&wt=javabin&version=2
	at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:240)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
2014-11-03 11:13:38,364 [http-bio-8081-exec-324] INFO  update.UpdateHandler  - start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
2014-11-03 11:13:38,364 [http-bio-8081-exec-324] INFO  update.UpdateHandler  - No uncommitted changes. Skipping IW.commit.
2014-11-03 11:13:38,365 [http-bio-8081-exec-324] INFO  search.SolrIndexSearcher  - Opening Searcher@60515a83[appindex] main
2014-11-03 11:13:38,372 [http-bio-8081-exec-324] INFO  update.UpdateHandler  - end_commit_flush
2014-11-03 11:13:38,373 [updateExecutor-1-thread-6] ERROR update.SolrCmdDistributor  - org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: No space left on device
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
	at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.request(ConcurrentUpdateSolrServer.java:292)
	at org.apache.solr.update.SolrCmdDistributor.doRequest(SolrCmdDistributor.java:296)
	at org.apache.solr.update.SolrCmdDistributor.access$000(SolrCmdDistributor.java:53)
	at org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:283)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)

2014-11-03 11:13:40,812 [http-bio-8081-exec-336] WARN  processor.DistributedUpdateProcessor  - Error sending update
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: No space left on device
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
	at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.request(ConcurrentUpdateSolrServer.java:292)
	at org.apache.solr.update.SolrCmdDistributor.doRequest(SolrCmdDistributor.java:296)
	at org.apache.solr.update.SolrCmdDistributor.access$000(SolrCmdDistributor.java:53)
	at org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:283)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecut

[jira] [Updated] (SOLR-6707) Recovery/election for invalid core results in rapid-fire re-attempts until /overseer/queue is clogged

2014-11-06 Thread James Hardwick (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Hardwick updated SOLR-6707:
-
Description: 
We experienced an issue the other day that brought a production solr server 
down, and this is what we found after investigating:

- Running solr instance with two separate cores, one of which is perpetually 
down because its configs are not yet completely updated for SolrCloud. This 
was thought to be harmless since it's not currently in use. 
- Solr experienced an "internal server error" supposedly because of "No space 
left on device" even though we appeared to have ~10GB free. 
- Solr immediately went into recovery, and subsequent leader election for each 
shard of each core. 
- Our primary core recovered immediately. Our additional core which was never 
active in the first place, attempted to recover but of course couldn't due to 
the improper configs. 
- Solr then began rapid-fire reattempting recovery of said node, trying maybe 
20-30 times per second.
- This in turn bombarded ZooKeeper's /overseer/queue into oblivion
- At some point /overseer/queue becomes so backed up that normal cluster 
coordination can no longer play out, and Solr topples over. 

I know this is a bit of an unusual circumstance due to us keeping the dead core 
around, and our quick solution has been to remove said core. However I can see 
other potential scenarios that might cause the same issue to arise. 

  was:
We experienced an issue the other day that brought a production solr server 
down, and this is what we found after investigating:

- Running solr instance with two separate cores, one of which is perpetually 
down because it's configs are not yet completely updated for Solr-cloud. This 
was thought to be harmless since it's not currently in use. 
- Solr experienced an "internal server error" I believe due in part to a fairly 
new feature we are using, which seemingly caused all cores to go down. 
- Solr immediately went into recovery, and subsequent leader election for each 
shard of each core. 
- Our primary core recovered immediately. Our additional core which was never 
active in the first place, attempted to recover but of course couldn't due to 
the improper configs. 
- Solr then began rapid-fire reattempting recovery of said node, trying maybe 
20-30 times per second.
- This in turn bombarded zookeepers /overseer/queue into oblivion
- At some point /overseer/queue becomes so backed up that normal cluster 
coordination can no longer play out, and Solr topples over. 

I know this is a bit of an unusual circumstance due to us keeping the dead core 
around, and our quick solution has been to remove said core. However I can see 
other potential scenarios that might cause the same issue to arise. 


> Recovery/election for invalid core results in rapid-fire re-attempts until 
> /overseer/queue is clogged
> -
>
> Key: SOLR-6707
> URL: https://issues.apache.org/jira/browse/SOLR-6707
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: James Hardwick
>
> We experienced an issue the other day that brought a production solr server 
> down, and this is what we found after investigating:
> - Running solr instance with two separate cores, one of which is perpetually 
> down because its configs are not yet completely updated for Solr-cloud. This 
> was thought to be harmless since it's not currently in use. 
> - Solr experienced an "internal server error" supposedly because of "No space 
> left on device" even though we appeared to have ~10GB free. 
> - Solr immediately went into recovery, and subsequent leader election for 
> each shard of each core. 
> - Our primary core recovered immediately. Our additional core which was never 
> active in the first place, attempted to recover but of course couldn't due to 
> the improper configs. 
> - Solr then began rapid-fire reattempting recovery of said node, trying maybe 
> 20-30 times per second.
> - This in turn bombarded ZooKeeper's /overseer/queue into oblivion
> - At some point /overseer/queue becomes so backed up that normal cluster 
> coordination can no longer play out, and Solr topples over. 
> I know this is a bit of an unusual circumstance due to us keeping the dead 
> core around, and our quick solution has been to remove said core. However I 
> can see other potential scenarios that might cause the same issue to arise. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Fuzzy queries in edismax

2014-11-06 Thread Walter Underwood
Can someone look at the patch for 
https://issues.apache.org/jira/browse/SOLR-629?

This is a really handy change to edismax to support fuzzy queries.  It is a 
small spec change, but touches a lot of the parser. Basically, it means you can 
do this:

   title~^4 author~

And get fuzzy matches on title and author.
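For context, the `~` operator asks for fuzzy matching within a bounded edit distance. A toy Python sketch of the classic Levenshtein computation that fuzzy matching is based on (Lucene itself uses Levenshtein automata rather than this DP, so this is purely illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# A fuzzy term like title~ would match terms whose edit distance from the
# query term is within the configured bound (default max edits is 2 in
# recent Lucene versions).
print(levenshtein("lucene", "lucine"))  # 1
```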

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/




Re: Time for a Lucene/Solr 5.0

2014-11-06 Thread Jack Krupansky
Sounds like a good idea, the sooner the better. I mean, I don’t expect that 
many people will want to go into production with 5.0 – not that there is any 
specific technical reason they couldn’t other than simply wanting to give it 
time to bake – but for longer-lead apps and experimental development it would 
be great to actually have a “release” to work off of.

That said, I think I would advise people to think of 5.0 as more of a “Beta” 
release until it gets a lot more usage.

-- Jack Krupansky

From: Anshum Gupta 
Sent: Thursday, November 6, 2014 3:33 PM
To: dev@lucene.apache.org 
Subject: Time for a Lucene/Solr 5.0

Hi, 

I think there are quite a few things in the change list for 5.0 already, and I 
propose creating an RC in early December. I'll be the RM for the release, 
and I think that early December should give people reasonable time to think 
about outstanding API/concept issues, especially considering there's Lucene/Solr 
Revolution next week and US Thanksgiving around the end of the month.

-- 

Anshum Gupta 
about.me/anshumgupta 

 


Re: Time for a Lucene/Solr 5.0

2014-11-06 Thread Yonik Seeley
+1, thanks for volunteering!

-Yonik
http://heliosearch.org - native code faceting, facet functions, sub-facets,
off-heap data

On Thu, Nov 6, 2014 at 3:33 PM, Anshum Gupta  wrote:

> Hi,
>
> I think there are quite a few things in the change list for 5.0 already,
> and I propose creating an RC in early December. I'll be the RM for the
> release, and I think that early December should give people reasonable time to
> think about outstanding API/concept issues, especially considering there's
> Lucene/Solr Revolution next week and US Thanksgiving around the end of
> the month.
>
> --
>
> Anshum Gupta
> [image: http://]about.me/anshumgupta
>
>


Re: Time for a Lucene/Solr 5.0

2014-11-06 Thread Erick Erickson
Works for me. I need to get one JIRA fixed before that, but this should be
plenty of time.

On Thu, Nov 6, 2014 at 12:33 PM, Anshum Gupta 
wrote:

> Hi,
>
> I think there are quite a few things in the change list for 5.0 already,
> and I propose creating an RC in early December. I'll be the RM for the
> release, and I think that early December should give people reasonable time to
> think about outstanding API/concept issues, especially considering there's
> Lucene/Solr Revolution next week and US Thanksgiving around the end of
> the month.
>
> --
>
> Anshum Gupta
> [image: http://]about.me/anshumgupta
>
>


[jira] [Updated] (SOLR-6714) Collection RELOAD returns 200 even when some shards fail to reload -- other APIs with similar problems?

2014-11-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6714:
-
Summary: Collection RELOAD returns 200 even when some shards fail to reload 
-- other APIs with similar problems?  (was: Collection RELOAD returns 200 even 
when osme hsards fail to reload -- other APIs with similar problems?)

> Collection RELOAD returns 200 even when some shards fail to reload -- other 
> APIs with similar problems?
> ---
>
> Key: SOLR-6714
> URL: https://issues.apache.org/jira/browse/SOLR-6714
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> Using 4.10.2, if you startup a simple 2 node cloud with...
> {noformat}
> ./bin/solr start -e cloud -noprompt
> {noformat}
> And then try to force a situation where a replica is hosed like this...
> {noformat}
> rm -rf node1/solr/gettingstarted_shard1_replica1/*
> chmod a-rw node1/solr/gettingstarted_shard1_replica1
> {noformat}
> The result of a Collection RELOAD command is still a success...
> {noformat}
> curl -sS -D - 
> 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=gettingstarted'
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
>   <lst name="responseHeader"><int name="status">0</int><int name="QTime">1866</int></lst>
>   <lst name="failure"><str name="127.0.1.1:8983_solr">org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error handling 'reload' action</str></lst>
>   <lst name="success">
>     <lst name="127.0.1.1:8983_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1631</int></lst></lst>
>     <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1710</int></lst></lst>
>     <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1795</int></lst></lst>
>   </lst>
> </response>
> 
> {noformat}
> The HTTP status code of collection level APIs should not be 200 if any of the 
> underlying requests that it depends on result in 4xx or 5xx errors.







Re: Multi-valued fields and TokenStream

2014-11-06 Thread david.w.smi...@gmail.com
On Thu, Nov 6, 2014 at 3:19 PM, Robert Muir  wrote:

> Do the concatenation yourself with your own TokenStream. You can index
> a field with a tokenstream for expert cases (the individual stored
> values can be added separately)
>

Yes, but that’s quite awkward and a fair amount of surrounding code when,
in the end, it could be so much simpler if somehow the TokenStream could be
notified.  I’d feel a little better about it if Lucene included the
tokenStream concatenating code (I’ve done a prototype for this, I could
work on it more and contribute) and if the Solr layer had a nice way of
presenting all the values to the Solr FieldType at once instead of
separately — SOLR-4329.


> No need to make the tokenstream API more complicated: its already very
> complicated.
>

Ehh, that’s arguable.  Steve’s suggestion amounts to one line of production
code (javadoc & tests are separate).  If that’s too much, then adding a
boolean argument to reset() would feel cleaner and be 0 lines of new code, but
would be backwards-incompatible.  Shrug.

Another idea is if Field.tokenStream(Analyzer analyzer, TokenStream reuse)
had another boolean to indicate first value or not.  I think I like the
other ideas better though.


>
> On Thu, Nov 6, 2014 at 3:13 PM, david.w.smi...@gmail.com
>  wrote:
> > Are you suggesting that DefaultIndexingChain.PerField.invert(boolean
> > firstValue) would, prior to calling reset(), call
> > setPositionIncrement(Integer.MAX_VALUE), but only when ‘firstValue’ is
> > false?  Hmmm.  I guess that would work, although it seems a bit hacky
> and
> > it’s tying this to a specific attribute when ideally we notify the chain
> as
> > a whole what’s going on.  But it doesn’t require any new API, save for
> some
> > javadocs.  And it’s extremely unlikely there would be a
> > backwards-incompatible problem, so that’s good.  And I find this use is
> > related to positions so it’s not so bad to abuse the position increment
> for
> > this.  Nice idea Steve; this works for me.
> >
> > Does anyone else have an opinion before I create an issue?
> >
> > ~ David Smiley
> > Freelance Apache Lucene/Solr Search Consultant/Developer
> > http://www.linkedin.com/in/davidwsmiley
> >
> > On Thu, Nov 6, 2014 at 2:13 PM, Steve Rowe  wrote:
> >>
> >> Maybe the position increment gap would be useful?  If set to a value
> >> larger than likely max position for any individual value, it could be
> used
> >> to infer (non-)first-value-ness.
> >>
> >> > On Nov 5, 2014, at 1:03 PM, david.w.smi...@gmail.com wrote:
> >> >
> >> > Several times now, I’ve had to come up with work-arounds for a
> >> > TokenStream not knowing it’s processing the first value or a
> >> > subsequent-value of a multi-valued field.  Two of these times, the
> use-case
> >> > was ensuring the first position of each value started at a multiple
> of 1000
> >> > (or some other configurable value), and the third was encoding
> sentence
> >> > paragraph counters (similar to a do-it-yourself position increment).
> >> >
> >> > The work-arounds are awkward and hacky.  For example if you’re in
> >> > control of your Tokenizer, you can prefix subsequent values with a
> special
> >> > flag, and then do the right thing in reset().  But then the
> highlighter or
> >> > value retrieval in general is impacted.  It’s also possible to create
> the
> >> > fields with the constructor that accepts a TokenStream that you’ve
> told it’s
> >> > the first or subsequent value but it’s awkward going that route, and
> >> > sometimes (e.g. Solr) it’s hard to know all the values you have
> up-front to
> >> > even do that.
> >> >
> >> > It would be nice if TokenStream.reset() took a boolean ‘first’
> argument.
> >> > Such a change would obviously be backwards incompatible.  Simply
> overloading
> >> > the method to call the no-arg version is problematic because
> TokenStreams
> >> > are a chain, and it would likely result in the chain getting
> doubly-reset.
> >> >
> >> > Any ideas?
> >> >
> >> > ~ David Smiley
> >> > Freelance Apache Lucene/Solr Search Consultant/Developer
> >> > http://www.linkedin.com/in/davidwsmiley
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-6714) Collection RELOAD returns 200 even when osme hsards fail to reload -- other APIs with similar problems?

2014-11-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200877#comment-14200877
 ] 

Hoss Man commented on SOLR-6714:


This behavior seems really absurd to me -- almost as if it were intentional.  Is 
there some reason I can't think of why it was implemented like this?

Assuming folks agree this should be fixed -- we should audit & sanity check 
that no other Collection APIs have a similar behavior in failure cases like 
this.

> Collection RELOAD returns 200 even when osme hsards fail to reload -- other 
> APIs with similar problems?
> ---
>
> Key: SOLR-6714
> URL: https://issues.apache.org/jira/browse/SOLR-6714
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> Using 4.10.2, if you startup a simple 2 node cloud with...
> {noformat}
> ./bin/solr start -e cloud -noprompt
> {noformat}
> And then try to force a situation where a replica is hosed like this...
> {noformat}
> rm -rf node1/solr/gettingstarted_shard1_replica1/*
> chmod a-rw node1/solr/gettingstarted_shard1_replica1
> {noformat}
> The result of a Collection RELOAD command is still a success...
> {noformat}
> curl -sS -D - 
> 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=gettingstarted'
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
>   <lst name="responseHeader"><int name="status">0</int><int name="QTime">1866</int></lst>
>   <lst name="failure"><str name="127.0.1.1:8983_solr">org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error handling 'reload' action</str></lst>
>   <lst name="success">
>     <lst name="127.0.1.1:8983_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1631</int></lst></lst>
>     <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1710</int></lst></lst>
>     <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1795</int></lst></lst>
>   </lst>
> </response>
> 
> {noformat}
> The HTTP status code of collection level APIs should not be 200 if any of the 
> underlying requests that it depends on result in 4xx or 5xx errors.







[jira] [Created] (SOLR-6714) Collection RELOAD returns 200 even when osme hsards fail to reload -- other APIs with similar problems?

2014-11-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6714:
--

 Summary: Collection RELOAD returns 200 even when osme hsards fail 
to reload -- other APIs with similar problems?
 Key: SOLR-6714
 URL: https://issues.apache.org/jira/browse/SOLR-6714
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


Using 4.10.2, if you startup a simple 2 node cloud with...

{noformat}
./bin/solr start -e cloud -noprompt
{noformat}

And then try to force a situation where a replica is hosed like this...

{noformat}
rm -rf node1/solr/gettingstarted_shard1_replica1/*
chmod a-rw node1/solr/gettingstarted_shard1_replica1
{noformat}

The result of a Collection RELOAD command is still a success...

{noformat}
curl -sS -D - 
'http://localhost:8983/solr/admin/collections?action=RELOAD&name=gettingstarted'
HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Transfer-Encoding: chunked



<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader"><int name="status">0</int><int name="QTime">1866</int></lst>
  <lst name="failure"><str name="127.0.1.1:8983_solr">org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error handling 'reload' action</str></lst>
  <lst name="success">
    <lst name="127.0.1.1:8983_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1631</int></lst></lst>
    <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1710</int></lst></lst>
    <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1795</int></lst></lst>
  </lst>
</response>

{noformat}

The HTTP status code of collection level APIs should not be 200 if any of the 
underlying requests that it depends on result in 4xx or 5xx errors.
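A sketch of the aggregation rule the last paragraph asks for, in Python with hypothetical names (the real fix would live in the Collections API response handling):

```python
def aggregate_status(node_statuses):
    """Collapse per-shard HTTP statuses into one collection-level status.

    Return 200 only if every underlying request succeeded; otherwise
    propagate the worst class of failure (5xx beats 4xx).
    """
    worst = 200
    for status in node_statuses:
        if status >= 500:
            worst = max(worst, 500)
        elif status >= 400:
            worst = max(worst, 400)
    return worst

# The RELOAD above: one core failed with a server-side error, the rest
# succeeded -- the collection-level response should not be 200.
print(aggregate_status([500, 200, 200, 200]))  # 500
```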







Time for a Lucene/Solr 5.0

2014-11-06 Thread Anshum Gupta
Hi,

I think there are quite a few things in the change list for 5.0 already, and
I propose creating an RC in early December. I'll be the RM for the
release, and I think that early December should give people reasonable time to
think about outstanding API/concept issues, especially considering there's
Lucene/Solr Revolution next week and US Thanksgiving around the end of
the month.

-- 

Anshum Gupta
[image: http://]about.me/anshumgupta


Re: Multi-valued fields and TokenStream

2014-11-06 Thread Robert Muir
Do the concatenation yourself with your own TokenStream. You can index
a field with a tokenstream for expert cases (the individual stored
values can be added separately)

No need to make the tokenstream API more complicated: its already very
complicated.
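The concatenation approach can be sketched with plain Python generators standing in for Lucene TokenStreams (the tuple representation is illustrative, not the real attribute-based API):

```python
def concat_token_streams(value_streams, gap=1000):
    """Concatenate per-value token streams into one stream.

    Each token is a (term, position_increment) pair. The first token of every
    value after the first gets a large extra position increment, so positions
    of different values never interleave -- the same trick that
    positionIncrementGap plays at the indexing-chain level.
    """
    first_value = True
    for stream in value_streams:
        first_token = True
        for term, pos_inc in stream:
            if first_token and not first_value:
                pos_inc += gap
            yield term, pos_inc
            first_token = False
        first_value = False

values = [[("red", 1), ("shoe", 1)], [("blue", 1), ("hat", 1)]]
print(list(concat_token_streams(values, gap=1000)))
```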

On Thu, Nov 6, 2014 at 3:13 PM, david.w.smi...@gmail.com
 wrote:
> Are you suggesting that DefaultIndexingChain.PerField.invert(boolean
> firstValue) would, prior to calling reset(), call
> setPositionIncrement(Integer.MAX_VALUE), but only when ‘firstValue’ is
> false?  Hmmm.  I guess that would work, although it seems a bit hacky and
> it’s tying this to a specific attribute when ideally we notify the chain as
> a whole what’s going on.  But it doesn’t require any new API, save for some
> javadocs.  And it’s extremely unlikely there would be a
> backwards-incompatible problem, so that’s good.  And I find this use is
> related to positions so it’s not so bad to abuse the position increment for
> this.  Nice idea Steve; this works for me.
>
> Does anyone else have an opinion before I create an issue?
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
> On Thu, Nov 6, 2014 at 2:13 PM, Steve Rowe  wrote:
>>
>> Maybe the position increment gap would be useful?  If set to a value
>> larger than likely max position for any individual value, it could be used
>> to infer (non-)first-value-ness.
>>
>> > On Nov 5, 2014, at 1:03 PM, david.w.smi...@gmail.com wrote:
>> >
>> > Several times now, I’ve had to come up with work-arounds for a
>> > TokenStream not knowing it’s processing the first value or a
>> > subsequent-value of a multi-valued field.  Two of these times, the use-case
>> > was ensuring the first position of each value started at a multiple of 1000
>> > (or some other configurable value), and the third was encoding sentence
>> > paragraph counters (similar to a do-it-yourself position increment).
>> >
>> > The work-arounds are awkward and hacky.  For example if you’re in
>> > control of your Tokenizer, you can prefix subsequent values with a special
>> > flag, and then do the right thing in reset().  But then the highlighter or
>> > value retrieval in general is impacted.  It’s also possible to create the
>> > fields with the constructor that accepts a TokenStream that you’ve told 
>> > it’s
>> > the first or subsequent value but it’s awkward going that route, and
>> > sometimes (e.g. Solr) it’s hard to know all the values you have up-front to
>> > even do that.
>> >
>> > It would be nice if TokenStream.reset() took a boolean ‘first’ argument.
>> > Such a change would obviously be backwards incompatible.  Simply 
>> > overloading
>> > the method to call the no-arg version is problematic because TokenStreams
>> > are a chain, and it would likely result in the chain getting doubly-reset.
>> >
>> > Any ideas?
>> >
>> > ~ David Smiley
>> > Freelance Apache Lucene/Solr Search Consultant/Developer
>> > http://www.linkedin.com/in/davidwsmiley
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>




[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200831#comment-14200831
 ] 

ASF subversion and git services commented on SOLR-6351:
---

Commit 1637204 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1637204 ]

SOLR-6351: Stats can now be nested under pivot values by adding a 'stats' local 
param (merge r1636772)

> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"
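The per-leaf stats idea can be modeled in a few lines of Python (a toy data model; the real implementation works on the distributed PivotFacet tree):

```python
from collections import defaultdict

def pivot_stats(docs, pivot_fields, stat_field):
    """Group docs by their pivot path and compute min/max/mean of stat_field
    per leaf -- a toy model of stats hanging off pivot facet leaves."""
    leaves = defaultdict(list)
    for doc in docs:
        path = tuple(doc[f] for f in pivot_fields)
        leaves[path].append(doc[stat_field])
    return {path: {"min": min(vals), "max": max(vals),
                   "mean": sum(vals) / len(vals)}
            for path, vals in leaves.items()}

docs = [
    {"category": "shoes", "manufacturer": "acme",   "price": 10.0},
    {"category": "shoes", "manufacturer": "acme",   "price": 30.0},
    {"category": "hats",  "manufacturer": "zenith", "price": 5.0},
]
print(pivot_stats(docs, ["category", "manufacturer"], "price"))
```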







Re: Multi-valued fields and TokenStream

2014-11-06 Thread david.w.smi...@gmail.com
Are you suggesting that DefaultIndexingChain.PerField.invert(boolean
firstValue) would, prior to calling reset(), call
setPositionIncrement(Integer.MAX_VALUE), but only when ‘firstValue’ is
false?  Hmmm.  I guess that would work, although it seems a bit hacky and
it’s tying this to a specific attribute when ideally we notify the chain as
a whole what’s going on.  But it doesn’t require any new API, save for some
javadocs.  And it’s extremely unlikely there would be a
backwards-incompatible problem, so that’s good.  And I find this use is
related to positions so it’s not so bad to abuse the position increment for
this.  Nice idea Steve; this works for me.

Does anyone else have an opinion before I create an issue?

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Thu, Nov 6, 2014 at 2:13 PM, Steve Rowe  wrote:

> Maybe the position increment gap would be useful?  If set to a value
> larger than likely max position for any individual value, it could be used
> to infer (non-)first-value-ness.
>
> > On Nov 5, 2014, at 1:03 PM, david.w.smi...@gmail.com wrote:
> >
> > Several times now, I’ve had to come up with work-arounds for a
> TokenStream not knowing it’s processing the first value or a
> subsequent-value of a multi-valued field.  Two of these times, the use-case
> was ensuring the first position of each value started at a multiple of 1000
> (or some other configurable value), and the third was encoding sentence
> paragraph counters (similar to a do-it-yourself position increment).
> >
> > The work-arounds are awkward and hacky.  For example if you’re in
> control of your Tokenizer, you can prefix subsequent values with a special
> flag, and then do the right thing in reset().  But then the highlighter or
> value retrieval in general is impacted.  It’s also possible to create the
> fields with the constructor that accepts a TokenStream that you’ve told
> it’s the first or subsequent value but it’s awkward going that route, and
> sometimes (e.g. Solr) it’s hard to know all the values you have up-front to
> even do that.
> >
> > It would be nice if TokenStream.reset() took a boolean ‘first’
> argument.  Such a change would obviously be backwards incompatible.  Simply
> overloading the method to call the no-arg version is problematic because
> TokenStreams are a chain, and it would likely result in the chain getting
> doubly-reset.
> >
> > Any ideas?
> >
> > ~ David Smiley
> > Freelance Apache Lucene/Solr Search Consultant/Developer
> > http://www.linkedin.com/in/davidwsmiley
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11415 - Failure!

2014-11-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11415/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
(asserts: false)

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
startOffset 916 expected:<6784> but was:<6783>

Stack Trace:
java.lang.AssertionError: startOffset 916 expected:<6784> but was:<6783>
at 
__randomizedtesting.SeedInfo.seed([53AD60AE02CAFEF6:DB246010A1CEA9C3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:182)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:295)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:299)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:859)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:614)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:512)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:436)
at 
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apach

Re: Multi-valued fields and TokenStream

2014-11-06 Thread Steve Rowe
Maybe the position increment gap would be useful?  If set to a value larger 
than likely max position for any individual value, it could be used to infer 
(non-)first-value-ness.
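A toy Python model of that inference, treating any position increment at or above the gap as a value boundary (the tuple token representation is illustrative, not Lucene's attribute API):

```python
def split_values_by_gap(tokens, gap_threshold=1000):
    """Split a (term, position_increment) token sequence back into per-value
    groups by treating any increment >= gap_threshold as a value boundary --
    the inference a filter could make from the position increment gap."""
    values, current = [], []
    for term, pos_inc in tokens:
        if pos_inc >= gap_threshold and current:
            values.append(current)
            current = []
        current.append(term)
    if current:
        values.append(current)
    return values

tokens = [("red", 1), ("shoe", 1), ("blue", 1001), ("hat", 1)]
print(split_values_by_gap(tokens))  # [['red', 'shoe'], ['blue', 'hat']]
```

This only works if the gap is larger than any position increment that can occur within a single value, which is exactly the caveat above.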

> On Nov 5, 2014, at 1:03 PM, david.w.smi...@gmail.com wrote:
> 
> Several times now, I’ve had to come up with work-arounds for a TokenStream 
> not knowing it’s processing the first value or a subsequent-value of a 
> multi-valued field.  Two of these times, the use-case was ensuring the first 
> position of each value started at a multiple of 1000 (or some other 
> configurable value), and the third was encoding sentence paragraph counters 
> (similar to a do-it-yourself position increment).  
> 
> The work-arounds are awkward and hacky.  For example if you’re in control of 
> your Tokenizer, you can prefix subsequent values with a special flag, and 
> then do the right thing in reset().  But then the highlighter or value 
> retrieval in general is impacted.  It’s also possible to create the fields 
> with the constructor that accepts a TokenStream that you’ve told it’s the 
> first or subsequent value but it’s awkward going that route, and sometimes 
> (e.g. Solr) it’s hard to know all the values you have up-front to even do 
> that.
> 
> It would be nice if TokenStream.reset() took a boolean ‘first’ argument.  
> Such a change would obviously be backwards incompatible.  Simply overloading 
> the method to call the no-arg version is problematic because TokenStreams are 
> a chain, and it would likely result in the chain getting doubly-reset.
> 
> Any ideas?
> 
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley





[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 677 - Still Failing

2014-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/677/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([FEA2EACA01B8699E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([FEA2EACA01B8699E]:0)


REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([FEA2EACA01B8699E:7F4464D276E709A2]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:137)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:132)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:834)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.createCollection(CollectionsAPIDistributedZkTest.java:1332)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.addReplicaTest(CollectionsAPIDistributedZkTest.java:1259)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:210)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
c

[jira] [Commented] (SOLR-6654) add a standard way to listen to config changes in cloud mode

2014-11-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200667#comment-14200667
 ] 

Steve Rowe commented on SOLR-6654:
--

bq. I also wish you would return to using commit messages beyond just the issue 
number as the rest of the project does.

+1

> add a standard way to listen to config changes in cloud mode
> 
>
> Key: SOLR-6654
> URL: https://issues.apache.org/jira/browse/SOLR-6654
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Resolved] (SOLR-6713) CLONE - Highlighting not working in solr cloud grouping query when using group.query=xxx

2014-11-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-6713.
--
Resolution: Duplicate

Duplicate of SOLR-6712 AFAIK, feel free to re-open if cloning it was 
intentional and there's something else to be done here.

> CLONE - Highlighting not working in solr cloud grouping query when using 
> group.query=xxx
> 
>
> Key: SOLR-6713
> URL: https://issues.apache.org/jira/browse/SOLR-6713
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.10.2
>Reporter: melissa h
>
> The highlighting is throwing an exception in Solr cloud when you are using 
> group.query. Example:
> /select?group=true&group.query=livesuggesttype_s:game_movie&hl=true&hl.q=test&hl.fl=content
> The following exception will be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:195)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:760)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:745)
> I think this is also mentioned in the following stackoverflow post:
> http://stackoverflow.com/questions/25548063/solr-search-with-multicore-grouping-highlighting-null-pointer







[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200543#comment-14200543
 ] 

Michael McCandless commented on LUCENE-6051:


+1, what an evil bug.  Path should not implement Iterable.

> IOUtils methods taking Iterable<? extends Path> try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable<? extends Path> files);
>  public static void deleteFilesIfExist(Iterable<? extends Path> files) throws 
> IOException
> {code}
> if you call these with a single Path instance, it is interpreted as an 
> Iterable<? extends Path>, since Path implements Iterable<Path>, and in turn 
> every element of the path gets deleted. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 
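
The trap is easy to reproduce with nothing but java.nio.file: Path implements Iterable<Path> over its name elements, so an Iterable-taking overload happily accepts a single Path and walks its components. A minimal demonstration (the path names are made up for illustration):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathIterableTrap {
    public static void main(String[] args) {
        Path file = Paths.get("index", "segments", "_0.cfs");
        // Path implements Iterable<Path>, iterating over its name elements --
        // exactly what a deleteFiles(Iterable<? extends Path>) overload would
        // receive if accidentally handed a single Path.
        for (Path element : file) {
            System.out.println(element);
        }
        // prints: index, segments, _0.cfs -- three "files" to delete
    }
}
```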







[jira] [Commented] (SOLR-6708) Smoke tester couldn't communicate with Solr started using 'bin/solr start'

2014-11-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200532#comment-14200532
 ] 

Steve Rowe commented on SOLR-6708:
--

bq. Maybe there are orphaned Solr server(s) running on the lucene Jenkins 
slave? I'll take a look.

Yup, from the 5.x run yesterday ({{ps}} output):

{noformat}
77398 ??  IJ1:07.76 /home/jenkins/tools/java/latest1.7/bin/java -server 
-Xss256k -Xms512m -Xmx512m -XX:MaxPermSize=256m -XX:PermSize=256m 
-XX:-UseSuperWord -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark 
-XX:PretenureSizeThreshold=64m -XX:CMSFullGCsBeforeCompaction=1 
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
-XX:CMSTriggerPermRatio=80 -XX:CMSMaxAbortablePrecleanTime=6000 
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+AggressiveOpts 
-XX:+UseLargePages -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails 
-XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
-XX:+PrintGCApplicationStoppedTime 
-Xloggc:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/logs/solr_gc.log
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=1083 
-Dcom.sun.management.jmxremote.rmi.port=1083 -DSTOP.PORT=7983 
-DSTOP.KEY=solrrocks -Djetty.port=8983 
-Dsolr.solr.home=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/solr
 
-Dsolr.install.dir=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
 -Duser.timezone=UTC -Djava.net.preferIPv4Stack=true 
-XX:OnOutOfMemoryError=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/bin/oom_solr.sh
 8983 -jar start.jar
{noformat}

I'll try some manual {{curl}}-ing against it before I shut it down.
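
For what it's worth, a leftover instance like the one above can be spotted mechanically from {{ps}} output by its {{-Djetty.port}} flag. A small sketch (the helper name is hypothetical; it assumes the start script passes the flag exactly as in the listing above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OrphanSolrScan {
    // Hypothetical helper: extract the -Djetty.port value from one line of
    // ps-style output, to flag leftover Solr JVMs. Returns null if absent.
    static String jettyPort(String psLine) {
        Matcher m = Pattern.compile("-Djetty\\.port=(\\d+)").matcher(psLine);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "77398 java -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks "
                    + "-Djetty.port=8983 -jar start.jar";
        System.out.println(jettyPort(line));  // 8983
    }
}
```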

> Smoke tester couldn't communicate with Solr started using 'bin/solr start'
> --
>
> Key: SOLR-6708
> URL: https://issues.apache.org/jira/browse/SOLR-6708
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Steve Rowe
> Attachments: solr-example.log
>
>
> The nightly-smoke target failed on ASF Jenkins 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/208/]: 
> {noformat}
>[smoker]   unpack solr-5.0.0.tgz...
>[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
>[smoker] unpack lucene-5.0.0.tgz...
>[smoker]   **WARNING**: skipping check of 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
>  it has javax.* classes
>[smoker]   **WARNING**: skipping check of 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
>  it has javax.* classes
>[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
> java.* classes...
>[smoker] unpack lucene-5.0.0.tgz...
>[smoker] copying unpacked distribution for Java 7 ...
>[smoker] test solr example w/ Java 7...
>[smoker]   start Solr instance 
> (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
>[smoker]   startup done
>[smoker] Failed to determine the port of a local Solr instance, cannot 
> create core!
>[smoker]   test utf8...
>[smoker] 
>[smoker] command "sh ./exampledocs/test_utf8.sh 
> http://localhost:8983/solr/techproducts"; failed:
>[smoker] ERROR: Could not curl to Solr - is curl installed? Is Solr not 
> running?
>[smoker] 
>[smoker] 
>[smoker]   stop server using: bin/solr stop -p 8983
>[smoker] No process found for Solr node running on port 8983
>[smoker] ***WARNING***: Solr instance didn't respond to SIGINT; using 
> SIGKILL now...
>[smoker] ***WARNING***: Solr instance didn't respond to SIGKILL; 
> ignoring...
>[smoker] Traceback (most recent call last):
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1526, in 
>[smoker] main()
>[smoker]   File 
>

[jira] [Comment Edited] (SOLR-6708) Smoke tester couldn't communicate with Solr started using 'bin/solr start'

2014-11-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200532#comment-14200532
 ] 

Steve Rowe edited comment on SOLR-6708 at 11/6/14 5:41 PM:
---

bq. Maybe there are orphaned Solr server(s) running on the lucene Jenkins 
slave? I'll take a look.

Yup, from the 5.x run yesterday ({{ps}} output):

{noformat}
77398 ??  IJ1:07.76 /home/jenkins/tools/java/latest1.7/bin/java -server 
-Xss256k -Xms512m -Xmx512m -XX:MaxPermSize=256m -XX:PermSize=256m 
-XX:-UseSuperWord -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark 
-XX:PretenureSizeThreshold=64m -XX:CMSFullGCsBeforeCompaction=1 
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
-XX:CMSTriggerPermRatio=80 -XX:CMSMaxAbortablePrecleanTime=6000 
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+AggressiveOpts 
-XX:+UseLargePages -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails 
-XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
-XX:+PrintGCApplicationStoppedTime 
-Xloggc:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/logs/solr_gc.log
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=1083 
-Dcom.sun.management.jmxremote.rmi.port=1083 -DSTOP.PORT=7983 
-DSTOP.KEY=solrrocks -Djetty.port=8983 
-Dsolr.solr.home=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/solr
 
-Dsolr.install.dir=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
 -Duser.timezone=UTC -Djava.net.preferIPv4Stack=true 
-XX:OnOutOfMemoryError=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/bin/oom_solr.sh
 8983 -jar start.jar
{noformat}

I'll try some manual {{curl}}-ing against it before I shut it down.


was (Author: steve_rowe):
bq. Maybe there are orphaned Solr server(s) running on the lucene Jenkins 
slave? I'll take a look.

Yup, from the 5.x run yesterday ({{ps}} output):

{noformat}
77398 ??  IJ1:07.76 /home/jenkins/tools/java/latest1.7/bin/java -server 
-Xss256k -Xms512m -Xmx512m -XX:MaxPermSize=256m -XX:PermSize=256m 
-XX:-UseSuperWord -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark 
-XX:PretenureSizeThreshold=64m -XX:CMSFullGCsBeforeCompaction=1 
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
-XX:CMSTriggerPermRatio=80 -XX:CMSMaxAbortablePrecleanTime=6000 
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+AggressiveOpts 
-XX:+UseLargePages -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails 
-XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
-XX:+PrintGCApplicationStoppedTime 
-Xloggc:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/logs/solr_gc.log
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=1083 
-Dcom.sun.management.jmxremote.rmi.port=1083 -DSTOP.PORT=7983 
-DSTOP.KEY=solrrocks -Djetty.port=8983 
-Dsolr.solr.home=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/solr
 
-Dsolr.install.dir=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
 -Duser.timezone=UTC -Djava.net.preferIPv4Stack=true 
-XX:OnOutOfMemoryError=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/bin/oom_solr.sh
 8983 -jar start.jar
{quote}

I'll try some manual {{curl}}-ing against it before I shut it down.

> Smoke tester couldn't communicate with Solr started using 'bin/solr start'
> --
>
> Key: SOLR-6708
> URL: https://issues.apache.org/jira/browse/SOLR-6708
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Steve Rowe
> Attachments: solr-example.log
>
>
> The nightly-smoke target failed on ASF 

[jira] [Moved] (LUCENE-6052) "ant regenerate" causes compilation errors

2014-11-06 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob moved ACCUMULO-3307 to LUCENE-6052:
-

  Component/s: (was: build)
   general/build
 Assignee: (was: Christopher Tubbs)
Lucene Fields: New
 Workflow: classic default workflow  (was: patch-available, re-open 
possible)
  Key: LUCENE-6052  (was: ACCUMULO-3307)
  Project: Lucene - Core  (was: Accumulo)

> "ant regenerate" causes compilation errors
> --
>
> Key: LUCENE-6052
> URL: https://issues.apache.org/jira/browse/LUCENE-6052
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Mike Drob
>
> The following is the output of {{ant -diagnostics}} followed by {{ant 
> regenerate}} on a clean checkout of trunk.
> {noformat}
> --- Ant diagnostics report ---
> Apache Ant(TM) version 1.9.3 compiled on April 8 2014
> ---
>  Implementation Version
> ---
> core tasks : 1.9.3 in file:/usr/share/ant/lib/ant.jar
> ---
>  ANT PROPERTIES
> ---
> ant.version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014
> ant.java.version: 1.7
> Is this the Apache Harmony VM? no
> Is this the Kaffe VM? no
> Is this gij/gcj? no
> ant.core.lib: /usr/share/ant/lib/ant.jar
> ant.home: /usr/share/ant
> ---
>  ANT_HOME/lib jar listing
> ---
> ant.home: /usr/share/ant
> ant-apache-oro.jar (9750 bytes)
> junit.jar (108762 bytes)
> ant-jdepend.jar (13865 bytes)
> ant-launcher.jar (18382 bytes)
> ant-apache-bsf.jar (9786 bytes)
> ant-swing.jar (13285 bytes)
> ant-antlr.jar (11605 bytes)
> ant.jar (2008109 bytes)
> ant-apache-log4j.jar (8634 bytes)
> ant-testutil.jar (21042 bytes)
> ant-junit4.jar (13104 bytes)
> ant-apache-resolver.jar (9681 bytes)
> ant-junit.jar (114902 bytes)
> ant-commons-logging.jar (9758 bytes)
> ant-apache-bcel.jar (15114 bytes)
> ant-javamail.jar (13847 bytes)
> ant-jmf.jar (12317 bytes)
> ant-jsch.jar (46562 bytes)
> ant-commons-net.jar (91319 bytes)
> ant-apache-regexp.jar (9610 bytes)
> ant-apache-xalan2.jar (8141 bytes)
> ---
>  USER_HOME/.ant/lib jar listing
> ---
> user.home: /home/mdrob
> ivy.jar (1222059 bytes)
> ---
>  Tasks availability
> ---
> image : Not Available (the implementation class is not present)
> sshexec : Missing dependency com.jcraft.jsch.Logger
> scp : Missing dependency com.jcraft.jsch.Logger
> sshsession : Missing dependency com.jcraft.jsch.Logger
> netrexxc : Not Available (the implementation class is not present)
> jdepend : Missing dependency jdepend.xmlui.JDepend
> gjdoc : Not Available (the implementation class is not present)
> A task being missing/unavailable should only matter if you are trying to use 
> it
> ---
>  org.apache.env.Which diagnostics
> ---
> Not available.
> Download it at http://xml.apache.org/commons/
> ---
>  XML Parser information
> ---
> XML Parser : org.apache.xerces.jaxp.SAXParserImpl
> XML Parser Location: file:/usr/share/java/xercesImpl-2.11.0.jar
> Namespace-aware parser : org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser
> Namespace-aware parser Location: file:/usr/share/java/xercesImpl-2.11.0.jar
> ---
>  XSLT Processor information
> ---
> XSLT Processor : com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl
> XSLT Processor Location: unknown
> ---
>  System properties
> ---
> java.runtime.name : Java(TM) SE Runtime Environment
> sun.boot.library.path : /usr/lib/jvm/java-7-oracle/jre/lib/amd64
> java.vm.version : 24.72-b04
> ant.library.dir : /usr/share/ant/lib
> java.vm.vendor : Oracle Corporation
> java.vendor.url : http://java.oracle.com/
> path.separator : :
> java.vm.name : Java HotSpot(TM) 64-Bit Server VM
> file.encoding.pkg : sun.io
> user.country : US
> sun.java.launcher : SUN_STANDARD
> sun.os.patch.level : unknown
> java.vm.specification.name : Java Virtual Machine Specification
> user.dir : /home/mdrob/workspace/lucene-solr
> java.runtime.version : 1.7.0_72-b14
> java.awt.graphicsenv : sun.awt.X11GraphicsEnvironment
> java.endorsed.dirs : /usr/lib/jvm/java-7-oracle/jre/lib/endorsed
> os.arch : amd64
> java.io.tmpdir : /tmp
> line.

[jira] [Commented] (SOLR-6712) Highlighting not working in solr cloud grouping query when using group.query=xxx

2014-11-06 Thread Timo Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200474#comment-14200474
 ] 

Timo Schmidt commented on SOLR-6712:


Small hint:

It seems to work in other cases, like using "group.field": 
/select?group=true&group.field=livesuggesttype_s&hl=true&hl.q=test&hl.fl=content

> Highlighting not working in solr cloud grouping query when using 
> group.query=xxx
> 
>
> Key: SOLR-6712
> URL: https://issues.apache.org/jira/browse/SOLR-6712
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.10.2
>Reporter: Timo Schmidt
>
> The highlighting is throwing an exception in Solr cloud when you are using 
> group.query. Example:
> /select?group=true&group.query=livesuggesttype_s:game_movie&hl=true&hl.q=test&hl.fl=content
> The following exception will be thrown:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:195)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:760)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:745)
> I think this is also mentioned in the following stackoverflow post:
> http://stackoverflow.com/questions/25548063/solr-search-with-multicore-grouping-highlighting-null-pointer







[jira] [Created] (SOLR-6713) CLONE - Highlighting not working in solr cloud grouping query when using group.query=xxx

2014-11-06 Thread melissa h (JIRA)
melissa h created SOLR-6713:
---

 Summary: CLONE - Highlighting not working in solr cloud grouping 
query when using group.query=xxx
 Key: SOLR-6713
 URL: https://issues.apache.org/jira/browse/SOLR-6713
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.10.2
Reporter: melissa h


The highlighting is throwing an exception in Solr cloud when you are using 
group.query. Example:

/select?group=true&group.query=livesuggesttype_s:game_movie&hl=true&hl.q=test&hl.fl=content

The following exception will be thrown:

java.lang.NullPointerException
at 
org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:195)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:760)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)


I think this is also mentioned in the following stackoverflow post:

http://stackoverflow.com/questions/25548063/solr-search-with-multicore-grouping-highlighting-null-pointer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6712) Highlighting not working in solr cloud grouping query when using group.query=xxx

2014-11-06 Thread Timo Schmidt (JIRA)
Timo Schmidt created SOLR-6712:
--

 Summary: Highlighting not working in solr cloud grouping query 
when using group.query=xxx
 Key: SOLR-6712
 URL: https://issues.apache.org/jira/browse/SOLR-6712
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.10.2
Reporter: Timo Schmidt


The highlighting is throwing an exception in Solr cloud when you are using 
group.query. Example:

/select?group=true&group.query=livesuggesttype_s:game_movie&hl=true&hl.q=test&hl.fl=content

The following exception will be thrown:

java.lang.NullPointerException
at org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:195)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:760)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)


I think this is also mentioned in the following stackoverflow post:

http://stackoverflow.com/questions/25548063/solr-search-with-multicore-grouping-highlighting-null-pointer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6711) [Legacy Scaling and Distribution] HTTP API replication?command=disablereplication & replication?command=disablepoll are not persistent

2014-11-06 Thread Guilhem RAMBAL (JIRA)
Guilhem RAMBAL created SOLR-6711:


 Summary: [Legacy Scaling and Distribution] HTTP API 
replication?command=disablereplication & replication?command=disablepoll are 
not persistent
 Key: SOLR-6711
 URL: https://issues.apache.org/jira/browse/SOLR-6711
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.10
Reporter: Guilhem RAMBAL


Disablepoll and DisableReplication are not persistent after a Solr restart.

Is there a workaround to make this possible?





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b34) - Build # 11410 - Failure!

2014-11-06 Thread Chris Hostetter
: As I understood this was an svn eol style problem. I provided a git diff 
patch.
: Is it possible to avoid this svn problem in a git patch?
: Could ant precommit catch this on a git branch? I did not use that this time.

I don't know if there is anything you can do when using git and generating 
a patch to avoid problems like this.


When using "svn diff" it includes metadata like this in the diffs...

--CUT--
Property changes on: foo/bar/baz.java
___
Added: svn:eol-style
   + native
--CUT--

...and supposedly if you use "svn patch" it will look for that extra data 
and apply those properties -- but I've never actually tried it.  So I 
suppose if you can make "git diff" tack on that same syntax, you've done 
everything you can do.

But if the committer (like me) typically just uses the "patch" command, 
then that metadata is just going to be ignored -- but "ant precommit" run 
just prior to commit will still catch this.



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200444#comment-14200444
 ] 

Simon Willnauer commented on LUCENE-6051:
-

+1

> IOUtils methods taking Iterable try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable 
> files);
>  public static void deleteFilesIfExist(Iterable files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable since Path implements Iterable and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6051:

Attachment: LUCENE-6051.patch

Simple patch.

> IOUtils methods taking Iterable try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6051.patch
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable 
> files);
>  public static void deleteFilesIfExist(Iterable files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable since Path implements Iterable and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-6051:

Description: 
We have two methods in IOUtils
{code}
 public static void deleteFilesIgnoringExceptions(Iterable 
files);

 public static void deleteFilesIfExist(Iterable files) throws 
IOException
{code}

if you call these with a single Path instance it interprets it as 
Iterable since Path implements Iterable and in turn tries to 
delete every element of the path. I guess we should fix this before we release. 
We also need to check if there are other places where we do this... it's 
nasty... 


  was:
We have two methods in IOUtils
{code}
public static void deleteFilesIgnoringExceptions(Iterable 
files);

 public static void deleteFilesIfExist(Iterable files) throws 
IOException
{code}

if you call these with a single Path instance it interprets it as 
Iterable since Path implements Iterable and in turn tries to 
delete every element of the path. I guess we should fix this before we release. 
We also need to check if there are other places where we do this... it's 
nasty... 



> IOUtils methods taking Iterable try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
>
> We have two methods in IOUtils
> {code}
>  public static void deleteFilesIgnoringExceptions(Iterable 
> files);
>  public static void deleteFilesIfExist(Iterable files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable since Path implements Iterable and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200424#comment-14200424
 ] 

Simon Willnauer commented on LUCENE-6051:
-

+1, we should also have simple dedicated tests for this - they would have caught 
this before we even committed it

> IOUtils methods taking Iterable try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
>
> We have two methods in IOUtils
> {code}
> public static void deleteFilesIgnoringExceptions(Iterable 
> files);
>  public static void deleteFilesIfExist(Iterable files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable since Path implements Iterable and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200420#comment-14200420
 ] 

Steve Rowe commented on SOLR-3619:
--

See SOLR-6708 - problems using {{bin/solr}} in the smoke tester run under 
Jenkins

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
> managed-schema, server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6708) Smoke tester couldn't communicate with Solr started using 'bin/solr start'

2014-11-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200417#comment-14200417
 ] 

Steve Rowe commented on SOLR-6708:
--

Another failure, on trunk this time: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/217/

Here's the entire contents of the log from starting solr 
({{/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/solr-example.log}}):

{noformat}
Starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/server

OpenJDK 64-Bit Server VM warning: -XX:+UseLargePages is disabled in this 
release.
Error: Exception thrown by the agent : java.lang.NullPointerException
{noformat}

So this is apparently a different problem.

Maybe there are orphaned Solr server(s) running on the lucene Jenkins slave?  
I'll take a look.

> Smoke tester couldn't communicate with Solr started using 'bin/solr start'
> --
>
> Key: SOLR-6708
> URL: https://issues.apache.org/jira/browse/SOLR-6708
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Steve Rowe
> Attachments: solr-example.log
>
>
> The nightly-smoke target failed on ASF Jenkins 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/208/]: 
> {noformat}
>[smoker]   unpack solr-5.0.0.tgz...
>[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
>[smoker] unpack lucene-5.0.0.tgz...
>[smoker]   **WARNING**: skipping check of 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
>  it has javax.* classes
>[smoker]   **WARNING**: skipping check of 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
>  it has javax.* classes
>[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
> java.* classes...
>[smoker] unpack lucene-5.0.0.tgz...
>[smoker] copying unpacked distribution for Java 7 ...
>[smoker] test solr example w/ Java 7...
>[smoker]   start Solr instance 
> (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
>[smoker]   startup done
>[smoker] Failed to determine the port of a local Solr instance, cannot 
> create core!
>[smoker]   test utf8...
>[smoker] 
>[smoker] command "sh ./exampledocs/test_utf8.sh 
> http://localhost:8983/solr/techproducts"; failed:
>[smoker] ERROR: Could not curl to Solr - is curl installed? Is Solr not 
> running?
>[smoker] 
>[smoker] 
>[smoker]   stop server using: bin/solr stop -p 8983
>[smoker] No process found for Solr node running on port 8983
>[smoker] ***WARNING***: Solr instance didn't respond to SIGINT; using 
> SIGKILL now...
>[smoker] ***WARNING***: Solr instance didn't respond to SIGKILL; 
> ignoring...
>[smoker] Traceback (most recent call last):
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1526, in 
>[smoker] main()
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1471, in main
>[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
> c.is_signed, ' '.join(c.test_args))
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1515, in smokeTest
>[smoker] unpackAndVerify(java, 'solr', tmpDir, artifact, svnRevision, 
> version, testArgs, baseURL)
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 616, in unpackAndVerify
>[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
> svnRevision, version, testArgs, tmpDir, baseURL)
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 783, in verifyUnpacked
>[smoker] testSolrExample(java7UnpackPath, java.java7_home, False)
>[smoker]   File 
> "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
>  line 888, in testSolrExample
>[smoker] run('sh ./exampledocs/test_utf8.sh 
> http://localhost:8

[jira] [Commented] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200412#comment-14200412
 ] 

Robert Muir commented on LUCENE-6051:
-

nice catch simon! 

I think we should change the signatures to take Collection instead. It will 
give 90% of the usefulness without bugs or traps.
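A minimal sketch of why a Collection-based signature closes the trap (the countTargets method here is a hypothetical stand-in, not the actual IOUtils API): Path implements Iterable<Path> but not Collection<Path>, so an accidental call with a bare Path no longer compiles and the caller has to wrap it explicitly.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collection;
import java.util.Collections;

public class CollectionSignatureSketch {
    // Hypothetical stand-in for the proposed Collection-based signature.
    static int countTargets(Collection<? extends Path> files) {
        return files.size();
    }

    public static void main(String[] args) {
        Path single = Paths.get("index", "segments_1");
        // countTargets(single);  // does not compile: Path is not a Collection
        // The caller must be explicit about wrapping a single path:
        int n = countTargets(Collections.singleton(single));
        System.out.println(n); // prints 1
    }
}
```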

> IOUtils methods taking Iterable try to delete every element 
> of the path
> ---
>
> Key: LUCENE-6051
> URL: https://issues.apache.org/jira/browse/LUCENE-6051
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 5.0, Trunk
>
>
> We have two methods in IOUtils
> {code}
> public static void deleteFilesIgnoringExceptions(Iterable 
> files);
>  public static void deleteFilesIfExist(Iterable files) throws 
> IOException
> {code}
> if you call these with a single Path instance it interprets it as 
> Iterable since Path implements Iterable and in turn tries to 
> delete every element of the path. I guess we should fix this before we 
> release. We also need to check if there are other places where we do this... 
> it's nasty... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Processor support for select operations?

2014-11-06 Thread Paul Elschot
Dear all,

For LUCENE-6040 it would be good to have better processor support for
selecting the i-th set bit from a 64-bit integer.

Not too long ago Long.bitCount() was intrinsified in JVMs.

I hope something similar will happen to a select(long x, int i)
method. However, better processor support is needed first.

This is somewhat off topic here, but does anyone know how to request
better processor support for select operations?


Regards,
Paul Elschot
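For reference, a portable (non-intrinsic) sketch of such a select operation, built only on the clear-lowest-set-bit trick and Long.numberOfTrailingZeros (itself already intrinsified); this is an illustration of the operation being discussed, not the LUCENE-6040 code:

```java
public class Select {
    /**
     * Returns the position (0-63, counted from the least significant bit) of
     * the i-th set bit of x, with i = 0 selecting the lowest set bit.
     * Returns -1 if x has fewer than i + 1 bits set.
     */
    static int select(long x, int i) {
        for (int k = 0; k < i; k++) {
            x &= x - 1; // clear the lowest set bit
        }
        return x == 0 ? -1 : Long.numberOfTrailingZeros(x);
    }

    public static void main(String[] args) {
        long x = 0b10110L;                // bits set at positions 1, 2, 4
        System.out.println(select(x, 0)); // prints 1
        System.out.println(select(x, 2)); // prints 4
        System.out.println(select(x, 3)); // prints -1 (only three bits set)
    }
}
```

A hardware select instruction (or an intrinsic built on PDEP, as on later x86 chips) would replace the loop with constant-time bit manipulation.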

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6051) IOUtils methods taking Iterable try to delete every element of the path

2014-11-06 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-6051:
---

 Summary: IOUtils methods taking Iterable try to 
delete every element of the path
 Key: LUCENE-6051
 URL: https://issues.apache.org/jira/browse/LUCENE-6051
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Simon Willnauer
Priority: Blocker
 Fix For: 5.0, Trunk


We have two methods in IOUtils
{code}
public static void deleteFilesIgnoringExceptions(Iterable 
files);

 public static void deleteFilesIfExist(Iterable files) throws 
IOException
{code}

if you call these with a single Path instance it interprets it as 
Iterable since Path implements Iterable and in turn tries to 
delete every element of the path. I guess we should fix this before we release. 
We also need to check if there are other places where we do this... it's 
nasty... 
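A self-contained sketch of the trap described above (targets is a hypothetical stand-in for the IOUtils delete methods, used only to show the overload resolution, not to delete anything):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PathIterableTrap {
    // Hypothetical stand-in for deleteFilesIgnoringExceptions(Iterable<...>):
    // collects the paths it would operate on instead of deleting them.
    static List<String> targets(Iterable<? extends Path> files) {
        List<String> out = new ArrayList<>();
        for (Path p : files) {
            out.add(p.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        Path single = Paths.get("a", "b", "c.txt");
        // Intended: operate on the one file "a/b/c.txt".
        // Actual: Path implements Iterable<Path>, so this call compiles
        // against the Iterable parameter and iterates the path *elements*.
        System.out.println(targets(single));                // prints [a, b, c.txt]
        // What the caller actually meant (the single full path):
        System.out.println(targets(Arrays.asList(single)));
    }
}
```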




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 217 - Still Failing

2014-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/217/

No tests ran.

Build Log:
[...truncated 51002 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
"file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 27.5 MB in 0.04 sec (682.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 63.2 MB in 0.15 sec (426.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 72.4 MB in 0.09 sec (821.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5414 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5414 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.disableHdfs=true -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 208 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (10.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.0-src.tgz...
   [smoker] 33.8 MB in 0.10 sec (334.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.tgz...
   [smoker] 145.7 MB in 0.43 sec (338.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.zip...
   [smoker] 151.9 MB in 0.36 sec (426.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/solr-example.log)...
   [smoker] Startup failed; see log 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/solr-example.log
   [smoker] 
   [smoker] Starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/server
   [smoker] 
   [smoker] OpenJDK 64-Bit Server VM warning: -XX:+UseLargePages is disabled in 
this release.
   [smoker] Error: Exception thrown by the agent : 
java.lang.Nu

[jira] [Commented] (SOLR-6637) Solr should have a way to restore a core

2014-11-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200216#comment-14200216
 ] 

Noble Paul commented on SOLR-6637:
--

I believe having a new class RestoreCore for this small functionality is 
overkill. It can be made part of SnapPuller. If you refactor a bit, both 
snappull and restore will have a lot of commonalities

> Solr should have a way to restore a core
> 
>
> Key: SOLR-6637
> URL: https://issues.apache.org/jira/browse/SOLR-6637
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Attachments: SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, 
> SOLR-6637.patch, SOLR-6637.patch
>
>
> We have a core backup command which backs up the index. We should have a 
> restore command too. 
> This would restore any named snapshots created by the replication handlers 
> backup command.
> While working on this patch right now I realized that during backup we only 
> back up the index. Should we back up the conf files also? Any thoughts? I 
> could open a separate Jira for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: running smokeTestRelease.py on my local machine

2014-11-06 Thread Shawn Heisey
On 11/6/2014 7:13 AM, Shawn Heisey wrote:
> On 11/6/2014 6:36 AM, Anurag Sharma wrote:
>> Please suggest if I am missing anything (path/env setting) while running
>> through URL param in the above fashion. Also, is there a way I can run
>> the smoke locally without giving URL params.
> 
> Here is a full commandline example given by the 4.7.0 release manager
> for that release.  I know this works, after setting the java env variables:
> 
> python3.2 -u dev-tools/scripts/smokeTestRelease.py
> http://people.apache.org/~simonw/staging_area/lucene-solr-4.7.0-RC1-rev1569660/
> 1569660 4.7.0 /tmp/smoke_test_4_7
> 
> To test the local code checkout, run a command like this, after setting
> the requisite java environment variables:
> 
> ant nightly-smoke -Dversion=4.10.2

Followup:

When running that exact command on the tags/lucene_solr_4_10_2 checkout,
it fails.  I think there must be something in the configuration that
still says 4.10.1:

prepare-release-no-sign:
[mkdir] Created dir:
/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease
 [copy] Copying 431 files to
/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/lucene
 [copy] Copying 239 files to
/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/solr
 [exec] JAVA7_HOME is /usr/lib/jvm/java-7-oracle
 [exec] Traceback (most recent call last):
 [exec]   File
"/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py",
line 1467, in 
 [exec] main()
 [exec]   File
"/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py",
line 1308, in main
 [exec] smokeTest(baseURL, svnRevision, version, tmpDir,
isSigned, testArgs)
 [exec]   File
"/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py",
line 1446, in smokeTest
 [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
 [exec]   File
"/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py",
line 359, in checkSigs
 [exec] raise RuntimeError('%s: unknown artifact %s: expected
prefix %s' % (project, text, expected))
 [exec] RuntimeError: lucene: unknown artifact
lucene-4.10.2-src.tgz: expected prefix lucene-4.10.1
 [exec] NOTE: output encoding is UTF-8
 [exec]
 [exec] Load release URL
"file:/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/"...
 [exec]
 [exec] Test Lucene...
 [exec]   test basics...

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Regarding SolrSecurity

2014-11-06 Thread Shawn Heisey
On 11/6/2014 5:56 AM, Prasanth Gangaraju wrote:
> I do realise Solr doesn't concern itself with access control as stated
> in the wiki page, but I think a general warning message in the admin
> page if the server is publicly accessible will help. Also, it would be
> nice if one of the developers can send a passive email to the users list
> telling them to lock up their solr setup.

I think it's a good idea for the Solr dashboard to include a message
about publicly-accessible Solr servers.

If Stefan thinks this is a good idea, I'm not sure exactly how it should
be worded.  Maybe something like this:

"A Solr server that can be accessed by the public is a major security
hazard.  The best option is to keep it behind a firewall that does not
allow the public to reach it at all.  Securing a publicly accessible
Solr server is not a trivial task, and must be accomplished with
third-party software."

The problem with an email telling users how to secure their Solr server
is simply that Solr itself has no mechanisms for security, and we have
no idea what servlet container the user is running under.  Solr does not
control its own network layer.

We do have plans to eliminate the separate servlet container and put
Solr in charge of its own network layer, which would allow Solr itself
to control security.  That is likely to happen first in a 6.0-SNAPSHOT
version, and if it proves to be stable, may be backported to a future
5.x release as an alternate build target.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6700) ChildDocTransformer doesn't return correct children after updating and optimising sol'r index

2014-11-06 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200194#comment-14200194
 ] 

Mikhail Khludnev commented on SOLR-6700:


In fact, update="set" is handled by completely reindexing the single document 
underneath, but these atomic updates aren't implemented for blocks; see SOLR-6596
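For intuition, block join relies on index layout: child documents are stored contiguously, immediately before their parent, and child lookup scans backwards from each parent. A toy model in plain Python (not Solr code) of why re-adding only the parent orphans its children and lets a later merge mis-attach them:

```python
def children_of(index, parent_pos, is_parent):
    """Collect child docs for the parent at parent_pos by scanning the
    contiguous docs immediately before it (Lucene block-join layout)."""
    kids = []
    i = parent_pos - 1
    while i >= 0 and not is_parent(index[i]):
        kids.append(index[i])
        i -= 1
    return list(reversed(kids))

is_parent = lambda d: d["entityType"] == 1

# Initial segment: each parent directly follows its own children.
index = [{"id": "11", "entityType": 2},
         {"id": "1", "entityType": 1},
         {"id": "22", "entityType": 2},
         {"id": "2", "entityType": 1}]
assert [c["id"] for c in children_of(index, 1, is_parent)] == ["11"]
assert [c["id"] for c in children_of(index, 3, is_parent)] == ["22"]

# An atomic update deletes parent "1" and re-adds it *without* its
# child; after merge/optimize the orphaned child "11" now sits right
# before parent "2" and gets reported as its child -- the symptom
# shown in the issue below.
index = [{"id": "11", "entityType": 2},
         {"id": "22", "entityType": 2},
         {"id": "2", "entityType": 1},
         {"id": "1", "entityType": 1}]
print([c["id"] for c in children_of(index, 2, is_parent)])  # -> ['11', '22']
```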

> ChildDocTransformer doesn't return correct children after updating and 
> optimising sol'r index
> -
>
> Key: SOLR-6700
> URL: https://issues.apache.org/jira/browse/SOLR-6700
> Project: Solr
>  Issue Type: Bug
>Reporter: Bogdan Marinescu
>Priority: Blocker
> Fix For: 4.10.3, 5.0
>
>
> I have an index with nested documents. 
> {code:title=schema.xml snippet|borderStyle=solid}
> <field name="id" type="string" indexed="true" stored="true" multiValued="false" />
> <field name="entityType" type="int" indexed="true" stored="true" required="true"/>
> <field name="pName" type="string" indexed="true" stored="true"/>
> <field name="cAlbum" type="string" indexed="true" stored="true"/>
> <field name="cSong" type="string" indexed="true" stored="true"/>
> <field name="_root_" type="string" indexed="true" stored="true"/>
> <field name="_version_" type="long" indexed="true" stored="true"/>
> {code}
> Afterwards I add the following documents:
> {code}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName">Test Artist 1</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">11</field>
>       <field name="cAlbum">Test Album 1</field>
>       <field name="cSong">Test Song 1</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="pName">Test Artist 2</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">22</field>
>       <field name="cAlbum">Test Album 2</field>
>       <field name="cSong">Test Song 2</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
> </add>
> {code}
> After performing the following query 
> {quote}
> http://localhost:8983/solr/collection1/select?q=%7B!parent+which%3DentityType%3A1%7D&fl=*%2Cscore%2C%5Bchild+parentFilter%3DentityType%3A1%5D&wt=json&indent=true
> {quote}
> I get a correct answer (child matches parent, check _root_ field)
> {code:title=add docs|borderStyle=solid}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"1",
> "pName":"Test Artist 1",
> "entityType":1,
> "_version_":1483832661048819712,
> "_root_":"1",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"}]},
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]}]
>   }}
> {code}
> Afterwards I try to update one document:
> {code:title=update doc|borderStyle=solid}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName" update="set">INIT</field>
>   </doc>
> </add>
> {code}
> After performing the previous query I get the right result (like the previous 
> one but with the pName field updated).
> The problem only comes after performing an *optimize*. 
> Now, the same query yields the following result:
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"},
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]},
>   {
> "id":"1",
> "pName":"INIT",
> "entityType":1,
> "_root_":"1",
> "_version_":1483832916867809280,
> "score":1.0}]
>   }}
> {code}
> As can be seen, the document with id:2 now contains the child with id:11 that 
> belongs to the document with id:1. 
> I haven't found any references on the web about this except 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html
> Similar issue: SOLR-6096
> Is this problem known? Is there a workaround for this? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: running smokeTestRelease.py on my local machine

2014-11-06 Thread Shawn Heisey
On 11/6/2014 6:36 AM, Anurag Sharma wrote:
> Please suggest if I am missing anything (path/env setting) while running
> through URL param in the above fashion. Also, is there a way I can run
> the smoke locally without giving URL params.

Here is a full commandline example given by the 4.7.0 release manager
for that release.  I know this works, after setting the java env variables:

python3.2 -u dev-tools/scripts/smokeTestRelease.py
http://people.apache.org/~simonw/staging_area/lucene-solr-4.7.0-RC1-rev1569660/
1569660 4.7.0 /tmp/smoke_test_4_7

To test the local code checkout, run a command like this, after setting
the requisite java environment variables:

ant nightly-smoke -Dversion=4.10.2

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4345) Create a Classification module

2014-11-06 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200174#comment-14200174
 ] 

Tommaso Teofili commented on LUCENE-4345:
-

bq. But I can't find solr/contrib/classification in dev/trunk. Is it not 
checked in?

Correct, that was not checked in, since this issue only covered the code going 
into Lucene.

bq. Is it possible to also check in it to Solr?

This would have to be discussed in a separate (Solr) issue, I think. The code I 
have for that is also two years old, so it would probably need some cleaning / 
refactoring; that should be easy, though.

> Create a Classification module
> --
>
> Key: LUCENE-4345
> URL: https://issues.apache.org/jira/browse/LUCENE-4345
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Minor
> Fix For: Trunk
>
> Attachments: LUCENE-4345.patch, LUCENE-4345_2.patch, SOLR-3700.patch, 
> SOLR-3700_2.patch
>
>
> Lucene/Solr can host huge sets of documents containing lots of information in 
> fields so that these can be used as training examples (w/ features) in order 
> to very quickly create classifiers algorithms to use on new documents and / 
> or to provide an additional service.
> So the idea is to create a contrib module (called 'classification') to host a 
> ClassificationComponent that will use already seen data (the indexed 
> documents / fields) to classify new documents / text fragments.
> The first version will contain a (simplistic) Lucene based Naive Bayes 
> classifier but more implementations should be added in the future.
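For intuition, a minimal multinomial Naive Bayes text classifier over already-seen documents (plain Python with Laplace smoothing; this is an illustration, not the module's actual API):

```python
from collections import Counter, defaultdict
import math

def train(docs):
    """docs: list of (text, label). Returns per-class word counts and priors."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def classify(text, counts, labels):
    vocab = {w for c in counts.values() for w in c}
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label in labels:
        lp = math.log(labels[label] / total)          # class prior
        n = sum(counts[label].values())
        for w in text.split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("lucene index segment", "search"),
        ("solr cloud shard", "search"),
        ("gradient descent loss", "ml"),
        ("naive bayes classifier", "ml")]
model = train(docs)
print(classify("segment index merge", *model))  # -> search
```

A Lucene-based version replaces the in-memory counters with term statistics read from the index, which is what makes training essentially free for already-indexed data.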



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4954 - Failure

2014-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4954/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([A2D45733DA7AC54B:2332D92BAD25A577]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl

running smokeTestRelease.py on my local machine

2014-11-06 Thread Anurag Sharma
Hi

I am running smokeTestRelease.py for the first time on my local machine, in
the context of https://issues.apache.org/jira/browse/SOLR-6474, to understand
how the smoke test can be launched using the script.

First I ran it with Python 2.7 and hit SyntaxError issues; those went away
when I switched to Python 3.4.2.

Now I get an error when trying to run the smoke test with the command below:
python -u smokeTestRelease.py
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293

Java 1.7 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
Traceback (most recent call last):
  File "smokeTestRelease.py", line 1522, in <module>
main()
  File "smokeTestRelease.py", line 1465, in main
c = parse_config()
  File "smokeTestRelease.py", line 1351, in parse_config
c.java = make_java_config(parser, c.test_java8)
  File "smokeTestRelease.py", line 1303, in make_java_config
run_java7 = _make_runner(java7_home, '1.7')
  File "smokeTestRelease.py", line 1294, in _make_runner
shell=True, stderr=subprocess.STDOUT).decode('utf-8')
  File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 620, in
check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command 'export JAVA_HOME="C:\Program
Files\Java\jdk1.7.0_51" PATH="C:\Program Files\Java\jdk1.7.0_51/bin:$PATH"
JAVACMD="C:\Program Files\Java\jdk1.7.0_51/bin/java"; java -version'
returned non-zero exit status 1
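The failing command uses POSIX `export VAR=...;` syntax through `shell=True`; on Windows that string reaches cmd.exe, where `export` is not a command, hence the non-zero exit. A sketch of a portable alternative (a hypothetical helper, not the script's actual code) that passes the environment explicitly instead of emitting shell syntax:

```python
import os
import subprocess
import sys

def run_with_java_home(java_home, cmd):
    # Build the child environment in Python rather than via `export`,
    # so the same call works on Windows and Unix alike.
    env = dict(os.environ)
    env["JAVA_HOME"] = java_home
    env["PATH"] = os.path.join(java_home, "bin") + os.pathsep + env.get("PATH", "")
    return subprocess.check_output(cmd, env=env,
                                   stderr=subprocess.STDOUT).decode("utf-8")

# Demo with the Python interpreter standing in for the JVM
# (the java_home path here is a made-up placeholder):
print(run_with_java_home("/tmp/fake_java_home", [sys.executable, "--version"]).strip())
```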

The only usage example I found in the code takes a URL param, and that is
what produces the above error:
Example usage:
python3.2 -u dev-tools/scripts/smokeTestRelease.py
http://people.apache.org/~whoever/staging_area/lucene-solr-4.3.0-RC1-rev1469340

Please suggest if I am missing anything (a path or env setting) when running
with a URL param as above. Also, is there a way to run the smoke test locally
without giving URL params?

Thanks
Anurag


[jira] [Created] (SOLR-6710) EarlyTerminatingCollectorException thrown during auto-warming

2014-11-06 Thread JIRA
Dirk Högemann created SOLR-6710:
---

 Summary: EarlyTerminatingCollectorException thrown during 
auto-warming
 Key: SOLR-6710
 URL: https://issues.apache.org/jira/browse/SOLR-6710
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1
 Environment: Solaris, Solr in multicore-setup
Reporter: Dirk Högemann
Priority: Minor


Our production Solr slave cores (we have about 40 cores, each of moderate
size, between 10K and 90K documents) produce many exceptions of this type:

2014-11-05 15:06:06.247 [searcherExecutor-158-thread-1] ERROR 
org.apache.solr.search.SolrCache: Error during auto-warming of 
key:org.apache.solr.search.QueryResultKey@62340b01:org.apache.solr.search.EarlyTerminatingCollectorException

Our relevant solrconfig is:

[solrconfig.xml snippet: the XML tags were stripped during archiving; only the
values 18 and 2 survive]

Answer from List (Mikhail Khludnev):

https://github.com/apache/lucene-solr/blob/20f9303f5e2378e2238a5381291414881ddb8172/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L522
at least these ERRORs break nothing; see
https://github.com/apache/lucene-solr/blob/20f9303f5e2378e2238a5381291414881ddb8172/solr/core/src/java/org/apache/solr/search/FastLRUCache.java#L165

anyway, here are two usability issues:
 - the key (org.apache.solr.search.QueryResultKey@62340b01) lacks a readable
toString()
 - I don't think regeneration exceptions are ERRORs; they seem like WARNs to me,
or even lower. Also, as a courtesy, EarlyTerminatingCollectorExceptions in
particular could be recognized and even ignored, given
SolrIndexSearcher.java#L522

-> Maybe the log-level could be set to info/warn, if there are no implications 
on the functionality?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Regarding SolrSecurity

2014-11-06 Thread Prasanth Gangaraju
Hi,

I realised at work today that an unsecured Tomcat server will expose
database credentials when Solr has been configured to import data with
DataImportHandler. We do have basic authentication set up with Tomcat to
prevent this, but there are quite a few servers out there on which the Solr
config is publicly viewable. This endpoint also allows retrieving any file
in the conf folder by simply modifying the URL. Google dorks already show
several thousand public Solr instances.

I do realise Solr doesn't concern itself with access control as stated in
the wiki page, but I think a general warning message in the admin page if
the server is publicly accessible will help. Also, it would be nice if one
of the developers can send a passive email to the users list telling them
to lock up their solr setup.

Thanks,
Prasanth

P.S. this is my first mail to solr dev list, solr is awesome!


[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200090#comment-14200090
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637082 from [~mikemccand] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1637082 ]

LUCENE-6046: remove det state limit for all AutomatonTestUtil.randomAutomaton 
since they can become biggish

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}
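Memory blowup during determinization is inherent to some regexps, not just an implementation detail. A toy subset-construction state count in plain Python for the classic family (a|b)*a(a|b)^(n-1), whose minimal DFA needs 2^n states:

```python
def determinize_count(n):
    """Count DFA states produced by subset construction on the NFA for
    '(a|b)*a(a|b)^(n-1)' (i.e. 'the n-th symbol from the end is a').
    NFA: state 0 loops on a/b and moves to 1 on a; i -> i+1 on a/b;
    state n is accepting with no outgoing transitions."""
    start = frozenset([0])
    def step(S, c):
        T = set()
        for s in S:
            if s == 0:
                T.add(0)
                if c == 'a':
                    T.add(1)
            elif s < n:
                T.add(s + 1)
        return frozenset(T)
    seen, stack = {start}, [start]
    while stack:
        S = stack.pop()
        for c in 'ab':
            T = step(S, c)
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return len(seen)

print([determinize_count(n) for n in range(1, 6)])  # -> [2, 4, 8, 16, 32]
```

The regexp in this issue similarly combines large character classes with a {50,200} counted repetition, which can inflate the automaton before minimization ever runs.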



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200086#comment-14200086
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637080 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1637080 ]

LUCENE-6046: remove det state limit for all AutomatonTestUtil.randomAutomaton 
since they can become biggish

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200079#comment-14200079
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637078 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1637078 ]

LUCENE-6046: remove det state limit for all AutomatonTestUtil.randomAutomaton 
since they can become biggish

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-11-06 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200071#comment-14200071
 ] 

Grant Ingersoll commented on SOLR-6058:
---

[~sbower] Thanks!

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
> SOLR-6058.offset-fix.patch, Solr_Icons.pdf, Solr_Logo_on_black.pdf, 
> Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, 
> Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6693) Start script for windows fails with 32bit JRE

2014-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-6693:
--
Attachment: SOLR-6693.patch

First patch
* Fixes {{echo}} print of variable containing unsafe chars like {{)}}
* Detects Java version, JRE/JDK and 32/64bit
* Prints warning if 32 bit or if no support for {{-server}} arg

All in all, this patch lets Windows users test Solr even if they only have a 
32bit JRE installed, but the warnings will urge them to choose another JVM for 
production.

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parenthesis that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6693) Start script for windows fails with 32bit JRE

2014-11-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-6693:
-

Assignee: Jan Høydahl

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parenthesis that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2014-11-06 Thread Fredrik Rodland (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1426#comment-1426
 ] 

Fredrik Rodland edited comment on SOLR-4587 at 11/6/14 9:52 AM:


Sounds good!

Having implemented a pretty large system for matching documents against queries 
(using elasticsearch to index the queries) we discovered very early that 
filtering the queries was an important requirement to get things running with 
acceptable performance. 

So I would add to your list of acceptance criteria that the request must 
support *fq* and that this is performed prior to the looping.  This would 
enable us to get a smaller list of queries to loop and thus reducing the time 
to complete the request.  For this to work queries also need to support 
filter-fields - i.e. regular solr fields in addition to the fq, q, defType, etc 
mentioned above.

For the record our system has ≈1mill queries, and we're matching ≈10 doc/s.  I 
believe that much of the job in luwak also comes from the realization that the 
number of documents must be reduced prior to looping.  I'm sure [~romseygeek] 
can elaborate on this as well.


was (Author: fmr):
Sound good!

Having implemented a pretty large system for matching documents against queries 
(using elasticsearch to index the queries) we discovered very early that 
filtering the queries was an important requirement to get things running with 
acceptable performance. 

So I would add to your list of acceptance criteria that the request must 
support *fq* and that this is performed prior to the looping.  This would 
enable us to get a smaller list of queries to loop and thus reducing the time 
to complete the request.  For this to work queries also need to support 
filter-fields - i.e. regular solr fields in addition to the fq, q, defType, etc 
mentioned above.

For the record our system has ≈1mill queries, and we're matching ≈10 doc/s.  I 
believe that much of the job in luwak also comes from the realization that the 
number of documents must be reduced prior to looping.  I'm sure [~romseygeek] 
can elaborate on this as well.
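The fq-before-looping idea can be sketched in a toy percolator (plain Python; field names hypothetical): stored queries carry ordinary filter fields, and an fq-style predicate shrinks the candidate set before the expensive match loop runs:

```python
# Stored queries: each has plain filter fields plus a match predicate
# (standing in for a parsed Solr query).
stored = [
    {"id": "q1", "lang": "en", "match": lambda d: "solr" in d["text"]},
    {"id": "q2", "lang": "no", "match": lambda d: "lucene" in d["text"]},
    {"id": "q3", "lang": "en", "match": lambda d: "lucene" in d["text"]},
]

def percolate(doc, queries, fq=None):
    # Cheap prefilter on the queries' own fields (the proposed fq)...
    candidates = [q for q in queries if fq is None or fq(q)]
    # ...then the expensive loop only over the surviving candidates.
    return [q["id"] for q in candidates if q["match"](doc)]

doc = {"text": "lucene and solr", "lang": "en"}
print(percolate(doc, stored, fq=lambda q: q["lang"] == "en"))  # -> ['q1', 'q3']
```

With ~1M stored queries, the prefilter is what keeps the per-document loop tractable; this mirrors how luwak's presearcher cuts the candidate query set before matching.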

> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: Trunk
>
>
> Use Lucene MemoryIndex for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1637054 - /lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java

2014-11-06 Thread Michael McCandless
Thanks Rob.

I'll do a grep & fix all other places using AutomatonTestUtil.randomAutomaton.

Mike McCandless

http://blog.mikemccandless.com


On Thu, Nov 6, 2014 at 4:21 AM,   wrote:
> Author: rmuir
> Date: Thu Nov  6 09:21:58 2014
> New Revision: 1637054
>
> URL: http://svn.apache.org/r1637054
> Log:
> LUCENE-6046: let this test determinize massive automata
>
> Modified:
> 
> lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java
>
> Modified: 
> lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java
> URL: 
> http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java?rev=1637054&r1=1637053&r2=1637054&view=diff
> ==
> --- 
> lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java
>  (original)
> +++ 
> lucene/dev/trunk/lucene/core/src/test/org/apache/lucene/search/TestAutomatonQuery.java
>  Thu Nov  6 09:21:58 2014
> @@ -214,7 +214,7 @@ public class TestAutomatonQuery extends
>public void testHashCodeWithThreads() throws Exception {
>  final AutomatonQuery queries[] = new AutomatonQuery[1000];
>  for (int i = 0; i < queries.length; i++) {
> -  queries[i] = new AutomatonQuery(new Term("bogus", "bogus"), 
> AutomatonTestUtil.randomAutomaton(random()));
> +  queries[i] = new AutomatonQuery(new Term("bogus", "bogus"), 
> AutomatonTestUtil.randomAutomaton(random()), Integer.MAX_VALUE);
>  }
>  final CountDownLatch startingGun = new CountDownLatch(1);
>  int numThreads = TestUtil.nextInt(random(), 2, 5);
>
>




[jira] [Updated] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an "expanded" section

2014-11-06 Thread Simon Endele (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Endele updated SOLR-6709:
---
Attachment: test-response.xml

> ClassCastException in QueryResponse after applying XMLResponseParser on a 
> response containing an "expanded" section
> ---
>
> Key: SOLR-6709
> URL: https://issues.apache.org/jira/browse/SOLR-6709
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Simon Endele
> Attachments: test-response.xml
>
>
> Shouldn't the following code work on the attached input file?
> It matches the structure of a Solr response with wt=xml.
> {code}
> import java.io.InputStream;
>
> import org.apache.solr.client.solrj.ResponseParser;
> import org.apache.solr.client.solrj.impl.XMLResponseParser;
> import org.apache.solr.client.solrj.response.QueryResponse;
> import org.apache.solr.common.util.NamedList;
> import org.junit.Test;
>
> public class ParseXmlExpandedTest {
>     @Test
>     public void test() {
>         ResponseParser responseParser = new XMLResponseParser();
>         InputStream inStream = getClass()
>                 .getResourceAsStream("test-response.xml");
>         NamedList<Object> response = responseParser
>                 .processResponse(inStream, "UTF-8");
>         QueryResponse queryResponse = new QueryResponse(response, null);
>     }
> }{code}
> Unexpectedly (for me), it throws a
> java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
> cannot be cast to java.util.Map
> at 
> org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)
> Am I missing something, is XMLResponseParser deprecated or something?
> We use a setup like this to "mock" a QueryResponse for unit tests in our 
> service that post-processes the Solr response.
> Obviously, it works with the javabin format which SolrJ uses internally.
> But that is no appropriate format for unit tests, where the response should 
> be human readable.
> I think there's some conversion missing in QueryResponse or XMLResponseParser.
> Note: The null value supplied as SolrServer argument to the constructor of 
> QueryResponse shouldn't have an effect as the error occurs before the 
> parameter is even used.






[jira] [Created] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an "expanded" section

2014-11-06 Thread Simon Endele (JIRA)
Simon Endele created SOLR-6709:
--

 Summary: ClassCastException in QueryResponse after applying 
XMLResponseParser on a response containing an "expanded" section
 Key: SOLR-6709
 URL: https://issues.apache.org/jira/browse/SOLR-6709
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Simon Endele


Shouldn't the following code work on the attached input file?
It matches the structure of a Solr response with wt=xml.

{code}
import java.io.InputStream;

import org.apache.solr.client.solrj.ResponseParser;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;
import org.junit.Test;

public class ParseXmlExpandedTest {
    @Test
    public void test() {
        ResponseParser responseParser = new XMLResponseParser();
        InputStream inStream = getClass()
                .getResourceAsStream("test-response.xml");
        NamedList<Object> response = responseParser
                .processResponse(inStream, "UTF-8");
        QueryResponse queryResponse = new QueryResponse(response, null);
    }
}{code}

Unexpectedly (for me), it throws a
java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
cannot be cast to java.util.Map
at 
org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)

Am I missing something, or is XMLResponseParser deprecated?

We use a setup like this to "mock" a QueryResponse for unit tests in our 
service that post-processes the Solr response.
Obviously, it works with the javabin format, which SolrJ uses internally.
But that is not an appropriate format for unit tests, where the response should be 
human-readable.

I think there's some conversion missing in QueryResponse or XMLResponseParser.

Note: The null value supplied as SolrServer argument to the constructor of 
QueryResponse shouldn't have an effect as the error occurs before the parameter 
is even used.
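For illustration only: the cast fails because Solr's NamedList is an ordered list of name/value pairs, not a java.util.Map. The following stand-in sketch (a hypothetical SimpleNamedList class, not SolrJ's actual API) shows the kind of recursive conversion the reporter suspects is missing:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for Solr's NamedList (NOT the real SolrJ class): an ordered list of
// name/value pairs. Because it is not a java.util.Map, a blind cast like the
// one in QueryResponse.setResponse throws ClassCastException.
public class SimpleNamedList {
    private final List<Map.Entry<String, Object>> entries = new ArrayList<>();

    public void add(String name, Object value) {
        entries.add(new AbstractMap.SimpleEntry<>(name, value));
    }

    // Recursive conversion into a real Map -- the kind of step that would be
    // needed on the XML parse path before such a cast can succeed.
    public Map<String, Object> asMap() {
        Map<String, Object> m = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : entries) {
            Object v = e.getValue();
            m.put(e.getKey(), v instanceof SimpleNamedList ? ((SimpleNamedList) v).asMap() : v);
        }
        return m;
    }

    public static void main(String[] args) {
        SimpleNamedList expanded = new SimpleNamedList();
        expanded.add("numFound", 3);
        SimpleNamedList response = new SimpleNamedList();
        response.add("expanded", expanded);
        // After conversion the nested "expanded" section really is a java.util.Map.
        System.out.println(response.asMap());
    }
}
```

Whether SolrJ should perform this conversion inside XMLResponseParser or inside QueryResponse is exactly what this issue raises.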






[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200021#comment-14200021
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637056 from [~rcmuir] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1637056 ]

LUCENE-6046: let this test determinize massive automata

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}
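The OOM quoted above comes from NFA determinization: a counted repetition like {50,200} first multiplies the NFA's states, and subset construction can then blow the state count up exponentially. A self-contained illustration of the worst case (a textbook automaton, not Lucene's actual code):

```java
import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;

public class DeterminizeBlowup {

    // Count the DFA states that subset construction produces for the NFA of
    // (a|b)* a (a|b)^(n-1): the NFA has n+1 states, the DFA has 2^n.
    static int dfaStates(int n) {
        Set<BitSet> seen = new HashSet<>();
        ArrayDeque<BitSet> work = new ArrayDeque<>();
        BitSet start = new BitSet();
        start.set(0);  // NFA start state
        seen.add(start);
        work.add(start);
        while (!work.isEmpty()) {
            BitSet s = work.poll();
            for (char c : new char[] {'a', 'b'}) {
                BitSet next = new BitSet();
                for (int q = s.nextSetBit(0); q >= 0; q = s.nextSetBit(q + 1)) {
                    if (q == 0) {
                        next.set(0);                // (a|b)* self-loop on the start
                        if (c == 'a') next.set(1);  // guess: this 'a' is the marked one
                    } else if (q < n) {
                        next.set(q + 1);            // advance through the (a|b) chain
                    }
                    // q == n is the accept state; it has no outgoing transitions
                }
                if (seen.add(next)) {
                    work.add(next);
                }
            }
        }
        return seen.size();
    }

    public static void main(String[] args) {
        // The DFA state count doubles with each increment of n (it is 2^n),
        // which is why determinizing such automata can exhaust memory.
        for (int n = 2; n <= 12; n++) {
            System.out.println("n=" + n + " -> " + dfaStates(n) + " DFA states");
        }
    }
}
```

Each increment of n doubles the DFA, and this kind of growth is what pushes the quoted regexp past Java's maximum array size during determinization.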






[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200019#comment-14200019
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637055 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1637055 ]

LUCENE-6046: let this test determinize massive automata

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}






[jira] [Commented] (LUCENE-6046) RegExp.toAutomaton high memory use

2014-11-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200018#comment-14200018
 ] 

ASF subversion and git services commented on LUCENE-6046:
-

Commit 1637054 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1637054 ]

LUCENE-6046: let this test determinize massive automata

> RegExp.toAutomaton high memory use
> --
>
> Key: LUCENE-6046
> URL: https://issues.apache.org/jira/browse/LUCENE-6046
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.1
>Reporter: Lee Hinman
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-6046.patch, LUCENE-6046.patch, LUCENE-6046.patch
>
>
> When creating an automaton from an org.apache.lucene.util.automaton.RegExp, 
> it's possible for the automaton to use so much memory it exceeds the maximum 
> array size for java.
> The following caused an OutOfMemoryError with a 32gb heap:
> {noformat}
> new 
> RegExp("\\[\\[(Datei|File|Bild|Image):[^]]*alt=[^]|}]{50,200}").toAutomaton();
> {noformat}
> When increased to a 60gb heap, the following exception is thrown:
> {noformat}
>   1> java.lang.IllegalArgumentException: requested array size 2147483624 
> exceeds maximum array in java (2147483623)
>   1> 
> __randomizedtesting.SeedInfo.seed([7BE81EF678615C32:95C8057A4ABA5B52]:0)
>   1> org.apache.lucene.util.ArrayUtil.oversize(ArrayUtil.java:168)
>   1> org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:295)
>   1> 
> org.apache.lucene.util.automaton.Automaton$Builder.addTransition(Automaton.java:639)
>   1> 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:741)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:62)
>   1> 
> org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:51)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:477)
>   1> org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
> {noformat}






[jira] [Updated] (SOLR-6690) Highlight expanded results

2014-11-06 Thread Simon Endele (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Endele updated SOLR-6690:
---
Priority: Major  (was: Minor)

> Highlight expanded results
> --
>
> Key: SOLR-6690
> URL: https://issues.apache.org/jira/browse/SOLR-6690
> Project: Solr
>  Issue Type: Wish
>  Components: highlighter
>Reporter: Simon Endele
>  Labels: expand, highlight
> Attachments: HighlightComponent.java.patch
>
>
> Is it possible to highlight documents in the "expand" section in the Solr 
> response?
> I'm aware that https://cwiki.apache.org/confluence/x/jiBqAg states:
> "All downstream components (faceting, highlighting, etc...) will work with 
> the collapsed result set."
> So I tried to put the highlight component after the expand component like 
> this:
> {code:xml}
>   query
>   facet
>   stats
>   debug
>   expand
>   highlight
> {code}
> But with no effect.
> Is there another switch that needs to be flipped or could this be implemented 
> easily?
> IMHO this is quite a common use case...






[jira] [Comment Edited] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2014-11-06 Thread Fredrik Rodland (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1426#comment-1426
 ] 

Fredrik Rodland edited comment on SOLR-4587 at 11/6/14 9:01 AM:


Sound good!

Having implemented a pretty large system for matching documents against queries 
(using elasticsearch to index the queries) we discovered very early that 
filtering the queries was an important requirement to get things running with 
acceptable performance. 

So I would add to your list of acceptance criteria that the request must 
support *fq* and that this is performed prior to the looping.  This would 
enable us to get a smaller list of queries to loop and thus reducing the time 
to complete the request.  For this to work queries also need to support 
filter-fields - i.e. regular solr fields in addition to the fq, q, defType, etc 
mentioned above.

For the record our system has ≈1mill queries, and we're matching ≈10 doc/s.  I 
believe that much of the job in luwak also comes from the realization that the 
number of documents must be reduced prior to looping.  I'm sure [~romseygeek] 
can elaborate on this as well.


was (Author: fmr):
Sound good!

Having implemented a pretty large system for matching documents against queries 
(using elasticsearch to index the queries) we discovered very early that 
filtering the queries was an important requirement to get things running with 
acceptable performance. 

So I would add to your list of acceptance criteria that the request must 
support *fq* and that this is performed prior to the looping.  This would 
enable us to get a smaller list of queries to loop and thus reducing the time 
to complete the request.  For this to work queries also need to support 
filter-fields - i.e. regular solr fields in addition to the fq, q, defType, etc 
mentioned above.

For the record our system has ≈1mill queries, and we're matching ≈10 doc/s.  I 
believe that much of the job in luwak also comes from the realization that the 
number of filters must be reduced prior to looping.  I'm sure [~romseygeek] can 
elaborate on this as well.

> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: Trunk
>
>
> Use Lucene MemoryIndex for this.






[jira] [Commented] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2014-11-06 Thread Fredrik Rodland (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1426#comment-1426
 ] 

Fredrik Rodland commented on SOLR-4587:
---

Sounds good!

Having implemented a pretty large system for matching documents against queries 
(using elasticsearch to index the queries), we discovered very early that 
filtering the queries was an important requirement to get things running with 
acceptable performance. 

So I would add to your list of acceptance criteria that the request must 
support *fq* and that this filtering is performed prior to the looping.  This 
would give us a smaller list of queries to loop over and thus reduce the time 
to complete the request.  For this to work, queries also need to support 
filter-fields - i.e. regular Solr fields in addition to the fq, q, defType, etc. 
mentioned above.

For the record, our system has ≈1 million queries and we're matching ≈10 docs/s.  
I believe that much of the work in luwak also comes from the realization that the 
number of filters must be reduced prior to looping.  I'm sure [~romseygeek] can 
elaborate on this as well.
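The "filter before looping" idea above can be sketched as follows. This is a minimal stand-in, assuming a single hypothetical "section" filter-field and plain predicates in place of real Lucene/Solr queries:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of percolator-style matching with a pre-filter: every saved query
// carries a cheap filter value, and an incoming document is matched only
// against queries whose filter accepts it.
public class SavedQueryMatcher {
    static final class SavedQuery {
        final String id;
        final String section;                        // fq-style filter-field
        final Predicate<Map<String, String>> query;  // the (expensive) full query
        SavedQuery(String id, String section, Predicate<Map<String, String>> query) {
            this.id = id;
            this.section = section;
            this.query = query;
        }
    }

    private final List<SavedQuery> saved = new ArrayList<>();

    public void register(SavedQuery q) {
        saved.add(q);
    }

    // Loop only over the queries that survive the filter step.
    public List<String> match(Map<String, String> doc) {
        List<String> hits = new ArrayList<>();
        for (SavedQuery q : saved) {
            if (!q.section.equals(doc.get("section"))) {
                continue;  // cheap pre-filter, analogous to fq
            }
            if (q.query.test(doc)) {
                hits.add(q.id);  // expensive full match only for survivors
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        SavedQueryMatcher matcher = new SavedQueryMatcher();
        matcher.register(new SavedQuery("q1", "news", d -> d.get("body").contains("solr")));
        matcher.register(new SavedQuery("q2", "sports", d -> d.get("body").contains("solr")));
        System.out.println(matcher.match(Map.of("section", "news", "body", "all about solr")));
    }
}
```

In a real implementation the pre-filter would be an indexed lookup over the stored queries (as luwak and the elasticsearch percolator do) rather than a linear scan with a skip, but the shape of the win is the same: fewer queries reach the expensive matching loop.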

> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: Trunk
>
>
> Use Lucene MemoryIndex for this.






Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b34) - Build # 11410 - Failure!

2014-11-06 Thread Paul Elschot
As I understood it, this was an svn eol-style problem, and I provided a git diff patch.
Is it possible to avoid this svn problem in a git patch?
Could ant precommit catch this on a git branch? I did not run it this time.

Regards,
Paul Elschot

On 5 november 2014 23:47:58 CET, Adrien Grand  wrote:
>Thanks Hoss!
>
>On Wed, Nov 5, 2014 at 10:35 PM, Chris Hostetter
> wrote:
>>
>> I fixed the svn:eol-style on trunk & 5x
>>
>> : Date: Wed, 5 Nov 2014 19:32:30 + (UTC)
>> : From: Policeman Jenkins Server 
>> : Reply-To: dev@lucene.apache.org
>> : To: sha...@apache.org, no...@apache.org, jpou...@apache.org,
>> : dev@lucene.apache.org
>> : Subject: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b34) -
>Build #
>> : 11410 - Failure!
>> :
>> : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11410/
>> : Java: 32bit/jdk1.9.0-ea-b34 -server -XX:+UseSerialGC (asserts:
>true)
>> :
>> : All tests passed
>> :
>> : Build Log:
>> : [...truncated 52180 lines...]
>> : BUILD FAILED
>> : /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:525: The
>following error occurred while executing this line:
>> : /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:432: The
>following error occurred while executing this line:
>> :
>/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:105:
>The following error occurred while executing this line:
>> :
>/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:204:
>The following files are missing svn:eol-style (or binary
>svn:mime-type):
>> : * ./lucene/core/src/test/org/apache/lucene/util/TestBitUtil.java
>> :
>> : Total time: 121 minutes 56 seconds
>> : Build step 'Invoke Ant' marked build as failure
>> : [description-setter] Description set: Java: 32bit/jdk1.9.0-ea-b34
>-server -XX:+UseSerialGC (asserts: true)
>> : Archiving artifacts
>> : Recording test results
>> : Email was triggered for: Failure - Any
>> : Sending email for trigger: Failure - Any
>> :
>> :
>> :
>>
>> -Hoss
>> http://www.lucidworks.com/
>
>
>
>-- 
>Adrien
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org