[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115837#comment-14115837
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1621389 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1621389 ]

LUCENE-5904: fix corruption case caused by virus checker after an unclean IW 
shutdown

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.
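The queue-and-retry behavior described above can be modeled with a small sketch. This is an illustrative stand-in, not Lucene's actual IndexFileDeleter; the class and member names below (PendingDeleter, the locked set) are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical model of the retry behavior described in the issue:
// a delete that fails (e.g. because a virus scanner briefly holds the
// file open on Windows) is queued and retried later instead of failing.
class PendingDeleter {
    private final Set<String> pending = new HashSet<>();
    private final Set<String> locked; // files some other process holds open

    PendingDeleter(Set<String> locked) {
        this.locked = locked;
    }

    // Try to delete now; if the file is held, queue it for a later retry.
    void delete(String file) {
        if (locked.contains(file)) {
            pending.add(file);
        }
        // else: deleted immediately (actual file I/O omitted in this sketch)
    }

    // Retry everything that previously failed; analogous in spirit to
    // the deletePendingFiles() mentioned above.
    List<String> deletePendingFiles() {
        List<String> deleted = new ArrayList<>();
        for (String file : new ArrayList<>(pending)) {
            if (!locked.contains(file)) {
                pending.remove(file);
                deleted.add(file);
            }
        }
        return deleted;
    }

    Set<String> pending() {
        return pending;
    }
}
```

The bug report's point is that code paths like CFS creation bypass such a queue, so a transiently undeletable file surfaces as a hard failure instead of a pending retry.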



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5596) OverseerTest.testOverseerFailure - leader node already exists.

2014-08-29 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-5596:
-

Assignee: Mark Miller  (was: Shalin Shekhar Mangar)

 OverseerTest.testOverseerFailure - leader node already exists.
 --

 Key: SOLR-5596
 URL: https://issues.apache.org/jira/browse/SOLR-5596
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.11


 Seeing this a bunch on jenkins - previous leader ephemeral node is still 
 around for some reason.






[jira] [Resolved] (SOLR-5596) OverseerTest.testOverseerFailure - leader node already exists.

2014-08-29 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5596.
---

   Resolution: Fixed
Fix Version/s: (was: 4.9)
   4.11

 OverseerTest.testOverseerFailure - leader node already exists.
 --

 Key: SOLR-5596
 URL: https://issues.apache.org/jira/browse/SOLR-5596
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.11


 Seeing this a bunch on jenkins - previous leader ephemeral node is still 
 around for some reason.






[jira] [Assigned] (LUCENE-5820) SuggestStopFilter should have a factory

2014-08-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-5820:
--

Assignee: Steve Rowe

 SuggestStopFilter should have a factory
 ---

 Key: LUCENE-5820
 URL: https://issues.apache.org/jira/browse/LUCENE-5820
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Varun Thacker
Assignee: Steve Rowe
Priority: Minor
 Attachments: LUCENE-5820.patch, LUCENE-5820.patch, LUCENE-5820.patch


 While trying to use the new Suggester in Solr I realized that 
 SuggestStopFilter did not have a factory. We should add one.
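Once such a factory exists, wiring it into a Solr analyzer would presumably look like the usual filter declaration. The factory class name and attribute below are assumptions for illustration; check the class actually shipped with the patch:

```xml
<fieldType name="text_suggest" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- hypothetical factory name and args -->
    <filter class="solr.SuggestStopFilterFactory" words="stopwords.txt"/>
  </analyzer>
</fieldType>
```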






[jira] [Assigned] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-08-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-5833:
--

Assignee: Steve Rowe

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Attachments: LUCENE-5833.patch, SOLR-6210.patch


 If you use a multiValued field in the new suggester, it will not pick up 
 terms from any value after the first one; it treats the first value as the 
 only one it builds its dictionary from.
 This is the suggester I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378






[jira] [Commented] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115901#comment-14115901
 ] 

ASF subversion and git services commented on LUCENE-5904:
-

Commit 1621392 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1621392 ]

LUCENE-5904: fix corruption case caused by virus checker after an unclean IW 
shutdown

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Resolved] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-29 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5904.


   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.






[jira] [Updated] (LUCENE-5909) Run smoketester on Java 8

2014-08-29 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5909:
---

Attachment: LUCENE-5909.patch

Another patch with a bunch of fixes.  Should actually run now.

 Run smoketester on Java 8
 -

 Key: LUCENE-5909
 URL: https://issues.apache.org/jira/browse/LUCENE-5909
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Ryan Ernst
  Labels: Java8
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5909.patch, LUCENE-5909.patch, LUCENE-5909.patch


 In the past, when we were on Java 6, we ran the Smoketester on Java 6 and 
 Java 7. As Java 8 is now officially released and supported, smoketester 
 should now use and require JAVA8_HOME.
 For the nightly-smoke tests I have to install the openjdk8 FreeBSD package, 
 but that should not be a problem.






[jira] [Commented] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-08-29 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115991#comment-14115991
 ] 

Xu Zhang commented on SOLR-6452:


Looks like this bug only affects multi-valued fields. 

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe

 StatsComponent can work with docValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be obtained from the 
 docValues too.






[jira] [Commented] (LUCENE-5914) More options for stored fields compression

2014-08-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116025#comment-14116025
 ] 

Erick Erickson commented on LUCENE-5914:


Haven't looked at the patch I confess, but is there a way to turn compression 
off completely? I've seen a few situations in the wild where 
compressing/decompressing is taking up large amounts of CPU.

FWIW,
Erick

 More options for stored fields compression
 --

 Key: LUCENE-5914
 URL: https://issues.apache.org/jira/browse/LUCENE-5914
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.11

 Attachments: LUCENE-5914.patch


 Since we added codec-level compression in Lucene 4.1, I have heard from about 
 as many users complaining that compression was too aggressive as complaining 
 that it was too light.
 I think this is because users do very different things with Lucene. For 
 example, if you have a small index that fits in the filesystem cache (or 
 close to it), you might never pay for actual disk seeks, and in that case the 
 fact that the current stored fields format needs to over-decompress data can 
 noticeably slow down cheap queries.
 On the other hand, it is more and more common to use Lucene for things like 
 log analytics, where you have huge amounts of data and don't care much about 
 stored fields performance. Yet it is very frustrating to notice that the data 
 you store takes several times less space when you gzip it than it does in 
 your index, although Lucene claims to compress stored fields.
 For that reason, I think it would be nice to have options to trade speed for 
 compression in the default codec.
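The speed-versus-ratio trade-off described in the issue can be seen with any general-purpose compressor. The sketch below uses the JDK's java.util.zip.Deflater purely as an illustration of such a knob; it is not Lucene's stored-fields API:

```java
import java.util.zip.Deflater;

// Illustration of the speed-vs-ratio knob using the JDK's Deflater.
// This is NOT Lucene's stored fields codec; it only demonstrates the
// kind of trade-off the issue proposes to expose as an option.
class CompressionTradeoff {
    // Compress the input at the given level and return the output size.
    static int compressedSize(byte[] input, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2 + 64]; // generous buffer
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(out);
        }
        deflater.end();
        return total;
    }
}
```

Deflater.BEST_SPEED (level 1) spends little CPU per byte while Deflater.BEST_COMPRESSION (level 9) searches harder for matches; a stored-fields option could expose an analogous choice.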






[jira] [Commented] (LUCENE-5914) More options for stored fields compression

2014-08-29 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116075#comment-14116075
 ] 

Shawn Heisey commented on LUCENE-5914:
--

bq. Haven't looked at the patch I confess, but is there a way to turn 
compression off completely?

+1.

From what I understand, this would be relatively straightforward in Lucene, 
where you can swap low-level components in and out pretty easily, but I'm 
really looking for that option to be user-configurable in Solr.  I know that 
will require a separate issue.  For my index, compression is a good thing, but 
like Erick, I've seen situations with Solr on the list and IRC where it really 
hurts some people.


 More options for stored fields compression
 --

 Key: LUCENE-5914
 URL: https://issues.apache.org/jira/browse/LUCENE-5914
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.11

 Attachments: LUCENE-5914.patch


 Since we added codec-level compression in Lucene 4.1, I have heard from about 
 as many users complaining that compression was too aggressive as complaining 
 that it was too light.
 I think this is because users do very different things with Lucene. For 
 example, if you have a small index that fits in the filesystem cache (or 
 close to it), you might never pay for actual disk seeks, and in that case the 
 fact that the current stored fields format needs to over-decompress data can 
 noticeably slow down cheap queries.
 On the other hand, it is more and more common to use Lucene for things like 
 log analytics, where you have huge amounts of data and don't care much about 
 stored fields performance. Yet it is very frustrating to notice that the data 
 you store takes several times less space when you gzip it than it does in 
 your index, although Lucene claims to compress stored fields.
 For that reason, I think it would be nice to have options to trade speed for 
 compression in the default codec.






[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116199#comment-14116199
 ] 

ASF subversion and git services commented on SOLR-6365:
---

Commit 1621414 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1621414 ]

SOLR-6365: specify appends, defaults, invariants outside of the request handler

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6365.patch


 The components are configured in solrconfig.xml mostly to specify these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 Example:
 {code:xml}
 <!-- use json for all paths and _txt as the default search field -->
 <paramSet name="global" path="/**">
   <lst name="defaults">
     <str name="wt">json</str>
     <str name="df">_txt</str>
   </lst>
 </paramSet>
 {code}
 Other examples:
 {code:xml}
 <paramSet name="a" path="/dump3,/root/*,/root1/**">
   <lst name="defaults">
     <str name="a">A</str>
   </lst>
   <lst name="invariants">
     <str name="b">B</str>
   </lst>
   <lst name="appends">
     <str name="c">C</str>
   </lst>
 </paramSet>
 <requestHandler name="/dump3" class="DumpRequestHandler"/>
 <requestHandler name="/dump4" class="DumpRequestHandler"/>
 <requestHandler name="/root/dump5" class="DumpRequestHandler"/>
 <requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
 <requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
 <requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
   <lst name="defaults">
     <str name="a">A1</str>
   </lst>
   <lst name="invariants">
     <str name="b">B1</str>
   </lst>
   <lst name="appends">
     <str name="c">C1</str>
   </lst>
 </requestHandler>
 {code}
 The idea is to use the parameters in the same format as we pass them in the 
 HTTP request, and eliminate specifying our default components in solrconfig.xml.
  






[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4819 - Failure

2014-08-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4819/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D5749907102D9F54]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D5749907102D9F54]:0)




Build Log:
[...truncated 12481 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest
   [junit4]   2> Creating dataDir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/solr-core/test/J0/./temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest-D5749907102D9F54-001/init-core-data-001
   [junit4]   2> 2785438 T6131 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2> 2785439 T6131 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2> 2785441 T6131 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 2785442 T6131 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2785443 T6132 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 2785543 T6131 oasc.ZkTestServer.run start zk server on 
port:16486
   [junit4]   2> 2785544 T6131 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2785545 T6131 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2785548 T6138 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@b912bd6 name:ZooKeeperConnection 
Watcher:127.0.0.1:16486 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 2785548 T6131 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2785548 T6131 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2785549 T6131 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2785551 T6131 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2785552 T6131 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2785553 T6140 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@24272dc0 
name:ZooKeeperConnection Watcher:127.0.0.1:16486/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2785553 T6131 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2785554 T6131 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2785554 T6131 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 2785556 T6131 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 2785557 T6131 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 2785558 T6131 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 2785560 T6131 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2785561 T6131 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 2785563 T6131 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2785564 T6131 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 2785666 T6131 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2785666 T6131 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2785668 T6131 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2785669 T6131 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 2785670 T6131 

[jira] [Commented] (LUCENE-5914) More options for stored fields compression

2014-08-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116233#comment-14116233
 ] 

Robert Muir commented on LUCENE-5914:
-

The patch adds conditional logic to the default codec, instead of different 
formats. Why this approach? 

 More options for stored fields compression
 --

 Key: LUCENE-5914
 URL: https://issues.apache.org/jira/browse/LUCENE-5914
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.11

 Attachments: LUCENE-5914.patch


 Since we added codec-level compression in Lucene 4.1, I have heard from about 
 as many users complaining that compression was too aggressive as complaining 
 that it was too light.
 I think this is because users do very different things with Lucene. For 
 example, if you have a small index that fits in the filesystem cache (or 
 close to it), you might never pay for actual disk seeks, and in that case the 
 fact that the current stored fields format needs to over-decompress data can 
 noticeably slow down cheap queries.
 On the other hand, it is more and more common to use Lucene for things like 
 log analytics, where you have huge amounts of data and don't care much about 
 stored fields performance. Yet it is very frustrating to notice that the data 
 you store takes several times less space when you gzip it than it does in 
 your index, although Lucene claims to compress stored fields.
 For that reason, I think it would be nice to have options to trade speed for 
 compression in the default codec.






[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93663 - Failure!

2014-08-29 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/93663/

3 tests failed.
REGRESSION:  
org.apache.lucene.codecs.lucene40.TestLucene40DocValuesFormat.testMergeStability

Error Message:
expected:<{gen=36, null=134, fdx=5962, fdt=774, fnm=207, cfs=8015, cfe=181}> 
but was:<{gen=36, null=134, fdx=5962, fdt=774, idx=1245, fnm=207, cfs=8015, 
cfe=181}>

Stack Trace:
java.lang.AssertionError: expected:<{gen=36, null=134, fdx=5962, fdt=774, 
fnm=207, cfs=8015, cfe=181}> but was:<{gen=36, null=134, fdx=5962, fdt=774, 
idx=1245, fnm=207, cfs=8015, cfe=181}>
at 
__randomizedtesting.SeedInfo.seed([E296A51CDF97FEF3:96DAE333D27DFC45]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testMergeStability(BaseIndexFileFormatTestCase.java:195)
at 
org.apache.lucene.index.BaseDocValuesFormatTestCase.testMergeStability(BaseDocValuesFormatTestCase.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)



[jira] [Commented] (LUCENE-5912) Non-NRT directory readers don't reuse segments maintained IndexWriter's segment reader pool

2014-08-29 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115043#comment-14115043
 ] 

Michael McCandless commented on LUCENE-5912:


Sheesh, so it is actually sharing SegmentReaders from the writer's pool?  Talk 
about confusing code ... none of us can figure out how it works.

Maybe we should add a simple test case confirming that readers are in fact 
shared, to be sure :)  I'll work on this.

 Non-NRT directory readers don't reuse segments maintained IndexWriter's 
 segment reader pool
 ---

 Key: LUCENE-5912
 URL: https://issues.apache.org/jira/browse/LUCENE-5912
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein

 Currently, if you attempt to open a reader into an index at a specific commit 
 point, it will always behave as though it's opening a completely new index - 
 even if one were to use the {{DirectoryReader.openIfChanged(DirectoryReader, 
 IndexCommit)}} API, and pass in an NRT reader instance. What should ideally 
 happen here is that the SegmentReader pool managed by IndexWriter linked to 
 the NRT reader gets reused for the commit point open as much as possible, to 
 avoid wasting heap space.
 The problem becomes evident when looking at the code in DirectoryReader:
 {code}
 protected DirectoryReader doOpenIfChanged(final IndexCommit commit) throws IOException {
   ensureOpen();

   // If we were obtained by writer.getReader(), re-ask the
   // writer to get a new reader.
   if (writer != null) {
     return doOpenFromWriter(commit);
   } else {
     return doOpenNoWriter(commit);
   }
 }

 private DirectoryReader doOpenFromWriter(IndexCommit commit) throws IOException {
   if (commit != null) {
     return doOpenFromCommit(commit);
   }
   ..
 {code}
 Looks like the fact that a commit point is being re-opened trumps the 
 presence of the associated IndexWriter.






[jira] [Commented] (LUCENE-5912) Non-NRT directory readers don't reuse segments maintained IndexWriter's segment reader pool

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115050#comment-14115050
 ] 

ASF subversion and git services commented on LUCENE-5912:
-

Commit 1621270 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1621270 ]

LUCENE-5912: add test case confirming openIfChanged with an NRT reader on a 
commit point in fact shares SegmentReader

 Non-NRT directory readers don't reuse segments maintained IndexWriter's 
 segment reader pool
 ---

 Key: LUCENE-5912
 URL: https://issues.apache.org/jira/browse/LUCENE-5912
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein







[jira] [Commented] (LUCENE-5912) Non-NRT directory readers don't reuse segments maintained in IndexWriter's segment reader pool

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115049#comment-14115049
 ] 

ASF subversion and git services commented on LUCENE-5912:
-

Commit 1621269 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1621269 ]

LUCENE-5912: add test case confirming openIfChanged with an NRT reader on a 
commit point in fact shares SegmentReader

 Non-NRT directory readers don't reuse segments maintained in IndexWriter's 
 segment reader pool
 ---

 Key: LUCENE-5912
 URL: https://issues.apache.org/jira/browse/LUCENE-5912
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein







[jira] [Updated] (LUCENE-5904) Add MDW.enableVirusScanner / fix windows handling bugs

2014-08-29 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5904:
---

Attachment: LUCENE-5904.patch

New patch, just adding a check & assert in IFD that it should never delete a 
pending file that has a non-zero refCount.

I think it's ready ... I'll commit later today ...

 Add MDW.enableVirusScanner / fix windows handling bugs
 --

 Key: LUCENE-5904
 URL: https://issues.apache.org/jira/browse/LUCENE-5904
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, 
 LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch, LUCENE-5904.patch


 IndexWriter has logic to handle the case where it can't delete a file (it 
 puts it in a retry list, and IndexFileDeleter will periodically retry; you can 
 force this retry with deletePendingFiles).
 But from what I can tell, this logic is incomplete, e.g. it's not properly 
 handled during CFS creation, so if a file temporarily can't be deleted, things 
 like flush will fail.
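
For readers unfamiliar with the pattern, the retry approach described above can be sketched in plain Java. This is an illustrative stand-in, not Lucene's IndexFileDeleter; the class and method names are invented. Deletes that fail, e.g. because a virus scanner transiently holds the file open on Windows, are queued and retried later rather than failing the surrounding operation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of a retry-list deleter: failed deletes are queued
// and retried, so a transiently locked file never fails the caller.
public class RetryingDeleter {
  private final Set<Path> pending = new LinkedHashSet<>();

  // Try to delete now; on failure, queue the file for a later retry.
  public void delete(Path path) {
    try {
      Files.deleteIfExists(path);
    } catch (IOException e) {
      // Another process may still hold the file open; retry later.
      pending.add(path);
    }
  }

  // Retry everything previously queued; analogous to what the issue
  // text calls deletePendingFiles.
  public void deletePending() {
    for (Iterator<Path> it = pending.iterator(); it.hasNext();) {
      Path p = it.next();
      try {
        Files.deleteIfExists(p);
        it.remove();
      } catch (IOException e) {
        // Still locked; keep it queued for the next retry.
      }
    }
  }

  public int pendingCount() { return pending.size(); }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("retry", ".tmp");
    RetryingDeleter d = new RetryingDeleter();
    d.delete(tmp);      // succeeds immediately when nothing holds the file
    d.deletePending();  // no-op if nothing is queued
    System.out.println("pending=" + d.pendingCount());
  }
}
```

The bug being fixed here is that some code paths (such as CFS creation) bypassed this queue and treated a failed delete as a hard error.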






[jira] [Commented] (SOLR-6450) Option to send pre-analyzed documents from leader to replica instead of replicas re-running analysis.

2014-08-29 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115174#comment-14115174
 ] 

Mark Miller commented on SOLR-6450:
---

It would almost be more interesting to work out sending the docs to replicas in 
parallel with indexing on the leader, instead of after.

 Option to send pre-analyzed documents from leader to replica instead of 
 replicas re-running analysis.
 -

 Key: SOLR-6450
 URL: https://issues.apache.org/jira/browse/SOLR-6450
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter

 Given that the leader has to run the full update processor chain on each document 
 (text analysis, etc.), it would be good to have it send a pre-analyzed 
 document to replicas (to improve near-realtime replication), allowing the 
 replica to avoid re-doing expensive work.
 Thought should be given to allowing the leader to accept pre-analyzed documents as 
 well, so that you could off-load document analysis to external processes. 
 For instance, have thousands of Storm workers doing the analysis and then 
 sending pre-analyzed documents to Solr.






[jira] [Commented] (SOLR-6450) Option to send pre-analyzed documents from leader to replica instead of replicas re-running analysis.

2014-08-29 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115192#comment-14115192
 ] 

Ramkumar Aiyengar commented on SOLR-6450:
-

bq. Although the idea has always sounded good, my un-tested guess has always 
been that serialization+un-serialization will generally be more expensive than 
just running analysis on the text again (which can be thought of as a 
serialized form of analyzed text). It certainly depends on the analysis being 
performed of course.

This in part might just be due to lack of a fast generic binary serializing 
mechanism. We do have javabin, but that's very limited and forces us to 
describe the data instead of using schemas and transferring that description 
only when needed. A broader (and obviously more expensive) idea might be to 
have an out-of-band (i.e. not using the servlet) streaming of binary serialized 
data (say using Avro). This might open up a lot of other possibilities for 
SolrCloud as such.

 Option to send pre-analyzed documents from leader to replica instead of 
 replicas re-running analysis.
 -

 Key: SOLR-6450
 URL: https://issues.apache.org/jira/browse/SOLR-6450
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter







[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 13755 - Failure!

2014-08-29 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/13755/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, 
startOffset=4,endOffset=2

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, startOffset=4,endOffset=2
at 
__randomizedtesting.SeedInfo.seed([E85F79F58144AC5F:8204C6E4D80A8CAC]:0)
at 
org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl.setOffset(OffsetAttributeImpl.java:45)
at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at 
org.apache.lucene.analysis.reverse.ReverseStringFilter.incrementToken(ReverseStringFilter.java:91)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:704)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:615)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:513)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1798 - Failure!

2014-08-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1798/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testTwoServers

Error Message:
IOException occured when talking to server at: https://127.0.0.1:52937/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:52937/solr
at 
__randomizedtesting.SeedInfo.seed([5A8BDCFD0B69BB9:A54213420BFDB599]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:562)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
at 
org.apache.solr.client.solrj.TestLBHttpSolrServer.addDocs(TestLBHttpSolrServer.java:115)
at 
org.apache.solr.client.solrj.TestLBHttpSolrServer.setUp(TestLBHttpSolrServer.java:98)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-4580) Support for protecting content in ZK

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115200#comment-14115200
 ] 

ASF subversion and git services commented on SOLR-4580:
---

Commit 1621294 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1621294 ]

SOLR-4580: Support for protecting content in ZooKeeper.

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch


 We want to protect content in ZooKeeper. 
 In order to run a CloudSolrServer in client-space you will have to open up 
 access to ZooKeeper from client-space. 
 If you do not trust persons or systems in client-space, you want to protect 
 ZooKeeper against malicious actions from client-space, e.g.:
 * Changing configuration
 * Trying to mess up the system by manipulating clusterstate
 * Adding a delete-collection job to be carried out by the Overseer
 * etc.
 Even if you do not open up ZooKeeper access to someone outside your secure 
 zone, you might want to protect ZooKeeper content from being manipulated by, 
 e.g.:
 * Malware that found its way into the secure zone
 * Other systems also using ZooKeeper
 * etc.






[jira] [Commented] (SOLR-4580) Support for protecting content in ZK

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115205#comment-14115205
 ] 

ASF subversion and git services commented on SOLR-4580:
---

Commit 1621295 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1621295 ]

SOLR-4580: Support for protecting content in ZooKeeper.

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Fix For: 5.0, 4.11

 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch








[jira] [Resolved] (SOLR-4580) Support for protecting content in ZK

2014-08-29 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4580.
---

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

Thanks Per!

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Fix For: 5.0, 4.11

 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch








[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk7) - Build # 11000 - Failure!

2014-08-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/11000/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([3D29A7E887ADDD9A]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:620)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:853)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.rest.TestManagedResourceStorage: 1) Thread[id=1680, 
name=coreZkRegister-743-thread-1, state=WAITING, 
group=TGRP-TestManagedResourceStorage] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:197) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2054)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:453) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1099)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) 
at java.lang.Thread.run(Thread.java:853)2) Thread[id=1674, 
name=OverseerHdfsCoreFailoverThread-92361197477494787-188.138.97.18:_-n_00,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
java.lang.Thread.sleep(Thread.java:977) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:136)
 at java.lang.Thread.run(Thread.java:853)3) Thread[id=1678, 
name=Thread-509, state=WAITING, group=TGRP-TestManagedResourceStorage] 
at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:172) at 
org.apache.solr.core.CloserThread.run(CoreContainer.java:905)4) 
Thread[id=1677, name=searcherExecutor-749-thread-1, state=WAITING, 
group=TGRP-TestManagedResourceStorage] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:197) at 

[jira] [Created] (SOLR-6451) SolrCore's logging of its directory factory implementation should give some more context.

2014-08-29 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6451:
-

 Summary: SolrCore's logging of its directory factory 
implementation should give some more context.
 Key: SOLR-6451
 URL: https://issues.apache.org/jira/browse/SOLR-6451
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Trivial


Rather than:

INFO  org.apache.solr.core.SolrCore  – solr.DirectoryFactory

should be something like:

INFO  org.apache.solr.core.SolrCore  – Instantiating DirectoryFactory 
solr.DirectoryFactory






[jira] [Created] (LUCENE-5913) Request dictionary corpus from Korean national institute of the Korean language

2014-08-29 Thread Sangwhan Moon (JIRA)
Sangwhan Moon created LUCENE-5913:
-

 Summary: Request dictionary corpus from Korean national institute 
of the Korean language
 Key: LUCENE-5913
 URL: https://issues.apache.org/jira/browse/LUCENE-5913
 Project: Lucene - Core
  Issue Type: Task
Reporter: Sangwhan Moon
Priority: Minor


The National Institute of the Korean Language is said to have a dictionary 
database of the modern Korean language available in the public domain.

This task covers requesting that database and seeing whether it is usable as a 
training set.






Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk7) - Build # 11000 - Failure!

2014-08-29 Thread Timothy Potter
This test doesn't even create any cores (other than what's created by
the parent class). Will AwaitFix it for now.


On Fri, Aug 29, 2014 at 6:58 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 name=OverseerHdfsCoreFailoverThread-92361197477494787-188.138.97.18:_-n_00,
  state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] 
 at java.lang.Thread.sleep(Native Method) at 
 java.lang.Thread.sleep(Thread.java:977) at 
 org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:136)
  at java.lang.Thread.run(Thread.java:853)3) Thread[id=1678, 
 name=Thread-509, state=WAITING, group=TGRP-TestManagedResourceStorage]
  at java.lang.Object.wait(Native Method) at 
 java.lang.Object.wait(Object.java:172) at 
 org.apache.solr.core.CloserThread.run(CoreContainer.java:905)4) 
 Thread[id=1677, 

[jira] [Created] (LUCENE-5914) More options for stored fields compression

2014-08-29 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5914:


 Summary: More options for stored fields compression
 Key: LUCENE-5914
 URL: https://issues.apache.org/jira/browse/LUCENE-5914
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.11


Since we added codec-level compression in Lucene 4.1, I think I have had about the
same number of users complain that compression is too aggressive as complain that
it is too light.

I think it is due to the fact that we have users that are doing very different
things with Lucene. For example, if you have a small index that fits in the
filesystem cache (or is close to), then you might never pay for actual disk
seeks, and in such a case the fact that the current stored fields format needs
to over-decompress data can noticeably slow down search on cheap queries.

On the other hand, it is more and more common to use Lucene for things like log 
analytics, and in that case you have huge amounts of data for which you don't 
care much about stored fields performance. However it is very frustrating to 
notice that the data that you store takes several times less space when you 
gzip it compared to your index although Lucene claims to compress stored fields.

For that reason, I think it would be nice to have some kind of option that
would allow trading speed for compression in the default codec.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6443) TestManagedResourceStorage fails on Jenkins with SolrCore.getOpenCount()==2

2014-08-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115455#comment-14115455
 ] 

ASF subversion and git services commented on SOLR-6443:
---

Commit 1621338 from [~thelabdude] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1621338 ]

SOLR-6443: Disable test in 4x branch until the leaking cores can be resolved in 
trunk.

 TestManagedResourceStorage fails on Jenkins with SolrCore.getOpenCount()==2
 ---

 Key: SOLR-6443
 URL: https://issues.apache.org/jira/browse/SOLR-6443
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Timothy Potter
Assignee: Timothy Potter

 FAILED:  
 junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage
 Error Message:
 SolrCore.getOpenCount()==2
 Stack Trace:
 java.lang.RuntimeException: SolrCore.getOpenCount()==2
 at __randomizedtesting.SeedInfo.seed([A491D1FD4CEF5EF8]:0)
 at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
 at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:620)
 at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
 at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:484)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk7) - Build # 11000 - Failure!

2014-08-29 Thread Mark Miller
I’ll take a look at it.

- Mark

http://about.me/markrmiller

 On Aug 29, 2014, at 12:41 PM, Timothy Potter thelabd...@gmail.com wrote:
 
 This test doesn't even create any cores (other than what's created by
 the parent class). Will AwaitFix it for now.
 
 
 On Fri, Aug 29, 2014 at 6:58 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/11000/
 Java: 32bit/ibm-j9-jdk7 
 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
 
 3 tests failed.
 FAILED:  
 junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage
 
 Error Message:
 SolrCore.getOpenCount()==2
 
 Stack Trace:
 java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([3D29A7E887ADDD9A]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:620)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:853)
 
 
 FAILED:  
 junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage
 
 Error Message:
 5 threads leaked from SUITE scope at 
 org.apache.solr.rest.TestManagedResourceStorage: 1) Thread[id=1680, 
 name=coreZkRegister-743-thread-1, state=WAITING, 
 group=TGRP-TestManagedResourceStorage] at 
 sun.misc.Unsafe.park(Native Method) at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:197) at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2054)
  at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:453)  
at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1099)
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.lang.Thread.run(Thread.java:853)2) Thread[id=1674, 
 name=OverseerHdfsCoreFailoverThread-92361197477494787-188.138.97.18:_-n_00,
  state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] 
 at java.lang.Thread.sleep(Native Method) at 
 java.lang.Thread.sleep(Thread.java:977) at 
 org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:136)
  at java.lang.Thread.run(Thread.java:853)3) Thread[id=1678, 
 name=Thread-509, state=WAITING, group=TGRP-TestManagedResourceStorage]   
   at java.lang.Object.wait(Native Method) at 
 

[jira] [Commented] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115472#comment-14115472
 ] 

Tomás Fernández Löbbe commented on SOLR-6024:
-

bq. isn't that a bug then?
I think it is. I'd tackle that in a different Jira: it has a workaround (use 
indexed=true) and it reproduces in trunk (unlike the issue described here). 
After a quick look, I think it would be easy to count the missing values for 
the complete field, but not trivial to count the missing values per distinct 
value when using stats.facet.

 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, 
 SOLR-6024-trunk.patch, SOLR-6024.patch, SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws:
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1621340 - in /lucene/dev/trunk: build.xml dev-tools/scripts/smokeTestRelease.py

2014-08-29 Thread Uwe Schindler
Thanks! Unfortunately FreeBSD Jenkins cannot handle Java 8 at the moment.

I'll work on it the following week. It looks like a problem with socket I/O, 
causing SIGSEGV and SIGBUS.

I was chatting with Ryan via Hangouts; I think we should:
- use the default JAVA_HOME as the basis and check that it is Java 7, otherwise 
fail to run. That way we don't need to pass the crazy JAVA7_HOME env var; since 
we have argparse, passing non-standard env vars looks wrong.
- if you want Java 8 testing, you can optionally pass --test-java8 
/path/to/jdk1.8.0: if this is given, the path is checked to be Java 8, 
otherwise fail to run.

By default it only tests Java 7.

For nightly smoke I will change the logic.
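The proposed interface can be sketched with argparse. The flag name and the Java 7/Java 8 checks come from the proposal above; the function name, the error messages, and everything else are a hypothetical illustration, not the real smokeTestRelease.py:

```python
import argparse
import sys


def parse_smoke_args(argv, environ):
    """Parse smoke-tester options per the proposal: Java 7 comes from
    JAVA_HOME, Java 8 testing is opt-in via --test-java8."""
    parser = argparse.ArgumentParser(description="smoke test a release")
    parser.add_argument("--test-java8", metavar="JAVA8_HOME", default=None,
                        help="optionally also run the tests with this Java 8 JDK")
    args = parser.parse_args(argv)
    java7_home = environ.get("JAVA_HOME")
    if not java7_home:
        # fail to run when the default JAVA_HOME is missing
        sys.exit("JAVA_HOME must be set (and point to a Java 7 JDK)")
    # a real script would now run 'java -version' for each home and fail
    # unless the versions are exactly 1.7 and (if given) 1.8
    return java7_home, args.test_java8
```

By default only the JAVA_HOME JDK is exercised; passing `--test-java8 /path/to/jdk1.8.0` opts in to the extra run.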

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: rjer...@apache.org [mailto:rjer...@apache.org]
 Sent: Friday, August 29, 2014 6:56 PM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1621340 - in /lucene/dev/trunk: build.xml dev-
 tools/scripts/smokeTestRelease.py
 
 Author: rjernst
 Date: Fri Aug 29 16:55:31 2014
 New Revision: 1621340
 
 URL: http://svn.apache.org/r1621340
 Log:
 Revert smoke test update to java 8 for now
 
 Modified:
 lucene/dev/trunk/build.xml
 lucene/dev/trunk/dev-tools/scripts/smokeTestRelease.py
 
 Modified: lucene/dev/trunk/build.xml
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/build.xml?rev=1621340r1
 =1621339r2=1621340view=diff
 ==
 
 --- lucene/dev/trunk/build.xml (original)
 +++ lucene/dev/trunk/build.xml Fri Aug 29 16:55:31 2014
 @@ -381,14 +381,9 @@ File | Project Structure | Platform Sett
    <target name="-env-JAVA7_HOME" depends="-load-env" if="env.JAVA7_HOME">
      <property name="JAVA7_HOME" value="${env.JAVA7_HOME}"/>
    </target>
 -
 -  <target name="-env-JAVA8_HOME" depends="-load-env" if="env.JAVA8_HOME">
 -    <property name="JAVA8_HOME" value="${env.JAVA8_HOME}"/>
 -  </target>
 
 -  <target name="nightly-smoke" description="Builds an unsigned release and smoke tests it" depends="clean,-env-JAVA7_HOME,-env-JAVA8_HOME">
 +  <target name="nightly-smoke" description="Builds an unsigned release and smoke tests it" depends="clean,-env-JAVA7_HOME">
      <fail unless="JAVA7_HOME">JAVA7_HOME property or environment variable is not defined.</fail>
 -    <fail unless="JAVA8_HOME">JAVA8_HOME property or environment variable is not defined.</fail>
      <exec executable="${python32.exe}" failonerror="true">
        <arg value="-V"/>
      </exec>
 @@ -420,7 +415,6 @@ File | Project Structure | Platform Sett
        <arg value="${fakeRelease.uri}"/>
        <arg value="${smokeTestRelease.testArgs}"/>
        <env key="JAVA7_HOME" file="${JAVA7_HOME}"/>
 -      <env key="JAVA8_HOME" file="${JAVA8_HOME}"/>
      </exec>
      <delete dir="${fakeRelease}"/>
      <delete dir="${fakeReleaseTmp}"/>
 
 Modified: lucene/dev/trunk/dev-tools/scripts/smokeTestRelease.py
 URL: http://svn.apache.org/viewvc/lucene/dev/trunk/dev-
 tools/scripts/smokeTestRelease.py?rev=1621340r1=1621339r2=1621340
 view=diff
 ==
 
 --- lucene/dev/trunk/dev-tools/scripts/smokeTestRelease.py (original)
 +++ lucene/dev/trunk/dev-tools/scripts/smokeTestRelease.py Fri Aug 29
 +++ 16:55:31 2014
 @@ -63,8 +63,6 @@ def unshortenURL(url):
  def javaExe(version):
if version == '1.7':
  path = JAVA7_HOME
 -  elif version == '1.8':
 -path = JAVA8_HOME
else:
   raise RuntimeError("unknown Java version '%s'" % version)
if cygwin:
 @@ -83,14 +81,8 @@ try:
  except KeyError:
    raise RuntimeError('please set JAVA7_HOME in the env before running smokeTestRelease')
  print('JAVA7_HOME is %s' % JAVA7_HOME)
 -try:
 -  JAVA8_HOME = env['JAVA8_HOME']
 -except KeyError:
 -  raise RuntimeError('please set JAVA7_HOME in the env before running smokeTestRelease')
 -print('JAVA8_HOME is %s' % JAVA7_HOME)
 
  verifyJavaVersion('1.7')
 -verifyJavaVersion('1.8')
 
  # TODO
  #   + verify KEYS contains key that signed the release
 @@ -747,21 +739,12 @@ def verifyUnpacked(project, artifact, un
run('%s; ant javadocs' % javaExe('1.7'), '%s/javadocs.log' % 
 unpackPath)
checkJavadocpathFull('%s/build/docs' % unpackPath)
 
 -  print("run tests w/ Java 8 and testArgs='%s'..." % testArgs)
 -  run('%s; ant clean test %s' % (javaExe('1.8'), testArgs), 
 '%s/test.log' %
 unpackPath)
 -  run('%s; ant jar' % javaExe('1.8'), '%s/compile.log' % unpackPath)
 -  testDemo(isSrc, version, '1.8')
 -
 -  print('generate javadocs w/ Java 8...')
 -  run('%s; ant javadocs' % javaExe('1.8'), '%s/javadocs.log' % 
 unpackPath)
 -  checkJavadocpathFull('%s/build/docs' % unpackPath)
 -
  else:
os.chdir('solr')
 
    print("run tests w/ Java 7 and testArgs='%s'..." % testArgs)
run('%s; ant clean test -Dtests.slow=false %s' % (javaExe('1.7'), 
 testArgs),
 '%s/test.log' % unpackPath)
 -
 +
# test javadocs
print('

[jira] [Commented] (LUCENE-5909) Run smoketester on Java 8

2014-08-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115488#comment-14115488
 ] 

Uwe Schindler commented on LUCENE-5909:
---

I was chatting with Ryan via Hangouts; I think we should:
- use the default JAVA_HOME as the basis and check that it is Java 7, otherwise 
fail to run. That way we don't need to pass the crazy JAVA7_HOME env var; since 
we have argparse, passing non-standard env vars looks wrong.
- if you want Java 8 testing, you can optionally pass --test-java8 
/path/to/jdk1.8.0: if this is given, the path is checked to be Java 8, 
otherwise fail to run.

By default it only tests Java 7.

For the nightly-smoke ANT task, I will change the logic.


 Run smoketester on Java 8
 -

 Key: LUCENE-5909
 URL: https://issues.apache.org/jira/browse/LUCENE-5909
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
  Labels: Java8
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5909.patch


 In the past, when we were on Java 6, we ran the Smoketester on Java 6 and 
 Java 7. As Java 8 is now officially released and supported, smoketester 
 should now use and require JAVA8_HOME.
 For the nightly-smoke tests I have to install the openjdk8 FreeBSD package, 
 but that should not be a problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5909) Run smoketester on Java 8

2014-08-29 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5909:
--

Assignee: Ryan Ernst

 Run smoketester on Java 8
 -

 Key: LUCENE-5909
 URL: https://issues.apache.org/jira/browse/LUCENE-5909
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Ryan Ernst
  Labels: Java8
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5909.patch


 In the past, when we were on Java 6, we ran the Smoketester on Java 6 and 
 Java 7. As Java 8 is now officially released and supported, smoketester 
 should now use and require JAVA8_HOME.
 For the nightly-smoke tests I have to install the openjdk8 FreeBSD package, 
 but that should not be a problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-5046) Explore preset dictionaries for CompressingStoredFieldsFormat

2014-08-29 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-5046.


Resolution: Duplicate

I'm closing this issue in favor of LUCENE-5914, which uses shared dictionaries 
to make decompression faster.

 Explore preset dictionaries for CompressingStoredFieldsFormat
 -

 Key: LUCENE-5046
 URL: https://issues.apache.org/jira/browse/LUCENE-5046
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial

 I discussed this possible improvement with Stefan Pohl and Andrzej Białecki 
 at Berlin Buzzwords: By having preset dictionaries (which could be 
 user-provided and/or computed on a per-block basis), decompression could be 
 faster since we would never have to decompress several documents from a block 
 in order to access a single document.
 One drawback is that it would require putting some boundaries in the 
 compressed stream, so it would maybe decrease a little the compression ratio. 
 But then if decompression is faster, we could also afford larger blocks, so I 
 think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5914) More options for stored fields compression

2014-08-29 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5914:
-

Attachment: LUCENE-5914.patch

What I have been thinking about is to provide two options for compression (we 
could have more than two, but that would make things more complicated 
backward-compatibility-wise):
 - one option that focuses on search speed,
 - one option that focuses on compression.

Here is how the current patch tries to address these requirements:

1. For high compression, documents are grouped into blocks of 16KB (pretty much 
like today), but instead of being compressed with LZ4 they are compressed with 
deflate at a low compression level (3, which is the highest level that doesn't 
use lazy match evaluation; I think it is a good trade-off for our stored 
fields).

If you want to decompress a document, you need to decompress the whole block.
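As a rough sketch of this first option (the 16KB grouping and level 3 come from the description above; the helper functions are illustrative, not the patch's actual code), here is the whole-block scheme using Python's zlib in place of Java's Deflater:

```python
import zlib


def compress_block(docs):
    """Concatenate one block's documents and deflate them at level 3."""
    data = b"".join(docs)
    return zlib.compress(data, 3), [len(d) for d in docs]


def read_doc(compressed, lengths, i):
    """Reading any single document means inflating the whole block first."""
    data = zlib.decompress(compressed)
    start = sum(lengths[:i])
    return data[start:start + lengths[i]]
```

The per-document offsets (here, the `lengths` list) are what lets the reader slice out one document once the block is inflated.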

2. For better search speed, documents are compressed individually with lz4. In 
order to keep the compression ratio good enough, documents are still grouped 
into blocks, and the data that results from the compression of the previous 
documents in the block is used as a dictionary to compress the current 
document.

When you want to decompress, you can decompress a single document at a time; 
all you need is a buffer that stores the compressed representation of the 
previous documents in the block, so that the decompression routine can make 
references to it.
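The dictionary trick can be sketched with zlib's preset-dictionary support. Note the assumptions: the actual patch uses LZ4 and feeds the *compressed* bytes of the earlier documents as the dictionary, whereas this simplified stdlib sketch uses deflate and the earlier plain documents:

```python
import zlib

WINDOW = 32 * 1024  # deflate can only reference the most recent 32KB


def compress_docs(docs):
    """Compress each document individually, seeding the compressor with
    the block's preceding documents as a preset dictionary."""
    out, history = [], b""
    for doc in docs:
        if history:
            c = zlib.compressobj(level=1, zdict=history[-WINDOW:])
        else:
            c = zlib.compressobj(level=1)
        out.append(c.compress(doc) + c.flush())
        history += doc
    return out


def decompress_doc(compressed, prior_docs):
    """A single document can be inflated on its own, given the same dictionary."""
    history = b"".join(prior_docs)
    if history:
        d = zlib.decompressobj(zdict=history[-WINDOW:])
    else:
        d = zlib.decompressobj()
    return d.decompress(compressed)
```

The point of the scheme survives the substitution: each document is an independent compressed stream, so no neighboring document ever has to be decompressed, yet repetition across documents in the block still pays off.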

In both cases, I tried to implement it in such a way that it is not required to 
override the default bulk merge API in order to get good merging performance: 
the readers keep some state that allows them to read documents sequentially. 
This should also help operations like index exports, since they get much better 
performance when iterating over documents in order.

The patch is not ready yet: it is too light on tests for me to sleep quietly, 
and it is quite inefficient on large documents. For now I'm just sharing it to 
get some feedback. :-)

For the shared dictionary logic, I looked at other approaches that didn't work 
out well:
 - trying to compute a shared dictionary happens to be very costly, since you 
would need to compute the longest common subsequences shared across documents 
in a block. That is why I ended up using the compressed documents as a 
dictionary: it requires neither additional CPU nor space, while working quite 
efficiently thanks to the way lz4 works.
 - I tried to see what can be done with shared dictionaries and DEFLATE, but 
the dictionaries are only used for the LZ77 part of the algorithm, not for the 
Huffman coding, so it was not really helpful in our case.
 - I tried to see if we could compute a Huffman dictionary per block and use it 
to compress all documents individually (something the deflate API doesn't 
allow), but it was very slow (probably for two reasons: 1. I wanted to keep it 
simple, and 2. Java is not as efficient as C for that kind of thing).
 - I also played with the ng2 and ng3 algorithms described in 
http://openproceedings.org/EDBT/2014/paper_25.pdf but it was significantly 
slower than lz4 (because lz4 can process large sequences of bytes at a time, 
while these formats only work on 2 or 3 bytes at a time).

 More options for stored fields compression
 --

 Key: LUCENE-5914
 URL: https://issues.apache.org/jira/browse/LUCENE-5914
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.11

 Attachments: LUCENE-5914.patch


 Since we added codec-level compression in Lucene 4.1 I think I got about the 
 same amount of users complaining that compression was too aggressive and that 
 compression was too light.
 I think it is due to the fact that we have users that are doing very 
 different things with Lucene. For example if you have a small index that fits 
 in the filesystem cache (or is close to), then you might never pay for actual 
 disk seeks and in such a case the fact that the current stored fields format 
 needs to over-decompress data can sensibly slow search down on cheap queries.
 On the other hand, it is more and more common to use Lucene for things like 
 log analytics, and in that case you have huge amounts of data for which you 
 don't care much about stored fields performance. However it is very 
 frustrating to notice that the data that you store takes several times less 
 space when you gzip it compared to your index although Lucene claims to 
 compress stored fields.
 For that reason, I think it would be nice to have some kind of options that 
 would allow to trade speed for compression in the default codec.



--
This message was sent by Atlassian JIRA

[jira] [Created] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-08-29 Thread JIRA
Tomás Fernández Löbbe created SOLR-6452:
---

 Summary: StatsComponent missing stat won't work with 
docValues=true and indexed=false
 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe


StatsComponent can work with docValues, but it still requires indexed=true for 
the missing stat to work. Missing values should be obtained from the docValues 
too.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115542#comment-14115542
 ] 

Tomás Fernández Löbbe commented on SOLR-6024:
-

I created SOLR-6452 for the bug with the missing stat

 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, 
 SOLR-6024-trunk.patch, SOLR-6024.patch, SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws:
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6024:


Attachment: SOLR-6024-trunk.patch

Added preconditions to the new test. 
Test for docValues=true && indexed=false (commented out for now)

 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, 
 SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024.patch, SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws:
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.init(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}
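For reference, the statistics being computed here are simple aggregates over every value of a multi-valued field. A minimal plain-Java sketch of that aggregation (illustrative only; this is not Solr's actual StatsComponent code):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only: the aggregates StatsComponent reports (min, max, sum,
// count) over a multi-valued numeric field -- every value of every document
// contributes, not just the first one.
public class MultiValuedStats {
    public static double[] stats(List<long[]> docs) {
        long count = 0;
        double sum = 0;
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (long[] values : docs) {   // one array of field values per document
            for (long v : values) {
                count++;
                sum += v;
                min = Math.min(min, v);
                max = Math.max(max, v);
            }
        }
        return new double[] { min, max, sum, count };
    }

    public static void main(String[] args) {
        List<long[]> popularity = Arrays.asList(new long[] {1, 5}, new long[] {3});
        System.out.println(Arrays.toString(stats(popularity))); // prints [1.0, 5.0, 9.0, 3.0]
    }
}
```

The bug report above is about the value-iteration step: for a SORTED_SET docValues field, the values have to come from the docValues API rather than from UnInvertedField.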






[GitHub] lucene-solr pull request: Stop attempting to check Overseer leader...

2014-08-29 Thread andyetitmoves
GitHub user andyetitmoves opened a pull request:

https://github.com/apache/lucene-solr/pull/89

Stop attempting to check Overseer leadership on exit

Patch for SOLR-6453

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr stop-exception-exit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/89.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #89


commit bb3e78b402e0d316cee6ec09494e28db02e2743c
Author: Ramkumar Aiyengar raiyen...@bloomberg.net
Date:   2014-08-29T18:59:25Z

Stop attempting to check Overseer leadership on exit




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Created] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-08-29 Thread Ramkumar Aiyengar (JIRA)
Ramkumar Aiyengar created SOLR-6453:
---

 Summary: Stop throwing an error message from Overseer on Solr exit
 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Priority: Minor


SOLR-5859 adds a leadership check every time the Overseer exits its loop. However, 
this also gets triggered when Solr really is exiting, causing a spurious error. 
Here's a one-liner to stop that from happening.






[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115678#comment-14115678
 ] 

ASF GitHub Bot commented on SOLR-6453:
--

GitHub user andyetitmoves opened a pull request:

https://github.com/apache/lucene-solr/pull/89

Stop attempting to check Overseer leadership on exit

Patch for SOLR-6453

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr stop-exception-exit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/89.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #89


commit bb3e78b402e0d316cee6ec09494e28db02e2743c
Author: Ramkumar Aiyengar raiyen...@bloomberg.net
Date:   2014-08-29T18:59:25Z

Stop attempting to check Overseer leadership on exit




 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Priority: Minor

 SOLR-5859 adds a leadership check every time the Overseer exits its loop. 
 However, this also gets triggered when Solr really is exiting, causing a 
 spurious error. Here's a one-liner to stop that from happening.






[jira] [Created] (SOLR-6454) Suppress EOFExceptions in SolrDispatchFilter

2014-08-29 Thread Ramkumar Aiyengar (JIRA)
Ramkumar Aiyengar created SOLR-6454:
---

 Summary: Suppress EOFExceptions in SolrDispatchFilter
 Key: SOLR-6454
 URL: https://issues.apache.org/jira/browse/SOLR-6454
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Priority: Minor


Suppress {{EOFException}}s in {{SolrDispatchFilter}}; these just mean we are 
shutting down or the client has closed the connection, yet currently we flag 
them as errors in the log.
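A minimal sketch of the proposed behaviour (a hypothetical helper, not SolrDispatchFilter's actual code): treat an {{EOFException}} raised while writing the response as a client disconnect rather than an error.

```java
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical helper (not SolrDispatchFilter's actual code): an EOFException
// while writing the response means the client went away or we are shutting
// down, so swallow it instead of logging an error.
public class QuietWriter {
    /** Returns true if the body was written, false if the client disconnected. */
    public static boolean writeResponse(OutputStream out, byte[] body) throws IOException {
        try {
            out.write(body);
            return true;
        } catch (EOFException e) {
            // Client closed the connection: expected during shutdown, not an error.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        OutputStream closed = new OutputStream() {
            @Override public void write(int b) throws IOException {
                throw new EOFException("client disconnected");
            }
        };
        System.out.println(writeResponse(closed, "hello".getBytes()));                      // prints false
        System.out.println(writeResponse(new ByteArrayOutputStream(), "hello".getBytes())); // prints true
    }
}
```

Other {{IOException}}s would still propagate, so genuine write failures keep being logged.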






[GitHub] lucene-solr pull request: Suppress EofExceptions happening when wr...

2014-08-29 Thread andyetitmoves
GitHub user andyetitmoves opened a pull request:

https://github.com/apache/lucene-solr/pull/90

Suppress EofExceptions happening when writing responses

Patch for SOLR-6454

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-suppress-eofe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/90.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #90


commit 293e34d7c41ce2be36736dfcc85c62c041c7f4e5
Author: Ramkumar Aiyengar andyetitmo...@gmail.com
Date:   2014-08-25T12:49:21Z

Suppress EofExceptions happening when writing responses








[jira] [Commented] (SOLR-6454) Suppress EOFExceptions in SolrDispatchFilter

2014-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115689#comment-14115689
 ] 

ASF GitHub Bot commented on SOLR-6454:
--

GitHub user andyetitmoves opened a pull request:

https://github.com/apache/lucene-solr/pull/90

Suppress EofExceptions happening when writing responses

Patch for SOLR-6454

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-suppress-eofe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/90.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #90


commit 293e34d7c41ce2be36736dfcc85c62c041c7f4e5
Author: Ramkumar Aiyengar andyetitmo...@gmail.com
Date:   2014-08-25T12:49:21Z

Suppress EofExceptions happening when writing responses




 Suppress EOFExceptions in SolrDispatchFilter
 

 Key: SOLR-6454
 URL: https://issues.apache.org/jira/browse/SOLR-6454
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Priority: Minor

 Suppress {{EOFException}}s in {{SolrDispatchFilter}}; these just mean we are 
 shutting down or the client has closed the connection, yet currently we flag 
 them as errors in the log.






[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Description: 
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components and control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request and eliminate specifying our default components in solrconfig.xml

 

  was:
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components and control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
<!-- use json for all paths and _txt as the default search field -->
<params path="/**" defaults="wt=json&df=_txt"/>
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request and eliminate specifying our default components in solrconfig.xml

 


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6365.patch


 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- use json for all paths and _txt as the default search field -->
 <paramSet name="global" path="/**">
   <lst name="defaults">
     <str name="wt">json</str>
     <str name="df">_txt</str>
   </lst>
 </paramSet>
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml
  






[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109859#comment-14109859
 ] 

Noble Paul edited comment on SOLR-6365 at 8/29/14 7:22 PM:
---

I'm going with the legacy solr way of doing this



{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}


The feature is more important than the syntax itself


was (Author: noble.paul):
I'm going with the legacy solr way of doing this



{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet id="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}


The feature is more important than the syntax itself

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6365.patch


 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt"/>
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml
  






[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Attachment: SOLR-6365.patch

Fix with test cases. I plan to commit this soon.

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6365.patch


 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt"/>
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml
  






[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Description: 
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components and control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}

other examples

{code:xml}
<paramSet name="a" path="/dump3,/root/*,/root1/**">
  <lst name="defaults">
    <str name="a">A</str>
  </lst>
  <lst name="invariants">
    <str name="b">B</str>
  </lst>
  <lst name="appends">
    <str name="c">C</str>
  </lst>
</paramSet>
<requestHandler name="/dump3" class="DumpRequestHandler"/>
<requestHandler name="/dump4" class="DumpRequestHandler"/>
<requestHandler name="/root/dump5" class="DumpRequestHandler"/>
<requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
<requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
<requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
  <lst name="defaults">
    <str name="a">A1</str>
  </lst>
  <lst name="invariants">
    <str name="b">B1</str>
  </lst>
  <lst name="appends">
    <str name="c">C1</str>
  </lst>
</requestHandler>
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request and eliminate specifying our default components in solrconfig.xml
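The path specs in the example use {{/*}} for exactly one extra path segment and {{/**}} for any depth, and a spec may list several comma-separated paths. A minimal plain-Java sketch of such matching (illustrative only; the class and its regex translation are hypothetical, not Solr's matching code):

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the path semantics above: "/*" matches exactly one
// extra path segment, "/**" matches any depth, and a spec may list several
// comma-separated paths. Illustrative only -- not Solr's matching code.
public class PathGlob {
    public static boolean matches(String spec, String path) {
        for (String p : spec.split(",")) {
            String regex = p
                .replace("/**", "(/.*)?")   // any depth below the prefix
                .replace("/*", "/[^/]+");   // exactly one more segment
            if (Pattern.matches(regex, path)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String spec = "/dump3,/root/*,/root1/**";
        System.out.println(matches(spec, "/root/dump5"));               // prints true
        System.out.println(matches(spec, "/root1/anotherlevel/dump6")); // prints true
        System.out.println(matches(spec, "/root/a/b"));                 // prints false ("/*" is one level)
    }
}
```

Under these semantics, the paramSet "a" above would apply to /dump3, /root/dump5, and /root1/anotherlevel/dump6, but not to /dump4.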

 

  was:
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components and control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request and eliminate specifying our default components in solrconfig.xml

 


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6365.patch


 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- use json for all paths and _txt as the default search field -->
 <paramSet name="global" path="/**">
   <lst name="defaults">
     <str name="wt">json</str>
     <str name="df">_txt</str>
   </lst>
 </paramSet>
 {code}
 other examples
 {code:xml}
 <paramSet name="a" path="/dump3,/root/*,/root1/**">
   <lst name="defaults">
     <str name="a">A</str>
   </lst>
   <lst name="invariants">
     <str name="b">B</str>
   </lst>
   <lst name="appends">
     <str name="c">C</str>
   </lst>
 </paramSet>
 <requestHandler name="/dump3" class="DumpRequestHandler"/>
 <requestHandler name="/dump4" class="DumpRequestHandler"/>
 <requestHandler name="/root/dump5" class="DumpRequestHandler"/>
 <requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
 <requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
 <requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
   <lst name="defaults">
     <str name="a">A1</str>
   </lst>
   <lst name="invariants">
     <str name="b">B1</str>
   </lst>
   <lst name="appends">
     <str name="c">C1</str>
   </lst>
 </requestHandler>
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml
  






[jira] [Updated] (LUCENE-5909) Run smoketester on Java 8

2014-08-29 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5909:
---

Attachment: LUCENE-5909.patch

Here is a patch that adds --test-java8 as described above by Uwe.  It omits 
build.xml changes, which still need to be worked on to only pass --test-java8 
for nightly-smoke when appropriate.

I'm running the smoke tester now to check it still works...

 Run smoketester on Java 8
 -

 Key: LUCENE-5909
 URL: https://issues.apache.org/jira/browse/LUCENE-5909
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Ryan Ernst
  Labels: Java8
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5909.patch, LUCENE-5909.patch


 In the past, when we were on Java 6, we ran the Smoketester on Java 6 and 
 Java 7. As Java 8 is now officially released and supported, smoketester 
 should now use and require JAVA8_HOME.
 For the nightly-smoke tests I have to install the openjdk8 FreeBSD package, 
 but that should not be a problem.






Solr Ref Guide: access to intermediate HTML from a PDF export

2014-08-29 Thread Steve Rowe
I talked to Gavin McDonald and Tony Stevenson on HipChat today about access to 
the intermediate HTML produced during PDF export of the Solr Ref Guide.

Access to the intermediate HTML would be useful for two things: troubleshooting 
CSS issues, and exploring an alternative PDF conversion mechanism.

Gavin helped track down where it’s stored (in a different place than described 
in the Confluence docs), e.g.:

/x1/cwiki/confluence-data/temp/htmlexport-20140829-193431-10839/export-intermediate-193431-10840.html

I’ve made an INFRA JIRA to provide regular access to these, and Gavin has 
suggested that he’ll set up a cron job to copy them to a better place than the 
temp dir, to be served by a web server:

https://issues.apache.org/jira/browse/INFRA-8262

Gavin sent me the intermediate HTML for a PDF export I did about an hour ago, 
and I can send it to anybody who wants it.

Steve
www.lucidworks.com



[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.8.0_20) - Build # 4186 - Failure!

2014-08-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4186/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E651868CFA3C043E:67B708948D636402]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:161)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:871)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at