[jira] [Commented] (SOLR-5767) dataimport configuration file issue in the solr cloud

2014-12-15 Thread Vijaya Jonnakuti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246420#comment-14246420
 ] 

Vijaya Jonnakuti commented on SOLR-5767:


Hi,
Can I apply this patch to Solr 4.8.0?
Thanks,
Vijaya

 dataimport configuration file issue in the solr cloud
 -

 Key: SOLR-5767
 URL: https://issues.apache.org/jira/browse/SOLR-5767
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, SolrCloud
Affects Versions: 4.6, 4.6.1
Reporter: Raintung Li
 Fix For: 5.0

 Attachments: patch-5767.txt


 Many collections can share one config, so the dataimport configuration 
 file should be bundled with the collection, not with the config.
 The data import module uses SolrResourceLoader to load the config file, and 
 writes the result into the dataimport.properties file.
 Config file path (ZK): /configs/[configname]/data-config.xml or classpath
 Result path (ZK): /configs/[CollectionName]/dataimport.properties
 This is confusing; we could instead use a consistent layout like the one 
 below:
 /configs/[configname]/dataimport/[CollectionName]/data-config.xml
 /configs/[configname]/dataimport/[CollectionName]/dataimport.properties



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2014-12-15 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246466#comment-14246466
 ] 

Alan Woodward commented on SOLR-6840:
-

I don't think you can just remove <cores> entries, as there were a whole bunch 
of other attributes specified on it in addition to listing the cores.

Persistence makes no sense with the new setup, as the information in ConfigSolr 
is immutable for the lifetime of the container.  So really isPersistent() 
should just be removed.

Ideally you shouldn't have to write properties files anywhere for tests (unless 
you're explicitly testing the core discovery logic).  TestHarness and/or 
SolrTestCaseJ4 should have their own CoresLocator implementation that returns a 
CoreDescriptor with the appropriate schema and config settings.  The whole 
point of the CoresLocator abstraction is that you're not tied to any particular 
file format for testing.
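The CoresLocator idea above can be sketched in a few lines. This is a simplified, hypothetical stand-in, not Solr's actual CoresLocator/CoreDescriptor API: the point is only that a test harness can hand descriptors to the container directly, with no solr.xml or properties file involved.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical in-memory locator sketch; not the real Solr API. */
public class InMemoryCoresLocator {

    /** Simplified stand-in for a CoreDescriptor: a name plus config file names. */
    public static final class CoreDescriptor {
        public final String name;
        public final String configName;
        public final String schemaName;
        public CoreDescriptor(String name, String configName, String schemaName) {
            this.name = name;
            this.configName = configName;
            this.schemaName = schemaName;
        }
    }

    private final List<CoreDescriptor> cores = new ArrayList<>();

    /** Register a core directly; nothing is written to disk. */
    public void add(String name, String configName, String schemaName) {
        cores.add(new CoreDescriptor(name, configName, schemaName));
    }

    /** What a container would call at startup to discover its cores. */
    public List<CoreDescriptor> discover() {
        return new ArrayList<>(cores);
    }

    public static void main(String[] args) {
        InMemoryCoresLocator locator = new InMemoryCoresLocator();
        locator.add("collection1", "solrconfig.xml", "schema.xml");
        System.out.println(locator.discover().get(0).name);
    }
}
```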

 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core/>...<core/></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.






[jira] [Commented] (SOLR-1632) Distributed IDF

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246473#comment-14246473
 ] 

Anshum Gupta commented on SOLR-1632:


I think we should get this in now. This would not be enabled by default, i.e. 
the LocalStatsCache impl would be used anyway.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Anshum Gupta
 Fix For: 5.0, Trunk

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Comment Edited] (SOLR-1632) Distributed IDF

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246473#comment-14246473
 ] 

Anshum Gupta edited comment on SOLR-1632 at 12/15/14 9:08 AM:
--

Unless there are objections in the next few days, I think we should get this in 
now. This would not be enabled by default, i.e. the LocalStatsCache impl would 
be used anyway.


was (Author: anshumg):
I think we should get this in now. This would not be enabled by default i.e. 
LocalStatsCache impl would be used anyways.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Anshum Gupta
 Fix For: 5.0, Trunk

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Commented] (SOLR-6675) Solr webapp deployment is very slow with <jmx/> in solrconfig.xml

2014-12-15 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246497#comment-14246497
 ] 

Forest Soup commented on SOLR-6675:
---

Looks like threads searcherExecutor-5-thread-1 and searcherExecutor-6-thread-1 
are blocking coreLoadExecutor-4-thread-1 and coreLoadExecutor-4-thread-2, 
and the searcherExecutor threads appear to be running suggester code.
[~hossman] Could you please help confirm? Thanks!

 Solr webapp deployment is very slow with <jmx/> in solrconfig.xml
 -

 Key: SOLR-6675
 URL: https://issues.apache.org/jira/browse/SOLR-6675
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Linux Redhat 64bit
Reporter: Forest Soup
Priority: Critical
  Labels: performance
 Attachments: 1014.zip, callstack.png


 We have a SolrCloud with Solr version 4.7 on Tomcat 7, and our Solr 
 indexes (cores) are big, 50~100G each. 
 When we start up Tomcat, the Solr webapp deployment is very slow. From 
 Tomcat's catalina log, it takes about 10 minutes to get deployed every time. 
 After analyzing a Java core dump, we noticed the loading process 
 cannot finish until the MBean calculation for the large index is done.
  
 So we tried removing <jmx/> from solrconfig.xml; after that, the loading 
 of the Solr webapp only takes about 1 minute, so we can be sure the MBean 
 calculation for the large index is the root cause.
 Could you please point me to an async way to do statistics 
 monitoring without <jmx/> in solrconfig.xml, or let the calculation run after 
 deployment? Thanks!
 The callstack.png file in the attachment is the call stack of the 
 long-blocking thread doing the statistics calculation.
 The catalina log of tomcat:
 INFO: Starting Servlet Engine: Apache Tomcat/7.0.54
 Oct 13, 2014 2:00:29 AM org.apache.catalina.startup.HostConfig deployWAR
 INFO: Deploying web application archive 
 /opt/ibm/solrsearch/tomcat/webapps/solr.war
 Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployWAR
 INFO: Deployment of web application archive 
 /opt/ibm/solrsearch/tomcat/webapps/solr.war has finished in 594,325 ms 
  Time taken for solr app Deployment is about 10 minutes 
 ---
 Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deploying web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/manager
 Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deployment of web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/manager has finished in 2,035 ms
 Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deploying web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/examples
 Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deployment of web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/examples has finished in 1,789 ms
 Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deploying web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/docs
 Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deployment of web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/docs has finished in 1,037 ms
 Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deploying web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/ROOT
 Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deployment of web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/ROOT has finished in 948 ms
 Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deploying web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/host-manager
 Oct 13, 2014 2:10:30 AM org.apache.catalina.startup.HostConfig deployDirectory
 INFO: Deployment of web application directory 
 /opt/ibm/solrsearch/tomcat/webapps/host-manager has finished in 951 ms
 Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
 INFO: Starting ProtocolHandler [http-bio-8080]
 Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
 INFO: Starting ProtocolHandler [ajp-bio-8009]
 Oct 13, 2014 2:10:31 AM org.apache.catalina.startup.Catalina start
 INFO: Server startup in 601506 ms






Solr numeric highlighting

2014-12-15 Thread Pawel Rog
Hi,
I need highlighting for Trie* fields (TrieLong and TrieInteger). I realized
that a few lines of code in DefaultSolrHighlighter prevent this.
I removed those lines and highlighting works fine for integers. All unit
tests pass too. Can you take a look at those issues and comment on whether
it makes sense to remove those lines and unblock integer highlighting?

https://issues.apache.org/jira/browse/SOLR-2497
https://issues.apache.org/jira/browse/LUCENE-3080

--
Paweł Róg


[jira] [Commented] (SOLR-6359) Allow customization of the number of records and logs kept by UpdateLog

2014-12-15 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246505#comment-14246505
 ] 

Forest Soup commented on SOLR-6359:
---

Is the patch only available for Solr 5.0, or can we also apply it to Solr 4.7? 
Thanks!

 Allow customization of the number of records and logs kept by UpdateLog
 ---

 Key: SOLR-6359
 URL: https://issues.apache.org/jira/browse/SOLR-6359
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{UpdateLog}} hardcodes the number of logs and records it keeps, 
 and the hardcoded numbers (100 records, 10 logs) can be quite low (esp. the 
 records) in a heavy-indexing setup, leading to full recovery even if Solr 
 was just stopped and restarted.
 These values should be customizable (even if only present as expert options).
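If this lands, the customization would presumably be exposed in solrconfig.xml, something like the sketch below. The option names {{numRecordsToKeep}} and {{maxNumLogsToKeep}} are assumptions based on this issue's description, not confirmed by this thread; check the committed patch for the actual names.

```xml
<!-- Sketch only: option names are assumptions, not confirmed here -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <!-- keep more records so a restart replays the tlog instead of full recovery -->
  <int name="numRecordsToKeep">500</int>
  <int name="maxNumLogsToKeep">20</int>
</updateLog>
```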






Re: Determining NumericType for a field

2014-12-15 Thread Toke Eskildsen
On Wed, 2014-12-10 at 15:27 +0100, Michael McCandless wrote:
 No, Lucene does not store numeric type nor multi-valued-ness today;
 it's frustrating.

At least I now know not to dig too deep for non-existing answers,
thanks. Our current code requires the user to be explicit about how the
content of the fields should be treated. Until a more fundamental
change, such as LUCENE-6005, we will leave it at that.

 In the meantime, maybe you could model your tool after
 UninvertingReader?  It faces the same issue (lack of schema) and lets
 the user specify the type.

Yes, that is what we're doing. Unfortunately we cannot use the
UninvertingReader directly due to its restrictions on facet structure
size: We have too many references in our shards so it hits an internal
16M(?) limit. 

Unfortunately our current code for mapping stored multi-value Strings to
DocValues is very slow: it took nearly 2 days to convert a
single-segment 900GB index, where a standard optimize takes only 8 hours.

 Also, see (the confusingly named) TestDemoParallelLeafReader?  It lets
 you partially reindex, e.g. derive new indexed fields or DV fields,
 etc., from existing stored/DV fields, in an NRT manner.

Thanks for the pointer. As far as I can see, the demo is very explicit
about the type of DocValues being long, so no auto-guessing there. It's
a very interesting idea though, with seamless DV-enabling.

Thank you,
Toke Eskildsen, State and University Library, Denmark






[jira] [Updated] (LUCENE-6107) Add statistics to LRUFilterCache

2014-12-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6107:
-
Attachment: LUCENE-6107.patch

New patch which fixes the definition of the miss count and adds some other 
useful statistics. I think it's ready:

 * statistics about reads:
 ** hit count: number of lookups that found a DocIdSet
 ** miss count: number of lookups that did NOT find a DocIdSet
 ** total count: number of lookups, the sum of the two above numbers
 * statistics about writes:
 ** cache count: number of generated cache entries
 ** eviction count: number of evicted cache entries
 ** cache size: number of entries in the cache, equal to the {{cache count}} 
minus the {{eviction count}}
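The relationships between these counters can be captured in a tiny sketch. This is a hypothetical stand-in, not the actual LRUFilterCache API; it just encodes the two invariants above (total = hits + misses, size = cached - evicted).

```java
/** Sketch of the statistics described above; not the real LRUFilterCache API. */
public class FilterCacheStats {
    private long hitCount, missCount, cacheCount, evictionCount;

    public void onHit() { hitCount++; }       // lookup found a DocIdSet
    public void onMiss() { missCount++; }     // lookup did NOT find a DocIdSet
    public void onCache() { cacheCount++; }   // a cache entry was generated
    public void onEviction() { evictionCount++; } // a cache entry was evicted

    /** Total lookups: every lookup is either a hit or a miss. */
    public long totalCount() { return hitCount + missCount; }

    /** Current entries: everything ever cached minus everything evicted. */
    public long cacheSize() { return cacheCount - evictionCount; }

    public static void main(String[] args) {
        FilterCacheStats stats = new FilterCacheStats();
        stats.onMiss(); stats.onCache();  // first lookup misses, filter gets cached
        stats.onHit(); stats.onHit();     // later lookups hit
        System.out.println(stats.totalCount() + " lookups, " + stats.cacheSize() + " cached");
    }
}
```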

 Add statistics to LRUFilterCache
 

 Key: LUCENE-6107
 URL: https://issues.apache.org/jira/browse/LUCENE-6107
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6107.patch, LUCENE-6107.patch, LUCENE-6107.patch


 It would be useful to have statistics about the usage of the filter cache to 
 figure out whether the cache is useful at all and to help tune filter caching 
 policies.






GitHub pull requests vs. Jira issues

2014-12-15 Thread Vanlerberghe, Luc
Hi.

I recently created two pull requests via GitHub that arrived on the dev list 
automatically.
(They may have ended up in spam since I hadn't configured my name and email 
yet, so the From: field was set to LucVL g...@git.apache.org)
I repeated the contents below just in case.

Do I need to set up corresponding JIRA issues to make sure they don't get lost 
(or at least to know if they are rejected...) or are GitHub pull requests also 
reviewed regularly?

Thanks,

Luc


https://github.com/apache/lucene-solr/pull/108

o.a.l.queryparser.flexible.standard.StandardQueryParser cleanup

* Removed unused, but confusing code (CONJ_AND == CONJ_OR == 2 ???). 
Unfortunately, the code generated by JavaCC from the updated 
StandardSyntaxParser.jj differs in more places than necessary.
* Replaced Vector by List/ArrayList.
* Corrected the javadoc for StandardQueryParser.setLowercaseExpandedTerms

ant test in the queryparser directory runs successfully



https://github.com/apache/lucene-solr/pull/113

BitSet fixes

* `LongBitSet.ensureCapacity` overflows on `numBits > Integer.MAX_VALUE`
* `Fixed-/LongBitSet`: Avoid conditional branch in `bits2words` (with a 
comment explaining the formula)

TODO:
* Harmonize the use of `numWords` vs. `bits.length` vs. `numBits`
 * E.g.: `cardinality` scans up to `bits.length`, while `or` asserts on 
`index < numBits`
* If a `BitSet` is allocated with `n` bits, `ensureCapacity` with the same 
number `n` shouldn't grow the `BitSet`
 * Either both should allocate a larger array than really needed or neither.
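The branch-free `bits2words` mentioned above relies on Java's sign-propagating arithmetic shift: for `numBits == 0`, `(numBits - 1) >> 6` is `-1`, so the result is still 0. A small sketch comparing both forms (simplified from what a FixedBitSet-style implementation would do):

```java
public class Bits2Words {
    /** Branching version: one 64-bit word per full 64 bits, plus one for any remainder. */
    static int bits2wordsBranching(int numBits) {
        int numWords = numBits / 64;
        if ((numBits & 63) != 0) {
            numWords++;
        }
        return numWords;
    }

    /** Branch-free version: the arithmetic (sign-propagating) shift makes
     *  numBits == 0 yield ((-1) >> 6) + 1 == 0, so no conditional is needed. */
    static int bits2words(int numBits) {
        return ((numBits - 1) >> 6) + 1;
    }

    public static void main(String[] args) {
        for (int n : new int[] {0, 1, 63, 64, 65, 128}) {
            System.out.println(n + " bits -> " + bits2words(n) + " words");
        }
    }
}
```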





[jira] [Commented] (LUCENE-6106) Improve FilterCachingPolicy statistics computation

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246529#comment-14246529
 ] 

ASF subversion and git services commented on LUCENE-6106:
-

Commit 1645613 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1645613 ]

LUCENE-6106: Improve tracking of filter usage in LRUFilterCache.

 Improve FilterCachingPolicy statistics computation
 --

 Key: LUCENE-6106
 URL: https://issues.apache.org/jira/browse/LUCENE-6106
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6106.patch


 Currently FilterCachingPolicy.onCache is supposed to be called every time 
 that FilterCache.onCache is used. However, this does not necessarily reflect 
 how much a filter is used. For instance you can call cache and not use the 
 filter, or call cache once and then use it a hundred times. It would be more 
 useful to know how many times a filter has been used on a top level reader, 
 and I think we can do this by doing something like below in the caching 
 wrapper filter?
 {code}
 @Override
 public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) 
 throws IOException {
   if (context.ord == 0) {
 // increment counter
   }
 }
 {code}






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246530#comment-14246530
 ] 

Shalin Shekhar Mangar commented on SOLR-6849:
-

+1

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






Re: Determining NumericType for a field

2014-12-15 Thread Michael McCandless
On Mon, Dec 15, 2014 at 4:53 AM, Toke Eskildsen t...@statsbiblioteket.dk 
wrote:

 In the meantime, maybe you could model your tool after
 UninvertingReader?  It faces the same issue (lack of schema) and lets
 the user specify the type.

 Yes, that is what we're doing. Unfortunately we cannot use the
 UninvertingReader directly due to its restrictions on facet structure
 size: We have too many references in our shards so it hits an internal
 16M(?) limit.

Hmm that's probably the DocTermOrds 16 MB internal addressing limit?

 Unfortunately our current code for mapping stored multi-value Strings to
 DocValues is very slow: it took nearly 2 days to convert a
 single-segment 900GB index, where a standard optimize takes only 8 hours.

That's awful.  Profile it?  But, how long did it take to index in the
first place?

 Also, see (the confusingly named) TestDemoParallelLeafReader?  It lets
 you partially reindex, e.g. derive new indexed fields or DV fields,
 etc., from existing stored/DV fields, in an NRT manner.

 Thanks for the pointer. As far as I can see, the demo is very explicit
 about the type of DocValues being long, so no auto-guessing there. It's
 a very interesting idea though, with seamless DV-enabling.

The DVs can be arbitrary (not just long); it's only that the test
case focuses on long.

Have a look @ the LUCENE-6005 branch: I broke this test out as a
separate ReindexingReader + test.  I think we could do a better
integration between that and the schema...

I also added a simpler testSwitchToDocValues test case.  It still
uses only long DVs but you can easily see how you could do other types
too ... I'll add an example of SortedSet.

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (LUCENE-6106) Improve FilterCachingPolicy statistics computation

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246534#comment-14246534
 ] 

ASF subversion and git services commented on LUCENE-6106:
-

Commit 1645614 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1645614 ]

LUCENE-6106: Fix test.

 Improve FilterCachingPolicy statistics computation
 --

 Key: LUCENE-6106
 URL: https://issues.apache.org/jira/browse/LUCENE-6106
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6106.patch


 Currently FilterCachingPolicy.onCache is supposed to be called every time 
 that FilterCache.onCache is used. However, this does not necessarily reflect 
 how much a filter is used. For instance you can call cache and not use the 
 filter, or call cache once and then use it a hundred times. It would be more 
 useful to know how many times a filter has been used on a top level reader, 
 and I think we can do this by doing something like below in the caching 
 wrapper filter?
 {code}
 @Override
 public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) 
 throws IOException {
   if (context.ord == 0) {
 // increment counter
   }
 }
 {code}






[jira] [Commented] (LUCENE-6106) Improve FilterCachingPolicy statistics computation

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246538#comment-14246538
 ] 

ASF subversion and git services commented on LUCENE-6106:
-

Commit 1645618 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645618 ]

LUCENE-6106: Improve tracking of filter usage in LRUFilterCache.

 Improve FilterCachingPolicy statistics computation
 --

 Key: LUCENE-6106
 URL: https://issues.apache.org/jira/browse/LUCENE-6106
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6106.patch


 Currently FilterCachingPolicy.onCache is supposed to be called every time 
 that FilterCache.onCache is used. However, this does not necessarily reflect 
 how much a filter is used. For instance you can call cache and not use the 
 filter, or call cache once and then use it a hundred times. It would be more 
 useful to know how many times a filter has been used on a top level reader, 
 and I think we can do this by doing something like below in the caching 
 wrapper filter?
 {code}
 @Override
 public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) 
 throws IOException {
   if (context.ord == 0) {
 // increment counter
   }
 }
 {code}






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246561#comment-14246561
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645622 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1645622 ]

SOLR-6849: RemoteSolrException should report its source host

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Resolved] (LUCENE-6106) Improve FilterCachingPolicy statistics computation

2014-12-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6106.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 Improve FilterCachingPolicy statistics computation
 --

 Key: LUCENE-6106
 URL: https://issues.apache.org/jira/browse/LUCENE-6106
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6106.patch


 Currently FilterCachingPolicy.onCache is supposed to be called every time 
 that FilterCache.onCache is used. However, this does not necessarily reflect 
 how much a filter is used. For instance you can call cache and not use the 
 filter, or call cache once and then use it a hundred times. It would be more 
 useful to know how many times a filter has been used on a top level reader, 
 and I think we can do this by doing something like below in the caching 
 wrapper filter?
 {code}
 @Override
 public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) 
 throws IOException {
   if (context.ord == 0) {
 // increment counter
   }
 }
 {code}






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246564#comment-14246564
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645624 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645624 ]

SOLR-6849: RemoteSolrException should report its source host

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Resolved] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-6849.
-
Resolution: Fixed

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Commented] (SOLR-6359) Allow customization of the number of records and logs kept by UpdateLog

2014-12-15 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246569#comment-14246569
 ] 

Ramkumar Aiyengar commented on SOLR-6359:
-

You might have to resolve conflicts, but yeah, nothing in there should be 
specific to 5.0.

 Allow customization of the number of records and logs kept by UpdateLog
 ---

 Key: SOLR-6359
 URL: https://issues.apache.org/jira/browse/SOLR-6359
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{UpdateLog}} hardcodes the number of logs and records it keeps, 
 and the hardcoded numbers (100 records, 10 logs) can be quite low (esp. the 
 records) in an heavily indexing setup, leading to full recovery even if Solr 
 was just stopped and restarted.
 These values should be customizable (even if only present as expert options).






[jira] [Commented] (SOLR-5882) Support scoreMode parameter for BlockJoinParentQParser

2014-12-15 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246579#comment-14246579
 ] 

Andrey Kudryavtsev commented on SOLR-5882:
--

Use one of the Windows GUI utilities, for example. More details: 
http://stackoverflow.com/questions/517257/how-do-i-apply-a-diff-patch-on-windows

 Support scoreMode parameter for BlockJoinParentQParser
 --

 Key: SOLR-5882
 URL: https://issues.apache.org/jira/browse/SOLR-5882
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.8
Reporter: Andrey Kudryavtsev
 Attachments: SOLR-5882.patch


 Today BlockJoinParentQParser creates queries with hardcoded _scoring mode_ 
 None: 
 {code:borderStyle=solid}
   protected Query createQuery(Query parentList, Query query) {
 return new ToParentBlockJoinQuery(query, getFilter(parentList), 
 ScoreMode.None);
   }
 {code} 
 Similarly, BlockJoinChildQParser creates queries with a hardcoded _doScores_ 
 of false:
 {code:borderStyle=solid}
   protected Query createQuery(Query parentListQuery, Query query) {
 return new ToChildBlockJoinQuery(query, getFilter(parentListQuery), 
 false);
   }
 {code}
 I propose adding the ability to configure these scoring options via the query syntax.
 Syntax for parent queries could be:
 {code:borderStyle=solid}
 {!parent which=type:parent scoreMode=None|Avg|Max|Total}
 {code} 
 For child query:
 {code:borderStyle=solid}
 {!child of=type:parent doScores=true|false}
 {code} 
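A minimal sketch of how the proposed scoreMode value might be resolved from the local param; the enum below is a stand-in for Lucene's ScoreMode and all names are illustrative, not code from the attached patch:

```java
// Stand-in sketch: map the proposed scoreMode local param to a score
// mode, defaulting to None (today's hardcoded behaviour). The enum
// mirrors the values in the proposed syntax; it is not Lucene's class.
class ScoreModeResolver {
    enum ScoreMode { None, Avg, Max, Total }

    static ScoreMode resolve(String param) {
        if (param == null) {
            return ScoreMode.None;  // parameter absent: keep current behaviour
        }
        try {
            return ScoreMode.valueOf(param);
        } catch (IllegalArgumentException e) {
            throw new IllegalArgumentException("Unknown scoreMode: " + param, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("Avg"));  // prints Avg
    }
}
```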






[jira] [Commented] (SOLR-6359) Allow customization of the number of records and logs kept by UpdateLog

2014-12-15 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246609#comment-14246609
 ] 

Forest Soup commented on SOLR-6359:
---

When could we get the official build with that patch in 4.x or 5.0?

 Allow customization of the number of records and logs kept by UpdateLog
 ---

 Key: SOLR-6359
 URL: https://issues.apache.org/jira/browse/SOLR-6359
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{UpdateLog}} hardcodes the number of logs and records it keeps, 
 and the hardcoded numbers (100 records, 10 logs) can be quite low (esp. the 
 records) in an heavily indexing setup, leading to full recovery even if Solr 
 was just stopped and restarted.
 These values should be customizable (even if only present as expert options).






[jira] [Commented] (SOLR-6359) Allow customization of the number of records and logs kept by UpdateLog

2014-12-15 Thread Forest Soup (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246618#comment-14246618
 ] 

Forest Soup commented on SOLR-6359:
---

And where should I set the numRecordsToKeep and maxNumLogsToKeep values? 
Thanks!
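For reference, with the SOLR-6359 patch applied both limits are read from the updateLog section of solrconfig.xml; the numbers below are only examples:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- raise the old hardcoded defaults (100 records / 10 logs) -->
    <int name="numRecordsToKeep">10000</int>
    <int name="maxNumLogsToKeep">100</int>
  </updateLog>
</updateHandler>
```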

 Allow customization of the number of records and logs kept by UpdateLog
 ---

 Key: SOLR-6359
 URL: https://issues.apache.org/jira/browse/SOLR-6359
 Project: Solr
  Issue Type: Improvement
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{UpdateLog}} hardcodes the number of logs and records it keeps, 
 and the hardcoded numbers (100 records, 10 logs) can be quite low (esp. the 
 records) in an heavily indexing setup, leading to full recovery even if Solr 
 was just stopped and restarted.
 These values should be customizable (even if only present as expert options).






Re: Determining NumericType for a field

2014-12-15 Thread Toke Eskildsen
On Mon, 2014-12-15 at 11:33 +0100, Michael McCandless wrote:
 On Mon, Dec 15, 2014 at 4:53 AM, Toke Eskildsen t...@statsbiblioteket.dk 
 wrote:

[Toke: Limit on faceting with many references]

 Hmm that's probably the DocTermOrds 16 MB internal addressing limit?

Yes, we've hit that one before. If we did not have DocValues, I would
consider it a serious deficiency of Solr.

For one of the fields in the shard I tested, we had 675M references from
256M documents to 3M unique values, with the most popular value having
18M references.

(all of which works perfectly fine & fast with DocValues, yay!)

[2 days for conversion of 900GB index]

 That's awful.  Profile it?  But, how long did it take to index in the
 first place?

Full index takes 8 days with 24 CPUs going full tilt ~=192 CPU days.
Conversion is (sadly) single threaded, so measured in total CPU time, it
is just the 2 days. Still, we can't scale parallel conversions of
multiple shards very high due to limited local storage space.

I'll put a lot more timing debug logging into the code to investigate
where the time is spent.

[TestDemoParallelLeafReader]

 The DVs can be arbitrary (not just long); it's only that the test
 cases focuses on long.

My point was that there does not seem to be any auto-guessing of the field
type (especially NumericType for numeric values) in the code. Anyway,
since that would not guarantee correct results, it seems better to
require the user to be specific about what should happen.

 Have a look @ the LUCENE-6005 branch: I broke this test out as a
 separate ReindexingReader + test.  I think we could do a better
 integration between that and the schema...

Down to practicalities, we need Lucene 4.8 as our DocValues are Disk
based and that support was removed in 4.9. I hope to find the time to
look at your better solution in January.

Regards,
Toke Eskildsen, State and University Library, Denmark






Re: Determining NumericType for a field

2014-12-15 Thread david.w.smi...@gmail.com
 Down to practicalities, we need Lucene 4.8 as our DocValues are Disk
 based and that support was removed in 4.9.


I assume you’re referring to the “Disk” DV format/Codec?  The standard
format has the data on disk too, it’s just that there’s some “small”
(relative to the disk data) lookup references in heap/memory whereas the
codec you’re using doesn’t.  Are you sure the standard codec isn’t
sufficient?  If your use-case shows that there’s a need for the disk codec,
I think it could be brought back, perhaps into the codecs module.  You
could copy the code too to use newer Lucene versions… although I recall
some push vs pull API changes so I don’t know what it would take to bring
it up to date.  I’m curious what Rob Muir says about this.

~ David


Re: Determining NumericType for a field

2014-12-15 Thread Toke Eskildsen
On Mon, 2014-12-15 at 14:23 +0100, david.w.smi...@gmail.com wrote:

Toke:
 Down to practicalities, we need Lucene 4.8 as our DocValues
 are Disk
 based and that support was removed in 4.9.

 I assume you’re referring to the “Disk” DV format/Codec?  The standard
 format has the data on disk too, it’s just that there’s some
 “small” (relative to the disk data) lookup references in heap/memory
 whereas the codec you’re using doesn’t.  Are you sure the standard
 codec isn’t sufficient?

As we have not tried anything other than Disk for our Net Archive
index, we have no comparison with standard (or whatever it is called).
We have no real preference, and our next shards will be built with
standard. The only reason for Disk is that it seemed like a good idea at
the time, and now we have 20TB of index with it.

We would like to convert away from Disk too, but we would like to
avoid having to do a two-pass upgrade (Disk -> standard followed by
non-DV -> DV), so the DV-enabling code should preferably support
Disk for reading and do it all in a single pass.

   If your use-case shows that there’s a need for the disk codec, I
 think it could be brought back, perhaps into the codecs module.

I think the removal of Disk during a minor version increase was not in
line with the backwards compatibility spirit of Solr. But I am sure it
was marked Experimental somewhere in the code and that the removal
obeyed the stated rules.

Anyway, done is done, and we have no future need for Disk. But
thanks for the suggested fix.

   You could copy the code too to use newer Lucene versions…

We looked at that sometime back and the code tentacles reached too far
for us to dare grapple with.

Regards,
Toke Eskildsen, State and University Library, Denmark







[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2014-12-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246697#comment-14246697
 ] 

Erick Erickson commented on SOLR-6840:
--

Alan:

Right, thanks. Before I went off the deep end I wanted to be sure of the 
intent. 

bq: I don't think you can just remove cores entries, as there were a whole 
bunch of other attributes specified on it in addition to listing the cores.

Of course I can't blindly remove all cores... entries and expect it to just 
work. But by doing so I've unambiguously found all of the places where we need 
to do something to get the tests to pass ;)

 bq: isPersistent() should just be removed.

Great, that was the big question for me; I'll give that a whirl tonight. Which 
will have the consequence of finding/modifying anything that uses it. Which 
_should_ prevent these things from being written in the future. Which _should_ 
cause all the tests that pass with copying junk around to pass. And on to 
the next failing tests...

bq: Ideally you shouldn't have to write properties files anywhere for tests
Agreed, and with the env var substitution trick I don't have to, just have to 
work out the isPersistent bit.

Anyway, thanks. The crucial bit was your statement that "persistence makes no 
sense". And glad I am that it's gone; getting it right was a major pain. I'm 
not entirely sure that continues to work on the individual core.properties 
files, but theoretically anything that was changing them had to create a tmp 
directory first someplace b/c the test framework wouldn't let them write to the 
source tree.

Anyway, probably a day or two before I can get any more done...

 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core/>...<core/></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
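Schematically, the two formats described in the quote differ as below (abridged sketches; attribute names follow the 4.x examples and are not a complete configuration):

```xml
<!-- Legacy mode (4.x only): every core is enumerated in solr.xml -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>

<!-- Discovery mode (mandatory in 5.0): solr.xml holds only container
     config; each core is found by a core.properties file in its
     instance directory (e.g. collection1/core.properties containing
     the single line "name=collection1") -->
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
  </solrcloud>
</solr>
```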
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.






[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0

2014-12-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
FieldCache is no longer in regular use. Instead all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But there is one disadvantage: the lookup from docId to 
top-level ordinals is slower using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%.  For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected also? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this may be that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithm use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new hint 
parameter. If the hint parameter is set to FAST_QUERY then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}
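The indirection described above can be sketched in miniature; this is illustrative only (plain arrays stand in for per-segment DocValues and the MultiDocValues ordinal map), not Solr code:

```java
// Toy model of the two lookup paths. A top-level FieldCache is one
// array read per docID; the MultiDocValues path reads a per-segment
// ordinal and then hops through an ordinal map to the global ordinal.
class OrdinalLookupSketch {
    // Top-level cache: docID -> global ordinal, a single array read.
    static int viaTopLevel(int[] topLevelOrds, int docId) {
        return topLevelOrds[docId];
    }

    // MultiDocValues-style: segment-local ordinal, then a second hop
    // through the ordinal map to reach the global ordinal.
    static int viaMulti(int[][] segmentOrds, int[][] ordinalMap,
                        int docId, int docsPerSegment) {
        int seg = docId / docsPerSegment;
        int local = docId % docsPerSegment;
        return ordinalMap[seg][segmentOrds[seg][local]];
    }

    public static void main(String[] args) {
        int[] topLevel = {0, 2, 1, 2};          // global ords per doc
        int[][] segOrds = {{0, 1}, {0, 1}};     // ords within each segment
        int[][] ordMap  = {{0, 2}, {1, 2}};     // segment ord -> global ord
        for (int d = 0; d < 4; d++) {
            // both paths agree; the second just costs an extra access
            System.out.println(viaTopLevel(topLevel, d)
                               == viaMulti(segOrds, ordMap, d, 2));
        }
    }
}
```

Field collapsing performs this lookup for every collected document, which may be why the extra hop shows up so much more clearly there than in faceting.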

  was:
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
FieldCache is no longer in regular use. Instead all top level caches are 
accessed through MultiDocValues. 

There are some major advantages of using the MultiDocValues rather then a top 
level FieldCache. But the lookup from docId to top-level ordinals is slower 
using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%.  For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
effected also? My testing has shown that the faceting performance is effected 
much less then collapsing. 

One possible reason for this is that field collapsing is memory bound and 
faceting is not. So the additional memory accesses needed for MultiDocValues 
effects field collapsing much more the faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithm us 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new hint 
parameter. If the hint parameter is set to FAST_QUERY then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}


 Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
 ---

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch


 *Background*
 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
 FieldCache is no longer in regular use. Instead all top level caches are 
 accessed through MultiDocValues. 
 There are some major advantages of using the MultiDocValues rather then 

[jira] [Commented] (SOLR-6606) In cloud mode the leader should distribute autoCommits to it's replicas

2014-12-15 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246712#comment-14246712
 ] 

Varun Thacker commented on SOLR-6606:
-

This is how I started testing on my local machine:
1. created a collection with one shard and two replicas
2. Keep indexing new documents. Batch size=1000. autoCommit every 10k docs.
3. Now every few minutes, I ran this command - 
./solr stop -p 7574; sleep 100;./solr start -cloud -d node2 -p 7574 -z 
localhost:9983
4. Stopped indexing.

When we kill a server and bring it back up, the replication handler will pull 
all the missing segment files, so both replicas will have the same segment 
files after recovery. Then both replicas keep creating segment files in a 
similar fashion even without the leader distributing auto-commits.

From what I understand, since replication only checks whether the file name and 
size are the same (and not segment ids or anything like that), we get away with it.

I think, since moving replication to use segment ids is something we are 
considering given SOLR-6640, I am tempted to explore that first and revisit 
this. 

Any thoughts? Am I missing something during the tests, or interpreting the 
results incorrectly?

 In cloud mode the leader should distribute autoCommits to it's replicas
 ---

 Key: SOLR-6606
 URL: https://issues.apache.org/jira/browse/SOLR-6606
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
 Fix For: 5.0, Trunk

 Attachments: SOLR-6606.patch, SOLR-6606.patch


 Today in SolrCloud different replicas of a shard can trigger auto (hard) 
 commits at different times. Although the documents which get added to the 
 system remain consistent the way the segments gets formed can be different 
 because of this.
 The downside of segments not getting formed in an identical fashion across 
 replicas is that when a replica goes into recovery, chances are that it has to 
 do a full index replication from the leader. This is time consuming, and we 
 can possibly avoid this if the leader forwards auto (hard) commit commands to 
 its replicas and the replicas never explicitly trigger an auto (hard) commit.
 I am working on a patch. Should have it up shortly.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1996 - Still Failing!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1996/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

2 tests failed.
FAILED:  org.apache.solr.cloud.RemoteQueryErrorTest.testDistribSearch

Error Message:
expected:[]Document is missing ... but was:[Error from server at 
http://127.0.0.1:53799/pno/y/collection1: ]Document is missing ...

Stack Trace:
org.junit.ComparisonFailure: expected:[]Document is missing ... but 
was:[Error from server at http://127.0.0.1:53799/pno/y/collection1: ]Document 
is missing ...
at 
__randomizedtesting.SeedInfo.seed([D1827C2132C477C2:5064F239459B17FE]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.RemoteQueryErrorTest.doTest(RemoteQueryErrorTest.java:63)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6104) simplify internals of Lucene50NormsProducer

2014-12-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246817#comment-14246817
 ] 

Adrien Grand commented on LUCENE-6104:
--

+1 to the patch. Thanks for fixing the generics of the Accountable interface!

 simplify internals of Lucene50NormsProducer
 ---

 Key: LUCENE-6104
 URL: https://issues.apache.org/jira/browse/LUCENE-6104
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6104.patch, LUCENE-6104.patch


 This is tracking additional data structures, and has a lot of complexity, 
 when we could just refactor the internal structure to be a bit cleaner.
 as a bonus, its less memory overhead, but a more thorough memory tree: it 
 works like the docvalues one now.






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246821#comment-14246821
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645695 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1645695 ]

SOLR-6849: Fix @Slow test

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246825#comment-14246825
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645696 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645696 ]

SOLR-6849: Fix @Slow test

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #788: POMs out of sync

2014-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/788/

5 tests failed.
REGRESSION:  org.apache.solr.hadoop.MorphlineGoLiveMiniMRTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:60576/collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:60576/collection1
at 
__randomizedtesting.SeedInfo.seed([97D00D0E4BAEE90A:163683163CF18936]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.hadoop.MorphlineGoLiveMiniMRTest.doTest(MorphlineGoLiveMiniMRTest.java:410)


FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([1110DE8A6DDF1766]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:105)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


REGRESSION:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {
  collection1:{
shards:{
  shard1:{
range:8000-,
state:active,
replicas:{
  core_node1:{
state:active,
core:collection1,
node_name:127.0.0.1:29325__pnt%2Fl,
base_url:http://127.0.0.1:29325/_pnt/l;,
leader:true},
  core_node3:{
state:active,
core:collection1,
node_name:127.0.0.1:35990__pnt%2Fl,
base_url:http://127.0.0.1:35990/_pnt/l}}},
  shard2:{
range:0-7fff,
state:active,
replicas:{core_node2:{
state:active,
core:collection1,
node_name:127.0.0.1:36917__pnt%2Fl,
base_url:http://127.0.0.1:36917/_pnt/l;,
leader:true,
maxShardsPerNode:1,
router:{name:compositeId},
replicationFactor:1,
autoAddReplicas:false,
autoCreated:true},
  c8n_1x2:{
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
state:active,
core:c8n_1x2_shard1_replica1,
node_name:127.0.0.1:36917__pnt%2Fl,
base_url:http://127.0.0.1:36917/_pnt/l;,
leader:true},
  core_node2:{
state:recovering,
core:c8n_1x2_shard1_replica2,
node_name:127.0.0.1:35990__pnt%2Fl,
base_url:http://127.0.0.1:35990/_pnt/l,
maxShardsPerNode:1,
router:{name:compositeId},
replicationFactor:2,
autoAddReplicas:false},
  control_collection:{
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{core_node1:{
state:active,
core:collection1,
node_name:127.0.0.1:16461__pnt%2Fl,
base_url:http://127.0.0.1:16461/_pnt/l;,
leader:true,
maxShardsPerNode:1,
router:{name:compositeId},
replicationFactor:1,
autoAddReplicas:false,
autoCreated:true}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  collection1:{
shards:{
  shard1:{
range:8000-,
state:active,
replicas:{
  core_node1:{

[jira] [Commented] (LUCENE-6104) simplify internals of Lucene50NormsProducer

2014-12-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246873#comment-14246873
 ] 

Uwe Schindler commented on LUCENE-6104:
---

+1. Thanks for removing generics bullshit :-) The last time I have seen those 
Iterable docs I was really confused about the reason for the {{? extends}}. 
Thanks for simply removing it!

 simplify internals of Lucene50NormsProducer
 ---

 Key: LUCENE-6104
 URL: https://issues.apache.org/jira/browse/LUCENE-6104
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6104.patch, LUCENE-6104.patch


 This is tracking additional data structures and has a lot of complexity, 
 when we could just refactor the internal structure to be a bit cleaner.
 As a bonus, it's less memory overhead, but a more thorough memory tree: it 
 works like the docvalues one now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2330 - Failure

2014-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2330/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at http://127.0.0.1:60020/collection1: ]Input 
stream list wa... but was:[]Input stream list wa...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
http://127.0.0.1:60020/collection1: ]Input stream list wa... but was:[]Input 
stream list wa...
at 
__randomizedtesting.SeedInfo.seed([BA9451B29C99BE50:3B72DFAAEBC6DE6C]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:51)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6104) simplify internals of Lucene50NormsProducer

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246920#comment-14246920
 ] 

ASF subversion and git services commented on LUCENE-6104:
-

Commit 1645711 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1645711 ]

LUCENE-6104: simplify internals of Lucene50NormsProducer

 simplify internals of Lucene50NormsProducer
 ---

 Key: LUCENE-6104
 URL: https://issues.apache.org/jira/browse/LUCENE-6104
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6104.patch, LUCENE-6104.patch


 This is tracking additional data structures and has a lot of complexity, 
 when we could just refactor the internal structure to be a bit cleaner.
 As a bonus, it's less memory overhead, but a more thorough memory tree: it 
 works like the docvalues one now.






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246962#comment-14246962
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645712 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1645712 ]

SOLR-6849: Fix another @Slow test

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Commented] (SOLR-6849) RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the remote host

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246964#comment-14246964
 ] 

ASF subversion and git services commented on SOLR-6849:
---

Commit 1645713 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645713 ]

SOLR-6849: Fix another @Slow test

 RemoteSolrExceptions thrown from HttpSolrServer should include the URL of the 
 remote host
 -

 Key: SOLR-6849
 URL: https://issues.apache.org/jira/browse/SOLR-6849
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6849.patch


 All very well telling me there was an error on a remote host, but it's 
 difficult to work out what's wrong if it doesn't tell me *which* host the 
 error was on...






[jira] [Commented] (LUCENE-6104) simplify internals of Lucene50NormsProducer

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246988#comment-14246988
 ] 

ASF subversion and git services commented on LUCENE-6104:
-

Commit 1645718 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645718 ]

LUCENE-6104: simplify internals of Lucene50NormsProducer

 simplify internals of Lucene50NormsProducer
 ---

 Key: LUCENE-6104
 URL: https://issues.apache.org/jira/browse/LUCENE-6104
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6104.patch, LUCENE-6104.patch


 This is tracking additional data structures and has a lot of complexity, 
 when we could just refactor the internal structure to be a bit cleaner.
 As a bonus, it's less memory overhead, but a more thorough memory tree: it 
 works like the docvalues one now.






[jira] [Resolved] (LUCENE-6104) simplify internals of Lucene50NormsProducer

2014-12-15 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6104.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 simplify internals of Lucene50NormsProducer
 ---

 Key: LUCENE-6104
 URL: https://issues.apache.org/jira/browse/LUCENE-6104
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6104.patch, LUCENE-6104.patch


 This is tracking additional data structures and has a lot of complexity, 
 when we could just refactor the internal structure to be a bit cleaner.
 As a bonus, it's less memory overhead, but a more thorough memory tree: it 
 works like the docvalues one now.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1952 - Failure!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1952/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false)

3 tests failed.
FAILED:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([34FBDCA1467722B2:B51D52B93128428E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-6679) disable/remove suggester from stock solrconfig

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247015#comment-14247015
 ] 

ASF subversion and git services commented on SOLR-6679:
---

Commit 1645721 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1645721 ]

SOLR-6679: uncomment /suggest, but tie it to a sysprop so you have to go out 
of your way to enable it on startup

 disable/remove suggester from stock solrconfig
 --

 Key: SOLR-6679
 URL: https://issues.apache.org/jira/browse/SOLR-6679
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Yonik Seeley
 Fix For: 4.10.3, 5.0

 Attachments: SOLR-6679_disabled_via_sysprop.patch


 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.






[jira] [Commented] (SOLR-6679) disable/remove suggester from stock solrconfig

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247021#comment-14247021
 ] 

ASF subversion and git services commented on SOLR-6679:
---

Commit 1645722 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645722 ]

SOLR-6679: uncomment /suggest, but tie it to a sysprop so you have to go out 
of your way to enable it on startup (merge r1645721)

 disable/remove suggester from stock solrconfig
 --

 Key: SOLR-6679
 URL: https://issues.apache.org/jira/browse/SOLR-6679
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Yonik Seeley
 Fix For: 4.10.3, 5.0

 Attachments: SOLR-6679_disabled_via_sysprop.patch


 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.






[jira] [Commented] (SOLR-6845) figure out why suggester causes slow startup - even when not used

2014-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247039#comment-14247039
 ] 

Hoss Man commented on SOLR-6845:


bq. We should just comment out the <str name="q">static firstSearcher warming 
in solrconfig.xml</str> query ...

This is still side-stepping the root problem this issue was opened to address: 
why is the /suggest handler so damn slow?

We don't need more band-aid fixes that can be applied to the techproducts 
example configs to work around whatever fundamental problem exists - SOLR-6679 
already applied enough of a band-aid for that.

What we need is to understand:
* why the hell is this suggester so damn slow to build its dictionary even 
when the fields aren't used at all in the index?
* why does this suggester auto-register a firstSearcher/newSearcher event 
listener to build the dict w/o there being any sort of configuration option 
indicating that the solr-admin has *requested* it to build on firstSearcher (or 
on every searcher open, if that's what/why this is happening) 

 figure out why suggester causes slow startup - even when not used
 -

 Key: SOLR-6845
 URL: https://issues.apache.org/jira/browse/SOLR-6845
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 SOLR-6679 was filed to track the investigation into the following problem...
 {panel}
 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.
 ...
 This is what I did:
 1) indexed 10M very small docs (only takes a few minutes).
 2) shut down Solr
 3) start up Solr and watch it be unresponsive for over 4 minutes!
 I didn't even use any of the fields specified in the suggester config and I 
 never called the suggest request handler.
 {panel}
 ...but ultimately focused on removing/disabling the suggester from the sample 
 configs.
 Opening this new issue to focus on actually trying to identify the root 
 problem & fix it.






[jira] [Commented] (SOLR-6833) bin/solr -e foo should not use server/solr as the SOLR_HOME

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247154#comment-14247154
 ] 

Anshum Gupta commented on SOLR-6833:


I think this change started bundling the 'gettingstarted' collection out of the box.

 bin/solr -e foo should not use server/solr as the SOLR_HOME
 ---

 Key: SOLR-6833
 URL: https://issues.apache.org/jira/browse/SOLR-6833
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6833.patch


 I think it's weird right now that running bin/solr with the -e (example) 
 option causes it to create example Solr instances inside the server directory.
 I think that's fine for running Solr normally (i.e. start), but if you use 
 -e then it seems like the solr.solr.home for those examples should instead be 
 created under $SOLR_TIP/example.
 I would even go so far as to suggest that the *log* files created should live 
 in that directory as well.






[jira] [Updated] (SOLR-6840) Remove legacy solr.xml mode

2014-12-15 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-6840:

Attachment: SOLR-6840.patch

I've tried out an alternative approach, by just removing the ConfigSolrXmlOld 
code and everything depending on it, and then seeing what fails.  Here's a 
checkpoint patch.  So far all the tests in org.apache.solr.core are passing, 
but all of the distributed tests currently fail because the default solr.xml 
created a 'collection1' core for them.  Trying to work out how to get them 
passing now.

 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch, SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core/>...<core/></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.






[jira] [Commented] (SOLR-6833) bin/solr -e foo should not use server/solr as the SOLR_HOME

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247165#comment-14247165
 ] 

Anshum Gupta commented on SOLR-6833:


It's just the build and my fault. I ran the example from here once... and then 
did an ant clean package... that seems to have bundled /cloud with the 
gettingstarted data in it.

 bin/solr -e foo should not use server/solr as the SOLR_HOME
 ---

 Key: SOLR-6833
 URL: https://issues.apache.org/jira/browse/SOLR-6833
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6833.patch


 I think it's weird right now that running bin/solr with the -e (example) 
 option causes it to create example Solr instances inside the server directory.
 I think that's fine for running Solr normally (i.e. start), but if you use 
 -e then it seems like the solr.solr.home for those examples should instead be 
 created under $SOLR_TIP/example.
 I would even go so far as to suggest that the *log* files created should live 
 in that directory as well.






[jira] [Commented] (SOLR-6833) bin/solr -e foo should not use server/solr as the SOLR_HOME

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247172#comment-14247172
 ] 

ASF subversion and git services commented on SOLR-6833:
---

Commit 1645737 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1645737 ]

SOLR-6833: clean should remove example directories created by running bin/solr 
-e foo

 bin/solr -e foo should not use server/solr as the SOLR_HOME
 ---

 Key: SOLR-6833
 URL: https://issues.apache.org/jira/browse/SOLR-6833
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6833.patch


 I think it's weird right now that running bin/solr with the -e (example) 
 option causes it to create example Solr instances inside the server directory.
 I think that's fine for running Solr normally (i.e. start), but if you use 
 -e then it seems like the solr.solr.home for those examples should instead be 
 created under $SOLR_TIP/example.
 I would even go so far as to suggest that the *log* files created should live 
 in that directory as well.






[jira] [Commented] (SOLR-6833) bin/solr -e foo should not use server/solr as the SOLR_HOME

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247187#comment-14247187
 ] 

ASF subversion and git services commented on SOLR-6833:
---

Commit 1645741 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645741 ]

SOLR-6833: clean should remove example directories created by running bin/solr 
-e foo

 bin/solr -e foo should not use server/solr as the SOLR_HOME
 ---

 Key: SOLR-6833
 URL: https://issues.apache.org/jira/browse/SOLR-6833
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6833.patch


 I think it's weird right now that running bin/solr with the -e (example) 
 option causes it to create example Solr instances inside the server directory.
 I think that's fine for running Solr normally (i.e. start), but if you use 
 -e then it seems like the solr.solr.home for those examples should instead be 
 created under $SOLR_TIP/example.
 I would even go so far as to suggest that the *log* files created should live 
 in that directory as well.






[jira] [Created] (SOLR-6851) oom_solr.sh problems

2014-12-15 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6851:
--

 Summary: oom_solr.sh problems
 Key: SOLR-6851
 URL: https://issues.apache.org/jira/browse/SOLR-6851
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Fix For: 5.0


noticed 2 problems with the oom_solr.sh script...

1) the script is only being run with the port of the Solr instance to 
terminate, so the log messages aren't getting written to the correct directory 
-- if we change the script to take a log dir/file as an argument, we can ensure 
the logs are written to the correct place

2) on my ubuntu linux machine (where /bin/sh is a symlink to /bin/dash), the 
console log is recording a script error when java runs oom_solr.sh...

{noformat}
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError=/home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh 8983
#   Executing /bin/sh -c /home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh 
8983...
/home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh: 20: [: 14305: unexpected 
operator
Running OOM killer script for process 14305 for Solr on port 8983
Killed process 14305
{noformat}

steps to reproduce: {{bin/solr -e techproducts -m 10m}}
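For point 1 above, a minimal sketch of what passing a log location as an 
extra argument could look like (the function name, argument order, and 
file-name pattern are illustrative assumptions, not the actual patch):

```shell
#!/bin/sh
# Hypothetical sketch: let the OOM script receive a log directory as a
# second argument so its messages land next to the instance's other logs.
log_oom() {
  port="$1"
  logdir="${2:-.}"   # fall back to the current directory if none is given
  echo "Running OOM killer script for Solr on port $port" \
    >> "$logdir/solr_oom_killer-$port.log"
}

log_oom 8983 /tmp   # appends to /tmp/solr_oom_killer-8983.log
```

Java would then invoke the script with both arguments via 
-XX:OnOutOfMemoryError, instead of the port alone.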






[jira] [Commented] (SOLR-6851) oom_solr.sh problems

2014-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247242#comment-14247242
 ] 

Hoss Man commented on SOLR-6851:


The script error seems to be an sh vs bash vs dash portability issue with 
using = for string comparisons.

From a portability standpoint, it's probably safer to just use -z to check if 
the string is empty (quoting the variable so an empty value still parses)...

{code}
if [ -z "$SOLR_PID" ]; then
{code}
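To illustrate why -z is the portable choice, here is a small self-contained 
sketch (check_pid is a made-up name, not the actual oom_solr.sh): -z is true 
for an empty string and behaves identically in dash, bash, and POSIX sh, 
avoiding the bash-only comparison operators that dash rejects with 
"unexpected operator".

```shell
#!/bin/sh
# Portable emptiness check: works the same under /bin/dash (Ubuntu's
# /bin/sh) and bash. Quoting $SOLR_PID keeps [ well-formed even when
# the variable is empty or unset.
check_pid() {
  SOLR_PID="$1"
  if [ -z "$SOLR_PID" ]; then
    echo "no Solr process found"
  else
    echo "Running OOM killer script for process $SOLR_PID"
  fi
}

check_pid ""       # prints: no Solr process found
check_pid 14305    # prints: Running OOM killer script for process 14305
```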

 oom_solr.sh problems
 

 Key: SOLR-6851
 URL: https://issues.apache.org/jira/browse/SOLR-6851
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Fix For: 5.0


 noticed 2 problems with the oom_solr.sh script...
 1) the script is only being run with the port of the solr instance to 
 terminate, so the log messages aren't getting written to the correct directory 
 -- if we change the script to take a log dir/file as an argument, we can 
 ensure the logs are written to the correct place
 2) on my ubuntu linux machine (where /bin/sh is a symlink to /bin/dash), 
 the console log is recording a script error when java runs oom_solr.sh...
 {noformat}
 #
 # java.lang.OutOfMemoryError: Java heap space
 # -XX:OnOutOfMemoryError=/home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh 
 8983
 #   Executing /bin/sh -c /home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh 
 8983...
 /home/hossman/lucene/5x_dev/solr/bin/oom_solr.sh: 20: [: 14305: unexpected 
 operator
 Running OOM killer script for process 14305 for Solr on port 8983
 Killed process 14305
 {noformat}
 steps to reproduce: {{bin/solr -e techproducts -m 10m}}






[jira] [Created] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-6852:
--

 Summary: SimplePostTool should no longer default to collection1
 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0


Solr will no longer be bootstrapped with collection1, so it no longer 
makes sense for the SimplePostTool to default to collection1 either.
Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247247#comment-14247247
 ] 

Hoss Man commented on SOLR-6852:


+1

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2331 - Still Failing

2014-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2331/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([F3F9910F335E3722:721F1F174401571E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247270#comment-14247270
 ] 

Anshum Gupta commented on SOLR-6852:


With that, there's another question. Should the default URL also be dropped? 
I'd like to drop the default behavior and force users to specify the 
collection/core name.

It currently defaults to:
http://localhost:8983/solr/collection1/update.

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247278#comment-14247278
 ] 

Hoss Man commented on SOLR-6852:


given that post.jar's primary goal is making things simple for new users -- 
particularly users trying out the examples & tutorial -- i think that as long 
as the user specifies a collection name, it's fine to have default assumptions 
about http, localhost, 8983, /solr, and /update.

if any of those things aren't what the user wants then they can use the full 
URL, just like with curl.
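As a hedged illustration of those defaults, assembling the implied update URL from just a collection name might look like this (class and method names are hypothetical, not SimplePostTool's actual API):

```java
// Hypothetical sketch of the default-URL assembly described above: given only
// a collection name, fill in scheme/host/port/context/path; a user-supplied
// full URL would override all of these.
public class DefaultUrl {
  static String updateUrl(String collection) {
    return "http://localhost:8983/solr/" + collection + "/update";
  }

  public static void main(String[] args) {
    System.out.println(updateUrl("techproducts"));
    // prints http://localhost:8983/solr/techproducts/update
  }
}
```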

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247288#comment-14247288
 ] 

Anshum Gupta commented on SOLR-6852:


right, that's what I'm checking on.
{code}
if (url == null && core == null) {
  fatal();
}
{code}
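A self-contained sketch of that fail-fast check (class, method, and message are illustrative, not the actual SimplePostTool code):

```java
// Hypothetical sketch: reject the invocation early when neither a full URL
// nor a core/collection name was supplied, instead of silently defaulting.
public class PostToolArgs {
  static void validate(String url, String core) {
    if (url == null && core == null) {
      throw new IllegalArgumentException(
          "Specify either a core/collection name or a full update URL");
    }
  }

  public static void main(String[] args) {
    validate("http://localhost:8983/solr/techproducts/update", null); // ok
    try {
      validate(null, null); // neither given: fail fast
    } catch (IllegalArgumentException e) {
      System.out.println("failed fast: " + e.getMessage());
    }
  }
}
```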

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Updated] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6852:
---
Attachment: SOLR-6852.patch

Patch for SimplePostTool and README.txt.

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247350#comment-14247350
 ] 

Anshum Gupta commented on SOLR-6852:


Fixing the failing test.

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Updated] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6852:
---
Attachment: SOLR-6852.patch

Fixed the test to set a dummy collection when testing the SimplePostTool so it 
doesn't fail fast.

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch, SOLR-6852.patch


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.






[jira] [Commented] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247399#comment-14247399
 ] 

Joel Bernstein commented on SOLR-6813:
--

My initial thoughts...

In distrib.singlePass mode the ExpandComponent will be returning more documents 
than are needed to satisfy the query.

Here is the basic logic:

1) In non-distributed mode: Return expanded groups for all documents in the 
docList.
2) In distributed mode: Return expanded groups for all documents referenced in 
the ID parameter. This ensured that only documents in the current page were 
expanded.

With distrib.singlePass mode the ExpandComponent will behave like #1. So if the 
page size is 10 and there are ten shards, each shard will return 10 expanded 
groups. So there will be 100 expanded groups in the output. 

To resolve this issue the handleResponses method in the ExpandComponent is 
going to have to remove expanded groups that are not in the final merged 
docList. 
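A minimal sketch of that pruning step, assuming expanded groups keyed by group value and a set of group keys that survived the merge (names are illustrative, not the actual ExpandComponent API):

```java
import java.util.*;

// Hypothetical sketch: after merging shard responses, drop any expanded group
// whose key does not correspond to a document on the final merged page.
public class ExpandPrune {
  static Map<String, List<String>> prune(Map<String, List<String>> expandedGroups,
                                         Set<String> mergedPageGroupKeys) {
    Map<String, List<String>> kept = new LinkedHashMap<>();
    for (Map.Entry<String, List<String>> e : expandedGroups.entrySet()) {
      if (mergedPageGroupKeys.contains(e.getKey())) {
        kept.put(e.getKey(), e.getValue());
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    Map<String, List<String>> groups = new LinkedHashMap<>();
    groups.put("g1", Arrays.asList("d1", "d2"));
    groups.put("g2", Arrays.asList("d3"));
    groups.put("g3", Arrays.asList("d4"));
    // only g1 and g3 made it onto the final merged page
    System.out.println(prune(groups, new HashSet<>(Arrays.asList("g1", "g3"))).keySet());
    // prints [g1, g3]
  }
}
```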













 distrib.singlePass does not work for expand-request - start/rows included
 -

 Key: SOLR-6813
 URL: https://issues.apache.org/jira/browse/SOLR-6813
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Reporter: Per Steffensen
Assignee: Joel Bernstein
  Labels: distributed_search, search
 Attachments: test_that_reveals_the_problem.patch


 Using distrib.singlePass does not work for expand-requests. Even after the 
 fix provided to SOLR-6812, it does not work for requests where you add start 
 and/or rows.






[jira] [Comment Edited] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247399#comment-14247399
 ] 

Joel Bernstein edited comment on SOLR-6813 at 12/15/14 11:11 PM:
-

My initial thoughts...

In distrib.singlePass mode the ExpandComponent will be returning more expanded 
groups than are needed to satisfy the query.

Here is the basic logic:

1) In non-distributed mode: Return expanded groups for all documents in the 
docList.
2) In distributed mode: Return expanded groups for all documents referenced in 
the ID parameter. This ensured that only documents in the current page were 
expanded.

With distrib.singlePass mode the ExpandComponent will behave like #1. So if the 
page size is 10 and there are ten shards, each shard will return 10 expanded 
groups. So there will be 100 expanded groups in the output. 

To resolve this issue the handleResponses method in the ExpandComponent is 
going to have to remove expanded groups that are not in the final merged 
docList. 














was (Author: joel.bernstein):
My initial thoughts...

In distrib.singlePass mode the ExpandComponent will be returning more documents 
than are needed to satisfy the query.

Here is the basic logic:

1) In non-distributed mode: Return expanded groups for all documents in the 
docList.
2) In distributed mode: Return expanded groups for all documents referenced in 
the ID parameter. This ensured that only documents in the current page were 
expanded.

With distrib.singlePass mode the ExpandComponent will behave like #1. So if the 
page size is 10 and there are ten shards, each shard will return 10 expanded 
groups. So there will be 100 expanded groups in the output. 

To resolve this issue the handleResponses method in the ExpandComponent is 
going to have to remove expanded groups that are not in the final merged 
docList. 













 distrib.singlePass does not work for expand-request - start/rows included
 -

 Key: SOLR-6813
 URL: https://issues.apache.org/jira/browse/SOLR-6813
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Reporter: Per Steffensen
Assignee: Joel Bernstein
  Labels: distributed_search, search
 Attachments: test_that_reveals_the_problem.patch


 Using distrib.singlePass does not work for expand-requests. Even after the 
 fix provided to SOLR-6812, it does not work for requests where you add start 
 and/or rows.






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40-ea-b09) - Build # 4492 - Failure!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4492/
Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseConcMarkSweepGC (asserts: false)

2 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at https://127.0.0.1:55984/ut/g/collection1: ]No 
file name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
https://127.0.0.1:55984/ut/g/collection1: ]No file name specifi... but 
was:[]No file name specifi...
at 
__randomizedtesting.SeedInfo.seed([7B191D48C311C66A:FAFF9350B44EA656]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247423#comment-14247423
 ] 

Joel Bernstein commented on SOLR-6813:
--

Another thought...

We also appear to have a deep paging issue to consider with distrib.singlePass 
on when the ExpandComponent is in play. 

The ExpandComponent will fetch groups for all documents in the docList when 
distrib.singlePass is on. With distributed deep paging the docList continues to 
grow as the user pages deeper into the result set. This means that more 
expanded groups will be fetched, making the deep paging problems much worse. 

In normal two pass distributed mode, the ExpandComponent can use the ID list to 
eliminate the deep paging issue.

So, in a nutshell we may be slowing things down quite a bit when using 
distrib.singlePass with the ExpandComponent. We should consider turning off 
distrib.singlePass if the ExpandComponent is in use.
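Back-of-the-envelope arithmetic for that concern, under the assumed model that each shard expands its full start+rows docList in singlePass mode while two-pass mode only expands the final page's ids:

```java
// Illustrative cost model only, not measured Solr behavior: compare the number
// of expanded groups fetched at a given page depth in the two modes.
public class DeepPagingCost {
  static long singlePassGroups(int start, int rows, int shards) {
    // each shard returns a docList of start+rows docs, each with its group
    return (long) (start + rows) * shards;
  }

  static long twoPassGroups(int rows) {
    // the second pass expands only the rows ids on the final merged page
    return rows;
  }

  public static void main(String[] args) {
    System.out.println(singlePassGroups(0, 10, 10));    // page 1: 100
    System.out.println(singlePassGroups(990, 10, 10));  // page 100: 10000
    System.out.println(twoPassGroups(10));              // always 10
  }
}
```

Under this model the singlePass cost grows linearly with page depth and shard count, which matches the intuition above that deep paging gets "much worse".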




 distrib.singlePass does not work for expand-request - start/rows included
 -

 Key: SOLR-6813
 URL: https://issues.apache.org/jira/browse/SOLR-6813
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Reporter: Per Steffensen
Assignee: Joel Bernstein
  Labels: distributed_search, search
 Attachments: test_that_reveals_the_problem.patch


 Using distrib.singlePass does not work for expand-requests. Even after the 
 fix provided to SOLR-6812, it does not work for requests where you add start 
 and/or rows.






[jira] [Comment Edited] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247423#comment-14247423
 ] 

Joel Bernstein edited comment on SOLR-6813 at 12/15/14 11:32 PM:
-

Another thought...

We also appear to have a deep paging issue to consider with distrib.singlePass 
on when the ExpandComponent is in play. 

The ExpandComponent will fetch groups for all documents in the docList when 
distrib.singlePass is on. With distributed deep paging the docList continues to 
grow as the user pages deeper into the result set. This means that more 
expanded groups will be fetched, making the deep paging problems much worse. 

In normal two pass distributed mode, the ExpandComponent uses the ID list to 
eliminate the deep paging issue.

So, in a nutshell we may be slowing things down quite a bit when using 
distrib.singlePass with the ExpandComponent.

The ExpandComponent was designed to work very efficiently with the two pass 
distributed mode. Perhaps we should consider turning off distrib.singlePass if 
the ExpandComponent is in use.





was (Author: joel.bernstein):
Another thought...

We also appear to have a deep paging issue to consider with distrib.singlePass 
on when the ExpandComponent is in play. 

The ExpandComponent will fetch groups for all documents in the docList when 
distrib.singlePass is on. With distributed deep paging the docList continues to 
grow as the user pages deeper into the result set. This means that more 
expanded groups will be fetched, making the deep paging problems much worse. 

In normal two pass distributed mode, the ExpandComponent can use the ID list to 
eliminate the deep paging issue.

So, in a nutshell we may be slowing things down quite a bit when using 
distrib.singlePass with the ExpandComponent. We should consider turning off 
distrib.singlePass if the ExpandComponent is in use.




 distrib.singlePass does not work for expand-request - start/rows included
 -

 Key: SOLR-6813
 URL: https://issues.apache.org/jira/browse/SOLR-6813
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Reporter: Per Steffensen
Assignee: Joel Bernstein
  Labels: distributed_search, search
 Attachments: test_that_reveals_the_problem.patch


 Using distrib.singlePass does not work for expand-requests. Even after the 
 fix provided to SOLR-6812, it does not work for requests where you add start 
 and/or rows.






[jira] [Comment Edited] (SOLR-6813) distrib.singlePass does not work for expand-request - start/rows included

2014-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247423#comment-14247423
 ] 

Joel Bernstein edited comment on SOLR-6813 at 12/15/14 11:34 PM:
-

Another thought...

We also appear to have a deep paging issue to consider with distrib.singlePass 
when the ExpandComponent is in play. 

The ExpandComponent will fetch groups for all documents in the docList when 
distrib.singlePass is on. With distributed deep paging the docList continues to 
grow as the user pages deeper into the result set. This means that more 
expanded groups will be fetched, making the deep paging problems much worse. 

In normal two pass distributed mode, the ExpandComponent uses the ID list to 
eliminate the deep paging issue.

So, in a nutshell we may be slowing things down quite a bit when using 
distrib.singlePass with the ExpandComponent.

The ExpandComponent was designed to work very efficiently with the two pass 
distributed mode. Perhaps we should consider turning off distrib.singlePass if 
the ExpandComponent is in use.





was (Author: joel.bernstein):
Another thought...

We also appear to have a deep paging issue to consider with distrib.singlePass 
on when the ExpandComponent is in play. 

The ExpandComponent will fetch groups for all documents in the docList when 
distrib.singlePass is on. With distributed deep paging the docList continues to 
grow as the user pages deeper into the result set. This means that more 
expanded groups will be fetched, making the deep paging problems much worse. 

In normal two pass distributed mode, the ExpandComponent uses the ID list to 
eliminate the deep paging issue.

So, in a nutshell we may be slowing things down quite a bit when using 
distrib.singlePass with the ExpandComponent.

The ExpandComponent was designed to work very efficiently with the two pass 
distributed mode. Perhaps we should consider turning off distrib.singlePass if 
the ExpandComponent is in use.




 distrib.singlePass does not work for expand-request - start/rows included
 -

 Key: SOLR-6813
 URL: https://issues.apache.org/jira/browse/SOLR-6813
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Reporter: Per Steffensen
Assignee: Joel Bernstein
  Labels: distributed_search, search
 Attachments: test_that_reveals_the_problem.patch


 Using distrib.singlePass does not work for expand-requests. Even after the 
 fix provided to SOLR-6812, it does not work for requests where you add start 
 and/or rows.






[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247435#comment-14247435
 ] 

Jack Krupansky commented on SOLR-6852:
--

Is this really for 5.0 only and not trunk/6.0 as well?

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch, SOLR-6852.patch


 Solr will no longer be bootstrapped with collection1, so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: GitHub pull requests vs. Jira issues

2014-12-15 Thread Chris Hostetter

It helps if you create a Jira as well -- particularly if you then refer to 
the Jira ID in your pull request, because that way it's autolinked in 
Jira -- because we strive to have a Jira tracking any non-trivial change 
so it's got an easy reference point for tracking in CHANGES.txt, and for 
reading history/context about why any change happened.

And yes ... even for trivial changes, pull request emails may get lost in 
the shuffle as busy people skim their email ... but Jira issues last 
forever.



: Date: Mon, 15 Dec 2014 10:16:55 +
: From: Vanlerberghe, Luc luc.vanlerber...@bvdinfo.com
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org dev@lucene.apache.org
: Subject: GitHub pull requests vs. Jira issues
: 
: Hi.
: 
: I recently created two pull requests via GitHub that arrived on the dev list 
automatically.
: (They may have ended up in spam since I hadn't configured my name and email 
yet, so the From: field was set to LucVL g...@git.apache.org)
: I repeated the contents below just in case.
: 
: Do I need to set up corresponding JIRA issues to make sure they don't get 
lost (or at least to know if they are rejected...) or are GitHub pull requests 
also reviewed regularly?
: 
: Thanks,
: 
: Luc
: 
: 
: https://github.com/apache/lucene-solr/pull/108
: 
: o.a.l.queryparser.flexible.standard.StandardQueryParser cleanup
: 
: * Removed unused, but confusing code (CONJ_AND == CONJ_OR == 2 ???). 
Unfortunately, the code generated by JavaCC from the updated 
StandardSyntaxParser.jj differs in more places than necessary.
: * Replaced Vector by List/ArrayList.
: * Corrected the javadoc for StandardQueryParser.setLowercaseExpandedTerms
: 
: ant test in the queryparser directory runs successfully
: 
: 
: 
: https://github.com/apache/lucene-solr/pull/113
: 
: BitSet fixes
: 
: * `LongBitSet.ensureCapacity` overflows on `numBits > Integer.MAX_VALUE`
: * `Fixed-/LongBitSet`: Avoid conditional branch in `bits2words` (with a 
comment explaining the formula)
: 
: TODO:
: * Harmonize the use of `numWords` vs. `bits.length` vs. `numBits`
:  * E.g.: `cardinality` scans up to `bits.length`, while `or` asserts on 
`index < numBits`
: * If a `BitSet` is allocated with `n` bits, `ensureCapacity` with the 
same number `n` shouldn't grow the `BitSet`
:  * Either both should allocate a larger array than really needed or 
neither.
: 
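For context, the branch-free `bits2words` formula mentioned in the second pull 
request can be sketched as below. This is a hedged reconstruction of the 
formula, not necessarily the exact patched code:

```java
public class Bits2Words {
    // Number of 64-bit words needed to hold numBits bits, i.e.
    // ceil(numBits / 64), computed without a conditional branch.
    static int bits2words(int numBits) {
        // For numBits >= 1 this is ((numBits - 1) / 64) + 1; the signed
        // shift also makes the numBits == 0 case come out as 0,
        // since (-1 >> 6) == -1.
        return ((numBits - 1) >> 6) + 1;
    }

    public static void main(String[] args) {
        System.out.println(bits2words(0));  // 0
        System.out.println(bits2words(1));  // 1
        System.out.println(bits2words(64)); // 1
        System.out.println(bits2words(65)); // 2
    }
}
```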
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2332 - Still Failing

2014-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2332/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at http://127.0.0.1:43205/collection1: ]No file 
name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
http://127.0.0.1:43205/collection1: ]No file name specifi... but was:[]No 
file name specifi...
at 
__randomizedtesting.SeedInfo.seed([FBF65D3A2D51C3ED:7A10D3225A0EA3D1]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [VOTE] Release 4.10.3 RC1

2014-12-15 Thread Mark Miller
This Vote has passed. I’ll start the process tomorrow.

- Mark

http://about.me/markrmiller

 On Dec 10, 2014, at 9:16 PM, Yonik Seeley yo...@heliosearch.com wrote:
 
 +1
 
 -Yonik
 http://heliosearch.org - native code faceting, facet functions,
 sub-facets, off-heap data
 
 On Wed, Dec 10, 2014 at 10:19 AM, Mark Miller markrmil...@gmail.com wrote:
 Please review and vote for the following RC:
 
 Artifacts: 
 http://people.apache.org/~markrmiller/staging_area/lucene-solr-4.10.3-RC1-rev1644336
 
 
 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py 
 http://people.apache.org/~markrmiller/staging_area/lucene-solr-4.10.3-RC1-rev1644336
  1644336 4.10.3 /tmp/smoke1 True
 
 
 SUCCESS! [0:46:37.882812]
 
 
 Here's my +1
 
 
 - Mark
 
 http://about.me/markrmiller
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4387 - Failure!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4387/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at http://127.0.0.1:57833/sev/u/collection1: ]No 
file name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
http://127.0.0.1:57833/sev/u/collection1: ]No file name specifi... but 
was:[]No file name specifi...
at 
__randomizedtesting.SeedInfo.seed([FC410011E60B30EC:7DA78E09915450D0]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2014-12-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247614#comment-14247614
 ] 

Erick Erickson commented on SOLR-6840:
--

You're doing yeoman's duty on this, a mere 174K patch... so far ;)

I might recommend, for this phase, leaving the failIfFound tests in 
ConfigSolrXml for a little while, on the theory that there are all sorts of nooks 
and crannies out there. We can probably take them out before checking things in, 
but between now and then it'd be useful to fail there, I think. There are still a 
lot of solr.xml files out there after the patch that have <cores> entries... It's 
actually an open question for me whether they are useful; maybe they should 
just be deleted wholesale...

If you get to a point where you want to take a break, I can take a whack at 
getting some of the test cases working.


 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch, SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core>...</core></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6113) ReferenceManager.release uses assertion to expect argument not null, also expects argument to be not null

2014-12-15 Thread ryan rawson (JIRA)
ryan rawson created LUCENE-6113:
---

 Summary: ReferenceManager.release uses assertion to expect 
argument not null, also expects argument to be not null
 Key: LUCENE-6113
 URL: https://issues.apache.org/jira/browse/LUCENE-6113
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.1
Reporter: ryan rawson


A common use pattern for the Reference Manager looks like so:

{code}
IndexSearcher searcher = null;
try {
  searcher = searcherManager.acquire();
  // do real work
} finally {
  searcherManager.release(searcher);
}
{code}

The problem with this code is that if 'acquire' throws an exception, the finally 
block is called with a null reference for 'searcher'. There are two issues: first, 
release() uses an assertion to check argument validity, which is not recommended 
(http://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html); 
second, to work around this, we need to guard every call to release() with an if 
clause.

Why not have release() be a no-op if it is passed null, instead of triggering an 
NPE? It would support this API usage pattern without any changes on the part of 
users.

Looking at the code, it appears that it is very unlikely that the acquire() 
call throws an exception. 
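A null-guarded variant of the pattern above is sketched below. Note this is a 
hypothetical, self-contained stand-in (the Manager class here is not Lucene's 
ReferenceManager); it only illustrates the proposed behavior of treating a null 
argument to release() as a no-op:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a reference manager; NOT Lucene's
// ReferenceManager. It models the proposed behavior: release(null)
// is a no-op instead of tripping an assertion/NPE.
class Manager {
    final AtomicInteger refCount = new AtomicInteger();

    String acquire() {
        refCount.incrementAndGet();
        return "searcher";
    }

    void release(String ref) {
        if (ref == null) {
            return; // proposed no-op for null, so finally blocks stay simple
        }
        refCount.decrementAndGet();
    }
}

public class ReleaseGuard {
    public static void main(String[] args) {
        Manager manager = new Manager();
        String searcher = null;
        try {
            searcher = manager.acquire();
            // do real work
        } finally {
            manager.release(searcher); // safe even if acquire() had thrown
        }
        System.out.println(manager.refCount.get()); // prints 0
    }
}
```

With release(null) being a no-op, callers could keep the acquire-in-try pattern 
without adding an if guard to every finally block.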



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247641#comment-14247641
 ] 

Anshum Gupta commented on SOLR-6852:


No reason for this not to be for trunk too.

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch, SOLR-6852.patch


 Solr will no longer be bootstrapped with collection1, so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6114) Remove bw compat cruft from packedints

2014-12-15 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6114:
---

 Summary: Remove bw compat cruft from packedints
 Key: LUCENE-6114
 URL: https://issues.apache.org/jira/browse/LUCENE-6114
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk
 Attachments: LUCENE-6114.patch

In trunk we have some old logic that is not needed (versions 0 and 1). So we 
can remove support for structures that aren't byte-aligned, zigzag-encoded 
monotonics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6114) Remove bw compat cruft from packedints

2014-12-15 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6114:

Attachment: LUCENE-6114.patch

 Remove bw compat cruft from packedints
 --

 Key: LUCENE-6114
 URL: https://issues.apache.org/jira/browse/LUCENE-6114
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6114.patch


 In trunk we have some old logic that is not needed (versions 0 and 1). So we 
 can remove support for structures that aren't byte-aligned, zigzag-encoded 
 monotonics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1997 - Still Failing!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1997/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

1 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at https://127.0.0.1:54454/eh/gx/collection1: ]No 
file name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
https://127.0.0.1:54454/eh/gx/collection1: ]No file name specifi... but 
was:[]No file name specifi...
at 
__randomizedtesting.SeedInfo.seed([D0C92CC15E1D2286:512FA2D9294242BA]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2333 - Still Failing

2014-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2333/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at https://127.0.0.1:30923/lxp/collection1: ]No 
file name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
https://127.0.0.1:30923/lxp/collection1: ]No file name specifi... but 
was:[]No file name specifi...
at 
__randomizedtesting.SeedInfo.seed([EEE03A5A0FCA6D73:6F06B44278950D4F]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-5706) Inconsistent results in a distributed configuration

2014-12-15 Thread liyang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247811#comment-14247811
 ] 

liyang commented on SOLR-5706:
--

{code}
{
  "core1":{
    "shards":{
      "shard1":{
        "range":"8000-d554",
        "state":"active",
        "replicas":{
          "core_node2":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core1_shard1_replica2",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node5":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core1_shard1_replica3",
            "node_name":"10.16.238.75:_solr"},
          "core_node8":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core1_shard1_replica1",
            "node_name":"10.16.238.76:_solr"}}},
      "shard2":{
        "range":"d555-2aa9",
        "state":"active",
        "replicas":{
          "core_node3":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core1_shard2_replica2",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node4":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core1_shard2_replica1",
            "node_name":"10.16.238.76:_solr"},
          "core_node9":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core1_shard2_replica3",
            "node_name":"10.16.238.75:_solr"}}},
      "shard3":{
        "range":"2aaa-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core1_shard3_replica2",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node6":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core1_shard3_replica1",
            "node_name":"10.16.238.76:_solr"},
          "core_node7":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core1_shard3_replica3",
            "node_name":"10.16.238.75:_solr"}}}},
    "maxShardsPerNode":"3",
    "router":{"name":"compositeId"},
    "replicationFactor":"3"},
  "core0":{
    "shards":{
      "shard1":{
        "range":"8000-d554",
        "state":"active",
        "replicas":{
          "core_node3":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core0_shard1_replica3",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node6":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core0_shard1_replica1",
            "node_name":"10.16.238.76:_solr"},
          "core_node8":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core0_shard1_replica2",
            "node_name":"10.16.238.75:_solr"}}},
      "shard2":{
        "range":"d555-2aa9",
        "state":"active",
        "replicas":{
          "core_node1":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core0_shard2_replica3",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node4":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core0_shard2_replica1",
            "node_name":"10.16.238.76:_solr"},
          "core_node9":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core0_shard2_replica2",
            "node_name":"10.16.238.75:_solr"}}},
      "shard3":{
        "range":"2aaa-7fff",
        "state":"active",
        "replicas":{
          "core_node2":{
            "state":"active",
            "base_url":"http://10.16.236.72:/solr",
            "core":"core0_shard3_replica3",
            "node_name":"10.16.236.72:_solr",
            "leader":"true"},
          "core_node5":{
            "state":"active",
            "base_url":"http://10.16.238.76:/solr",
            "core":"core0_shard3_replica1",
            "node_name":"10.16.238.76:_solr"},
          "core_node7":{
            "state":"active",
            "base_url":"http://10.16.238.75:/solr",
            "core":"core0_shard3_replica2",
            "node_name":"10.16.238.75:_solr"}}}},
    "maxShardsPerNode":"3",
    "router":{"name":"compositeId"},
    "replicationFactor":"3"}}
{code}

We query the collection repeatedly and get different data every time.
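One quick way to confirm this kind of inconsistency is to repeat the same query and compare numFound across runs; a minimal sketch (the host, port, and collection name below are hypothetical, not taken from the cluster state above):

```python
# Sketch: detect inconsistent distributed results by repeating the same
# query and comparing hit counts. Host/port/collection are hypothetical.
import json
from urllib.request import urlopen

def consistent(counts):
    """True if every repetition of the query returned the same hit count."""
    return len(set(counts)) <= 1

def num_found(base_url, query="*:*"):
    # e.g. base_url = "http://localhost:8983/solr/core1"  (assumed URL)
    with urlopen(f"{base_url}/select?q={query}&wt=json&rows=0") as resp:
        return json.load(resp)["response"]["numFound"]

if __name__ == "__main__":
    counts = [num_found("http://localhost:8983/solr/core1") for _ in range(10)]
    print("consistent" if consistent(counts) else f"inconsistent: {counts}")
```

If the counts differ between runs, the replicas of at least one shard have diverged.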

 Inconsistent results in a distributed configuration
 ---

 Key: SOLR-5706
 URL: https://issues.apache.org/jira/browse/SOLR-5706
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6
Reporter: Felipe Fonseca Ribeiro

 I'm getting inconsistent results in a distributed configuration. 
 Using 

Re: Interesting blog on G1 GC improvements u25 - u60

2014-12-15 Thread Shawn Heisey
On 12/6/2014 3:00 PM, Shawn Heisey wrote:
 On 12/5/2014 2:42 PM, Erick Erickson wrote:
 Saw this on the Cloudera website:

 http://blog.cloudera.com/blog/2014/12/tuning-java-garbage-collection-for-hbase/

 Original post here:
 https://software.intel.com/en-us/blogs/2014/06/18/part-1-tuning-java-garbage-collection-for-hbase

 Although it's for hbase, I thought the presentation went into enough
 detail about what improvements they'd seen that I can see it being
 useful for Solr folks. And we have some people on this list who are
 interested in this sort of thing
 
 Very interesting.  My own experiences with G1 and Solr (which I haven't
 repeated since early Java 7 releases, something like 7u10 or 7u13) would
 show even worse spikes compared to the blue lines on those graphs ...
 and my heap isn't anywhere even CLOSE to 100GB.  Solr probably has
 different garbage creation characteristics than hbase.

Followup with graphs.  I've cc'd Rory at Oracle too, with hopes that
this info will ultimately reach those who work on G1.  I can provide the
actual GC logs as well.
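For anyone who wants to rebuild graphs like these from the logs, the pause durations can be pulled out with something like the sketch below (it assumes the logs were written with -XX:+PrintGCApplicationStoppedTime; the exact line format varies between JVM versions):

```python
# Sketch: extract stop-the-world pause durations from a JDK7-era GC log.
# Assumes -XX:+PrintGCApplicationStoppedTime output; the line format is
# version-dependent, so treat the regex as a starting point.
import re

STOPPED = re.compile(
    r"Total time for which application threads were stopped: "
    r"([0-9.]+) seconds")

def pause_seconds(lines):
    """Return the list of pause durations (seconds) found in `lines`."""
    return [float(m.group(1)) for line in lines
            if (m := STOPPED.search(line))]

def summarize(pauses):
    """Crude summary comparable to eyeballing the graphs."""
    if not pauses:
        return {}
    return {"count": len(pauses),
            "max": max(pauses),
            "over_1s": sum(p > 1.0 for p in pauses)}
```

Bucketing the extracted pauses by timestamp is then enough to reproduce the scatter plots linked above.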

Here's a graph of a GC log lasting over two weeks with a tuned CMS
collector and Oracle Java 7u25 and a 6GB heap.

https://www.dropbox.com/s/mygjeviyybqqnqd/cms-7u25.png?dl=0

CMS was tuned using these settings:

http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning
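For context, CMS tunings of this general shape were common for Solr at the time; the fragment below is an illustrative sketch only (the wiki page is authoritative for the exact flags and values), shown with the 6GB heap mentioned above:

```shell
# Illustrative CMS tuning of the general shape referenced above.
# A sketch, not the exact flags from the wiki page.
JVM_OPTS="-Xms6g -Xmx6g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+CMSParallelRemarkEnabled \
  -XX:+PrintGCApplicationStoppedTime \
  -Xloggc:gc.log"
```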

This graph shows that virtually all collection pauses were a little
under half a second.  There were exactly three full garbage collections,
and each one took around six seconds.  While that is a significant
pause, having only three such collections over a period of 16 days
sounds pretty good to me.

Here's about half as much runtime (8 days) on the same server running G1
with Oracle 7u72 and the same 6GB heap.  G1 is untuned, because I do not
know how:

https://www.dropbox.com/s/2kgx60gj988rflj/g1-7u72.png?dl=0

Most of these collections were around a tenth of a second ... which is
certainly better than nearly half a second ... but there are a LOT of
collections that take longer than a second, and a fair number of them
that took between 3 and 5 seconds.

It's difficult to say which of these graphs is actually better.  The CMS
graph is certainly more consistent, and does a LOT fewer full GCs ...
but is the 4 to 1 improvement in a typical GC enough to reveal
significantly better performance?  My instinct says that it would NOT be
enough for that, especially with so many collections taking 1-3 seconds.

If the server was really busy (mine isn't), I wonder whether the GC
graph would look similar, or whether it would be really different.  A
busy server would need to collect a lot more garbage, so I fear that the
yellow and black parts of the G1 graph would dominate more than they do
in my graph, which would be overall a bad thing.  Only real testing on
busy servers can tell us that.

I can tell you for sure that the G1 graph looks a lot better than it did
in early Java 7 releases, but additional work by Oracle (and perhaps
some G1 tuning options) might significantly improve it.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247880#comment-14247880
 ] 

ASF subversion and git services commented on SOLR-6852:
---

Commit 1645866 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1645866 ]

SOLR-6852: SimplePostTool no longer defaults to collection1, also there's no 
default update URL

 SimplePostTool should no longer default to collection1
 --

 Key: SOLR-6852
 URL: https://issues.apache.org/jira/browse/SOLR-6852
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.0

 Attachments: SOLR-6852.patch, SOLR-6852.patch


 Solr no longer would be bootstrapped with collection1 and so it no longer 
 makes sense for the SimplePostTool to default to collection1 either.
 Without an explicit collection/core/url value, the call should just fail fast.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247881#comment-14247881
 ] 

ASF subversion and git services commented on SOLR-6852:
---

Commit 1645867 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645867 ]

SOLR-6852: SimplePostTool no longer defaults to collection1, also there's no 
default update URL (merge from trunk)




[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247884#comment-14247884
 ] 

ASF subversion and git services commented on SOLR-6852:
---

Commit 1645868 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1645868 ]

SOLR-6852: Adding the CHANGES.txt entry




[jira] [Commented] (SOLR-6852) SimplePostTool should no longer default to collection1

2014-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247888#comment-14247888
 ] 

ASF subversion and git services commented on SOLR-6852:
---

Commit 1645869 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1645869 ]

SOLR-6852: Adding the CHANGES.txt entry (Merging from trunk)




[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1953 - Still Failing!

2014-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1953/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.TestModifyConfFiles.testDistribSearch

Error Message:
expected:[Error from server at http://127.0.0.1:57071/collection1: ]No file 
name specifi... but was:[]No file name specifi...

Stack Trace:
org.junit.ComparisonFailure: expected:[Error from server at 
http://127.0.0.1:57071/collection1: ]No file name specifi... but was:[]No 
file name specifi...
at 
__randomizedtesting.SeedInfo.seed([EE18DF1D8D0FA780:6FFE5105FA50C7BC]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestModifyConfFiles.doTest(TestModifyConfFiles.java:65)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at