Tika error

2012-12-06 Thread Arkadi Colson

Anybody an idea?

Dec 5, 2012 3:52:32 PM org.apache.solr.client.solrj.impl.HttpClientUtil 
createClient
INFO: Creating new http client, 
config:maxConnections=500&maxConnectionsPerHost=16
Dec 5, 2012 3:52:33 PM 
org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [intradesk] webapp=/solr path=/update/extract 
params={literal.smsc_ssid=1499&commit=true&literal.id=1354722015&literal.smsc_date_edited=2012-11-05T15:09:47Z&literal.smsc_courseID=0&literal.smsc_date_created=2012-11-05T15:09:47Z&wt=json&literal.smsc_module=intradesk} {} 0 313
Dec 5, 2012 3:52:33 PM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
org/mozilla/universalchardet/CharsetListener
at 
org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:469)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:297)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:931)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NoClassDefFoundError: 
org/mozilla/universalchardet/CharsetListener

at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at 
org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2904)
at 
org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1173)
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1681)
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1559)
at 
org.apache.tika.parser.txt.UniversalEncodingDetector.detect(UniversalEncodingDetector.java:40)
at 
org.apache.tika.detect.AutoDetectReader.detect(AutoDetectReader.java:51)
at 
org.apache.tika.detect.AutoDetectReader.init(AutoDetectReader.java:92)
at 
org.apache.tika.detect.AutoDetectReader.init(AutoDetectReader.java:98)

at org.apache.tika.parser.txt.TXTParser.parse(TXTParser.java:70)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:219)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240)

at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)

... 15 more
Caused by: java.lang.ClassNotFoundException: 
org.mozilla.universalchardet.CharsetListener
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1714)
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1559)

... 38 more

Dec 6, 2012 7:58:02 AM 
org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [intradesk] webapp=/solr path=/update/extract 

RE: Synonyms.txt not working with wildcards in query

2012-12-06 Thread Markus Jelsma
Hi

Queries with wildcards or fuzzy operators are called multi-term queries and do 
not pass through the field's analyzer as you might expect.

See: http://wiki.apache.org/solr/MultitermQueryAnalysis
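
If some analysis (e.g. lowercasing) is still wanted for wildcard terms, Solr 3.6+ lets a field type declare an explicit multiterm analyzer. A hedged sketch — the field type name and filter chain are illustrative, not from this thread, and note that synonym expansion itself cannot run at multiterm time (only "multi-term aware" filters such as lowercasing can):

```xml
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- synonyms apply to normal (non-wildcard) query terms only -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- applied to wildcard/fuzzy (multi-term) terms -->
  <analyzer type="multiterm">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

A common workaround for this case is to expand synonyms at index time instead, so that wildcard queries match the expanded terms directly.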
 
 
-Original message-
 From:Pratyul Kapoor praty...@gmail.com
 Sent: Thu 06-Dec-2012 06:28
 To: solr-user@lucene.apache.org
 Subject: Synonyms.txt not working with wildcards in query
 
 Hi,
 
 I am appending a wildcard (*) at the end of my query. But to my amazement,
 synonyms.txt stops working after using wildcards.
 
 Is there any way this can be handled in Solr?
 
 Regards
 Pratyul
 


RE: Disable term frequency for some fields in solr

2012-12-06 Thread Markus Jelsma
Hi,

You can either use omitTermFreqAndPositions on that field or set a custom 
similarity for that field that returns 1 for tf > 0.

http://wiki.apache.org/solr/SchemaXml#Common_field_options
http://wiki.apache.org/solr/SchemaXml#Similarity
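
For the first option, a minimal schema.xml sketch (field and type names are illustrative):

```xml
<!-- tf is effectively constant for this field; note this also omits
     positions, so phrase queries against it will no longer work -->
<field name="title_notf" type="text_general" indexed="true" stored="true"
       omitTermFreqAndPositions="true"/>
```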
 
 
-Original message-
 From:Amit Jha shanuu@gmail.com
 Sent: Thu 06-Dec-2012 08:13
 To: solr-user@lucene.apache.org
 Subject: Disable term frequency for some fields in solr
 
 Hi,
 
 In my case I would like to disable term frequency for some fields. These
 fields should return a constant term frequency irrespective of how many
 times a query term occurs in those fields.
 
 Regards
 Shanu
 


Re: Restricting search results by field value

2012-12-06 Thread Tom Mortimer
Sounds like it's worth a try! Thanks Andre.
Tom

On 5 Dec 2012, at 17:49, Andre Bois-Crettez andre.b...@kelkoo.com wrote:

 If you do grouping on source_id, it should be enough to request 3 times
 more documents than you need, then reorder and drop the bottom.
 
 Is a 3x overhead acceptable ?
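
A hedged sketch of what such a grouped request might look like (parameter values illustrative). With group.main=true the grouped result is flattened back into a single list with at most group.limit docs per source, though the flattened order is by group head rather than strict global relevance, so some client-side re-sorting may still be needed:

```
/select?q=...&group=true&group.field=source_id&group.limit=3&group.main=true
```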
 
 
 
 On 12/05/2012 12:04 PM, Tom Mortimer wrote:
 Hi everyone,
 
 I've got a problem where I have docs with a source_id field, and there can 
 be many docs from each source. Searches will typically return docs from many 
 sources. I want to restrict the number of docs from each source in results, 
 so there will be no more than (say) 3 docs from source_id=123 etc.
 
 Field collapsing is the obvious approach, but I want to get the results back 
 in relevancy order, not grouped by source_id. So it looks like I'll have to 
 fetch more docs than I need to and re-sort them. It might even be better to 
 count source_ids in the client code and drop excess docs that way, but the 
 potential overhead is large.
 
 Is there any way of doing this in Solr without hacking in a custom Lucene 
 Collector? (which doesn't look all that straightforward).
 
 cheers,
 Tom
 
 
 --
 André Bois-Crettez
 
 Search technology, Kelkoo
 http://www.kelkoo.com/
 
 Kelkoo SAS
 Société par Actions Simplifiée
 Au capital de € 4.168.964,30
 Siège social : 8, rue du Sentier 75002 Paris
 425 093 069 RCS Paris
 
 This message and its attachments are confidential and intended solely for 
 their addressees. If you are not the intended recipient of this message, 
 please delete it and notify the sender.



RE: FW: Replication error and Shard Inconsistencies..

2012-12-06 Thread Annette Newton
Hi,

The file descriptor count is always quite low.  At the moment, after heavy 
usage for a few days, file descriptor counts are between 100-150 and I don't 
have any errors in the logs.  My worry at the moment is around all the 
CLOSE_WAIT connections I am seeing.  This is particularly true on the boxes 
marked as leaders; the replicas have a few but nowhere near as many.

Thanks for the response.

-Original Message-
From: Andre Bois-Crettez [mailto:andre.b...@kelkoo.com] 
Sent: 05 December 2012 17:57
To: solr-user@lucene.apache.org
Subject: Re: FW: Replication error and Shard Inconsistencies..

Not sure, but maybe you are running out of file descriptors?
On each solr instance, look at the dashboard admin page, there is a bar with 
File Descriptor Count.

However if this was the case, I would expect to see lots of errors in the solr 
logs...

André


On 12/05/2012 06:41 PM, Annette Newton wrote:
 Sorry to bombard you - final update of the day...

 One thing that I have noticed is that we have a lot of connections 
 between the solr boxes with the connection set to CLOSE_WAIT and they 
 hang around for ages.

 -Original Message-
 From: Annette Newton [mailto:annette.new...@servicetick.com]
 Sent: 05 December 2012 13:55
 To: solr-user@lucene.apache.org
 Subject: FW: Replication error and Shard Inconsistencies..

 Update:

 I did a full restart of the solr cloud setup, stopped all the 
 instances, cleared down zookeeper and started them up individually.  I 
 then removed the index from one of the replicas, restarted solr and it 
 replicated ok.  So I'm wondering whether this is something that happens over 
 a period of time.

 Also just to let you know I changed the schema a couple of times and 
 reloaded the cores on all instances previous to the problem.  Don't 
 know if this could have contributed to the problem.

 Thanks.

 -Original Message-
 From: Annette Newton [mailto:annette.new...@servicetick.com]
 Sent: 05 December 2012 09:04
 To: solr-user@lucene.apache.org
 Subject: RE: Replication error and Shard Inconsistencies..

 Hi Mark,

 Thanks so much for the reply.

 We are using the release version of 4.0..

 It's very strange: replication appears to be underway, but no files are 
 being copied across.  I have attached both the log from the new node 
 that I tried to bring up and the schema and config we are using.

 I think it's probably something weird with our config, so I'm going to 
 play around with it today.  If I make any progress I'll send an update.

 Thanks again.

 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: 05 December 2012 00:04
 To: solr-user@lucene.apache.org
 Subject: Re: Replication error and Shard Inconsistencies..

 Hey Annette,

 Are you using Solr 4.0 final? A version of 4x or 5x?

 Do you have the logs for when the replica tried to catch up to the leader?

 Stopping and starting the node is actually a fine thing to do. Perhaps 
 you can try it again and capture the logs.

 If a node is not listed as live but is in the clusterstate, that is 
 fine. It shouldn't be consulted. To remove it, you either have to 
 unload it with the core admin api or you could manually delete its 
 registered state under the node states node that the Overseer looks at.

 Also, it would be useful to see the logs of the new node coming 
 up. There should be info about what happens when it tries to replicate.

 It almost sounds like replication is just not working for your setup 
 at all and that you have to tweak some configuration. You shouldn't 
 see these nodes as active then though - so we should get to the bottom of 
 this.

 - Mark

 On Dec 4, 2012, at 4:37 AM, Annette 
 Newtonannette.new...@servicetick.com
 wrote:

 Hi all,

 I have quite a weird issue with Solr cloud.  I have a 4 shard, 2 
 replica setup. Yesterday one of the nodes lost communication with the 
 cloud setup, which resulted in it trying to run replication; this 
 failed, which has left me with a shard (Shard 4) that has 2,833,940 
 documents on the leader and 409,837 on the follower - obviously a big 
 discrepancy, and this leads to queries returning differing results 
 depending on which of these nodes the data comes from.  There is no 
 indication of a problem on the admin site other than the big 
 discrepancy in the number of documents.  They are all marked as 
 active etc.

 So I thought that I would force replication to happen again by 
 stopping and starting solr (probably the wrong thing to do), but this 
 resulted in no change.  So I turned off that node and replaced it 
 with a new one.  In zookeeper, live nodes doesn't list that machine 
 but it is still being shown as active in the clusterstate.json; I 
 have attached images showing this.
 This means the new node hasn't replaced the old node but is now a 
 replica on Shard 1!  Also that node doesn't appear to have replicated 
 Shard 1's data anyway; it didn't get marked as replicating 

Minimum HA Setup with SolrCloud

2012-12-06 Thread Thomas Heigl
Hey all,

I'm in the process of migrating a single Solr 4.0 instance to a SolrCloud
setup for availability reasons.

After studying the wiki page for SolrCloud I'm not sure what the absolute
minimum setup is that would allow for one machine to go down.

Would it be enough to have one shard with one leader and one replica? Could
this setup tolerate one of the instances going down? Or do I need three
instances because Zookeeper needs a quorum of instances?

Cheers,

Thomas


Put straight to a copyfield

2012-12-06 Thread Spadez
Hi,

I currently have this setup:

Bring data into the description field and then have this code:

<copyField source="description" dest="truncated_description"
maxChars="168"/>

to truncate the description and copy it to truncated_description.
This works fine.

I was wondering, is it possible so that when I bring in data from another
source I actually bring it straight into truncated_description, like
this:

DIH Source 1: Bring into description and copyfield moves it to
truncated_description
DIH Source 2: Bring pre-truncated description straight into
truncated_description
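
copyField only adds to its destination; it does not prevent populating the field directly, so this should work as long as truncated_description is a regular stored/indexed field. A hedged sketch (entity and column names are illustrative):

```xml
<!-- schema.xml: destination is an ordinary declared field -->
<field name="truncated_description" type="text_general" indexed="true" stored="true"/>
<copyField source="description" dest="truncated_description" maxChars="168"/>

<!-- data-config.xml for source 2: map the pre-truncated column straight in -->
<entity name="source2" query="SELECT id, short_desc FROM source2_table">
  <field column="short_desc" name="truncated_description"/>
</entity>
```

One caveat: the maxChars truncation applies only to the copyField path, so source 2's values would be indexed exactly as supplied.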



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Put-straight-to-a-copyfield-tp4024761.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: The shard called `properties`

2012-12-06 Thread Markus Jelsma
 
-Original message-
 From:Mark Miller markrmil...@gmail.com
 Sent: Wed 05-Dec-2012 23:23
 To: solr-user@lucene.apache.org
 Subject: Re: The shard called `properties`
 
 See the custom hashing issue - the UI has to be updated to ignore this.

Ah yes, i see it in clusterstate.json.

Thanks for the pointer to the issue.

 
 Unfortunately, it seems that clients have to be hard coded to realize 
 properties is not a shard unless we add another nested layer.
 
 Should be 100% harmless.
 
 - Mark
 
 On Dec 5, 2012, at 5:05 AM, Markus Jelsma markus.jel...@openindex.io wrote:
 
  Hi,
  
  We're suddenly seeing a shard called `properties` in the cloud graph page 
  when testing today's trunk with a clean Zookeeper data directory. Any idea 
  where it comes from? We have not changed the solr.xml on any node.
  
  Thanks
 
 


Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Mark Miller
It depends on if you are running embedded zk or an external zk ensemble.

One leader and a replica is all you need for Solr to allow one machine to go 
down - but if those same machines are running zookeeper, you need 3.

You could also run zookeeper on one external machine and then it would be fine 
if you lost one solr node - but if the one external zk node went down you 
would lose the ability to do updates until it was brought back up.

- Mark
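
For the external-ensemble option, a minimal three-node sketch (hostnames, ports and paths are illustrative):

```
# zoo.cfg, identical on all three ZooKeeper hosts
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

Each Solr node would then be started with something like -DzkHost=zk1:2181,zk2:2181,zk3:2181.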

On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:

 Hey all,
 
 I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
 setup for availability reasons.
 
 After studying the wiki page for SolrCloud I'm not sure what the absolute
 minimum setup is that would allow for one machine to go down.
 
 Would it be enough to have one shard with one leader and one replica? Could
 this setup tolerate one of the instances going down? Or do I need three
 instance because Zookeeper needs a quorum of instances?
 
 Cheers,
 
 Thomas



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky

but if those same machines are running zookeeper, you need 3.

And one of those 3 can go down? I thought 3 was the minimum number of 
zookeepers.


-- Jack Krupansky

-Original Message- 
From: Mark Miller

Sent: Thursday, December 06, 2012 9:30 AM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

It depends on if you are running embedded zk or an external zk ensemble.

One leader and a replica is all you need for Solr to allow on machine to go 
down - but if those same machines are running zookeeper, you need 3.


You could also run zookeeper on one external machine and then it would be 
fine if you lost one solr node - but if you the one external zk node went 
down you would lose the ability to do updates until it was brought back up.


- Mark

On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:


Hey all,

I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
setup for availability reasons.

After studying the wiki page for SolrCloud I'm not sure what the absolute
minimum setup is that would allow for one machine to go down.

Would it be enough to have one shard with one leader and one replica? 
Could

this setup tolerate one of the instances going down? Or do I need three
instance because Zookeeper needs a quorum of instances?

Cheers,

Thomas 




RE: Minimum HA Setup with SolrCloud

2012-12-06 Thread Markus Jelsma
The quorum is the minimum, so it depends on how many you have running in the 
ensemble. If it's three or four, then two is the quorum and therefore the 
minimum. Three is regarded as a minimum in the ensemble because two makes no 
sense.
 
-Original message-
 From:Jack Krupansky j...@basetechnology.com
 Sent: Thu 06-Dec-2012 15:53
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 but if those same machines are running zookeeper, you need 3.
 
 And one of those 3 can go down? I thought 3 was the minimum number of 
 zookeepers.
 
 -- Jack Krupansky
 
 -Original Message- 
 From: Mark Miller
 Sent: Thursday, December 06, 2012 9:30 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 It depends on if you are running embedded zk or an external zk ensemble.
 
 One leader and a replica is all you need for Solr to allow on machine to go 
 down - but if those same machines are running zookeeper, you need 3.
 
 You could also run zookeeper on one external machine and then it would be 
 fine if you lost one solr node - but if you the one external zk node went 
 down you would lose the ability to do updates until it was brought back up.
 
 - Mark
 
 On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:
 
  Hey all,
 
  I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
  setup for availability reasons.
 
  After studying the wiki page for SolrCloud I'm not sure what the absolute
  minimum setup is that would allow for one machine to go down.
 
  Would it be enough to have one shard with one leader and one replica? 
  Could
  this setup tolerate one of the instances going down? Or do I need three
  instance because Zookeeper needs a quorum of instances?
 
  Cheers,
 
  Thomas 
 
 


Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 9:56 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
 The quorum is the minimun, so it depends on how many you have running in the 
 ensemble. If it's three or four, then two is the quorum

I think that for 4 ZK servers, then 3 would be the quorum?

-Yonik
http://lucidworks.com


solr performance tuning issue

2012-12-06 Thread venkataramana.mangena
Hi users,

Could you please help us with tuning Solr search performance? We have done some 
performance testing on a Solr instance with 8GB RAM and 50,000 records in the 
index, with 33 concurrent users hitting the instance at an average of 17.5 hits 
per second and a response time of 2 seconds, which is very high.


We are looking for a 0.5 sec response time for the same.


Regards
Venkat Mangena
7893633833





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Mark Miller

On Dec 6, 2012, at 6:54 AM, Yonik Seeley yo...@lucidworks.com wrote:

 On Thu, Dec 6, 2012 at 9:56 AM, Markus Jelsma
 markus.jel...@openindex.io wrote:
 The quorum is the minimun, so it depends on how many you have running in the 
 ensemble. If it's three or four, then two is the quorum
 
 I think that for 4 ZK servers, then 3 would be the quorum?
 

Yup - 4 requires 3 as a quorum, same as 5. So 4 allows one to go down, 5 allows 
2 to go down.

- Mark
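
The arithmetic behind this, as a quick sketch: an ensemble of n ZooKeeper servers needs a majority of floor(n/2) + 1 to keep serving, so it tolerates the remainder failing.

```python
def quorum(n):
    """Majority needed for an ensemble of n ZooKeeper servers."""
    return n // 2 + 1

def tolerated_failures(n):
    """How many servers can fail while a majority survives."""
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
# prints:
# 1 1 0
# 2 2 0
# 3 2 1
# 4 3 1
# 5 3 2
```

This is also why an even ensemble size buys nothing: 4 servers tolerate one failure, the same as 3.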



RE: Minimum HA Setup with SolrCloud

2012-12-06 Thread Markus Jelsma

-Original message-
 From:Yonik Seeley yo...@lucidworks.com
 Sent: Thu 06-Dec-2012 16:01
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 On Thu, Dec 6, 2012 at 9:56 AM, Markus Jelsma
 markus.jel...@openindex.io wrote:
  The quorum is the minimun, so it depends on how many you have running in 
  the ensemble. If it's three or four, then two is the quorum
 
 I think that for 4 ZK servers, then 3 would be the quorum?

Yes, my mistake. I meant to say that four nodes do not allow an extra ZK to go 
down because of the quorum.

 
 -Yonik
 http://lucidworks.com
 


Re: how to assign dedicated server for indexing and add more shard in SolrCloud

2012-12-06 Thread Erick Erickson
First, forget about master/slave with SolrCloud! Leaders really exist to
resolve conflicts, the old notion of M/S replication is largely irrelevant.

Updates can go to any node in the cluster, leader, replica, whatever. The
node forwards the doc to the correct leader based on a hash of the
uniqueKey, which then forwards the raw document to all replicas. Then all
the replicas index the document separately. Note that this is true on
multi-document packets too. You can't get NRT with the old-style
replication process where the master indexes the doc and then the _index_
is replicated...

As for your second question, it sounds like you want to go from
numShards=2, say, to numShards=3. You can't do that as it stands. There are
two approaches:
1) shard splitting, which would redistribute the documents to a new set of
shards
2) pluggable hashing, which allows you to specify the code that does the
shard assignment.
Neither of these is available yet, although 2) is imminent. There is
active work on 1), but I don't think that will be ready as soon.

Best
Erick


On Tue, Dec 4, 2012 at 11:21 PM, Jason hialo...@gmail.com wrote:

 I'm using master and slave server for scaling.
 Master is dedicated for indexing and slave is for searching.
 Now, I'm planning to move SolrCloud.
 It has leader and replicas.
 The leader acts like the master and replicas act like slaves. Is that right?
 so, I'm wondering two things.

 First,
 How can I assign dedicated server for indexing in SolrCloud?

 Second,
 Consider I'm using  two shard cluster with shard replicas
 
 http://wiki.apache.org/solr/SolrCloud#Example_B:_Simple_two_shard_cluster_with_shard_replicas
 
 and I need to add one more shard with replicas.
 In this case, the existing two shards and replicas will already have many docs,
 so I want to index new docs into the new one only.
 How can I do this?

 Actually, I don't understand perfectly about SolrCloud.
 So, my questions can be ridiculous.
 Any inputs are welcome.
 Thanks,



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/how-to-assign-dedicated-server-for-indexing-and-add-more-shard-in-SolrCloud-tp4024404.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Thomas Heigl
Thanks a lot guys!

On Thu, Dec 6, 2012 at 4:22 PM, Markus Jelsma markus.jel...@openindex.iowrote:


 -Original message-
  From:Yonik Seeley yo...@lucidworks.com
  Sent: Thu 06-Dec-2012 16:01
  To: solr-user@lucene.apache.org
  Subject: Re: Minimum HA Setup with SolrCloud
 
  On Thu, Dec 6, 2012 at 9:56 AM, Markus Jelsma
  markus.jel...@openindex.io wrote:
   The quorum is the minimun, so it depends on how many you have running
 in the ensemble. If it's three or four, then two is the quorum
 
  I think that for 4 ZK servers, then 3 would be the quorum?

 Yes, my wrong. I meant to say that four nodes does not allow an extra ZK
 to go down because of the quorum.

 
  -Yonik
  http://lucidworks.com
 



Fwd: Tika error

2012-12-06 Thread Arkadi Colson

However the tomcat logs are reporting:

INFO: Adding 
'file:/opt/solr/contrib/extraction/lib/juniversalchardet-1.0.3.jar' to 
classloader
Dec 6, 2012 3:42:57 PM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
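
If juniversalchardet is loaded by Solr's resource loader while Tika itself was loaded by the webapp classloader (the WebappClassLoader frames in the stack trace suggest this), the classes can't see each other. One hedged thing to try: load Tika, solr-cell and all their dependencies through the same <lib> directives in solrconfig.xml (paths and regexes are illustrative; adjust to your layout):

```xml
<!-- solrconfig.xml: pull the whole extraction contrib, including
     juniversalchardet, in through one classloader -->
<lib dir="/opt/solr/contrib/extraction/lib" regex=".*\.jar"/>
<lib dir="/opt/solr/dist/" regex="apache-solr-cell-\d.*\.jar"/>
```

Alternatively, placing all of these jars directly in the webapp's WEB-INF/lib (and nowhere else) keeps everything in a single classloader.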




 Original Message 
Subject:Tika error
Date:   Thu, 06 Dec 2012 09:02:14 +0100
From:   Arkadi Colson ark...@smartbit.be
Reply-To:   ark...@smartbit.be
Organization:   Smartbit bvba
To: solr-user@lucene.apache.org solr-user@lucene.apache.org



Anybody an idea?


Re: solr searchHandler/searchComponent for query statistics

2012-12-06 Thread Andy Lester

On Dec 6, 2012, at 9:50 AM, joe.cohe...@gmail.com joe.cohe...@gmail.com 
wrote:

 Is there an out-of-the-box or have anyone already implemented a feature for
 collecting statistics on queries?


What sort of statistics are you talking about?  Are you talking about 
collecting information in aggregate about queries over time?  Or for giving 
statistics about individual queries, like time breakouts for benchmarking?

For the latter, you want debugQuery=true and you get a raft of stats down in 
<lst name="debug">.
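
A hedged example request and the kind of output to expect (core name illustrative):

```
/solr/collection1/select?q=foo&debugQuery=true
```

The response then carries a <lst name="debug"> section with the parsed query, per-document score explanations, and a timing breakdown per search component.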

xoa

--
Andy Lester = a...@petdance.com = www.petdance.com = AIM:petdance



Re: The shard called `properties`

2012-12-06 Thread Yonik Seeley
On Wed, Dec 5, 2012 at 5:17 PM, Mark Miller markrmil...@gmail.com wrote:
 See the custom hashing issue - the UI has to be updated to ignore this.

 Unfortunately, it seems that clients have to be hard coded to realize 
 properties is not a shard unless we add another nested layer.

Yeah, I talked about this a while back, but no one bit...
https://issues.apache.org/jira/browse/SOLR-3815?focusedCommentId=13452611page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452611

At this point, I suppose we could still add it, but retain the ability
to read older cluster states?

-Yonik
http://lucidworks.com


minPrefix attribute of DirectSolrSpellChecker

2012-12-06 Thread Nalini Kartha
Hi,

In most of the examples I have seen for configuring the
DirectSolrSpellChecker the minPrefix attribute is set to 1 (and this is the
default value as well).

Is there any specific reason for this - would performance take a hit if it
was set to 0? We'd like to support returning corrections which don't start
with the same letter so just wanted to confirm that there aren't any issues
with changing this.

Thanks,
Nalini
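
Setting it to 0 mainly trades speed for recall: minPrefix limits candidate lookup to terms sharing the first N characters with the query term, so 0 inspects more of the term dictionary per check. A hedged solrconfig.xml sketch (other parameters omitted; names illustrative):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- 0 allows corrections whose first letter differs from the query
         term, at the cost of inspecting more candidate terms -->
    <int name="minPrefix">0</int>
  </lst>
</searchComponent>
```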


Re: The shard called `properties`

2012-12-06 Thread Mark Miller
Yeah, the main problem with it didn't really occur to me until I saw the 
properties shard in the cluster view.

I started working on the UI to ignore it the other day and then never got there 
because I was getting all sorts of weird 'busy' errors from svn for a while and 
didn't have a clean checkout.

- mark

On Dec 6, 2012, at 8:16 AM, Yonik Seeley yo...@lucidworks.com wrote:

 On Wed, Dec 5, 2012 at 5:17 PM, Mark Miller markrmil...@gmail.com wrote:
 See the custom hashing issue - the UI has to be updated to ignore this.
 
 Unfortunately, it seems that clients have to be hard coded to realize 
 properties is not a shard unless we add another nested layer.
 
 Yeah, I talked about this a while back, but no one bit...
 https://issues.apache.org/jira/browse/SOLR-3815?focusedCommentId=13452611page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452611
 
 At this point, I suppose we could still add it, but retain the ability
 to read older cluster states?
 
 -Yonik
 http://lucidworks.com



Re: Solr 4 : Optimize very slow

2012-12-06 Thread Sandeep Mestry
Hi All,

I followed Michael's advice and the timings have come down from 6-8 hours
to a couple of hours now :-)
I have attached the solrconfig.xml we're using; can you let me know if I'm
missing something?

Thanks,
Sandeep
<?xml version="1.0" encoding="UTF-8" ?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<!--
 For more details about configuration options that may appear in this
 file, see http://wiki.apache.org/solr/SolrConfigXml.

 Specifically, the Solr Config can support XInclude, which may make it easier to manage
 the configuration.  See https://issues.apache.org/jira/browse/SOLR-1167
-->
<config>
  <luceneMatchVersion>LUCENE_40</luceneMatchVersion>
  <!-- Set this to 'false' if you want solr to continue working after it has
   encountered a severe configuration error.  In a production environment,
   you may want solr to keep working even if one handler is mis-configured.

   You may also set this to false by setting the system property:
     -Dsolr.abortOnConfigurationError=false
  -->
  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>

  <!-- lib directives can be used to instruct Solr to load the Jars identified
   and use them to resolve any plugins specified in your solrconfig.xml or
   schema.xml (ie: Analyzers, Request Handlers, etc...).

   All directories and paths are resolved relative to the instanceDir.

   If a ./lib directory exists in your instanceDir, all files found in it
   are included as if you had used the following syntax...

      <lib dir="./lib" />
  -->
  <!-- A dir option by itself adds any files found in the directory to the
   classpath; this is useful for including all jars in a directory.
  -->
  <lib dir="../../contrib/extraction/lib" />
  <!-- When a regex is specified in addition to a directory, only the files in that
   directory which completely match the regex (anchored on both ends)
   will be included.
  -->
  <lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar" />
  <lib dir="../../dist/" regex="apache-solr-clustering-\d.*\.jar" />
  <!-- If a dir option (with or without a regex) is used and nothing is found
   that matches, it will be ignored
  -->
  <lib dir="../../contrib/clustering/lib/downloads/" />
  <lib dir="../../contrib/clustering/lib/" />
  <lib dir="/total/crap/dir/ignored" />
  <!-- an exact path can be used to specify a specific file.  This will cause
   a serious error to be logged if it can't be loaded.
      <lib path="../a-jar-that-does-not-exist.jar" />
  -->


  <!-- Used to specify an alternate directory to hold all index data
   other than the default ./data under the Solr home.
   If replication is in use, this should match the replication configuration. -->
  <dataDir>${solr.data.dir:./solr/data}</dataDir>

  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.NIOFSDirectory}"/>


  <!-- WARNING: this indexDefaults section only provides defaults for index writers
   in general. See also the mainIndex section after that when changing parameters
   for Solr's main Lucene index. -->
  <indexConfig>
    <!-- Values here affect all index writers and act as a default unless overridden. -->
    <mergeFactor>30</mergeFactor>
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
        <int name="maxMergeAtOnce">15</int>
        <int name="segmentsPerTier">15</int>
    </mergePolicy>

    <!-- options specific to the main on-disk lucene index -->
    <ramBufferSizeMB>32</ramBufferSizeMB>

    <!--
    Custom deletion policies can be specified here. The class must
    implement org.apache.lucene.index.IndexDeletionPolicy.

    http://lucene.apache.org/java/2_3_2/api/org/apache/lucene/index/IndexDeletionPolicy.html

    The standard Solr IndexDeletionPolicy implementation supports deleting
    index commit points on number of commits, age of commit point and
    optimized status.

    The latest commit point should always be preserved regardless
    of the criteria.
    -->
    <deletionPolicy class="solr.SolrDeletionPolicy">
      <!-- The number of commit points to be kept -->
      <str name="maxCommitsToKeep">1</str>
      <!-- The number of optimized commit

Re: How to make a plugin SchemaAware or XAware, runtime wise? (Solr 3.6.1)

2012-12-06 Thread Chris Hostetter

: I'm sorry, I don't see how the resource loader awareness is relevant to
: schema awareness? Or perhaps you didn't imply that? Good to know about the

No, my mistake ... typed one when I meant the other.

: I guess I use the core to get to the schema then. Hmm, I may recall trying
: that at some point but that I hit some problem with that, something like
: the schema not being present when getting core inform call.

the schema (and the rest of the SolrCore) should be fully constructed and 
populated by the time inform(SolrCore) is called -- that's the entire 
point of these FooAware interfaces and the delayed inform(Foo) callbacks 
about them.

-Hoss


Re: Problem occur in searchComponent while moving from solr3.6 to solrcloud (solr4)

2012-12-06 Thread Chris Hostetter

You'll need to tell us more about your custom component so that we can 
make some suggestions as to how to update it to work with SolrCloud.

In particular: what exactly are you doing with the result from 
getConfigDir() ? ... if you are just using it to build a path to a File 
that you open to configure your component, just change it to use 
openConfig (or openResource)...

https://lucene.apache.org/solr/4_0_0/solr-core/org/apache/solr/core/SolrResourceLoader.html#openConfig%28java.lang.String%29


: My application has some custom search component build using solr3.6. Now i
: am moving to Solrcloud and got following exception :
: 
: *org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
: ZkSolrResourceLoader does not support getConfigDir() - likely, what you are
: trying to do is not supported in ZooKeeper mode*
: 
: It seems searchComponent in 3.6 doesnot work for solr cloud and since
: zookeeper keeps central config so that would be read. How to resolve the
: same? Please reply asap.
: 
: Thanks
: 
: 
: 
: 
: --
: View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-occur-in-searchComponent-while-moving-from-solr3-6-to-solrcloud-solr4-tp4023394.html
: Sent from the Solr - User mailing list archive at Nabble.com.
: 

-Hoss


ANNOUNCE: CFP Open For Lucene Revolution 2013: San Diego (April 29 - May 2)

2012-12-06 Thread Chris Hostetter


http://lucenerevolution.org/

Lucene Revolution 2013 will take place at The Westin San Diego on April 29 
- May 2, 2013. Many of the brightest minds in open source search will 
convene at this 4th annual Lucene Revolution to discuss topics and trends 
driving the next generation of search. The conference will be preceded by 
two days of Apache Lucene, Solr and Big Data training.


The event’s agenda will be comprised of Apache Lucene/Solr and Big Data 
tutorials and speaker sessions, creating opportunities for developers, 
technologists and business leaders to explore and gain deeper 
understandings of the technologies connected with open source search.


Individuals are encouraged to submit proposals for technical talks that 
focus on Apache Lucene and Solr in the enterprise, Big Data, case studies, 
large-scale search, and data integration.


Guidelines for submissions...

http://lucenerevolution.org/2013/call-for-papers


-Hoss

RE: positions and qf parameter in (e)dismax

2012-12-06 Thread Chris Hostetter

: Hi - no we're not getting any errors because we enabled positions on all 
: fields that are also listed in the qf-parameter. If we don't, and send a 
: phrase query we would get an error such as:
: 
: java.lang.IllegalStateException: field h1 was indexed without position 
data; cannot run
: PhraseQuery (term=a)

I'm clearly still misunderstanding something -- probably because I'm 
still not entirely clear on how to reproduce this (you said you are 
omitting positions, but you didn't provide the details on which 
edismax request options you were using to cause that error).

If you can open a Jira with more details on how to reproduce, we can 
certainly look into it.

:  I'm not understanding the problem ... is there a specific error you are 
:  getting? can you please post that error along with your schema and an 
:  example of a request that triggers the problem?


-Hoss


Re: Restricting search results by field value

2012-12-06 Thread Way Cool
Grouping should work:
group=true&group.field=source_id&group.limit=3&group.main=true

On Thu, Dec 6, 2012 at 2:35 AM, Tom Mortimer bano...@gmail.com wrote:

 Sounds like it's worth a try! Thanks Andre.
 Tom

 On 5 Dec 2012, at 17:49, Andre Bois-Crettez andre.b...@kelkoo.com wrote:

  If you do grouping on source_id, it should be enough to request 3 times
  more documents than you need, then reorder and drop the bottom.
 
  Is a 3x overhead acceptable ?
 
 
 
  On 12/05/2012 12:04 PM, Tom Mortimer wrote:
  Hi everyone,
 
  I've got a problem where I have docs with a source_id field, and there
 can be many docs from each source. Searches will typically return docs from
 many sources. I want to restrict the number of docs from each source in
 results, so there will be no more than (say) 3 docs from source_id=123 etc.
 
  Field collapsing is the obvious approach, but I want to get the results
 back in relevancy order, not grouped by source_id. So it looks like I'll
 have to fetch more docs than I need to and re-sort them. It might even be
 better to count source_ids in the client code and drop excess docs that
 way, but the potential overhead is large.
 
  Is there any way of doing this in Solr without hacking in a custom
 Lucene Collector? (which doesn't look all that straightforward).
 
  cheers,
  Tom
 
 
  --
  André Bois-Crettez
 
  Search technology, Kelkoo
  http://www.kelkoo.com/
 
  Kelkoo SAS
  Société par Actions Simplifiée
  Au capital de € 4.168.964,30
  Siège social : 8, rue du Sentier 75002 Paris
  425 093 069 RCS Paris
 
  Ce message et les pièces jointes sont confidentiels et établis à
 l'attention exclusive de leurs destinataires. Si vous n'êtes pas le
 destinataire de ce message, merci de le détruire et d'en avertir
 l'expéditeur.




Re: Restricting search results by field value

2012-12-06 Thread Tom Mortimer
Thanks, but even with group.main=true the results are not in relevancy (score) 
order, they are in group order. Which is why I can't use it as is.

Tom
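
Since group.main=true returns documents in group order rather than global score order, the client-side fallback described here - fetch extra docs and cap the count per source - can be sketched as follows (a hypothetical post-processing step in client code, not a Solr API):

```python
from collections import defaultdict

def cap_per_source(docs, limit=3):
    """Keep at most `limit` docs per source_id, preserving the original
    (descending-score) order that Solr returned them in."""
    seen = defaultdict(int)
    out = []
    for doc in docs:
        sid = doc["source_id"]
        if seen[sid] < limit:
            seen[sid] += 1
            out.append(doc)
    return out

# Example input, already sorted by score as Solr would return it:
docs = [
    {"id": 1, "source_id": "123", "score": 9.0},
    {"id": 2, "source_id": "123", "score": 8.5},
    {"id": 3, "source_id": "456", "score": 8.0},
    {"id": 4, "source_id": "123", "score": 7.0},
    {"id": 5, "source_id": "123", "score": 6.0},
]
print([d["id"] for d in cap_per_source(docs, limit=3)])  # → [1, 2, 3, 4]
```

To fill a page of N results you would over-fetch (e.g. 3N rows, per Andre's suggestion), apply the cap, and truncate - accepting that a pathological distribution of sources can still leave you short.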


On 6 Dec 2012, at 19:00, Way Cool way1.wayc...@gmail.com wrote:

 Grouping should work:
 group=true&group.field=source_id&group.limit=3&group.main=true
 
 On Thu, Dec 6, 2012 at 2:35 AM, Tom Mortimer bano...@gmail.com wrote:
 
 Sounds like it's worth a try! Thanks Andre.
 Tom
 
 On 5 Dec 2012, at 17:49, Andre Bois-Crettez andre.b...@kelkoo.com wrote:
 
 If you do grouping on source_id, it should be enough to request 3 times
 more documents than you need, then reorder and drop the bottom.
 
 Is a 3x overhead acceptable ?
 
 
 
 On 12/05/2012 12:04 PM, Tom Mortimer wrote:
 Hi everyone,
 
 I've got a problem where I have docs with a source_id field, and there
 can be many docs from each source. Searches will typically return docs from
 many sources. I want to restrict the number of docs from each source in
 results, so there will be no more than (say) 3 docs from source_id=123 etc.
 
 Field collapsing is the obvious approach, but I want to get the results
 back in relevancy order, not grouped by source_id. So it looks like I'll
 have to fetch more docs than I need to and re-sort them. It might even be
 better to count source_ids in the client code and drop excess docs that
 way, but the potential overhead is large.
 
 Is there any way of doing this in Solr without hacking in a custom
 Lucene Collector? (which doesn't look all that straightforward).
 
 cheers,
 Tom
 
 
 --
 André Bois-Crettez
 
 Search technology, Kelkoo
 http://www.kelkoo.com/
 
 Kelkoo SAS
 Société par Actions Simplifiée
 Au capital de € 4.168.964,30
 Siège social : 8, rue du Sentier 75002 Paris
 425 093 069 RCS Paris
 
 Ce message et les pièces jointes sont confidentiels et établis à
 l'attention exclusive de leurs destinataires. Si vous n'êtes pas le
 destinataire de ce message, merci de le détruire et d'en avertir
 l'expéditeur.
 
 



Re: how to assign dedicated server for indexing and add more shard in SolrCloud

2012-12-06 Thread Mikhail Khludnev
Jason,
Thanks for raising it!

Erick,
That's something I've wanted to discuss for a long time. Frankly speaking, the
question is:

if old-school (master/slave) search deployments don't comply with the vision of
SolrCloud/ElasticSearch, does that mean they are wrong?

Let me enumerate kinds of 'old-school search':
- the number of docs is not so dramatic as to make sharding profitable from the
search latency POV;
- index updates are not frequent; they are rather rare nightly bulks;
- the search index is not an SOR (system of record) - it's a secondary system
that provides the search service, still significant for the enterprise;
- there is an SOR - a primary system, which is some kind of CMS or RDBMS, or a
CMS publishing through an RDBMS, etc.

Does it look like your system? No, - click Delete button!

// for few people who still read this:

That's what I have with SolrCloud in this case:
- I can decide not to deal with sharding. Good! Put numShards=0, and buy
more (VM) instances to have more replicas and increase throughput;
- start the nightly reindex - delQ *:*, add(), commit();
- in this case all my instances will spend resources indexing the same docs
instead of handling search requests - BAD#1;
- even if I'm able to supply a long Iterable<SolrInputDocument>,
DistributedUpdateProcessor will throw documents one by one, not in huge
chunks, which leads to many small segments - e.g. if I have a 100Mb RAM buffer
and 10 servlet container threads, I'll get a sequence of 10Mb segments;
- each of these flushes also flushes some part of the current index mapped into
RAM, which impacts search latency - BAD#2;
- when indexing is over I have many small segments, and then The Merge
starts, which also flushes the current index from RAM - BAD#3.

In summary: I waste resources indexing the same stuff on searcher nodes, and as
a side effect I get a longer period of latency impact.

How I want to do it:
 - in the cloud, I add small instances as replicas on demand to adjust to the
work load dynamically;
 - when I need to reindex (full import), I can rent a super cool VM instance
with a many-way CPU and run indexing on it;
 - if it blows up, no problem - I can run the full import from my CMS/DB again
from the beginning, or I can run two imports simultaneously;
 - after indexing finishes, I can push the index to the searchers or start new
ones, mounting the index to them.

Please tell me where I'm wrong, whether it's SolrCloud features, 'cloud'
economy, hardware/VMware architecture or Lucene internals. Can Jason and I
adjust SolrCloud for our 'old-school' pattern?

Thanks for sharing your opinion!



On Thu, Dec 6, 2012 at 7:19 PM, Erick Erickson erickerick...@gmail.com wrote:

 First, forget about master/slave with SolrCloud! Leaders really exist to
 resolve conflicts, the old notion of M/S replication is largely irrelevant.

 Updates can go to any node in the cluster, leader, replica, whatever. The
 node forwards the doc to the correct leader based on a hash of the
 uniqueKey, which then forwards the raw document to all replicas. Then all
 the replicas index the document separately. Note that this is true on
 mutli-document packets too. You can't get NRT with the old-style
 replication process where the master indexes the doc and then the _index_
 is replicated...

 As for your second question, it sounds like you want to go from
 numShards=2, say to numShards=3. You can't do that as it stands. There are
 two approaches:
 1 shard splitting which would redistribute the documents to a new set of
 shards
 2 pluggable hashing which allows you to specify the code that does the
 shard assignment.
 Neither of these are available yet, although 2 is imminent. There is
 active work on 1, but I don't think that will be ready as soon.

 Best
 Erick


 On Tue, Dec 4, 2012 at 11:21 PM, Jason hialo...@gmail.com wrote:

  I'm using master and slave server for scaling.
  Master is dedicated for indexing and slave is for searching.
  Now, I'm planning to move SolrCloud.
  It has leader and replicas.
  Leader acts like master and replicas acts like slave. Is it right?
  so, I'm wondering two things.
 
  First,
  How can I assign dedicated server for indexing in SolrCloud?
 
  Second,
  Consider I'm using  two shard cluster with shard replicas
  
 
 http://wiki.apache.org/solr/SolrCloud#Example_B:_Simple_two_shard_cluster_with_shard_replicas
  
  and I need to extend one more shard with replicas.
  In this case, existing two shards and replicas will already have many
 docs.
  so, I want to add indexing docs in new one only.
  How can I do this?
 
  Actually, I don't understand perfectly about SolrCloud.
  So, my questions can be ridiculous.
  Any inputs are welcome.
  Thanks,
 
 
 
  --
  View this message in context:
 
 http://lucene.472066.n3.nabble.com/how-to-assign-dedicated-server-for-indexing-and-add-more-shard-in-SolrCloud-tp4024404.html
  Sent from the Solr - User mailing list archive at Nabble.com.
 




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
 

Re: solr performance tuning issue

2012-12-06 Thread Mikhail Khludnev
Hello,

What's your OS/CPU? Is it a VM or real hardware? Which JVM do you run, and with
which parameters? Have you checked the GC log? What's the index size? What are
typical query parameters? What's the average number of results per
query? Have you tried running a query with debugQuery=true during hard load
and looking at the per-component timing dump? Have you tried the same at
a calm time - what's the latency in that case? These are the mandatory
initial details for the kind of question you are asking.

Good Luck

On Thu, Dec 6, 2012 at 11:58 AM, venkataramana.mang...@bt.com wrote:

 Could you please help us with tuning Solr search performance? We have
 run some performance testing on a Solr instance with 8GB RAM and 50,000
 records in the index: 33 concurrent users hitting the instance at an
 average of 17.5 hits per second, with a response time of 2 seconds, which
 is a very high value.




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
 mkhlud...@griddynamics.com


Re: solr searchHandler/searchComponent for query statistics

2012-12-06 Thread Otis Gospodnetic
Hi Joe,

http://sematext.com/search-analytics/index.html is free and will give you a
bunch of reports about search (Solr or anything else).  Not queries by IP,
though - for that you better grep logs.

Yes, you could also implement your own SearchComponent, assuming the
servers/LBs in front of Solr pass in the original IP, but I wouldn't put
this stuff in the query path.

Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 6, 2012 at 10:50 AM, joe.cohe...@gmail.com 
joe.cohe...@gmail.com wrote:

 Hi
 Is there an out-of-the-box or have anyone already implemented a feature for
 collecting statistics on queries?


 I've tried to see if I can parse the log file for /select requests, but I'm
 using SolrCloud, and each such request goes to a single instance, so going
 this way I'd have to collect all /select log lines from all the
 instances.

 Any other advice on doing this?

 What I eventually need is the queries per IP, so I'll have both the overall
 and per-user usage statistics.

 thanks.



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/solr-searchHandler-searchComponent-for-query-statistics-tp4024837.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Otis Gospodnetic
1 is the minimum :)
2 makes no sense.
3 must be the most common number in the zoo.

Otis
--
Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 6, 2012 at 9:46 AM, Jack Krupansky j...@basetechnology.com wrote:

 but if those same machines are running zookeeper, you need 3.

 And one of those 3 can go down? I thought 3 was the minimum number of
 zookeepers.

 -- Jack Krupansky

 -Original Message- From: Mark Miller
 Sent: Thursday, December 06, 2012 9:30 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud


 It depends on if you are running embedded zk or an external zk ensemble.

One leader and a replica is all you need for Solr to allow one machine to
go down - but if those same machines are running zookeeper, you need 3.

You could also run zookeeper on one external machine and then it would be
fine if you lost one solr node - but if the one external zk node went
down you would lose the ability to do updates until it was brought back up.

 - Mark

 On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:

  Hey all,

 I'm in the process of migrating a single Solr 4.0 instance to a SolrCloud
 setup for availability reasons.

 After studying the wiki page for SolrCloud I'm not sure what the absolute
 minimum setup is that would allow for one machine to go down.

 Would it be enough to have one shard with one leader and one replica?
 Could
 this setup tolerate one of the instances going down? Or do I need three
 instance because Zookeeper needs a quorum of instances?

 Cheers,

 Thomas





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Michael Della Bitta
One is the loneliest number that you'll ever do,
Two can be as bad as one, it's the loneliest number since the single Zoo.


Michael Della Bitta


Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Thu, Dec 6, 2012 at 4:39 PM, Otis Gospodnetic otis.gospodne...@gmail.com
 wrote:

 1 is the minimum :)
 2 makes no sense.
 3 must be the most common number in the zoo.

 Otis
 --
 Performance Monitoring - http://sematext.com/spm/index.html
 Search Analytics - http://sematext.com/search-analytics/index.html




 On Thu, Dec 6, 2012 at 9:46 AM, Jack Krupansky j...@basetechnology.com
 wrote:

  but if those same machines are running zookeeper, you need 3.
 
  And one of those 3 can go down? I thought 3 was the minimum number of
  zookeepers.
 
  -- Jack Krupansky
 
  -Original Message- From: Mark Miller
  Sent: Thursday, December 06, 2012 9:30 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Minimum HA Setup with SolrCloud
 
 
  It depends on if you are running embedded zk or an external zk ensemble.
 
  One leader and a replica is all you need for Solr to allow on machine to
  go down - but if those same machines are running zookeeper, you need 3.
 
  You could also run zookeeper on one external machine and then it would be
  fine if you lost one solr node - but if you the one external zk node went
  down you would lose the ability to do updates until it was brought back
 up.
 
  - Mark
 
  On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:
 
   Hey all,
 
  I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
  setup for availability reasons.
 
  After studying the wiki page for SolrCloud I'm not sure what the
 absolute
  minimum setup is that would allow for one machine to go down.
 
  Would it be enough to have one shard with one leader and one replica?
  Could
  this setup tolerate one of the instances going down? Or do I need three
  instance because Zookeeper needs a quorum of instances?
 
  Cheers,
 
  Thomas
 
 
 



Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread sausarkar
We measured it: for just 3 nodes the overhead is around 100ms. We also noticed
that CPU spikes to 100% and some queries get blocked; this happens only when
the cloud has multiple nodes and not on a single node. All the nodes have the
exact same configuration, JVM settings and hardware.

Any clues why this is happening?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4024941.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky

Rewind.

If 1 is the minimum, what is the "3 minimum" all about?

The zk web page does say "Three ZooKeeper servers is the minimum recommended 
size for an ensemble, and we also recommend that they run on separate 
machines" - but it does say "recommended".


But back to the original question - it sounds as if 4 nodes would be the 
recommended minimum number of nodes if you want to tolerate one machine 
going down while maintaining that 3-zookeeper recommended minimum ensemble.



From the zookeeper web page:


For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster 
known as an ensemble. As long as a majority of the ensemble are up, the 
service will be available. Because Zookeeper requires a majority, it is best 
to use an odd number of machines. For example, with four machines ZooKeeper 
can only handle the failure of a single machine; if two machines fail, the 
remaining two machines do not constitute a majority. However, with five 
machines ZooKeeper can handle the failure of two machines.


See:
http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
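
The majority rule quoted above reduces to simple arithmetic: an ensemble of n servers tolerates floor((n-1)/2) failures, which is why 3 is the smallest ensemble that survives losing a machine. A quick sketch:

```python
# Majority rule: an ensemble of n ZooKeeper servers stays available
# as long as more than n/2 of them are up, so it tolerates
# floor((n - 1) / 2) failures.
def tolerated_failures(n):
    return (n - 1) // 2

for n in range(1, 6):
    print(f"{n} servers -> survives {tolerated_failures(n)} failure(s)")
```

Note that 4 servers tolerate no more failures than 3 (still just one), while 5 tolerate two - which is why the zk docs recommend odd-sized ensembles.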

-- Jack Krupansky

-Original Message- 
From: Otis Gospodnetic

Sent: Thursday, December 06, 2012 4:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

1 is the minimum :)
2 makes no sense.
3 must be the most common number in the zoo.

Otis
--
Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 6, 2012 at 9:46 AM, Jack Krupansky 
j...@basetechnology.comwrote:



but if those same machines are running zookeeper, you need 3.

And one of those 3 can go down? I thought 3 was the minimum number of
zookeepers.

-- Jack Krupansky

-Original Message- From: Mark Miller
Sent: Thursday, December 06, 2012 9:30 AM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud


It depends on if you are running embedded zk or an external zk ensemble.

One leader and a replica is all you need for Solr to allow on machine to
go down - but if those same machines are running zookeeper, you need 3.

You could also run zookeeper on one external machine and then it would be
fine if you lost one solr node - but if you the one external zk node went
down you would lose the ability to do updates until it was brought back 
up.


- Mark

On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:

 Hey all,


I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
setup for availability reasons.

After studying the wiki page for SolrCloud I'm not sure what the absolute
minimum setup is that would allow for one machine to go down.

Would it be enough to have one shard with one leader and one replica?
Could
this setup tolerate one of the instances going down? Or do I need three
instance because Zookeeper needs a quorum of instances?

Cheers,

Thomas








Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky

Slightly more recent link:
http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html

-- Jack Krupansky

-Original Message- 
From: Jack Krupansky

Sent: Thursday, December 06, 2012 5:21 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

Rewind.

If 1 is the minimum, what is the 3 minimum all about?

The zk web page does say Three ZooKeeper servers is the minimum recommended
size for an ensemble, and we also recommend that they run on separate
machines - but it does say recommended.

But back to the original question - it sounds as if 4 nodes would the
recommended minimum number of nodes if you want to tolerate one machine
going down and maintaining that 3 zookeeper recommended minimum ensemble.


From the zookeeper web page:


For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster
known as an ensemble. As long as a majority of the ensemble are up, the
service will be available. Because Zookeeper requires a majority, it is best
to use an odd number of machines. For example, with four machines ZooKeeper
can only handle the failure of a single machine; if two machines fail, the
remaining two machines do not constitute a majority. However, with five
machines ZooKeeper can handle the failure of two machines.

See:
http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html

-- Jack Krupansky

-Original Message- 
From: Otis Gospodnetic

Sent: Thursday, December 06, 2012 4:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

1 is the minimum :)
2 makes no sense.
3 must be the most common number in the zoo.

Otis
--
Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 6, 2012 at 9:46 AM, Jack Krupansky
j...@basetechnology.comwrote:


but if those same machines are running zookeeper, you need 3.

And one of those 3 can go down? I thought 3 was the minimum number of
zookeepers.

-- Jack Krupansky

-Original Message- From: Mark Miller
Sent: Thursday, December 06, 2012 9:30 AM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud


It depends on if you are running embedded zk or an external zk ensemble.

One leader and a replica is all you need for Solr to allow on machine to
go down - but if those same machines are running zookeeper, you need 3.

You could also run zookeeper on one external machine and then it would be
fine if you lost one solr node - but if you the one external zk node went
down you would lose the ability to do updates until it was brought back 
up.


- Mark

On Dec 6, 2012, at 2:46 AM, Thomas Heigl tho...@umschalt.com wrote:

 Hey all,


I'm in the process of migrating a single Solr 4.0 instace to a SolrCloud
setup for availability reasons.

After studying the wiki page for SolrCloud I'm not sure what the absolute
minimum setup is that would allow for one machine to go down.

Would it be enough to have one shard with one leader and one replica?
Could
this setup tolerate one of the instances going down? Or do I need three
instance because Zookeeper needs a quorum of instances?

Cheers,

Thomas






Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Walter Underwood
And that link includes this sentence: "For example, with four machines 
ZooKeeper can only handle the failure of a single machine; if two machines 
fail, the remaining two machines do not constitute a majority."

wunder

On Dec 6, 2012, at 2:25 PM, Jack Krupansky wrote:

 Slightly more recent link:
 http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html
 
 -- Jack Krupansky
 
 -Original Message- From: Jack Krupansky
 Sent: Thursday, December 06, 2012 5:21 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 Rewind.
 
 If 1 is the minimum, what is the 3 minimum all about?
 
 The zk web page does say "Three ZooKeeper servers is the minimum recommended
 size for an ensemble, and we also recommend that they run on separate
 machines" - but it does say "recommended".
 
 But back to the original question - it sounds as if 4 nodes would be the
 recommended minimum number of nodes if you want to tolerate one machine
 going down and maintain that 3-zookeeper recommended minimum ensemble.
 
 From the zookeeper web page:
 
 For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster
 known as an ensemble. As long as a majority of the ensemble are up, the
 service will be available. Because Zookeeper requires a majority, it is best
 to use an odd number of machines. For example, with four machines ZooKeeper
 can only handle the failure of a single machine; if two machines fail, the
 remaining two machines do not constitute a majority. However, with five
 machines ZooKeeper can handle the failure of two machines.
 
 See:
 http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
 
 -- Jack Krupansky
 
 -Original Message- From: Otis Gospodnetic
 Sent: Thursday, December 06, 2012 4:39 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 1 is the minimum :)
 2 makes no sense.
 3 must be the most common number in the zoo.
 
 Otis
 --
 Performance Monitoring - http://sematext.com/spm/index.html
 Search Analytics - http://sematext.com/search-analytics/index.html
 
 
 
 

--
Walter Underwood
wun...@wunderwood.org





Re: SolrCloud Zookeeper questions

2012-12-06 Thread Jack Krupansky
In case you missed the parallel thread running right now, a read of the main
zookeeper admin web page is good background to have:


http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html

-- Jack Krupansky

-Original Message- 
From: Jamie Johnson

Sent: Thursday, December 06, 2012 5:09 PM
To: solr-user@lucene.apache.org
Subject: SolrCloud Zookeeper questions

I was wondering if anyone had any recommendations for zookeeper
configurations when running SolrCloud?  I am really not 100% sure what
specifically to ask, but any initial thoughts to get the conversation
started I think would be useful to the community at large. 



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 5:21 PM, Jack Krupansky j...@basetechnology.com wrote:
 If 1 is the minimum, what is the 3 minimum all about?

The minimum for running an ensemble (a cluster) and having any sort of
fault tolerance?

 The zk web page does say "Three ZooKeeper servers is the minimum recommended
 size for an ensemble, and we also recommend that they run on separate
 machines" - but it does say "recommended".

 But back to the original question - it sounds as if 4 nodes would be the
 recommended minimum number of nodes if you want to tolerate one machine
 going down and maintain that 3-zookeeper recommended minimum ensemble.

Nope - the ensemble are not the number of servers that are currently
up, but the number in the cluster.  This is important to prevent split
brain.
3 ZK servers means that a majority of the servers is 2, hence you can lose 1.
5 ZK servers means that a majority of the servers is 3, hence you can lose 2.

-Yonik
http://lucidworks.com
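[Editor's note: the majority arithmetic Yonik describes - a quorum is a strict majority of the *configured* ensemble, not of the nodes currently up - can be sketched in a few lines. This is an illustrative helper, not part of ZooKeeper or Solr:]

```python
def max_failures(ensemble_size: int) -> int:
    """How many ZooKeeper servers can fail while a quorum (a strict
    majority of the *configured* ensemble size) is still up."""
    quorum = ensemble_size // 2 + 1  # strict majority of the configured size
    return ensemble_size - quorum

# 3 servers -> quorum of 2, can lose 1; 5 servers -> quorum of 3, can lose 2.
# Note that 4 tolerates no more failures than 3, which is why odd sizes
# are recommended.
print([(n, max_failures(n)) for n in (1, 2, 3, 4, 5)])
```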


Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Michael Della Bitta
Jack,

The recommended configured ensemble size takes into consideration that you
might have a node failure. You can still run with two while you replace the
third, so it's sort of like RAID-5.

If you run with four configured nodes, you're still running with
RAID-5-like failure survival characteristics. Two configured nodes can't be
the minimum because the minimum requires an odd number of machines to break
ties, so your minimum is three when you configure for four.

Configuring for five nodes lets you survive two failures because you now
have a minimum of three.

Michael Della Bitta


Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Thu, Dec 6, 2012 at 5:25 PM, Jack Krupansky j...@basetechnology.comwrote:

 Slightly more recent link:
 http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html

 -- Jack Krupansky





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Michael Della Bitta
I just rethought what I wrote and it doesn't make any sense. :)

If you have two nodes remaining in a three-node ensemble, how are ties
broken? Or does Zookeeper not resolve ties, since it doesn't tolerate
partitions?

Michael


Michael Della Bitta


Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game







Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Mark Miller
3 is the minimum if you want to allow a node to go down.

1 is the minimum if you want the thing to work at all - but if the 1 goes down, 
ZooKeeper may stop working…

- Mark

On Dec 6, 2012, at 2:21 PM, Jack Krupansky j...@basetechnology.com wrote:

 Rewind.
 
 If 1 is the minimum, what is the 3 minimum all about?
 
 The zk web page does say "Three ZooKeeper servers is the minimum recommended 
 size for an ensemble, and we also recommend that they run on separate 
 machines" - but it does say "recommended".
 
 But back to the original question - it sounds as if 4 nodes would be the 
 recommended minimum number of nodes if you want to tolerate one machine going 
 down and maintain that 3-zookeeper recommended minimum ensemble.
 
 From the zookeeper web page:
 
 For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster 
 known as an ensemble. As long as a majority of the ensemble are up, the 
 service will be available. Because Zookeeper requires a majority, it is best 
 to use an odd number of machines. For example, with four machines ZooKeeper 
 can only handle the failure of a single machine; if two machines fail, the 
 remaining two machines do not constitute a majority. However, with five 
 machines ZooKeeper can handle the failure of two machines.
 
 See:
 http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
 
 -- Jack Krupansky
 
 



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Mark Miller

On Dec 6, 2012, at 2:32 PM, Michael Della Bitta 
michael.della.bi...@appinions.com wrote:

 I just rethought what I wrote and it doesn't make any sense. :)
 
 If you have two remaining nodes left when you have a three node ensemble,
 how are ties broken? Or does Zookeeper not resolve ties since it doesn't
 tolerate partitions?
 
 Michael

Tie breaking happens when you lose nodes or have partitions.

So if you have 3 nodes and it's partitioned into 2 and 1, the 2-node side wins 
and the 1-node side is 'dead'.

You can't then survive another partition of the 2 nodes - both 1-node 
ensembles would be 'dead'.

- Mark
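[Editor's note: Mark's partition example can be checked mechanically - a side of a partition keeps serving only if it holds a strict majority of the *configured* ensemble, never of the nodes that happen to still be up. An illustrative sketch, not ZooKeeper code:]

```python
def side_survives(side_size: int, configured_ensemble: int) -> bool:
    """True if this side of a partition still holds a strict majority
    of the configured ensemble and can keep serving."""
    return side_size > configured_ensemble // 2

# 3-node ensemble split into 2 and 1: the 2-node side keeps serving.
assert side_survives(2, 3) and not side_survives(1, 3)
# Partition the surviving pair into 1 and 1: neither side has a
# majority of the configured 3, so both are 'dead'.
assert not side_survives(1, 3)
```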

Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky
I trust that you have the right answer, Mark, but maybe I'm just struggling 
to parse this statement: "the remaining two machines do not constitute a 
majority."


If you start with 3 zk and lose one, you have an ensemble that does not 
constitute a majority.


So, the question is what can or can't you do once your ensemble no longer 
has a majority? Why does majority matter if you can run just fine? I mean, 
there must be some hard-core downside, other than that you can't lose any 
more nodes.


-- Jack Krupansky

-Original Message- 
From: Mark Miller

Sent: Thursday, December 06, 2012 5:49 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

3 is the minimum if you want to allow a node to go down.

1 is the minimum if you want the thing to work at all - but if the 1 goes 
down, ZooKeeper may stop working…


- Mark



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Mark Miller

On Dec 6, 2012, at 2:55 PM, Jack Krupansky j...@basetechnology.com wrote:

 I mean, there must be some hard-core downside, other than that you can't lose 
 any more nodes.

Nope, not really. You just can't lose any more nodes.

Technically, you will also lose a bit of read performance - but write 
performance generally declines with more nodes, so you'd also gain a bit of 
write performance. But for Solr this should matter little to none.

- Mark

Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread sausarkar
I also did a test running a load directed at one single server in the cloud
and checked the CPU usage of the other servers. It seems that even if no load
is directed to those servers, there is a CPU spike each minute. Did you
also do this test on SolrCloud? Any observations or suggestions?


In Reply To
Re: SolrCloud - Query performance degrades with multiple servers
Dec 05, 2012; 7:59pm — by   Mark Miller-3
This is just the std scatter gather distrib search stuff solr has been using
since around 1.4. 

There is some overhead to that, but generally not much. I've measured it at
around 30-50ms for a 100 machines, each with 10 million docs a few years
ago. 

So…that doesn't help you much…but FYI… 

- Mark 

On Dec 5, 2012, at 5:35 PM, sausarkar [hidden email] wrote: 

 We are using SolrCloud and trying to configure it for testing purposes; we 
 are seeing that the average query time increases if we have more than 
 one node in the SolrCloud cluster. We have a single-shard, 12 GB index. 
 Example: 1 node, average query time ~28 msec, load 140 queries/second; 
 3 nodes, average query time ~110 msec, load 420 queries/second distributed 
 equally on three servers, so essentially 140 qps on each node. Is there any 
 inter-node communication going on for queries? Is there any setting on 
 SolrCloud for query tuning for a cloud config with multiple nodes? Please 
 help.
 
 
 
 -- 
 View this message in context:
 http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660.html
 Sent from the Solr - User mailing list archive at Nabble.com.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4024961.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com wrote:
 I trust that you have the right answer, Mark, but maybe I'm just struggling
 to parse this statement: the remaining two machines do not constitute a
 majority.

 If you start with 3 zk and lose one, you have an ensemble that does not
 constitute a majority.

I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

"For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority."

-Yonik
http://lucidworks.com


Re: Solr 4 : Optimize very slow

2012-12-06 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 12:17 PM, Sandeep Mestry sanmes...@gmail.com wrote:
 I followed the advice Michael and the timings reduced to couple of hours now
 from 6-8 hours :-)

Just changing from mmap to NIO, eh?  What does your system look like?
operating system, JVM, drive, memory, etc?

-Yonik
http://lucidworks.com


Re: SOLR4 (sharded) and join query

2012-12-06 Thread Erick Erickson
see: http://wiki.apache.org/solr/DistributedSearch

joins aren't supported in distributed search. Any time you have more than
one shard in SolrCloud, you are, by definition, doing distributed search.

Best
Erick


On Wed, Dec 5, 2012 at 10:16 AM, adm1n evgeni.evg...@gmail.com wrote:

 Hi,

 I'm running some join query; let's say it looks as follows: {!join
 from=some_id to=another_id}(a_id:55 AND some_type_id:3). When I run it on a
 single instance of SOLR I get the correct result, but when I run it on
 the sharded system (2 shards with a replica for each shard; total index
 counts ~300K entries) I get a partial result.

 Is there any issue with supporting join queries on sharded system or may be
 there is some configuration tweak, that I'm missing?


 Thanks.



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/SOLR4-sharded-and-join-query-tp4024547.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Occasional failed to respond errors

2012-12-06 Thread Erick Erickson
I suspect that you're seeing a timeout issue, and the simplest fix might be
to raise the timeouts, probably at the servlet level.

You might get some evidence that this is the issue if your log files for
the time when this happens show some unusual activity; garbage collection
is a popular reason for this kind of thing.

Not all that helpful, but maybe a start.
Erick
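[Editor's note: besides raising timeouts, the "quick fix of simply retrying the query" mentioned in the quoted message below can be as small as a wrapper like the following. The thread is about SolrJ/Jetty, so this Python sketch with illustrative names only shows the shape of the idea - retry a couple of times with backoff on transient "failed to respond" style errors:]

```python
import time
import urllib.request

def query_with_retry(url: str, retries: int = 2, backoff: float = 0.5,
                     timeout: float = 10.0) -> bytes:
    """Issue an HTTP query, retrying on transient connection errors
    with simple exponential backoff before giving up."""
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:  # urllib.error.URLError is an OSError subclass
            if attempt == retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
```

This only masks the intermittent failure; as noted above, finding the root cause (GC pauses, servlet timeouts) is the real fix.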


On Wed, Dec 5, 2012 at 7:59 PM, Michael Ryan mr...@moreover.com wrote:

 We have a longstanding issue with failed to respond errors in Solr when
 our coordinator is querying our Solr shards.

 To elaborate further...  we're using the built-in distributed capabilities
 of Solr 3.6, and using Jetty as our server.  Occasionally, we will have a
 query fail due to an error like
 org.apache.commons.httpclient.NoHttpResponseException: The server
 solr-shard-13 failed to respond when the Solr coordinator is sending a
 request to one of its shards.  Over the long term, this happens for about 1
 out of 3000 queries.  The quick fix of simply retrying the query when such
 an intermittent error occurs works fine, but I'm trying to figure out what
 the root cause might be.

 I've got lots of theories and possible fixes, but was hoping someone had
 run into this before and knows the answer straight away :)

 -Michael



Re: SolrCloud - ClusterState says we are the leader,but locally ...

2012-12-06 Thread Erick Erickson
I've seen the "Waiting until we see..." message as well; it seems to me to
be an artifact of bouncing servers rapidly. It took a lot of patience to
wait until the timeoutin value got all the way to 0, but when it did the
system recovered.

As to your original problem, are you possibly getting page caching at the
servlet level?

Best
Erick


On Wed, Dec 5, 2012 at 9:41 PM, Sudhakar Maddineni
maddineni...@gmail.comwrote:

 Yep, after restarting, the cluster came back to a normal state. We will run
 a couple more tests and see if we can reproduce this issue.

 Btw, I am attaching the server logs before that 'INFO: *Waiting until we
 see more replicas*' message. From the logs, we can see that the leader
 election process started on 003, which was the replica for 001
 initially. Does that mean leader 001 went down at that time?

 logs on 003:
 
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext
 runLeaderProcess
 INFO: Running the leader process.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext
 shouldIBeLeader
 INFO: Checking if I should try and be the leader.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext
 shouldIBeLeader
 INFO: My last published State was Active, it's okay to be the
 leader.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext
 runLeaderProcess
 INFO: I may be the new leader - try and sync
 12:11:16 PM org.apache.solr.cloud.RecoveryStrategy close
 WARNING: Stopping recovery for zkNodeName=003:8080_solr_core
 core=core1.
 12:11:16 PM org.apache.solr.cloud.SyncStrategy sync
 INFO: Sync replicas to http://003:8080/solr/core1/
 12:11:16 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr START
 replicas=[001:8080/solr/core1/] nUpdates=100
 12:11:16 PM org.apache.solr.common.cloud.ZkStateReader$3 process
 INFO: Updating live nodes - this message is on 002
 12:11:46 PM org.apache.solr.update.PeerSync handleResponse
 WARNING: PeerSync: core=core1 url=http://003:8080/solr
  exception talking to 001:8080/solr/core1/, failed
 org.apache.solr.client.solrj.SolrServerException: Timeout occured
 while waiting response from server at: 001:8080/solr/core1
 at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:409)
 at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 at
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:166)
 at
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:133)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown
 Source)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
 Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.net.SocketTimeoutException: Read timed out
 at java.net.SocketInputStream.socketRead0(Native Method)
 at java.net.SocketInputStream.read(Unknown Source)
 12:11:46 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr DONE. sync
 failed
 12:11:46 PM org.apache.solr.common.SolrException log
 SEVERE: Sync Failed
 12:11:46 PM org.apache.solr.cloud.ShardLeaderElectionContext
 rejoinLeaderElection
 INFO: There is a better leader candidate than us - going back into
 recovery
 12:11:46 PM org.apache.solr.update.DefaultSolrCoreState doRecovery
 INFO: Running recovery - first canceling any ongoing recovery
 12:11:46 PM org.apache.solr.cloud.RecoveryStrategy run
 INFO: Starting recovery process.  core=core1
 recoveringAfterStartup=false
 12:11:46 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
 INFO: Attempting to PeerSync from 001:8080/solr/core1/
 core=core1 - recoveringAfterStartup=false
 12:11:46 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr START
 replicas=[001:8080/solr/core1/] nUpdates=100
 12:11:46 PM org.apache.solr.cloud.ShardLeaderElectionContext
 runLeaderProcess
 INFO: Running the leader process.
 12:11:46 PM org.apache.solr.cloud.ShardLeaderElectionContext
 waitForReplicasToComeUp
 INFO: *Waiting until we see more replicas up: total=2 found=1
 timeoutin=17*
 12:11:47 PM org.apache.solr.cloud.ShardLeaderElectionContext
 waitForReplicasToComeUp
 INFO: *Waiting until we see more replicas up: total=2 found=1
 timeoutin=179495*
 12:11:48 PM org.apache.solr.cloud.ShardLeaderElectionContext
 waitForReplicasToComeUp
 INFO: *Waiting until we see 

Re: SOLR4 cluster - strange CPU spike on slave

2012-12-06 Thread Erick Erickson
Not quite. Too much memory for the JVM means that it starves the operating
system and the file-system caching that goes on there. Objects that consume
memory are created all the time in Solr, and they won't be collected until
some threshold is passed. So you can be sure that the more memory you
allocate to the JVM, the more will be used. And the GC pauses will
eventually get quite long.

So over-allocating memory to the JVM is discouraged.

Best
Erick
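
A concrete way to follow this advice is to cap the heap well below physical
RAM and let the OS page cache hold the index files. A minimal sketch of such
a startup line - all sizes and flags here are illustrative assumptions, not a
recommendation for any particular machine:

```shell
# Hypothetical example only: fixed, modest heap plus GC logging so you can
# watch pause times; the rest of RAM is left to the OS page cache.
JAVA_OPTS="-Xms4g -Xmx4g -verbose:gc -Xloggc:gc.log"
java $JAVA_OPTS -jar start.jar
```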


On Wed, Dec 5, 2012 at 11:28 PM, John Nielsen j...@mcb.dk wrote:

 I'm not sure I understand why this is important. Too much memory would just
 be unused.

 This is what the heap looks now:

 Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize  = 17179869184 (16384.0MB)
NewSize  = 21757952 (20.75MB)
MaxNewSize   = 283508736 (270.375MB)
OldSize  = 65404928 (62.375MB)
NewRatio = 7
SurvivorRatio= 8
PermSize = 21757952 (20.75MB)
MaxPermSize  = 176160768 (168.0MB)

 Heap Usage:
 New Generation (Eden + 1 Survivor Space):
capacity = 255197184 (243.375MB)
used = 108828496 (103.78694152832031MB)
free = 146368688 (139.5880584716797MB)
42.644865548359654% used
 Eden Space:
capacity = 226885632 (216.375MB)
used = 83498424 (79.63030242919922MB)
free = 143387208 (136.74469757080078MB)
36.80198841326365% used
 From Space:
capacity = 28311552 (27.0MB)
used = 25330072 (24.156639099121094MB)
free = 2981480 (2.8433609008789062MB)
89.46903370044849% used
 To Space:
capacity = 28311552 (27.0MB)
used = 0 (0.0MB)
free = 28311552 (27.0MB)
0.0% used
 concurrent mark-sweep generation:
capacity = 16896360448 (16113.625MB)
used = 12452710200 (11875.829887390137MB)
free = 4443650248 (4237.795112609863MB)
73.70054775005708% used
 Perm Generation:
capacity = 70578176 (67.30859375MB)
used = 37652032 (35.90777587890625MB)
free = 32926144 (31.40081787109375MB)
53.347981109627995% used



 Med venlig hilsen / Best regards

 *John Nielsen*
 Programmer



 *MCB A/S*
 Enghaven 15
 DK-7500 Holstebro

 Kundeservice: +45 9610 2824
 p...@mcb.dk
 www.mcb.dk



 On Thu, Nov 29, 2012 at 4:08 AM, Otis Gospodnetic 
 otis.gospodne...@gmail.com wrote:

  If this is caused by index segment merging you should be able to see that
  very clearly on the Index report in SPM, where you would see sudden graph
  movement at the time of spike and corresponding to CPU and disk activity.
  I think uncommenting that infostream in solrconfig would also show it.
 
  Otis
  --
  SOLR Performance Monitoring - http://sematext.com/spm
  On Nov 28, 2012 9:20 PM, Erick Erickson erickerick...@gmail.com
 wrote:
 
   Am I reading this right? All you're doing on varnish1 is replicating to
  it?
   You're not searching or indexing? I'm sure I'm misreading this.
  
  
   "The spike, which only lasts for a couple of minutes, sends the disks
   racing." This _sounds_ suspiciously like segment merging, especially the
   "disks racing" bit. Or possibly replication. Neither of which makes much
   sense. But is there any chance that somehow multiple commits are being
   issued? Of course if varnish1 is a slave, that shouldn't be happening
   either.
  
   And the whole bit about nothing going to the logs is just bizarre. I'm
   tempted to claim hardware gremlins, especially if you see nothing
 similar
   on varnish2. Or some other process is pegging the machine. All of which
  is
    a way of saying "I have no idea"...
  
   Yours in bewilderment,
   Erick
  
  
  
   On Wed, Nov 28, 2012 at 6:15 AM, John Nielsen j...@mcb.dk wrote:
  
I apologize for the late reply.
   
The query load is more or less stable during the spikes. There are
  always
fluctuations, but nothing on the order of magnitude that could
 explain
   this
spike. In fact, the latest spike occured last night when there were
   almost
noone using it.
   
To test a hunch of mine, I tried to deactivate all caches by
 commenting
   out
all cache entries in solrconfig.xml. It still spikes, so I dont think
  it
has anything to do with cache warming or hits/misses or anything of
 the
sort.
   
One interesting thing GC though. This is our latest spike with cpu
 load
(this server has 8 cores, so a load higher than 8 is potentially
troublesome):
   
    2012.Nov.27 19:58:18     2.27
    2012.Nov.27 19:57:17     4.06
    2012.Nov.27 19:56:18     8.95
    2012.Nov.27 19:55:17    19.97
    2012.Nov.27 19:54:17    32.27
    2012.Nov.27 19:53:18     1.67
    2012.Nov.27 19:52:17     1.6
    2012.Nov.27 19:51:18     1.77
    2012.Nov.27 19:50:17     1.89
   
This is what the GC was doing around that time:
   
 2012-11-27T19:50:04.933+0100: [GC [PSYoungGen: 4777586K->277641K(4969216K)]
 8887542K->4387693K(9405824K), 0.0856360 secs]
 [Times: user=0.54 sys=0.00, real=0.09 secs]

Re: Restricting search results by field value

2012-12-06 Thread Erick Erickson
Why not do this at the app level? You can simply reorder the docs returned
in your groups by score and display it that way.

Or am I misunderstanding your requirement?

Best
Erick


On Thu, Dec 6, 2012 at 11:03 AM, Tom Mortimer bano...@gmail.com wrote:

 Thanks, but even with group.main=true the results are not in relevancy
 (score) order, they are in group order. Which is why I can't use it as is.

 Tom


 On 6 Dec 2012, at 19:00, Way Cool way1.wayc...@gmail.com wrote:

  Grouping should work:
  group=true&group.field=source_id&group.limit=3&group.main=true
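
Spelled out as a full request, those grouping parameters might look like the
following - the host, core name and query term are illustrative assumptions,
not from the thread (and note the caveat elsewhere in the thread that with
group.main=true the flattened list comes back in group order, not pure score
order):

```shell
# Hedged sketch: collapse to at most 3 docs per source_id, flattened into a
# single result list via group.main=true.
curl 'http://localhost:8983/solr/collection1/select?q=foo&group=true&group.field=source_id&group.limit=3&group.main=true&wt=json'
```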
 
  On Thu, Dec 6, 2012 at 2:35 AM, Tom Mortimer bano...@gmail.com wrote:
 
  Sounds like it's worth a try! Thanks Andre.
  Tom
 
  On 5 Dec 2012, at 17:49, Andre Bois-Crettez andre.b...@kelkoo.com
 wrote:
 
  If you do grouping on source_id, it should be enough to request 3 times
  more documents than you need, then reorder and drop the bottom.
 
  Is a 3x overhead acceptable ?
 
 
 
  On 12/05/2012 12:04 PM, Tom Mortimer wrote:
  Hi everyone,
 
  I've got a problem where I have docs with a source_id field, and there
  can be many docs from each source. Searches will typically return docs
 from
  many sources. I want to restrict the number of docs from each source in
  results, so there will be no more than (say) 3 docs from source_id=123
 etc.
 
  Field collapsing is the obvious approach, but I want to get the
 results
  back in relevancy order, not grouped by source_id. So it looks like I'll
  have to fetch more docs than I need to and re-sort them. It might even
 be
  better to count source_ids in the client code and drop excess docs that
  way, but the potential overhead is large.
 
  Is there any way of doing this in Solr without hacking in a custom
  Lucene Collector? (which doesn't look all that straightforward).
 
  cheers,
  Tom
 
 
  --
  André Bois-Crettez
 
  Search technology, Kelkoo
  http://www.kelkoo.com/
 
  Kelkoo SAS
  Société par Actions Simplifiée
  Au capital de € 4.168.964,30
  Siège social : 8, rue du Sentier 75002 Paris
  425 093 069 RCS Paris
 
  Ce message et les pièces jointes sont confidentiels et établis à
  l'attention exclusive de leurs destinataires. Si vous n'êtes pas le
  destinataire de ce message, merci de le détruire et d'en avertir
  l'expéditeur.
 
 




Re: Cannot run Solr4 from Intellij Idea

2012-12-06 Thread Steve Rowe
+1 to using IntelliJ's remote debugging facilities.

I've done this with Tomcat too - just edit catalina.sh to add the parameters to 
the JVM invocation that the IntelliJ remote run configuration suggests.
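
The catalina.sh change described here might look like the following sketch.
The port and flags mirror the java command quoted later in this message;
suspend=n is shown so Tomcat doesn't block waiting for the debugger - treat
the exact values as assumptions:

```shell
# Illustrative sketch of a catalina.sh (or setenv.sh) addition; the actual
# address/port comes from IntelliJ's remote run configuration.
CATALINA_OPTS="$CATALINA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5900"
```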

With Tomcat you'll have to build the war using the Ant build, but that's more 
sensible anyway, since that's the official/supported build.

Steve

On Dec 6, 2012, at 7:06 PM, Erick Erickson erickerick...@gmail.com wrote:

 Why do this? It's trivial to attach IntelliJ to a running solr, just create
 remote configuration. When you do, it'll give you parameters you'll be
 able to start Solr with and attach from IntelliJ, set breakpoints, etc.
 Something like:
 java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5900
 -jar start.jar
 
 You'll get the parameters to start Solr with when you create the remote
 configuration in IntelliJ. Then, I do an ant example from
 solrhome/solr, go into the example dir and I'm off to the races. The
 suspend=y means that solr just sits there until you attach IntelliJ.
 
 I suspect that if you've copied things around, if you change Solr you'll
 have a world of problems getting the changed jars to the right places.
 
 It's a bit of a pain that when you do make changes to solr code, you have
 to do another ant example but if your goal is to simply step through Solr
 code, it's much easier to do a remote debugging session.
 
 Best
 Erick
 
 
 On Wed, Dec 5, 2012 at 11:55 PM, Artyom ice...@mail.ru wrote:
 
 See the screenshots:
 
 solr_idea1: adding an IDEA tomcat artifact
 solr_idea2: adding an IDEA facet
 solr_idea3: placing modules into the artifact (drag modules from the
 Available Elements to output root) and the created facet
 
 
  Wednesday, December 5, 2012, 7:28 from sarowe [via Lucene] 
 ml-node+s472066n402448...@n3.nabble.com:
 
 
 
 
 
 
 
 
 
 
 
Hi Artyom,
 
 I don't use IntelliJ artifacts - I just edit/compile/test.
 
 I can include this stuff in the IntelliJ configuration if you'll help me.
 Can you share screenshots of what you're talking about, and/or IntelliJ
 config files?
 
 Steve
 
 On Dec 5, 2012, at 8:24 AM, Artyom [hidden email] wrote:
 
 
  IntelliJ IDEA is not so intelligent with Solr: to fix this problem I've
  dragged these modules into the IDEA artifact (the parent module is wrong):
 
 analysis-common
 analysis-extras
 analysis-uima
 clustering
 codecs
 codecs-resources
 dataimporthandler
 dataimporthandler-extras
 lucene-core
 lucene-core-resources
 solr-core
 
 
 
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Cannot-run-Solr4-from-Intellij-Idea-tp4024233p4024452.html
 Sent from the Solr - User mailing list archive at Nabble.com.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
  solr_idea1.png (102K) 
  http://lucene.472066.n3.nabble.com/attachment/4024723/0/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMS5wbmc%3D%3F%3D
  
  solr_idea2.png (117K) 
  http://lucene.472066.n3.nabble.com/attachment/4024723/1/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMi5wbmc%3D%3F%3D
  
  solr_idea3.png (148K) 
  http://lucene.472066.n3.nabble.com/attachment/4024723/2/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMy5wbmc%3D%3F%3D
 
 
 
 
 
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Cannot-run-Solr4-from-Intellij-Idea-tp4024233p4024723.html
 Sent from the Solr - User mailing list archive at Nabble.com.
 



Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky
But that is the context I was originally referring to - that with 4 zk you 
can lose only one, that you can't lose two. So, if you want to tolerate a 
loss of one, 4 zk would be the minimum... but then it was claimed that you 
COULD start with 3 zk and loss of one would be fine. I mean whether you 
start with 4 and lose 2 or start with 3 and lose 1 is the same, right?


-- Jack Krupansky

-Original Message- 
From: Yonik Seeley

Sent: Thursday, December 06, 2012 6:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
wrote:
I trust that you have the right answer, Mark, but maybe I'm just struggling
to parse this statement: "the remaining two machines do not constitute a
majority."

If you start with 3 zk and lose one, you have an ensemble that does not
constitute a majority.


I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority.

-Yonik
http://lucidworks.com 



Re: how to assign dedicated server for indexing and add more shard in SolrCloud

2012-12-06 Thread Erick Erickson
well, you could probably do what you want. Go ahead and index on the super
cool AWS instance, just don't bring the replicas up yet. All the indexing
is going to this machine. Once your index is constructed, bring up
replicas. Old-style replication will take place and you should be off to
the races.

But personally, I'd just stay with old-style master/slave replication in the
situation you describe. It's still perfectly possible with SolrCloud/Solr4;
none of that functionality has been taken away.

I guess you're talking about two different use cases here.
SolrCloud/ZooKeeper deals with the NRT issues, which are really difficult
with traditional master/slave setups. But static indexes are a bit of a
different situation.

But you're right, you get a lot of merging going on in the background with
NRT and frequent commits. As in all things, it's a tradeoff.

Best
Erick


On Thu, Dec 6, 2012 at 12:35 PM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Jason,
 Thanks for raising it!

 Erick,
 That's what I want to discuss for a long time. Frankly speaking, the
 question is:

 if old-school (master/slave) search deployments don't comply with the vision
 of SolrCloud/ElasticSearch, does that mean they are wrong?

 Let me enumerate kinds of 'old-school search':
 - number of docs is not so dramatic to make sharding profitable from search
 latency's POV;
 - index updates are not frequent, they are rather rare nightly bulks;
 - search index is not a SOR (system of records) - it's a secondary system,
 provides the search service, still significant for the enterprise;
 - there is an SOR - primary system, which is kind of CMS or RDBMS or CMS
 with publish through RDBMS, etc;

 Does it look like your system? No? Then click the Delete button!

 // for few people who still read this:

 That's what I have with SolrCloud in this case:
 - I can decide not to deal with sharding. Good! Put numShards=0, and buy
 more (VM) instances to have more replicas to increase throughput;
 - start the nightly reindex - delQ *:*, add(), commit();
 - in this case all my instances will spend resources indexing the same docs
 instead of handling search requests - BAD#1;
 - even if I'm able to supply a long Iterable<SolrInputDocument>,
 DistributedUpdateProcessor will throw documents one by one, not in huge
 chunks, which leads to many small segments - e.g. if I have a 100MB RAM
 buffer and 10 servlet container threads, I'll have a sequence of 10MB
 segments;
 - every one of these flushes also flushes some part of the current index
 mapped into RAM, which impacts search latency - BAD#2;
 - when indexing is over I have many small segments, and then The Merge
 starts, which also flushes the current index from RAM - BAD#3.

 In summary: I waste resources indexing the same stuff on the searcher nodes,
 and as a side effect I have a longer period of latency impact.

 How I want to do it:
  - in the cloud I add small instances as replicas on demand to adjust to the
 workload dynamically;
  - when I need to reindex (full import) I can rent a super cool VM instance
 with a many-core CPU and run indexing on it;
  - if it blows up, no problem - I can run the full import from my CMS/DB
 again from the beginning, or I can run two imports simultaneously;
  - after indexing finishes, I can push the index to the searchers or start
 new ones, mounting the index to them.

 Please tell me where I'm wrong, whether it's SolrCloud features, 'cloud'
 economy, hardware/VMware architecture or Lucene internals. Can Jason and I
 adjust SolrCloud to our 'old-school' pattern?

 Thanks for sharing your opinion!



 On Thu, Dec 6, 2012 at 7:19 PM, Erick Erickson erickerick...@gmail.com
 wrote:

  First, forget about master/slave with SolrCloud! Leaders really exist to
  resolve conflicts, the old notion of M/S replication is largely
 irrelevant.
 
  Updates can go to any node in the cluster, leader, replica, whatever. The
  node forwards the doc to the correct leader based on a hash of the
  uniqueKey, which then forwards the raw document to all replicas. Then
 all
   the replicas index the document separately. Note that this is true for
   multi-document packets too. You can't get NRT with the old-style
  replication process where the master indexes the doc and then the _index_
  is replicated...
 
  As for your second question, it sounds like you want to go from
  numShards=2, say to numShards=3. You can't do that as it stands. There
 are
  two approaches:
   1> "shard splitting", which would redistribute the documents to a new set
  of shards
   2> "pluggable hashing", which allows you to specify the code that does the
   shard assignment.
  Neither of these are available yet, although 2 is imminent. There is
  active work on 1, but I don't think that will be ready as soon.
 
  Best
  Erick
 
 
  On Tue, Dec 4, 2012 at 11:21 PM, Jason hialo...@gmail.com wrote:
 
   I'm using master and slave server for scaling.
   Master is dedicated for indexing and slave is for searching.
   Now, I'm planning to move SolrCloud.
   It has leader and replicas.
   Leader 

Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Walter Underwood
The Zookeeper ensemble knows the total size. It does not adjust it each time 
that a machine is partitioned or down.

Two machines is not a quorum for a four machine ensemble.

Why do you think that the documentation would get this wrong?

wunder

On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:

 But that is the context I was originally referring to - that with 4 zk you 
 can lose only one, that you can't lose two. So, if you want to tolerate a 
 loss on one, 4 zk would be the minimum... but then it was claimed that you 
 COULD start with 3 zk and loss of one would be fine. I mean whether you start 
 with 4 and lose 2 or start with 3 and lose 1 is the same, right?
 
 -- Jack Krupansky
 
 -Original Message- From: Yonik Seeley
 Sent: Thursday, December 06, 2012 6:34 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
 wrote:
 I trust that you have the right answer, Mark, but maybe I'm just struggling
 to parse this statement: the remaining two machines do not constitute a
 majority.
 
 If you start with 3 zk and lose one, you have an ensemble that does not
 constitute a majority.
 
 I think you took that out of context.  They were talking about losing
 2 nodes in a 4 node cluster.
 
 For example, with four machines ZooKeeper can only handle the failure
 of a single machine; if two machines fail, the remaining two machines
 do not constitute a majority.
 
 -Yonik
 http://lucidworks.com 

--
Walter Underwood
wun...@wunderwood.org





Re: SolrCloud - ClusterState says we are the leader, but locally ...

2012-12-06 Thread Mark Miller
Yes - it means that 001 went down (or, more likely, had its connection to 
ZooKeeper interrupted - that's what I meant about a session timeout: if the 
Solr-ZK link is broken for longer than the session timeout, that will trigger a 
leader election, and when the connection is reestablished the node will have to 
recover). That waiting should stop as soon as 001 came back up or reconnected 
to ZooKeeper.

In fact, this waiting should not happen in this case - but only on cluster 
restart. This is a bug that is fixed in 4.1 (hopefully coming very soon!):

* SOLR-3940: Rejoining the leader election incorrectly triggers the code path
  for a fresh cluster start rather than fail over. (Mark Miller)

- Mark

On Dec 5, 2012, at 9:41 PM, Sudhakar Maddineni maddineni...@gmail.com wrote:

 Yep, after restarting, the cluster came back to a normal state. We will run a 
 couple more tests and see if we can reproduce this issue.
 
 Btw, I am attaching the server logs before that 'INFO: Waiting until we see 
 more replicas' message. From the logs, we can see that the leader election 
 process started on 003, which was the replica for 001 initially. That means 
 leader 001 went down at that time?
 
 logs on 003:
 
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext runLeaderProcess
 INFO: Running the leader process.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext shouldIBeLeader
 INFO: Checking if I should try and be the leader.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext shouldIBeLeader
 INFO: My last published State was Active, it's okay to be the leader.
 12:11:16 PM org.apache.solr.cloud.ShardLeaderElectionContext runLeaderProcess
 INFO: I may be the new leader - try and sync
 12:11:16 PM org.apache.solr.cloud.RecoveryStrategy close
 WARNING: Stopping recovery for zkNodeName=003:8080_solr_core 
 core=core1.
 12:11:16 PM org.apache.solr.cloud.SyncStrategy sync
 INFO: Sync replicas to http://003:8080/solr/core1/
 12:11:16 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr START 
 replicas=[001:8080/solr/core1/] nUpdates=100
 12:11:16 PM org.apache.solr.common.cloud.ZkStateReader$3 process
 INFO: Updating live nodes - this message is on 002
 12:11:46 PM org.apache.solr.update.PeerSync handleResponse
 WARNING: PeerSync: core=core1 url=http://003:8080/solr  exception 
 talking to 001:8080/solr/core1/, failed
 org.apache.solr.client.solrj.SolrServerException: Timeout occured 
 while waiting response from server at: 001:8080/solr/core1
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:409)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
   at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:166)
   at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:133)
   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source)
   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
 Source)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(Unknown Source)
 12:11:46 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr DONE. sync 
 failed
 12:11:46 PM org.apache.solr.common.SolrException log
 SEVERE: Sync Failed
 12:11:46 PM org.apache.solr.cloud.ShardLeaderElectionContext 
 rejoinLeaderElection
 INFO: There is a better leader candidate than us - going back into 
 recovery
 12:11:46 PM org.apache.solr.update.DefaultSolrCoreState doRecovery
 INFO: Running recovery - first canceling any ongoing recovery
 12:11:46 PM org.apache.solr.cloud.RecoveryStrategy run
 INFO: Starting recovery process.  core=core1 
 recoveringAfterStartup=false
 12:11:46 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
 INFO: Attempting to PeerSync from 001:8080/solr/core1/ core=core1 - 
 recoveringAfterStartup=false
 12:11:46 PM org.apache.solr.update.PeerSync sync
 INFO: PeerSync: core=core1 url=http://003:8080/solr START 
 replicas=[001:8080/solr/core1/] nUpdates=100
 12:11:46 PM org.apache.solr.cloud.ShardLeaderElectionContext runLeaderProcess
 INFO: Running the leader process.
 12:11:46 PM 

Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky

It's still an unresolved mystery, for now.

-- Jack Krupansky

-Original Message- 
From: Walter Underwood

Sent: Thursday, December 06, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

The Zookeeper ensemble knows the total size. It does not adjust it each time 
that a machine is partitioned or down.


Two machines is not a quorum for a four machine ensemble.

Why do you think that the documentation would get this wrong?

wunder

On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:

But that is the context I was originally referring to - that with 4 zk you 
can lose only one, that you can't lose two. So, if you want to tolerate a 
loss on one, 4 zk would be the minimum... but then it was claimed that you 
COULD start with 3 zk and loss of one would be fine. I mean whether you 
start with 4 and lose 2 or start with 3 and lose 1 is the same, right?


-- Jack Krupansky

-Original Message- From: Yonik Seeley
Sent: Thursday, December 06, 2012 6:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
wrote:
I trust that you have the right answer, Mark, but maybe I'm just 
struggling

to parse this statement: the remaining two machines do not constitute a
majority.

If you start with 3 zk and lose one, you have an ensemble that does not
constitute a majority.


I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority.

-Yonik
http://lucidworks.com


--
Walter Underwood
wun...@wunderwood.org





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Walter Underwood
What is the mystery? Two is not more than half of four. Therefore, two machines 
is not a quorum for a four machine Zookeeper ensemble.

wunder

On Dec 6, 2012, at 4:50 PM, Jack Krupansky wrote:

 It's still an unresolved mystery, for now.
 
 -- Jack Krupansky
 
 -Original Message- From: Walter Underwood
 Sent: Thursday, December 06, 2012 7:30 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 The Zookeeper ensemble knows the total size. It does not adjust it each time 
 that a machine is partitioned or down.
 
 Two machines is not a quorum for a four machine ensemble.
 
 Why do you think that the documentation would get this wrong?
 
 wunder
 
 On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:
 
 But that is the context I was originally referring to - that with 4 zk you 
 can lose only one, that you can't lose two. So, if you want to tolerate a 
 loss on one, 4 zk would be the minimum... but then it was claimed that you 
 COULD start with 3 zk and loss of one would be fine. I mean whether you 
 start with 4 and lose 2 or start with 3 and lose 1 is the same, right?
 
 -- Jack Krupansky
 
 -Original Message- From: Yonik Seeley
 Sent: Thursday, December 06, 2012 6:34 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
 wrote:
 I trust that you have the right answer, Mark, but maybe I'm just struggling
 to parse this statement: the remaining two machines do not constitute a
 majority.
 
 If you start with 3 zk and lose one, you have an ensemble that does not
 constitute a majority.
 
 I think you took that out of context.  They were talking about losing
 2 nodes in a 4 node cluster.
 
 For example, with four machines ZooKeeper can only handle the failure
 of a single machine; if two machines fail, the remaining two machines
 do not constitute a majority.
 
 -Yonik
 http://lucidworks.com
 
 --
 Walter Underwood
 wun...@wunderwood.org
 
 
 

--
Walter Underwood
wun...@wunderwood.org





Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread sausarkar
OK, we think we found the issue here. When SolrCloud is started without
specifying the numShards argument, it starts with a single shard but still
thinks that there are multiple shards, so it forwards every single query to
all the nodes in the cloud. We did a tcpdump on a node where queries are
not targeted and found that it is receiving POST requests from the node
where the queries are started.
*start=0&fsv=true&distrib=false&isShard=true&shard.url=serve1.com*

We solved the issue by explicitly adding numShards=1 argument to the solr
start up script. Is this a bug?
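
For anyone hitting the same thing, the fix amounts to passing numShards as a
system property the first time the collection is created. A hedged example
for the Jetty-based start script - the ZooKeeper address is an illustrative
assumption:

```shell
# Illustrative sketch: pin the collection to a single shard on first startup
# so queries are not fanned out to phantom shards.
java -DnumShards=1 -DzkHost=zk1:2181 -jar start.jar
```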

Re: SolrCloud - Query performance degrades with multiple servers
Dec 06, 2012; 3:13pm — by   sausarkar
I also did a test running a load directed at one single server in the cloud
and checked the CPU usage of the other servers. It seems that even if there is
no load directed to those servers, there is a CPU spike each minute. Did you
also do this test on the SolrCloud? Any observations or suggestions? 


In Reply To 
Re: SolrCloud - Query performance degrades with multiple servers 
Dec 05, 2012; 7:59pm — by   Mark Miller-3 
This is just the std scatter gather distrib search stuff solr has been using
since around 1.4. 

There is some overhead to that, but generally not much. I've measured it at
around 30-50ms for a 100 machines, each with 10 million docs a few years
ago. 

So…that doesn't help you much…but FYI… 

- Mark 

On Dec 5, 2012, at 5:35 PM, sausarkar [hidden email] wrote: 

 We are using SolrCloud and trying to configure it for testing purposes. We 
 are seeing that the average query time increases if we have more than 
 one node in the SolrCloud cluster. We have a single-shard, 12 GB index. 
 Example: 
 1 node:  average query time *~28 msec*,  load 140 queries/second 
 3 nodes: average query time *~110 msec*, load 420 queries/second, 
 distributed equally over three servers, so essentially 140 qps on each node. 
 Is there any inter-node communication going on for queries? Is there any 
 setting on SolrCloud for query tuning for a cloud config with multiple 
 nodes? Please help. 
 
 
 
 -- 
 View this message in context:
 http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660.html
 Sent from the Solr - User mailing list archive at Nabble.com.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4024986.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread Ryan Zezeski
There are some gains to be made in Solr's distributed search code.  A few
weeks ago I spent time profiling dist search using dtrace/btrace and
found some areas for improvement.  I planned on writing up some blog posts
and providing patches, but I'll list them off now in case others have input.

1) Disable the http client stale check.  It is known to cause latency
issues.  Doing this gave me a 4x increase in perf.

2) Disable nagle, many tiny packets are not being sent (to my knowledge),
so don't wait.

3) Use a single TermEnum for all external id -> Lucene id lookups.  This
seemed to reduce total bytes read according to dtrace.

4) Building off #3, cache a certain number of external id -> Lucene id
 mappings, avoiding the TermEnum altogether.

5) If fl=id is present then don't run the 2nd phase of the dist search.

I'm still very new to Solr so there could be issues with any of the patches
I propose above that I'm not aware of.  Would love to hear input.

-Z

On Wed, Dec 5, 2012 at 8:35 PM, sausarkar sausar...@ebay.com wrote:

 We are using SolrCloud and trying to configure it for testing purposes. We
 are seeing that the average query time increases if we have more than
 one node in the SolrCloud cluster. We have a single-shard, 12 GB index.
 Example:
 1 node:  average query time *~28 msec*,  load 140 queries/second
 3 nodes: average query time *~110 msec*, load 420 queries/second,
 distributed equally over three servers, so essentially 140 qps on each node.
 Is there any inter-node communication going on for queries? Is there any
 setting on SolrCloud for query tuning for a cloud config with multiple
 nodes? Please help.





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky
The part I still find confusing is that if you start with 3 and lose 1, you 
have 2, which means you can't always break a tie, right? How is this 
explained? As opposed to saying that 4 is the minimum if you need to 
tolerate a loss of 1.


-- Jack Krupansky

-Original Message- 
From: Walter Underwood

Sent: Thursday, December 06, 2012 7:51 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

What is the mystery? Two is not more than half of four. Therefore, two 
machines is not a quorum for a four machine Zookeeper ensemble.


wunder

On Dec 6, 2012, at 4:50 PM, Jack Krupansky wrote:


It's still an unresolved mystery, for now.

-- Jack Krupansky

-Original Message- From: Walter Underwood
Sent: Thursday, December 06, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

The Zookeeper ensemble knows the total size. It does not adjust it each 
time that a machine is partitioned or down.


Two machines is not a quorum for a four machine ensemble.

Why do you think that the documentation would get this wrong?

wunder

On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:

But that is the context I was originally referring to - that with 4 zk 
you can lose only one, that you can't lose two. So, if you want to 
tolerate a loss on one, 4 zk would be the minimum... but then it was 
claimed that you COULD start with 3 zk and loss of one would be fine. I 
mean whether you start with 4 and lose 2 or start with 3 and lose 1 is 
the same, right?


-- Jack Krupansky

-Original Message- From: Yonik Seeley
Sent: Thursday, December 06, 2012 6:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
wrote:
I trust that you have the right answer, Mark, but maybe I'm just 
struggling

to parse this statement: the remaining two machines do not constitute a
majority.

If you start with 3 zk and lose one, you have an ensemble that does not
constitute a majority.


I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority.

-Yonik
http://lucidworks.com


--
Walter Underwood
wun...@wunderwood.org





--
Walter Underwood
wun...@wunderwood.org





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Walter Underwood
Configure an ensemble of three. When one goes down, you still have an ensemble 
of three, but with one down. The ensemble size is not reset after failures.

wunder

On Dec 6, 2012, at 5:20 PM, Jack Krupansky wrote:

 The part I still find confusing is that if you start with 3 and lose 1, your 
 have 2, which means you can't always break a tie, right? How is this 
 explained? As opposed to saying that 4 is the minimum if you need to tolerate 
 a loss of 1.
 
 -- Jack Krupansky
 
 -Original Message- From: Walter Underwood
 Sent: Thursday, December 06, 2012 7:51 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 What is the mystery? Two is not more than half of four. Therefore, two 
 machines is not a quorum for a four machine Zookeeper ensemble.
 
 wunder
 
 On Dec 6, 2012, at 4:50 PM, Jack Krupansky wrote:
 
 It's still an unresolved mystery, for now.
 
 -- Jack Krupansky
 
 -Original Message- From: Walter Underwood
 Sent: Thursday, December 06, 2012 7:30 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 The Zookeeper ensemble knows the total size. It does not adjust it each time 
 that a machine is partitioned or down.
 
 Two machines is not a quorum for a four machine ensemble.
 
 Why do you think that the documentation would get this wrong?
 
 wunder
 
 On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:
 
 But that is the context I was originally referring to - that with 4 zk you 
 can lose only one, that you can't lose two. So, if you want to tolerate a 
 loss on one, 4 zk would be the minimum... but then it was claimed that you 
 COULD start with 3 zk and loss of one would be fine. I mean whether you 
 start with 4 and lose 2 or start with 3 and lose 1 is the same, right?
 
 -- Jack Krupansky
 
 -Original Message- From: Yonik Seeley
 Sent: Thursday, December 06, 2012 6:34 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Minimum HA Setup with SolrCloud
 
 On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
 wrote:
 I trust that you have the right answer, Mark, but maybe I'm just struggling
 to parse this statement: the remaining two machines do not constitute a
 majority.
 
 If you start with 3 zk and lose one, you have an ensemble that does not
 constitute a majority.
 
 I think you took that out of context.  They were talking about losing
 2 nodes in a 4 node cluster.
 
 For example, with four machines ZooKeeper can only handle the failure
 of a single machine; if two machines fail, the remaining two machines
 do not constitute a majority.
 
 -Yonik
 http://lucidworks.com
 
 --
 Walter Underwood
 wun...@wunderwood.org
 
 
 
 
 --
 Walter Underwood
 wun...@wunderwood.org
 
 
 

--
Walter Underwood
wun...@wunderwood.org





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky
And this is precisely why the mystery remains - because you're only 
describing half the picture! Describe the rest of the picture - including 
what exactly those two zks can and can't do, including resolution of ties 
and the concept of constitu.


-- Jack Krupansky

-Original Message- 
From: Walter Underwood

Sent: Thursday, December 06, 2012 8:33 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

Configure an ensemble of three. When one goes down, you still have an 
ensemble of three, but with one down. The ensemble size is not reset after 
failures.


wunder

On Dec 6, 2012, at 5:20 PM, Jack Krupansky wrote:

The part I still find confusing is that if you start with 3 and lose 1, 
your have 2, which means you can't always break a tie, right? How is this 
explained? As opposed to saying that 4 is the minimum if you need to 
tolerate a loss of 1.


-- Jack Krupansky

-Original Message- From: Walter Underwood
Sent: Thursday, December 06, 2012 7:51 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

What is the mystery? Two is not more than half of four. Therefore, two 
machines is not a quorum for a four machine Zookeeper ensemble.


wunder

On Dec 6, 2012, at 4:50 PM, Jack Krupansky wrote:


It's still an unresolved mystery, for now.

-- Jack Krupansky

-Original Message- From: Walter Underwood
Sent: Thursday, December 06, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

The Zookeeper ensemble knows the total size. It does not adjust it each 
time that a machine is partitioned or down.


Two machines is not a quorum for a four machine ensemble.

Why do you think that the documentation would get this wrong?

wunder

On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:

But that is the context I was originally referring to - that with 4 zk 
you can lose only one, that you can't lose two. So, if you want to 
tolerate a loss on one, 4 zk would be the minimum... but then it was 
claimed that you COULD start with 3 zk and loss of one would be fine. I 
mean whether you start with 4 and lose 2 or start with 3 and lose 1 is 
the same, right?


-- Jack Krupansky

-Original Message- From: Yonik Seeley
Sent: Thursday, December 06, 2012 6:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
wrote:
I trust that you have the right answer, Mark, but maybe I'm just 
struggling
to parse this statement: the remaining two machines do not constitute 
a

majority.

If you start with 3 zk and lose one, you have an ensemble that does not
constitute a majority.


I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority.

-Yonik
http://lucidworks.com


--
Walter Underwood
wun...@wunderwood.org





--
Walter Underwood
wun...@wunderwood.org





--
Walter Underwood
wun...@wunderwood.org





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky

Oops...

And this is precisely why the mystery remains - because you're only 
describing half the picture! Describe the rest of the picture - including 
what exactly those two zks can and can't do, including resolution of ties 
and the concept of constituting a majority and a quorum.


I'm not saying the mystery CAN'T be solved or that you haven't resolved it 
in your own mind, but simply that it hasn't been resolved and clearly 
described in a complete and consistent manner in the narrative here today.


-- Jack Krupansky

-Original Message- 
From: Jack Krupansky

Sent: Thursday, December 06, 2012 8:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

And this is precisely why the mystery remains - because you're only
describing half the picture! Describe the rest of the picture - including
what exactly those two zks can and can't do, including resolution of ties
and the concept of constitu.

-- Jack Krupansky

-Original Message- 
From: Walter Underwood

Sent: Thursday, December 06, 2012 8:33 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

Configure an ensemble of three. When one goes down, you still have an
ensemble of three, but with one down. The ensemble size is not reset after
failures.

wunder

On Dec 6, 2012, at 5:20 PM, Jack Krupansky wrote:

The part I still find confusing is that if you start with 3 and lose 1, 
your have 2, which means you can't always break a tie, right? How is this 
explained? As opposed to saying that 4 is the minimum if you need to 
tolerate a loss of 1.


-- Jack Krupansky

-Original Message- From: Walter Underwood
Sent: Thursday, December 06, 2012 7:51 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

What is the mystery? Two is not more than half of four. Therefore, two 
machines is not a quorum for a four machine Zookeeper ensemble.


wunder

On Dec 6, 2012, at 4:50 PM, Jack Krupansky wrote:


It's still an unresolved mystery, for now.

-- Jack Krupansky

-Original Message- From: Walter Underwood
Sent: Thursday, December 06, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

The Zookeeper ensemble knows the total size. It does not adjust it each 
time that a machine is partitioned or down.


Two machines is not a quorum for a four machine ensemble.

Why do you think that the documentation would get this wrong?

wunder

On Dec 6, 2012, at 4:14 PM, Jack Krupansky wrote:

But that is the context I was originally referring to - that with 4 zk 
you can lose only one, that you can't lose two. So, if you want to 
tolerate a loss on one, 4 zk would be the minimum... but then it was 
claimed that you COULD start with 3 zk and loss of one would be fine. I 
mean whether you start with 4 and lose 2 or start with 3 and lose 1 is 
the same, right?


-- Jack Krupansky

-Original Message- From: Yonik Seeley
Sent: Thursday, December 06, 2012 6:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 5:55 PM, Jack Krupansky j...@basetechnology.com 
wrote:
I trust that you have the right answer, Mark, but maybe I'm just 
struggling
to parse this statement: the remaining two machines do not constitute 
a

majority.

If you start with 3 zk and lose one, you have an ensemble that does not
constitute a majority.


I think you took that out of context.  They were talking about losing
2 nodes in a 4 node cluster.

For example, with four machines ZooKeeper can only handle the failure
of a single machine; if two machines fail, the remaining two machines
do not constitute a majority.

-Yonik
http://lucidworks.com


--
Walter Underwood
wun...@wunderwood.org





--
Walter Underwood
wun...@wunderwood.org





--
Walter Underwood
wun...@wunderwood.org




Attention Solr 4.0 SolrCloud users

2012-12-06 Thread Mark Miller
I should have sent this some time ago:

https://issues.apache.org/jira/browse/SOLR-3940 Rejoining the leader election 
incorrectly triggers the code path for a fresh cluster start rather than fail 
over.

The above is a somewhat ugly bug.

It means that if you are playing around with recovery and you kill the leader 
replica of a shard, it will take 3 minutes before a new leader takes over.

This will be fixed in the upcoming 4.1 release (And has been fixed on 4x since 
early October).

This wait is only meant for cluster startup. The idea is that you might 
introduce some random, old, out of date shard and then start up your cluster - 
you don't want that shard to be a leader - so we wait around for all known 
shards to startup so they can all participate in the initial leader election 
and the best one can be chosen. It's meant as a protective measure against a 
fairly unlikely event. But it's kicking in when it shouldn't.

You can just accept the 3 minute wait, or you can lower the wait from 3 minutes 
(to like 10 seconds or to 0 seconds - just avoid the scenario I mention above 
if you do).

You can set the wait time in solr.xml by adding the attribute 
leaderVoteWait={whatever milliseconds} to the cores node.
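For example, in an old-style solr.xml (the surrounding attributes and the 10-second value are illustrative assumptions; only the leaderVoteWait attribute itself comes from the note above):

```xml
<solr persistent="true">
  <!-- leaderVoteWait is in milliseconds; 10000 = wait up to 10s at startup -->
  <cores adminPath="/admin/cores" defaultCoreName="collection1" leaderVoteWait="10000">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```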

Sorry about this - completely my fault.

- Mark

Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread Mark Miller
Ryan, my new best friend! Please, file JIRA issue(s) for these items!

I'm sure you will get some feedback.

- Mark

On Dec 6, 2012, at 5:09 PM, Ryan Zezeski rzeze...@gmail.com wrote:

 There are some gains to be made in Solr's distributed search code.  A few
 weeks ago I spent time profiling dist search using dtrace/btrace and
 found some areas for improvement.  I planned on writing up some blog posts
 and providing patches, but I'll list them off now in case others have input.
 
 1) Disable the HTTP client stale check.  It is known to cause latency
 issues.  Doing this gave me a 4x increase in perf.
 
 2) Disable Nagle's algorithm; many tiny packets are not being sent (to my
 knowledge), so don't wait.
 
 3) Use a single TermEnum for all external-id to Lucene-id lookups.  This
 seemed to reduce total bytes read according to dtrace.
 
 4) Building off #3, cache a certain number of external-id to Lucene-id
 mappings, avoiding the TermEnum altogether.
 
 5) If fl=id is present then don't run the 2nd phase of the dist search.
 
 I'm still very new to Solr, so there could be issues with any of the patches
 I propose above that I'm not aware of.  Would love to hear input.
 
 -Z
 
 On Wed, Dec 5, 2012 at 8:35 PM, sausarkar sausar...@ebay.com wrote:
 
 We are using SolrCloud and trying to configure it for testing purposes. We
 are seeing that the average query time increases if we have more than one
 node in the SolrCloud cluster. We have a single shard with a 12 GB index.
 Example: 1 node, average query time ~28 msec at a load of 140 queries/second;
 3 nodes, average query time ~110 msec at a load of 420 queries/second
 distributed equally across three servers, so essentially 140 qps on each
 node. Is there any inter-node communication going on for queries? Is there
 any setting on SolrCloud for query tuning for a cloud config with multiple
 nodes? Please help.
 
 
 



Re: SolrCloud - Query performance degrades with multiple servers

2012-12-06 Thread Mark Miller

On Dec 6, 2012, at 5:08 PM, sausarkar sausar...@ebay.com wrote:

 We solved the issue by explicitly adding numShards=1 argument to the solr
 start up script. Is this a bug?

Sounds like it…perhaps related to SOLR-3971…not sure though.

- Mark

want to get Alert while “java.lang.OutOfMemoryError: PermGen space” error coming in SOLR Generation

2012-12-06 Thread aniljayanti
Hi,

I'm generating a Solr index using Solr 3.3 and Apache Tomcat 7.0.19. Sometimes
my Tomcat hangs, giving the below error in the log.

SEVERE: Full Import
failed:org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.OutOfMemoryError: PermGen space
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:664)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:267)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:186)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:427)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)
Caused by: java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)


I have given the below memory sizes:

--JvmMs 2048 --JvmMx 3096 -XX:MaxPermSize=1024m

So I want to get an ALERT whenever Solr gets stuck or throws an OutOfMemoryError.

Thanks in advance.

AnilJayanti.






Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 8:42 PM, Jack Krupansky j...@basetechnology.com wrote:
 And this is precisely why the mystery remains - because you're only
 describing half the picture! Describe the rest of the picture - including
 what exactly those two zks can and can't do, including resolution of ties
 and the concept of constituting a majority and a quorum.

The high level description is simple: 2 nodes that can still talk to
each other are more than 50% of the original 3, hence they know that
they can make decisions, and that it's impossible for another
partitioned group to make contrary decisions (since any partitioned
groups must be less than 50% of the original cluster).

The (very) low level is out of scope for here... I'd suggest starting
here though: http://en.wikipedia.org/wiki/Paxos_(computer_science)
and following up on the zookeeper lists.

-Yonik
http://lucidworks.com


Re: want to get Alert while “java.lang.OutOfMemoryError: PermGen space” error coming in SOLR Generation

2012-12-06 Thread Worthy LaFollette
You might consider implementing some JMX tooling.  Nagios is one of several
such engines.

wiki.apache.org/tomcat/FAQ/Monitoring
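To make the alert concrete, here is a minimal in-process sketch of the check a JMX monitor would poll remotely, using only java.lang.management. The 90% threshold and the pool-name matching are assumptions: the permanent generation pool is named e.g. "PS Perm Gen" on older JVMs, and newer JVMs replace it with "Metaspace".

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

class MemAlert {
    // Returns true if the pool is above the given fill ratio (an alert condition).
    // getMax() can be -1 (undefined), so guard against that.
    static boolean overThreshold(MemoryUsage u, double ratio) {
        return u.getMax() > 0 && (double) u.getUsed() / u.getMax() > ratio;
    }

    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // "Perm" on older JVMs, "Metaspace" on newer ones (assumed name match)
            if (pool.getName().contains("Perm") || pool.getName().contains("Metaspace")) {
                MemoryUsage u = pool.getUsage();
                System.out.println(pool.getName() + ": " + u.getUsed() + "/" + u.getMax()
                        + (overThreshold(u, 0.9) ? "  ALERT" : "  ok"));
            }
        }
    }
}
```

A Nagios check would run the same comparison against values read over remote JMX instead of in-process.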



On Thursday, December 6, 2012, aniljayanti wrote:

 Hi,

 Im generating SOLR using SOLR 3.3, Apache Tomcat 7.0.19. Some times my
 Tomcat get hanged giving below error in log.


 SEVERE: Full Import
 failed:org.apache.solr.handler.dataimport.DataImportHandlerException:
 java.lang.OutOfMemoryError: PermGen space
 at

 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:664)
 at

 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:267)
 at
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:186)
 at

 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)
 at

 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:427)
 at

 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)
 Caused by: java.lang.OutOfMemoryError: PermGen space
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)


 i have given below memory size.

 --JvmMs 2048 --JvmMx 3096 -XX:MaxPermSize=1024m

 So i want to get ALERT when ever SOLR strucked or gives OutOfMemory.

 Thanks in Advance.

 AnilJayanti.










Re[4]: Cannot run Solr4 from Intellij Idea

2012-12-06 Thread Artyom
Thank you. I will read about these commands.
I don't copy anything anywhere. I just edit the code and click Run; IDEA does 
everything for me. I guess IDEA's artifacts are exactly for these routines.

Anyway, the instructions you described are not anywhere in the Solr wiki, so 
it's hard for novices like me to start developing Solr.


Thursday, 6 December 2012, 16:06, from Erick Erickson [via Lucene] 
ml-node+s472066n4024970...@n3.nabble.com:
Why do this? It's trivial to attach IntelliJ to a running Solr; just create
a remote configuration. When you do, it'll give you the parameters you'll be
able to start Solr with and attach from IntelliJ, set breakpoints, etc.
Something like:
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5900
-jar start.jar

You'll get the parameters to start Solr with when you create the remote
configuration in IntelliJ. Then, I do an ant example from
solrhome/solr, go into the example dir and I'm off to the races. The
suspend=y means that solr just sits there until you attach IntelliJ.

I suspect that if you've copied things around and then change Solr, you'll
have a world of problems getting the changed jars to the right places.

It's a bit of a pain that when you do make changes to Solr code, you have
to do another ant example, but if your goal is simply to step through Solr
code, it's much easier to do a remote debugging session.

Best
Erick


On Wed, Dec 5, 2012 at 11:55 PM, Artyom [hidden email] wrote:


 See the screenshots:

 solr_idea1: adding an IDEA tomcat artifact
 solr_idea2: adding an IDEA facet
 solr_idea3: placing modules into the artifact (drag modules from the
 Available Elements to output root) and the created facet


 Wednesday, 5 December 2012, 7:28, from sarowe [via Lucene] 
 [hidden email]:
         Hi Artyom,
 
 I don't use IntelliJ artifacts - I just edit/compile/test.
 
 I can include this stuff in the IntelliJ configuration if you'll help me.
  Can you share screenshots of what you're talking about, and/or IntelliJ
 config files?
 
 Steve
 
 On Dec 5, 2012, at 8:24 AM, Artyom [hidden email] wrote:
 
 
  InelliJ IDEA is not so intelligent with Solr: to fix this problem I've
  dragged these modules into the IDEA's artifact (parent module is wrong):
 
  analysis-common
  analysis-extras
  analysis-uima
  clustering
  codecs
  codecs-resources
  dataimporthandler
  dataimporthandler-extras
  lucene-core
  lucene-core-resources
  solr-core
 
 
 
  --
  View this message in context:
 http://lucene.472066.n3.nabble.com/Cannot-run-Solr4-from-Intellij-Idea-tp4024233p4024452.html
  Sent from the Solr - User mailing list archive at Nabble.com.
 
 solr_idea1.png (102K) 
 http://lucene.472066.n3.nabble.com/attachment/4024723/0/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMS5wbmc%3D%3F%3D
 
 solr_idea2.png (117K) 
 http://lucene.472066.n3.nabble.com/attachment/4024723/1/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMi5wbmc%3D%3F%3D
 
 solr_idea3.png (148K) 
 http://lucene.472066.n3.nabble.com/attachment/4024723/2/%3D%3FUTF-8%3FB%3Fc29scl9pZGVhMy5wbmc%3D%3F%3D





Re: Minimum HA Setup with SolrCloud

2012-12-06 Thread Jack Krupansky
I think I've figured out how to express it: a zk node can offer its services 
if it is able to communicate with more than half of the specified ensemble 
size. This assures that there is no split brain, where two or more competing 
groups of inter-communicating nodes could offer conflicting services, since 
only nodes that can communicate with each other can agree on services.


1 zk = works, but no HA - a single point of failure
2 zk = works, but no HA, since tolerating the loss of one node would allow 
split brain
3 zk = allows 1 unreachable/down node, since 2 is more than half of 3 and 
assures no split brain; a single isolated node cannot offer services
4 zk = allows 1 unreachable/down node, but not 2, since the remaining 2 are 
not more than half of 4
5 zk = allows 1 or 2 unreachable/down nodes, since 3 is more than half of 5 
and assures no split brain; 2 or fewer reachable nodes cannot offer services
6 zk = allows 1 or 2 unreachable/down nodes, since 4 is more than half of 6 
and assures no split brain; 3 or fewer communicating nodes cannot offer 
services


And, finally, it is not the number of nodes that are down per se, but how 
many nodes a given node can communicate with, and whether that is a simple 
majority of the specified ensemble size.
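The table above reduces to one rule: a group of nodes can act only if it is a strict majority of the configured ensemble size. A small arithmetic sketch (not ZooKeeper code, just the quorum math):

```java
class ZkQuorum {
    // Smallest group size that is a strict majority of the ensemble.
    static int majority(int ensembleSize) {
        return ensembleSize / 2 + 1;
    }

    // A group of mutually reachable nodes can serve iff it is a strict majority.
    static boolean hasQuorum(int ensembleSize, int reachable) {
        return reachable >= majority(ensembleSize);
    }

    // Number of failures the ensemble tolerates while keeping a quorum.
    static int tolerated(int ensembleSize) {
        return ensembleSize - majority(ensembleSize);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++) {
            System.out.println(n + " zk: majority=" + majority(n)
                    + ", tolerates " + tolerated(n) + " down");
        }
    }
}
```

Note how tolerated(3) == tolerated(4) == 1: the fourth node adds no fault tolerance, which is why ensembles are usually sized odd.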



-- Jack Krupansky

-Original Message- 
From: Yonik Seeley

Sent: Thursday, December 06, 2012 11:41 PM
To: solr-user@lucene.apache.org
Subject: Re: Minimum HA Setup with SolrCloud

On Thu, Dec 6, 2012 at 8:42 PM, Jack Krupansky j...@basetechnology.com 
wrote:

And this is precisely why the mystery remains - because you're only
describing half the picture! Describe the rest of the picture - including
what exactly those two zks can and can't do, including resolution of ties
and the concept of constituting a majority and a quorum.


The high level description is simple: 2 nodes that can still talk to
each other are more than 50% of the original 3, hence they know that
they can make decisions, and that it's impossible for another
partitioned group to make contrary decisions (since any partitioned
groups must be less than 50% of the original cluster).

The (very) low level is out of scope for here... I'd suggest starting
here though: http://en.wikipedia.org/wiki/Paxos_(computer_science)
and following up on the zookeeper lists.

-Yonik
http://lucidworks.com 



Re: Is there any plugin/tool to import data from Excel to Solr

2012-12-06 Thread Gora Mohanty
On 7 December 2012 12:30, Zeng Lames lezhi.z...@gmail.com wrote:
 Hi,

 want to know if there is any plugin/tool to import data from Excel into Solr.
[...]

You could export to CSV from Excel, and import the CSV into Solr:
http://wiki.apache.org/solr/UpdateCSV
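That export-then-post step can be scripted. A minimal sketch of posting an exported CSV file to the CSV update handler; the localhost URL, the /update/csv path, and the commit flag are assumptions to adapt to your setup (see the UpdateCSV wiki page above for the full parameter list):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

class CsvToSolr {
    // Builds the CSV update URL; base is e.g. http://localhost:8983/solr (assumed).
    static String updateUrl(String base, boolean commit) {
        return base + "/update/csv?commit=" + commit;
    }

    public static void main(String[] args) throws Exception {
        String url = updateUrl("http://localhost:8983/solr", true);
        if (args.length == 0) {            // no file given: just show the target URL
            System.out.println(url);
            return;
        }
        byte[] csv = Files.readAllBytes(Paths.get(args[0])); // CSV exported from Excel
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(csv);                // stream the CSV body to Solr
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```

The first CSV row should contain field names matching your schema, per the UpdateCSV wiki.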

Regards,
Gora