RE: Indexing problems

2013-04-19 Thread GASPARD Joel
Hello

Thank you for your answer.
We have solved our problem now. I'll describe it here for anyone who might encounter a 
similar problem.

Some of our fields are dynamic, and the name of one of these fields was not 
correct: it was sent to Solr as a Java object instead of a string, e.g.
solrInputDocument.addField(myObject, stringValue);

A string representation of this object was displayed in the Solr admin page, 
and that is what alerted us. Once we replaced this wrong field name with the 
string we expected, no more OOMEs occurred.
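
To make the mistake concrete, here is a minimal sketch (the object and the field 
names are invented for illustration; only SolrInputDocument.addField(String, Object) 
is the real SolrJ call):

import org.apache.solr.common.SolrInputDocument;

public class FieldNameBug {
    public static void main(String[] args) {
        SolrInputDocument doc = new SolrInputDocument();

        // Hypothetical object that accidentally ended up in the field-name position.
        Object myObject = new java.util.Date();

        // Wrong: the object's toString() becomes the dynamic field name,
        // not the name we intended.
        doc.addField("attr_" + myObject, "some value");

        // Right: pass the intended field name as a plain string.
        doc.addField("attr_title", "some value");

        System.out.println(doc);
    }
}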

At least we were able to test various Solr configurations in the process.

Regards

Joel Gaspard 



-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Thursday, January 31, 2013 14:00
To: solr-user@lucene.apache.org
Subject: Re: Indexing problems

I'm really surprised you're hitting OOM errors; I suspect you have something 
else pathological in your system. So, I'd start by checking things like:
- how many concurrent warming searchers you allow
- how big your indexing RAM buffer is set to (we find very little gain over 128M, BTW)
- other load on your Solr server. Are you, for instance, searching on it too?
- what your autocommit characteristics are (think about autocommitting fairly 
often with openSearcher=false)
- have you defined huge caches?
- .

How big are these documents anyway? With 12G of RAM, they'd have to be 
absolutely _huge_ to matter much.

Multiple collections should work fine in ZK. I really think you have some 
innocent-looking configuration setting that's bollixing you up; this is not 
expected behavior.

If at all possible, I'd also go with 4.1. I don't really think it's relevant to 
your situation, but there have been a lot of improvements in the code.

Best
Erick


RE: Indexing problems

2013-01-31 Thread GASPARD Joel
Hello,

After more tests, we were able to identify our indexing problem (Solr 4.0.0).
Our problems are indeed OutOfMemoryErrors; blaming Zookeeper connection 
problems was a mistake. We had suspected them because OOMEs sometimes appear 
in the logs after errors during Zookeeper leader election.

Indexing fails when we define several Solr schemas in Zookeeper.
When we define a single schema, indexing works well. This has been tested with 
a single Solr node in the cluster and with two Solr nodes.
We are facing problems when we upload several configurations in Zookeeper: we 
can create an index for a single collection, but OutOfMemoryErrors are thrown 
when we try to create an index for a second collection with another schema.
Garbage collection logs show a rapid increase in memory consumption, then 
OutOfMemory errors.

Can we define a distinct schema for each collection?

Thanks !

Joel Gaspard



From: GASPARD Joel [mailto:joel.gasp...@cegedim.com]
Sent: Tuesday, January 22, 2013 16:30
To: solr-user@lucene.apache.org
Subject: Indexing problems

Hello,

We are facing some problems when indexing with Solr 4.0.0 with more than one 
server node, and we can't find a way to solve them.
We have 2 SolrCloud nodes.
They are running against a Zookeeper ensemble (version 3.4.4) with 3 servers 
(another application is deployed on the third server).
We are trying to index a collection with 1 shard stored on the 2 nodes.
2 other collections, each with a single shard, have already been indexed. The logs for 
this first indexing have been lost, but maybe there was a single Solr node when 
that indexing was done. Each collection contains about 3,000,000 documents 
(16 GB).

When we start adding documents, failures occur very fast, after maybe 2,000 
documents, and the Solr servers cannot be accessed anymore.
I have attached part of the logs to this mail.

When we use SolrCloud with only one node in a single Zookeeper ensemble, we 
don't encounter any problems.



Some details about our configuration:
We send about 400 documents per minute.
The documents are added to Solr by two threads in our application, using the 
CloudSolrServer class.
These threads don't call the commit method; we rely only on the Solr config to 
commit. The solrconfig.xml currently defines:
<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>
No soft commit.
We have also tried:
<autoCommit><maxTime>60</maxTime><openSearcher>false</openSearcher></autoCommit>
<autoSoftCommit><maxTime>1000</maxTime></autoSoftCommit>
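
To illustrate, here is a minimal sketch of how the two threads feed documents 
(the collection and field names are placeholders, not our real schema; 
CloudSolrServer, setDefaultCollection and add are standard SolrJ 4.x calls):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class Indexer {
    public static void main(String[] args) throws Exception {
        // Same ZooKeeper ensemble as in the -DzkHost option below.
        CloudSolrServer server =
                new CloudSolrServer("server1:2188,server2:2188,server3:2188");
        server.setDefaultCollection("collection1");   // placeholder collection name

        for (int i = 0; i < 1000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            doc.addField("title_s", "document " + i);  // placeholder field
            server.add(doc);
            // No explicit commit here: autoCommit/autoSoftCommit in solrconfig.xml
            // decide when documents become durable and visible.
        }
        server.shutdown();
    }
}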

The Solr servers are launched with these options:
-Xmx12G -Xms4G
-XX:MaxPermSize=256m -XX:MaxNewSize=356m
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseParNewGC
-XX:+CMSClassUnloadingEnabled
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=25
-DzkHost=server1:2188,server2:2188,server3:2188

The solr.xml contains zkClientTimeout=6 and zoo.cfg defines a tickTime of 
3000 ms.

The Solr servers on which we are facing these problems also contain old 
collections and old cores created for some tests.



Could you give me some pointers?
Is this a problem with our Solr or Zookeeper config?
How could we detect network problems?
Is there a problem with the JVM parameters? Should we analyse some garbage 
collection logs?

Thanks in advance.

Joel Gaspard


RE: Indexing problems

2013-01-31 Thread GASPARD Joel
Hello Erick,

Thanks for your answer.

After reading previous threads on the user list, we had already tried to 
change the parameters you mentioned.

- concurrent warming searchers: we have set the maxWarmingSearchers attribute 
to 2:
<maxWarmingSearchers>2</maxWarmingSearchers>

- we have tried 32 and 64 for the ramBufferSizeMB attribute

- there is no other load on the Solr server, and no searching while we index

- the autoCommit is defined with openSearcher=false, maxTime=60ms, 
maxDocs=6000; the autoSoftCommit is defined with maxTime=1000
We have already tried to change the soft commit and commit parameters in 
several ways. We have also tried to commit on the client side.
OK, I'll try to commit more often.

- we have used the cache sizes defined in the example config: size=512

The document size is not too big, I think: 1 million documents produce a 6 GB 
index.

Thanks for your answer on multiple collections. I thought multiple collections 
had to share the same schema in ZK after reading a wiki page, 
http://wiki.apache.org/solr/NewSolrCloudDesign : "The entire cluster must have 
a single schema and solrconfig".
Maybe this page is deprecated?
I also assumed that because OOM errors occur only when we index a second 
collection. There is no problem when indexing a single collection.
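
As Erick confirms below, a collection can indeed be bound to its own config set. 
For reference, a rough SolrJ 4.x sketch of creating a second collection against a 
separate config set (the host, collection and config names are placeholders, and it 
assumes the config set was already uploaded to ZooKeeper, e.g. with the zkcli.sh 
upconfig tool shipped in cloud-scripts/):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CreateCollection {
    public static void main(String[] args) throws Exception {
        // Placeholder node address; any node of the cluster can receive the call.
        HttpSolrServer server = new HttpSolrServer("http://server1:8080/solr");

        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "CREATE");
        params.set("name", "collection2");               // placeholder collection name
        params.set("numShards", "1");
        params.set("replicationFactor", "2");
        params.set("collection.configName", "config2");  // its own config set in ZK

        QueryRequest request = new QueryRequest(params);
        request.setPath("/admin/collections");           // Collections API endpoint
        server.request(request);

        server.shutdown();
    }
}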

Going with 4.1 would not be easy for now... We'll think about it.

Thanks.

Joel


-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Thursday, January 31, 2013 14:00
To: solr-user@lucene.apache.org
Subject: Re: Indexing problems

I'm really surprised you're hitting OOM errors; I suspect you have something 
else pathological in your system. So, I'd start by checking things like:
- how many concurrent warming searchers you allow
- how big your indexing RAM buffer is set to (we find very little gain over 128M, BTW)
- other load on your Solr server. Are you, for instance, searching on it too?
- what your autocommit characteristics are (think about autocommitting fairly 
often with openSearcher=false)
- have you defined huge caches?
- .

How big are these documents anyway? With 12G of RAM, they'd have to be 
absolutely _huge_ to matter much.

Multiple collections should work fine in ZK. I really think you have some 
innocent-looking configuration setting that's bollixing you up; this is not 
expected behavior.

If at all possible, I'd also go with 4.1. I don't really think it's relevant to 
your situation, but there have been a lot of improvements in the code.

Best
Erick


Indexing problems

2013-01-22 Thread GASPARD Joel
Hello,

We are facing some problems when indexing with Solr 4.0.0 with more than one 
server node, and we can't find a way to solve them.
We have 2 SolrCloud nodes.
They are running against a Zookeeper ensemble (version 3.4.4) with 3 servers 
(another application is deployed on the third server).
We are trying to index a collection with 1 shard stored on the 2 nodes.
2 other collections, each with a single shard, have already been indexed. The logs for 
this first indexing have been lost, but maybe there was a single Solr node when 
that indexing was done. Each collection contains about 3,000,000 documents 
(16 GB).

When we start adding documents, failures occur very fast, after maybe 2,000 
documents, and the Solr servers cannot be accessed anymore.
I have attached part of the logs to this mail.

When we use SolrCloud with only one node in a single Zookeeper ensemble, we 
don't encounter any problems.



Some details about our configuration:
We send about 400 documents per minute.
The documents are added to Solr by two threads in our application, using the 
CloudSolrServer class.
These threads don't call the commit method; we rely only on the Solr config to 
commit. The solrconfig.xml currently defines:
<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>
No soft commit.
We have also tried:
<autoCommit><maxTime>60</maxTime><openSearcher>false</openSearcher></autoCommit>
<autoSoftCommit><maxTime>1000</maxTime></autoSoftCommit>

The Solr servers are launched with these options:
-Xmx12G -Xms4G
-XX:MaxPermSize=256m -XX:MaxNewSize=356m
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseParNewGC
-XX:+CMSClassUnloadingEnabled
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=25
-DzkHost=server1:2188,server2:2188,server3:2188

The solr.xml contains zkClientTimeout=6 and zoo.cfg defines a tickTime of 
3000 ms.

The Solr servers on which we are facing these problems also contain old 
collections and old cores created for some tests.



Could you give me some pointers?
Is this a problem with our Solr or Zookeeper config?
How could we detect network problems?
Is there a problem with the JVM parameters? Should we analyse some garbage 
collection logs?

Thanks in advance.

Joel Gaspard
Problems often begin with errors like:

server.log on the current leader :
11:55:30,015 ERROR [STDERR] Jan 22, 2013 11:55:30 AM 
org.apache.solr.core.SolrCore execute
INFO: [collection2] webapp=/solr path=/replication 
params={file=_6pe_nrm.cfs&command=filecontent&checksum=true&offset=1304428544&qt=/replication&generation=1839&wt=filestream}
 status=0 QTime=0 
11:55:30,047 ERROR [STDERR] Jan 22, 2013 11:55:30 AM 
org.apache.zookeeper.ClientCnxn$SendThread run
INFO: Client session timed out, have not heard from server in 62416ms for 
sessionid 0x13c61924ade, closing socket connection and attempting reconnect
11:55:30,099 ERROR [STDERR] Jan 22, 2013 11:55:30 AM 
org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/get 
params={getVersions=100&distrib=false&qt=/get&wt=javabin&version=2} status=0 
QTime=0 
11:55:30,139 ERROR [STDERR] Jan 22, 2013 11:55:30 AM 
org.apache.solr.handler.ReplicationHandler$FileStream write
WARNING: Exception while writing response for params: 
file=_6pe_nrm.cfs&command=filecontent&checksum=true&generation=1839&qt=/replication&wt=filestream
ClientAbortException:  java.net.SocketException: Broken pipe
at 
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366)
...
2013-01-22 11:55:31,764 ERROR [STDERR] Jan 22, 2013 11:55:31 AM 
org.apache.solr.common.cloud.ConnectionManager process
INFO: zkClient has disconnected
...
2013-01-22 11:55:32,429 ERROR [STDERR] Jan 22, 2013 11:55:32 AM 
org.apache.solr.cloud.Overseer$ClusterStateUpdater amILeader
WARNING: 
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)

zookeeper.log on the current leader :
2013-01-22 11:55:30,122 [myid:1] - INFO  [CommitProcessor:1:NIOServerCnxn@1001] 
- Closed socket connection for client /120.8.195.38:42931 which had sessionid 
0x13c61924ade
2013-01-22 11:55:30,122 [myid:1] - DEBUG [CommitProcessor:1:NIOServerCnxn@1017] 
- ignoring exception during output shutdown
java.net.SocketException: Transport endpoint is not connected
at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)

server.log on the current replicate :
INFO: [collection2] webapp=/solr path=/update 
params={distrib.from=http://server1:8080/solr/collection2/&update.distrib=FROMLEADER&wt=javabin&version=2}
 status=0 QTime=5 
11:54:48,301 ERROR [STDERR] Jan 22, 2013 11:54:48 AM 
org.apache.solr.handler.SnapPuller$FileFetcher fetchPackets
WARNING: Error in fetching packets 
java.net.SocketTimeoutException: Read timed out
...
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)
11:55:08,311 ERROR [STDERR] Jan 22, 2013