how to remove documents which have been deleted from the database

2013-12-16 Thread kaustubh147

Hi,

Glassfish 3.1.2.2 
Solr 4.5 
Zookeeper 3.4.5 

We have set up a SolrCloud with 4 Solr nodes and 3 ZooKeeper instances.
I have 5 cores, each with a 1 shard / 4 replica setup.


One of our cores is very small, and it takes less than one minute to index.
We run a full import on it every hour with commit=true and clean=false, but
the full import does not delete documents that are no longer in the database.
If I run the import with clean=true, it deletes the index first and shows 0
records while the import is in progress.

For delta imports we would have to create a new table in the database to
keep track of deleted records; we don't want to go that route.

Is there a way to run a full import with clean=true such that Solr only
exposes the updated data after the import has completed?
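
For reference, this is roughly how the import gets triggered (a minimal,
untested SolrJ 4.x sketch; the server URL and core name are placeholders,
and it assumes the DIH handler is registered at /dataimport as usual):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class TriggerFullImport {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://host:28081/solr/core1");
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("command", "full-import");
        // clean=true deletes the existing documents at the START of the
        // import; any searcher opened before the final commit will see the
        // partially rebuilt index.
        params.set("clean", "true");
        params.set("commit", "true"); // commit once the import finishes
        QueryRequest req = new QueryRequest(params);
        req.setPath("/dataimport"); // route to the DIH request handler
        System.out.println(req.process(server).getResponse());
    }
}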


Regards,
Kaustubh


zookeeper timeout issue

2013-12-16 Thread kaustubh147
Hi,

The following warning message is filling our application logs very rapidly.
It is printed every time the application talks to ZooKeeper.


[#|2013-12-13T08:33:03.023-0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=64;_ThreadName=Thread-2;|2013-12-13
08:33:03,023 WARN   org.apache.zookeeper.ClientCnxn - Session
0x342e9013f2d0063 for server null, unexpected error, closing socket
connection and attempting reconnect
java.lang.NullPointerException: key can't be null
at java.lang.System.checkKey(System.java:771)
at java.lang.System.getProperty(System.java:647)
at
org.apache.zookeeper.client.ZooKeeperSaslClient.init(ZooKeeperSaslClient.java:133)
at
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:943)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:993)
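
For what it's worth, the NPE itself is easy to reproduce in isolation:
java.lang.System rejects a null property key with exactly this message, so
ZooKeeperSaslClient.init must be passing a null key on this code path. A
minimal demonstration:

public class NullKeyDemo {
    public static void main(String[] args) {
        // Throws java.lang.NullPointerException: key can't be null
        System.getProperty(null);
    }
}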


I am seeing the following INFO messages in the ZooKeeper logs:



[myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@812] - Refusing
session request for client /IP1:15703 as it has seen zxid 0x1c1103 our
last zxid is 0x40348 client must try another server
2013-12-16 15:45:56,999 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed
socket connection for client /10.163.160.78:15703 (no session established
for client)


It could be related to this problem:

https://issues.apache.org/jira/browse/ZOOKEEPER-1237
or 
http://mail-archives.apache.org/mod_mbox/zookeeper-user/201208.mbox/%3CCANLc_9Jwieyyig=yg1yvczaeobc8swwj3fqd4x993ryrpod...@mail.gmail.com%3E

Is there a solution to this problem, or will I have to wait for the next
ZooKeeper release?

Regards,
Kaustubh


Re: unable to load core after cluster restart

2013-11-12 Thread kaustubh147
Hi,

So we finally got our JDK/JRE upgraded to 1.6.0_33, but it didn't solve the
problem. I am still seeing the same write.lock error.

I was able to work around it by changing the lock type from native to
single, but I am not sure about the other ramifications of this approach.
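
One alternative I am considering (a rough, untested sketch; it assumes no
other Solr process ever writes the same index directory) is to keep the
native lock type and instead clear a stale write.lock left behind by an
unclean shutdown, before starting Solr:

import java.io.File;

public class StaleLockCleaner {
    public static void main(String[] args) {
        // args[0]: a core's data directory, e.g.
        // .../SolrHome1/solr/collection1_shard1_replica2/data
        File lock = new File(args[0], "index/write.lock");
        if (lock.isFile() && lock.delete()) {
            System.out.println("Removed stale lock: " + lock.getAbsolutePath());
        } else {
            System.out.println("No stale lock removed at: " + lock.getAbsolutePath());
        }
    }
}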

Do you see any problems with this in a 1 shard / 4 replica setup?

Thanks,
Kaustubh


Re: unable to load core after cluster restart

2013-11-06 Thread kaustubh147
Hi All,

I have further investigated the differences between the two environments.

We have JDK 1.6.0_17 (VM 14.3-b01) on UAT and JDK 1.6.0_33 (VM 20.8-b03) on
QA1. Could that be the reason behind this error?

Is there a recommended JDK version for SolrCloud?

Thanks,
Kaustubh


Re: unable to load core after cluster restart

2013-11-06 Thread kaustubh147
Hi,

Here is my solr.xml:



<solr>

  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">28081</int>
    <str name="hostContext">/solr</str>
    <str name="zkHost">IP1:2181,IP2:2181,IP3:2181/mysolr</str>
    <int name="zkClientTimeout">15000</int>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>

  <shardHandlerFactory name="shardHandlerFactory"
      class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:0}</int>
    <int name="connTimeout">${connTimeout:0}</int>
  </shardHandlerFactory>

</solr>

--

and my solrconfig.xml:



<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <luceneMatchVersion>4.5</luceneMatchVersion>

  <lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
  <lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-clustering-\d.*\.jar" />
  <lib dir="../../../dist/" regex="solr-dataimporthandler-\d.*\.jar" />
  <lib dir="../../../contrib/dataimporthandler/lib/" regex=".*\.jar" />
  <lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-langid-\d.*\.jar" />

  <lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar" />

  <dataDir>${solr.data.dir:}</dataDir>

  <directoryFactory name="DirectoryFactory"
      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  <codecFactory class="solr.SchemaCodecFactory"/>
  <schemaFactory class="ClassicIndexSchemaFactory"/>

  <indexConfig>
    <writeLockTimeout>2</writeLockTimeout>
    <lockType>${solr.lock.type:native}</lockType>
    <infoStream>true</infoStream>
  </indexConfig>

  <updateHandler class="solr.DirectUpdateHandler2">

    <updateLog>
      <str name="dir">${solr.data.dir:}</str>
    </updateLog>

    <autoCommit>
      <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <autoSoftCommit>
      <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
    </autoSoftCommit>

  </updateHandler>

  <query>

    <maxBooleanClauses>50</maxBooleanClauses>

    <filterCache class="solr.FastLRUCache"
        size="512"
        initialSize="512"
        autowarmCount="0"/>

    <queryResultCache class="solr.LRUCache"
        size="512"
        initialSize="512"
        autowarmCount="0"/>

    <documentCache class="solr.LRUCache"
        size="512"
        initialSize="512"
        autowarmCount="0"/>

    <cache name="perSegFilter"
        class="solr.search.LRUCache"
        size="10"
        initialSize="0"
        autowarmCount="10"
        regenerator="solr.NoOpRegenerator" />

    <enableLazyFieldLoading>true</enableLazyFieldLoading>

    <queryResultWindowSize>20</queryResultWindowSize>

    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>

    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
    </listener>

    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">static firstSearcher warming in solrconfig.xml</str>
        </lst>
      </arr>
    </listener>

    <useColdSearcher>false</useColdSearcher>

    <maxWarmingSearchers>2</maxWarmingSearchers>

  </query>

  <requestDispatcher handleSelect="false">

    <requestParsers enableRemoteStreaming="true"
        multipartUploadLimitInKB="2048000"
        formdataUploadLimitInKB="16384"
        addHttpRequestToContext="false"/>

    <httpCaching never304="true" />

  </requestDispatcher>

  <requestHandler name="/dataimport"
      class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">db-data-config.xml</str>
    </lst>
  </requestHandler>

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="df">keyword</str>
    </lst>
  </requestHandler>

  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="df">keyword</str>
    </lst>
  </requestHandler>

  <requestHandler name="/get" class="solr.RealTimeGetHandler">
    <lst name="defaults">
      <str name="omitHeader">true</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
    </lst>
  </requestHandler>

  <requestHandler name="/update" class="solr.UpdateRequestHandler">
  </requestHandler>

  <requestHandler name="/update/json" class="solr.JsonUpdateRequestHandler">
    <lst name="defaults">
      <str name="stream.contentType">application/json</str>
    </lst>
  </requestHandler>

  <requestHandler name="/update/csv" class="solr.CSVRequestHandler">
    <lst name="defaults">
      <str name="stream.contentType">application/csv</str>
    </lst>
  </requestHandler>

  <requestHandler name="/update/extract"
      startup="lazy"
      class="solr.extraction.ExtractingRequestHandler">
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>

      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>

  <requestHandler name="/analysis/field"
      startup="lazy"
      class="solr.FieldAnalysisRequestHandler" />

  <requestHandler name="/analysis/document"
      class="solr.DocumentAnalysisRequestHandler"
      startup="lazy" />

  <requestHandler name="/admin/"
      class="solr.admin.AdminHandlers" />

  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">solrpingquery</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
  </requestHandler>

  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler">
    <lst name="defaults">
      <str

Re: unable to load core after cluster restart

2013-11-06 Thread kaustubh147
Hi,

I tried the simple lock type too. It throws a similar error:


Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out:
SimpleFSLock@/mnt/emc/app_name/data-prod-refresh/SolrCloud/SolrHome1/solr/collection1_shard1_replica2/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:673)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:267)
at
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1440)


I am trying to find out whether anything is wrong with NFS or our NFS
implementation, but a similar setup works fine in QA1.

I even tried moving my SolrHome from the mounted drive to the server's local
drive, but it yields the same error.

Thanks,
Kaustubh


Re: unable to load core after cluster restart

2013-11-02 Thread kaustubh147
Hi Shawn,

One thing I forgot to mention here: the same setup (with no bootstrap) is
working fine in our QA1 environment. I did not have the bootstrap option from
the start; I added it thinking it would solve the problem.

Nonetheless, I followed Shawn's instructions wherever they differed from my
old approach:
1. I moved my zkHost from the JVM options to solr.xml and added a chroot to it.
2. I removed the bootstrap option.
3. I created collections with the suggested URL template (I had tried this
earlier too).

None of it worked for me; I am seeing the same errors. I am adding some more
logs from before and after the point where the error occurs.


-

INFO  - 2013-11-02 17:40:40.427;
org.apache.solr.update.DefaultSolrCoreState; closing IndexWriter with
IndexWriterCloser
INFO  - 2013-11-02 17:40:40.428; org.apache.solr.core.SolrCore; [xyz]
Closing main searcher on request.
INFO  - 2013-11-02 17:40:40.431;
org.apache.solr.core.CachingDirectoryFactory; Closing
NRTCachingDirectoryFactory - 1 directories currently being tracked
INFO  - 2013-11-02 17:40:40.432;
org.apache.solr.core.CachingDirectoryFactory; looking to close
/mnt/emc/App_name/data-UAT-refresh/SolrCloud/SolrHome2/solr/xyz/data
[CachedDirrefCount=0;path=/mnt/emc/App_name/data-UAT-refresh/SolrCloud/SolrHome2/solr/xyz/data;done=false]
INFO  - 2013-11-02 17:40:40.432;
org.apache.solr.core.CachingDirectoryFactory; Closing directory:
/mnt/emc/App_name/data-UAT-refresh/SolrCloud/SolrHome2/solr/xyz/data
ERROR - 2013-11-02 17:40:40.433; org.apache.solr.core.CoreContainer; Unable
to create core: xyz
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.init(SolrCore.java:834)
at org.apache.solr.core.SolrCore.init(SolrCore.java:625)
at org.apache.solr.core.ZkContainer.createFromZk(ZkContainer.java:256)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:555)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1477)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1589)
at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
... 13 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out:
NativeFSLock@/mnt/emc/App_name/data-UAT-refresh/SolrCloud/SolrHome2/solr/xyz/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:695)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:267)
at
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1440)
... 15 more
ERROR - 2013-11-02 17:40:40.443; org.apache.solr.common.SolrException;
null:org.apache.solr.common.SolrException: Unable to create core: xyz
at
org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:934)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:566)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.init(SolrCore.java:834)
   

unable to load core after cluster restart

2013-10-31 Thread kaustubh147
Hi, 

Glassfish 3.1.2.2 
Solr 4.5 
Zookeeper 3.4.5 

We have set up a SolrCloud with 4 Solr nodes and 3 zookeeper instances. 

I start the cluster for the first time with bootstrap_conf=true. All the
nodes start properly. I am creating cores (with the same name) on all 4
instances, and I can add multiple cores on each of the instances; logically
I have 5 collections.

Now I am creating indexes, and Solr automatically creates 4 copies of each
index, one per instance, in the appropriate SolrHome directory. It all works
properly until I restart the Solr cluster.

As soon as I restart the cluster, it throws this error (see below) and none
of the collections works properly:


ERROR - 2013-10-31 19:23:24.411; org.apache.solr.core.CoreContainer; Unable
to create core: xyz
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.init(SolrCore.java:834)
at org.apache.solr.core.SolrCore.init(SolrCore.java:625)
at org.apache.solr.core.ZkContainer.createFromZk(ZkContainer.java:256)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:241)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1477)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1589)
at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
... 13 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out:
NativeFSLock@/mnt/emc/app_name/data-refresh/SolrCloud/SolrHome1/solr/xyz/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:673)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:267)
at
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1440)
... 15 more
ERROR - 2013-10-31 19:23:24.420; org.apache.solr.common.SolrException;
null:org.apache.solr.common.SolrException: Unable to create core: xyz
at
org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:936)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:568)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:241)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.init(SolrCore.java:834)
at org.apache.solr.core.SolrCore.init(SolrCore.java:625)
at org.apache.solr.core.ZkContainer.createFromZk(ZkContainer.java:256)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
... 10 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1477)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1589)
at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
... 13 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out:
NativeFSLock@/mnt/emc/app_name/data-refresh/SolrCloud/SolrHome1/solr/xyz/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
  

Re: Problem with glassfish and zookeeper 3.4.5

2013-10-31 Thread kaustubh147
Thanks Shawn,

I found a bug in my code: it was creating too many CloudSolrServer objects.
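
For reference, the fix boils down to holding one shared CloudSolrServer for
the whole application (a minimal sketch, not our actual code; the ZooKeeper
hosts and collection name are placeholders):

import java.net.MalformedURLException;

import org.apache.solr.client.solrj.impl.CloudSolrServer;

public final class SolrClientHolder {
    // CloudSolrServer is thread-safe; each instance opens its own ZooKeeper
    // connection, so per-request instances pile up against ZooKeeper's
    // connection limit (200 in our setup). Hold exactly one per application.
    private static final CloudSolrServer SERVER;

    static {
        try {
            SERVER = create();
        } catch (MalformedURLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private static CloudSolrServer create() throws MalformedURLException {
        CloudSolrServer s = new CloudSolrServer("zk1:p1,zk2:p1,zk3:p1"); // placeholder hosts
        s.setDefaultCollection("collection"); // placeholder collection name
        return s;
    }

    private SolrClientHolder() {}

    public static CloudSolrServer get() {
        return SERVER;
    }
}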

Thanks
Kaustubh


Problem with glassfish and zookeeper 3.4.5

2013-10-24 Thread kaustubh147
Hi,

Glassfish 3.1.2.2
Solr 4.5
Zookeeper 3.4.5

We have set up a SolrCloud with 4 Solr nodes and 3 ZooKeeper instances. It
seems to be working fine from the Solr admin page, but I am having trouble
connecting to it from a web application using SolrJ 4.5.

I am creating my CloudSolrServer as suggested on the wiki page:

LBHttpSolrServer lbHttpSolrServer = new LBHttpSolrServer(
    SOLR_INSTANCE01,
    SOLR_INSTANCE02,
    SOLR_INSTANCE03,
    SOLR_INSTANCE04);
solrServer = new CloudSolrServer("zk1:p1,zk2:p1,zk3:p1", lbHttpSolrServer);
solrServer.setDefaultCollection(collection);


It seems to work fine for a while, even though I am getting a WARNING like
the one below:
-
SASL configuration failed: javax.security.auth.login.LoginException: No JAAS
configuration section named 'Client' was found in specified JAAS
configuration file: 'XYZ_path/SolrCloud_04/config/login.conf'. Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it.
--

The application is deployed on a single-node cluster on Glassfish.

As soon as my application has made some queries to the Solr server, it
starts throwing errors in the solrServer.runQuery() method. The reason for
the error is not clear.

The application logs show the following error trace many times:

-
[#|2013-10-24T14:07:53.750-0700|WARNING|glassfish3.1.2|org.apache.zookeeper.ClientCnxn|_ThreadID=1434;_ThreadName=Thread-2;|SASL
configuration failed: javax.security.auth.login.LoginException: No JAAS
configuration section named 'Client' was found in specified JAAS
configuration file: 'XYZ_PATH/config/login.conf'. Will continue connection
to Zookeeper server without SASL authentication, if Zookeeper server allows
it.|#]

[#|2013-10-24T14:07:53.750-0700|INFO|glassfish3.1.2|org.apache.zookeeper.ClientCnxn|_ThreadID=1434;_ThreadName=Thread-2;|Opening
socket connection to server server_name/IP3:2181|#]

[#|2013-10-24T14:07:53.750-0700|INFO|glassfish3.1.2|org.apache.solr.common.cloud.ConnectionManager|_ThreadID=1435;_ThreadName=Thread-2;|Watcher
org.apache.solr.common.cloud.ConnectionManager@187eaada
name:ZooKeeperConnection Watcher:IP1:2181,IP2:2181,IP3:2181 got event
WatchedEvent state:AuthFailed type:None path:null path:null type:None|#]

[#|2013-10-24T14:07:53.750-0700|INFO|glassfish3.1.2|org.apache.solr.common.cloud.ConnectionManager|_ThreadID=1435;_ThreadName=Thread-2;|Client-ZooKeeper
status change trigger but we are already closed|#]

[#|2013-10-24T14:07:53.751-0700|INFO|glassfish3.1.2|org.apache.zookeeper.ClientCnxn|_ThreadID=1434;_ThreadName=Thread-2;|Socket
connection established to server_name/IP3:2181, initiating session|#]

[#|2013-10-24T14:07:53.751-0700|INFO|glassfish3.1.2|org.apache.solr.common.cloud.ConnectionManager|_ThreadID=1420;_ThreadName=Thread-2;|Watcher
org.apache.solr.common.cloud.ConnectionManager@4ba50169
name:ZooKeeperConnection Watcher:IP1:2181,IP2:2181,IP3:2181 got event
WatchedEvent state:Disconnected type:None path:null path:null type:None|#]

[#|2013-10-24T14:07:53.751-0700|WARNING|glassfish3.1.2|org.apache.zookeeper.ClientCnxn|_ThreadID=1434;_ThreadName=Thread-2;|Session
0x0 for server server_name/IP3:2181, unexpected error, closing socket
connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
at sun.nio.ch.IOUtil.read(IOUtil.java:166)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:245)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:355)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
|#]


--

Before this happens, the ZooKeeper logs on all 3 instances start showing the
following warning:

2013-10-24 14:05:55,200 [myid:3] - WARN 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@193] - Too
many connections from /IP_APPLICATION_SEVER - max is 200

This means my application is making too many connections to ZooKeeper,
exceeding the limit, which is set to 200.


Is there a way to control the number of connections my application makes to
ZooKeeper? The only component connecting to ZooKeeper in my application is
the CloudSolrServer object.

As per my investigation, the SASL warning is related to an existing bug in
ZooKeeper 3.4.5 that is being fixed for ZooKeeper 3.5; it should not be the
cause of this issue.

I need help and guidance.

Thanks,
Kaustubh

Solr on glassfish with multiple nodes - problem in data import

2013-08-09 Thread kaustubh147
Hi,

We have Solr installed on a Glassfish cluster with 4 nodes, and a single
solr.data directory shared among all 4 nodes. When I trigger a full data
import on one of the cores, the request, being HTTP, goes to just one of the
nodes in the cluster.

After the data import completed, I queried Solr and realized that the count
was only updated on the node where the data import ran.

To solve this, I am thinking of reloading each of the Solr cores using their
respective IP addresses, roughly as sketched below.
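
(An untested SolrJ 4.x sketch of that idea; the node URLs and core name are
placeholders. CoreAdminRequest.reloadCore sends a core-admin RELOAD to each
node so every instance reopens its searcher on the shared index.)

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class ReloadCoreOnAllNodes {
    public static void main(String[] args) throws Exception {
        // Placeholder node URLs; point each at the container root, not a core.
        String[] nodes = {
            "http://node1:8080/apache-solr-4.0.0",
            "http://node2:8080/apache-solr-4.0.0",
            "http://node3:8080/apache-solr-4.0.0",
            "http://node4:8080/apache-solr-4.0.0"
        };
        for (String url : nodes) {
            SolrServer server = new HttpSolrServer(url);
            CoreAdminRequest.reloadCore("core1", server); // placeholder core name
            server.shutdown();
        }
    }
}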

Is there any other way to solve this? Am I missing some configuration?

Any other suggestions on the setup?

Thanks for your help.
-Kaustubh


Why Solr slows down when accessed thru load balancer

2013-07-25 Thread kaustubh147
Hi,
 
When I connect my application to Solr through a load balancer
(https://domain name/apache-solr-4.0.0), it is significantly slower; if I
connect to Solr directly (https://11.11.1.11:8080/apache-solr-4.0.0) on the
application server, it performs better.

Ideally, a load balancer should improve throughput, or at least not slow
individual requests down.

In our setup, one load balancer forwards requests to two Apache web server
instances, which in turn forward them to 4 Glassfish application server
instances.

Are we doing something wrong, or is this a known problem with the
Solr-Glassfish combination?
Please help.

Thanks