Re: Problem managing Solr configsets on Zookeeper
Thanks Erick, Shawn, that was exactly it!

On 22/02/2017, 16:29, "Erick Erickson" wrote:

This should do it:

bin/solr zk upconfig -z 172.28.128.9:2181/solr -n tolkien -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config

Or use the 'bin/solr cp -r...' command and specify the destination as zk:/solr/tolkien or something. upconfig/downconfig is just a form of cp designed for configsets.

Erick

On Wed, Feb 22, 2017 at 8:27 AM, Chris Rogers wrote:
> Hi Shawn,
>
> Thanks. I am indeed using a chroot.
>
> In my solr.in.sh file I have the following:
>
> ZK_HOST="172.28.128.9/solr"
>
> I think I understand you’re saying that I need to specify this chroot dir in the upconfig command? Where should this be specified? Something like:
>
> bin/solr zk upconfig -z 172.28.128.9:2181 -n solr/configs/tolkien -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config
>
> Thank you for your help with this.
>
> Best,
> Chris
>
> On 22/02/2017, 16:15, "Shawn Heisey" wrote:
>
> On 2/22/2017 8:25 AM, Chris Rogers wrote:
> > … as uploaded with upconfig. However, in the /solr directory we have a second /configs folder with the old schema.xml within it. I presume this is the collection config being used, as it is named simply ‘tolkien’ as defined in the create collection command.
>
> It sounds like Solr is being started with a chroot on the zkHost string. If that's true, it means that your upconfig command is using a different zkHost value, and that's the problem.
>
> The zkHost string you use should be the same everywhere. Here's an example of a zkHost specifying a redundant ensemble of three servers, and confining itself to a "/solr" chroot:
>
> server1:2181,server2:2181,server3:2181/solr
>
> The javadoc for CloudSolrClient contains some details on how zkHost should be constructed:
>
> http://lucene.apache.org/solr/6_3_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html#CloudSolrClient-java.lang.String-
>
> I personally recommend always using a chroot, but if the zookeeper ensemble is dedicated to a single SolrCloud cluster and no other software is using it, that wouldn't be necessary.
>
> Thanks,
> Shawn
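[Editor's sketch] The fix settled on above — using the same zkHost value, chroot included, for every command — can be summarized as below. The host, chroot, and paths are the ones from this thread; the `bin/solr zk ls` subcommand is assumed to be available in this Solr 6.x release.

```shell
# Always include the /solr chroot in -z, exactly as in ZK_HOST in solr.in.sh:
bin/solr zk upconfig -z 172.28.128.9:2181/solr -n tolkien \
    -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config

# Verify the configset landed where SolrCloud will actually look
# (i.e. under /solr/configs/tolkien from ZooKeeper's point of view):
bin/solr zk ls -r /configs -z 172.28.128.9:2181/solr
```

Omitting the chroot from `-z` is what put the config under a top-level `/configs` znode that the cluster never reads.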
Re: Problem managing Solr configsets on Zookeeper
Hi Shawn,

Thanks. I am indeed using a chroot.

In my solr.in.sh file I have the following:

ZK_HOST="172.28.128.9/solr"

I think I understand you’re saying that I need to specify this chroot dir in the upconfig command? Where should this be specified? Something like:

bin/solr zk upconfig -z 172.28.128.9:2181 -n solr/configs/tolkien -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config

Thank you for your help with this.

Best,
Chris

On 22/02/2017, 16:15, "Shawn Heisey" wrote:

On 2/22/2017 8:25 AM, Chris Rogers wrote:
> … as uploaded with upconfig. However, in the /solr directory we have a second /configs folder with the old schema.xml within it. I presume this is the collection config being used, as it is named simply ‘tolkien’ as defined in the create collection command.

It sounds like Solr is being started with a chroot on the zkHost string. If that's true, it means that your upconfig command is using a different zkHost value, and that's the problem.

The zkHost string you use should be the same everywhere. Here's an example of a zkHost specifying a redundant ensemble of three servers, and confining itself to a "/solr" chroot:

server1:2181,server2:2181,server3:2181/solr

The javadoc for CloudSolrClient contains some details on how zkHost should be constructed:

http://lucene.apache.org/solr/6_3_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html#CloudSolrClient-java.lang.String-

I personally recommend always using a chroot, but if the zookeeper ensemble is dedicated to a single SolrCloud cluster and no other software is using it, that wouldn't be necessary.

Thanks,
Shawn
Re: Problem managing Solr configsets on Zookeeper
Hi Erick,

Thanks for your reply.

1> Looking in the Solr UI I have only a single set of config files.

2> No, browser is not caching the page. Tried in anonymous browser and result is the same.

3> No, I don’t have a managed Schema file or managed schema factory definition. I am using

Further information from my investigation into the depths of Zookeeper. Using the Zookeeper cli, I have found the following znode structure:

[configs, …, solr]

Within /configs there is the most recent and up-to-date config folder:

/configs/tolkien_config/schema.xml etc

… as uploaded with upconfig. However, in the /solr directory we have a second /configs folder with the old schema.xml within it. I presume this is the collection config being used, as it is named simply ‘tolkien’ as defined in the create collection command.

/solr/configs/tolkien/schema.xml etc

It seems that upconfig has pushed the config to zookeeper, but the config reload isn’t associating this new config. Should the upconfig be pushing the new config directly into that /solr/configs znode?

Best,
Chris

On 22/02/2017, 15:06, "Erick Erickson" wrote:

Chris:

I'd check a couple of things:

1> did the files get copied where you expect? Do you have only one set of files in the zknode configs node (admin UI>>cloud>>tree>>expand the appropriate branches). This seems unlikely if you did the downconfig, but stranger things have happened.

2> Is there any chance that your _browser_ has cached the page for the schema file and you're seeing that?

3> Chris' comment is well taken. Do you have a managed_schema file as well as a schema.xml file in your configset and are they different? You needed to change the schema factory definition in solrconfig.xml to _really_ use the classic schema definition. You should have something like this in your solrconfig.xml file:

Best,
Erick

On Wed, Feb 22, 2017 at 4:54 AM, Chris Rogers wrote:
> Hi Gus,
>
> Thanks for the reply, but unfortunately this is not the problem. I am using the basic schema factory, with a directly edited schema file.
>
> Best,
> Chris
>
> On 22/02/2017, 12:00, "Gus Heck" wrote:
>
> Hi Chris,
>
> Are you perhaps using (by default) ManagedIndexSchemaFactory?
> https://cwiki.apache.org/confluence/display/solr/Schema+Factory+Definition+in+SolrConfig
>
> If so on first boot the schema.xml file is copied and then subsequently ignored in favor of the managed copy. If you do not wish to use the schema REST API and prefer editing a file, you can switch back to using ClassicIndexSchemaFactory (see link, bottom of page)
>
> HTH,
> Gus
>
> On Wed, Feb 22, 2017 at 6:16 AM, Chris Rogers <chris.rog...@bodleian.ox.ac.uk> wrote:
>
> > Hi all,
> >
> > I’ve got a Zookeeper (3.4.8) / Solrcloud (6.4.0) setup with two nodes.
> >
> > I’ve successfully uploaded a collection through one of the Solr nodes:
> >
> > ~~~
> > bin/solr create -c tolkien -d tolkien_config -s 2 -rf 2
> > ~~~
> >
> > I’ve then changed the schema.xml in the config set for this collection (in the /server/solr/configsets/ folder), and used the upconfig zk command to upload it:
> >
> > ~~~
> > bin/solr zk upconfig -z 172.28.128.9:2181 -n tolkien_config -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config
> > ~~~
> >
> > I’ve then used the collections API to reload the collections. I’ve run this command for both Solr nodes:
> >
> > ~~~
> > http://localhost:8983/solr/admin/collections?action=RELOAD&name=tolkien
> > ~~~
> >
> > All these commands appear to execute successfully. I can confirm that the config is successfully uploaded to Zookeeper, as the downconfig command returns the correct files.
> >
> > HOWEVER, the content of the Schema UI in the Solr admin does not change. Fields I had removed were still there, and fields added were not.
> >
> > I then decided to delete the collection entirely, and recreate a new collection with the new schema, but the schema for this new collection remains the same as the old collection!
> >
Re: Problem managing Solr configsets on Zookeeper
Hi Gus,

Thanks for the reply, but unfortunately this is not the problem. I am using the basic schema factory, with a directly edited schema file.

Best,
Chris

On 22/02/2017, 12:00, "Gus Heck" wrote:

Hi Chris,

Are you perhaps using (by default) ManagedIndexSchemaFactory?
https://cwiki.apache.org/confluence/display/solr/Schema+Factory+Definition+in+SolrConfig

If so on first boot the schema.xml file is copied and then subsequently ignored in favor of the managed copy. If you do not wish to use the schema REST API and prefer editing a file, you can switch back to using ClassicIndexSchemaFactory (see link, bottom of page)

HTH,
Gus

On Wed, Feb 22, 2017 at 6:16 AM, Chris Rogers <chris.rog...@bodleian.ox.ac.uk> wrote:

> Hi all,
>
> I’ve got a Zookeeper (3.4.8) / Solrcloud (6.4.0) setup with two nodes.
>
> I’ve successfully uploaded a collection through one of the Solr nodes:
>
> ~~~
> bin/solr create -c tolkien -d tolkien_config -s 2 -rf 2
> ~~~
>
> I’ve then changed the schema.xml in the config set for this collection (in the /server/solr/configsets/ folder), and used the upconfig zk command to upload it:
>
> ~~~
> bin/solr zk upconfig -z 172.28.128.9:2181 -n tolkien_config -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config
> ~~~
>
> I’ve then used the collections API to reload the collections. I’ve run this command for both Solr nodes:
>
> ~~~
> http://localhost:8983/solr/admin/collections?action=RELOAD&name=tolkien
> ~~~
>
> All these commands appear to execute successfully. I can confirm that the config is successfully uploaded to Zookeeper, as the downconfig command returns the correct files.
>
> HOWEVER, the content of the Schema UI in the Solr admin does not change. Fields I had removed were still there, and fields added were not.
>
> I then decided to delete the collection entirely, and recreate a new collection with the new schema, but the schema for this new collection remains the same as the old collection!
>
> It’s as if the old config has been cached somewhere or something…
>
> Am I missing a stage or command here?
>
> Many thanks,
> Chris
>
> --
> Chris Rogers
> Digital Projects Manager
> Bodleian Digital Library Systems and Services
> chris.rog...@bodleian.ox.ac.uk

--
http://www.the111shift.com
Problem managing Solr configsets on Zookeeper
Hi all,

I’ve got a Zookeeper (3.4.8) / Solrcloud (6.4.0) setup with two nodes.

I’ve successfully uploaded a collection through one of the Solr nodes:

~~~
bin/solr create -c tolkien -d tolkien_config -s 2 -rf 2
~~~

I’ve then changed the schema.xml in the config set for this collection (in the /server/solr/configsets/ folder), and used the upconfig zk command to upload it:

~~~
bin/solr zk upconfig -z 172.28.128.9:2181 -n tolkien_config -d /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config
~~~

I’ve then used the collections API to reload the collections. I’ve run this command for both Solr nodes:

~~~
http://localhost:8983/solr/admin/collections?action=RELOAD&name=tolkien
~~~

All these commands appear to execute successfully. I can confirm that the config is successfully uploaded to Zookeeper, as the downconfig command returns the correct files.

HOWEVER, the content of the Schema UI in the Solr admin does not change. Fields I had removed were still there, and fields added were not.

I then decided to delete the collection entirely, and recreate a new collection with the new schema, but the schema for this new collection remains the same as the old collection!

It’s as if the old config has been cached somewhere or something…

Am I missing a stage or command here?

Many thanks,
Chris

--
Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk
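[Editor's sketch] A round-trip check helps here: download what ZooKeeper actually holds and diff it against the local configset. This is a hedged sketch using the names and paths from the message above and the chroot-qualified address the thread eventually settles on; the `/tmp` destination is arbitrary, and the exact directory layout produced by downconfig may differ slightly by Solr version.

```shell
# Pull the configset back out of ZooKeeper (downconfig writes a conf/ dir
# under the destination) and compare it with the local copy:
bin/solr zk downconfig -z 172.28.128.9:2181/solr -n tolkien_config -d /tmp/zkcheck
diff -r /tmp/zkcheck/conf \
    /home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config/conf
```

If the diff is clean but the Schema UI still shows the old fields, the collection is reading its config from a different znode path, which is exactly the chroot mismatch diagnosed later in this thread.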
Re: no dataimport-handler defined!
Hi Alex,

Just tested the DIH example in 6.4 (bin/solr -e dih)

Getting the same “No dataimport-handler defined!” for every one of the cores installed as part of the example.

Cheers,
Chris

On 24/01/2017, 15:07, "Alexandre Rafalovitch" wrote:

Strange. If you run a pre-built DIH example, do any of the cores work? (not the RSS one, that is broken anyway).

Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced

On 24 January 2017 at 08:32, Chris Rogers wrote:
> Hi Alex,
>
> I’m editing the solrconfig.xml file at /solr/server/solr/tei_config (ie the one generated from the configset when the node was created).
>
> I’m running standalone, not cloud.
>
> I’m restarting Solr after every change. Do I need to reload the core instead of restarting?
>
> I’ve also tried replacing the relative path to the .jar with an absolute path to the dist directory. Still didn’t work.
>
> Thanks,
> Chris
>
> On 24/01/2017, 13:20, "Alexandre Rafalovitch" wrote:
>
> Which solrconfig.xml are you editing and what kind of Solr install are you running (cloud?). And did you reload the core.
>
> I suspect you are not editing the file that is actually in use. For example, if you are running a cloud setup, the solrconfig.xml on the filesystem is disconnected from the config actually in use that is stored in ZooKeeper. You would need to reupload it for change to take effect.
>
> You also may need to reload the core for changes to take effect.
>
> Regards,
> Alex.
>
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
> On 24 January 2017 at 07:43, Chris Rogers wrote:
> > Hi all,
> >
> > Having frustrating issues with getting SOLR 6.4.0 to recognize the existence of my DIH config. I’m using Oracle Java8 jdk on Ubuntu 14.04.
> >
> > The DIH .jar file appears to be loading correctly. There are no errors in the SOLR logs. It just says “Sorry, no dataimport-handler defined” in the SOLR admin UI.
> >
> > My config files are listed below. Can anyone spot any mistakes here?
> >
> > Many thanks,
> > Chris
> >
> > # solrconfig.xml ##
> >
> > …
> >
> > DIH-data-config.xml
> >
> > # DIH-data-config.xml (in the same dir as solrconfig.xml) ##
> >
> > fileName=".*xml"
> > newerThan="'NOW-5YEARS'"
> > recursive="true"
> > rootEntity="false"
> > dataSource="null"
> > baseDir="/home/bodl-tei-svc/sites/bodl-tei-svc/var/data/tolkein_tei">
> >
> > forEach="/TEI" url="${f.fileAbsolutePath}" transformer="RegexTransformer"
> >
> > --
> > Chris Rogers
> > Digital Projects Manager
> > Bodleian Digital Library Systems and Services
> > chris.rog...@bodleian.ox.ac.uk
Re: no dataimport-handler defined!
A quick update. I rolled back to solr 6.2, and the data import handler is recognized there. So there has either been a change in the config required between 6.2 and 6.4, or there’s a bug in 6.4

Any thoughts?

On 24/01/2017, 13:32, "Chris Rogers" wrote:

Hi Alex,

I’m editing the solrconfig.xml file at /solr/server/solr/tei_config (ie the one generated from the configset when the node was created).

I’m running standalone, not cloud.

I’m restarting Solr after every change. Do I need to reload the core instead of restarting?

I’ve also tried replacing the relative path to the .jar with an absolute path to the dist directory. Still didn’t work.

Thanks,
Chris

On 24/01/2017, 13:20, "Alexandre Rafalovitch" wrote:

Which solrconfig.xml are you editing and what kind of Solr install are you running (cloud?). And did you reload the core.

I suspect you are not editing the file that is actually in use. For example, if you are running a cloud setup, the solrconfig.xml on the filesystem is disconnected from the config actually in use that is stored in ZooKeeper. You would need to reupload it for change to take effect.

You also may need to reload the core for changes to take effect.

Regards,
Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced

On 24 January 2017 at 07:43, Chris Rogers wrote:
> Hi all,
>
> Having frustrating issues with getting SOLR 6.4.0 to recognize the existence of my DIH config. I’m using Oracle Java8 jdk on Ubuntu 14.04.
>
> The DIH .jar file appears to be loading correctly. There are no errors in the SOLR logs. It just says “Sorry, no dataimport-handler defined” in the SOLR admin UI.
>
> My config files are listed below. Can anyone spot any mistakes here?
>
> Many thanks,
> Chris
>
> # solrconfig.xml ##
>
> …
>
> DIH-data-config.xml
>
> # DIH-data-config.xml (in the same dir as solrconfig.xml) ##
>
> fileName=".*xml"
> newerThan="'NOW-5YEARS'"
> recursive="true"
> rootEntity="false"
> dataSource="null"
> baseDir="/home/bodl-tei-svc/sites/bodl-tei-svc/var/data/tolkein_tei">
>
> forEach="/TEI" url="${f.fileAbsolutePath}" transformer="RegexTransformer"
>
> --
> Chris Rogers
> Digital Projects Manager
> Bodleian Digital Library Systems and Services
> chris.rog...@bodleian.ox.ac.uk
Re: no dataimport-handler defined!
Hi Alex,

I’m editing the solrconfig.xml file at /solr/server/solr/tei_config (ie the one generated from the configset when the node was created).

I’m running standalone, not cloud.

I’m restarting Solr after every change. Do I need to reload the core instead of restarting?

I’ve also tried replacing the relative path to the .jar with an absolute path to the dist directory. Still didn’t work.

Thanks,
Chris

On 24/01/2017, 13:20, "Alexandre Rafalovitch" wrote:

Which solrconfig.xml are you editing and what kind of Solr install are you running (cloud?). And did you reload the core.

I suspect you are not editing the file that is actually in use. For example, if you are running a cloud setup, the solrconfig.xml on the filesystem is disconnected from the config actually in use that is stored in ZooKeeper. You would need to reupload it for change to take effect.

You also may need to reload the core for changes to take effect.

Regards,
Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced

On 24 January 2017 at 07:43, Chris Rogers wrote:
> Hi all,
>
> Having frustrating issues with getting SOLR 6.4.0 to recognize the existence of my DIH config. I’m using Oracle Java8 jdk on Ubuntu 14.04.
>
> The DIH .jar file appears to be loading correctly. There are no errors in the SOLR logs. It just says “Sorry, no dataimport-handler defined” in the SOLR admin UI.
>
> My config files are listed below. Can anyone spot any mistakes here?
>
> Many thanks,
> Chris
>
> # solrconfig.xml ##
>
> …
>
> DIH-data-config.xml
>
> # DIH-data-config.xml (in the same dir as solrconfig.xml) ##
>
> fileName=".*xml"
> newerThan="'NOW-5YEARS'"
> recursive="true"
> rootEntity="false"
> dataSource="null"
> baseDir="/home/bodl-tei-svc/sites/bodl-tei-svc/var/data/tolkein_tei">
>
> forEach="/TEI" url="${f.fileAbsolutePath}" transformer="RegexTransformer"
>
> --
> Chris Rogers
> Digital Projects Manager
> Bodleian Digital Library Systems and Services
> chris.rog...@bodleian.ox.ac.uk
no dataimport-handler defined!
Hi all,

Having frustrating issues with getting SOLR 6.4.0 to recognize the existence of my DIH config. I’m using Oracle Java8 jdk on Ubuntu 14.04.

The DIH .jar file appears to be loading correctly. There are no errors in the SOLR logs. It just says “Sorry, no dataimport-handler defined” in the SOLR admin UI.

My config files are listed below. Can anyone spot any mistakes here?

Many thanks,
Chris

# solrconfig.xml ##

…

DIH-data-config.xml

# DIH-data-config.xml (in the same dir as solrconfig.xml) ##

--
Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk
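[Editor's note] The list archive stripped the XML out of the message above, so the actual files are lost. For readers following along, here is a *hypothetical reconstruction* of what such a setup typically looks like, pieced together only from the attribute fragments that survive elsewhere in this thread (fileName, newerThan, baseDir, forEach, etc.). The `<lib>` path, handler name, entity names, and the XPathEntityProcessor choice are all assumptions, not the author's actual config.

```xml
<!-- solrconfig.xml: load the DIH jar and register the handler (sketch) -->
<lib dir="${solr.install.dir:../../../..}/dist/"
     regex="solr-dataimporthandler-.*\.jar" />

<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">DIH-data-config.xml</str>
  </lst>
</requestHandler>

<!-- DIH-data-config.xml: walk a directory of TEI XML files (sketch).
     The attribute values below are the ones quoted in this thread. -->
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <entity name="f" processor="FileListEntityProcessor"
            fileName=".*xml"
            newerThan="'NOW-5YEARS'"
            recursive="true"
            rootEntity="false"
            dataSource="null"
            baseDir="/home/bodl-tei-svc/sites/bodl-tei-svc/var/data/tolkein_tei">
      <entity name="tei" processor="XPathEntityProcessor"
              forEach="/TEI" url="${f.fileAbsolutePath}"
              transformer="RegexTransformer">
        <!-- <field column="..." xpath="..."/> mappings were stripped
             by the archive and cannot be reconstructed -->
      </entity>
    </entity>
  </document>
</dataConfig>
```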
Re: Using DIH FileListEntityProcessor with SolrCloud
Hi all,

Just bumping my question again, as it doesn’t seem to have been picked up by anyone. Any help would be much appreciated.

Chris

On 02/12/2016, 16:36, "Chris Rogers" wrote:

Hi all,

A question regarding using the DIH FileListEntityProcessor with SolrCloud (solr 6.3.0, zookeeper 3.4.8).

I get that the config in SolrCloud lives on the Zookeeper node (a different server from the solr nodes in my setup). With this in mind, where is the baseDir attribute in the FileListEntityProcessor config relative to?

I’m seeing the config in the Solr GUI, and I’ve tried setting it as an absolute path on my Zookeeper server, but this doesn’t seem to work… any ideas how this should be setup?

My DIH config is below:

This same script worked as expected on a single solr node (i.e. not in SolrCloud mode).

Thanks,
Chris

--
Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk
Using DIH FileListEntityProcessor with SolrCloud
Hi all,

A question regarding using the DIH FileListEntityProcessor with SolrCloud (solr 6.3.0, zookeeper 3.4.8).

I get that the config in SolrCloud lives on the Zookeeper node (a different server from the solr nodes in my setup). With this in mind, where is the baseDir attribute in the FileListEntityProcessor config relative to?

I’m seeing the config in the Solr GUI, and I’ve tried setting it as an absolute path on my Zookeeper server, but this doesn’t seem to work… any ideas how this should be setup?

My DIH config is below:

This same script worked as expected on a single solr node (i.e. not in SolrCloud mode).

Thanks,
Chris

--
Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk
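[Editor's note] The question above went unanswered on the list, so one point is worth stating: DIH executes inside the Solr JVM, so `baseDir` is resolved against the local filesystem of whichever Solr node handles the import request, never against the ZooKeeper host (ZooKeeper only stores the config text). A hedged sketch of triggering an import, with hypothetical host and collection names:

```shell
# The files under baseDir must exist on (or be mounted on) the Solr node
# that will run the import — not on the ZooKeeper server.
curl "http://solr-node-1:8983/solr/mycollection/dataimport?command=full-import"

# Check progress / document counts:
curl "http://solr-node-1:8983/solr/mycollection/dataimport?command=status"
```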
Re: SolrCloud: A previous ephemeral live node still exists
Thanks Alan,

This turned out to be the problem. I was trying to connect to Zookeeper with the command:

bin/solr start -c -z 172.28.128.3:2181

with all of the nodes. So I think all the hostnames were defaulting to localhost. Specifying the hostname and port with the -h option fixed the issue.

I think I must have got this right the first time when testing with Solr 6.1.0, so my assumption that this was an issue with 6.2.0 seems to be a red herring.

Thanks again,
Chris

On 31/08/2016, 09:43, "Alan Woodward" wrote:

It looks as though all four nodes are trying to register with ZK using the same hostname and port number - possibly they're all connecting as 'localhost'?

Alan Woodward
www.flax.co.uk

On 31 Aug 2016, at 09:34, Chris Rogers wrote:

> Just pinging this again as I sent it late last night. Would be great if someone could help with this. It's got me totally stumped...
>
> Chris Rogers
> Digital Projects Manager
> Bodleian Digital Library Systems and Services
> chris.rog...@bodleian.ox.ac.uk
>
> ____
> From: Chris Rogers [chris.rog...@bodleian.ox.ac.uk]
> Sent: 30 August 2016 21:34
> To: solr-user@lucene.apache.org
> Subject: SolrCloud: A previous ephemeral live node still exists
>
> Hi all,
>
> I'm trying to create a SolrCloud setup with Vagrant boxes using Solr 6.2.0 and Zookeeper 3.4.8
>
> I managed to get this to work perfectly with Solr 6.1.0, but I'm not able to start more than one node in cloud mode with Solr 6.2.0.
>
> I have four VMs connected on a private network of Vagrant boxes, one running Zookeeper, and three Solr nodes.
>
> The first Solr node connects to Zookeeper as expected. I can access the admin and see the Cloud info there.
>
> But when I try and connect with a second node, something goes wrong.
> I get the following error in the Solr log:
>
> 2016-08-30 20:12:12.292 INFO (main) [ ] o.e.j.u.log Logging initialized @490ms
> 2016-08-30 20:12:12.551 INFO (main) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
> 2016-08-30 20:12:12.574 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:///home/vagrant/solr-6.2.0/server/contexts/] at interval 0
> 2016-08-30 20:12:12.906 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
> 2016-08-30 20:12:12.923 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext@o.e.j.w.WebAppContext@5383967b{/solr,file:///home/vagrant/solr-6.2.0/server/solr-webapp/webapp/,STARTING}{/home/vagrant/solr-6.2.0/server/solr-webapp/webapp} has uncovered http methods for path: /
> 2016-08-30 20:12:12.936 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=1465085305@57536d79
> 2016-08-30 20:12:12.962 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
> 2016-08-30 20:12:12.962 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /home/vagrant/solr-6.2.0/server/solr
> 2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/home/vagrant/solr-6.2.0/server/solr'
> 2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
> 2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /home/vagrant/solr-6.2.0/server/solr
> 2016-08-30 20:12:12.987 INFO (main) [ ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
> 2016-08-30 20:12:13.017 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
> 2016-08-30 20:12:13.118 INFO (zkCallback-1-thread-1) [ ] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@25b8acce name:ZooKeeperConnection Watcher:172.28.128.3:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
> 2016-08-30 20:12:13.119 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
> 2016-08-30 20:12:13.119 INFO (main) [ ] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
> 2016-08-30 20:12:13.130 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
> 2016-08-30 20:12:13.133 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/vagrant/solr-6.2.0/server/solr/solr.xml
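[Editor's sketch] The fix described at the top of this message — giving each node its own hostname so their ephemeral `live_nodes` entries differ — looks roughly like this. The ZooKeeper address is the one from the thread; the per-node IPs and ports are assumptions for illustration (each Vagrant box would use its own private-network address):

```shell
# Run one of these on each Solr VM; -h sets the hostname the node
# registers under in ZooKeeper instead of the default 'localhost'.
bin/solr start -c -z 172.28.128.3:2181 -h 172.28.128.4 -p 8983   # node 1
bin/solr start -c -z 172.28.128.3:2181 -h 172.28.128.5 -p 8983   # node 2
bin/solr start -c -z 172.28.128.3:2181 -h 172.28.128.6 -p 8983   # node 3
```

With the default, every node registered as `localhost:8983_solr`, so the second node collided with the first node's ephemeral znode, producing the "previous ephemeral live node still exists" error.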
RE: SolrCloud: A previous ephemeral live node still exists
Just pinging this again as I sent it late last night. Would be great if someone could help with this. It's got me totally stumped...

Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk

____
From: Chris Rogers [chris.rog...@bodleian.ox.ac.uk]
Sent: 30 August 2016 21:34
To: solr-user@lucene.apache.org
Subject: SolrCloud: A previous ephemeral live node still exists

Hi all,

I'm trying to create a SolrCloud setup with Vagrant boxes using Solr 6.2.0 and Zookeeper 3.4.8

I managed to get this to work perfectly with Solr 6.1.0, but I'm not able to start more than one node in cloud mode with Solr 6.2.0.

I have four VMs connected on a private network of Vagrant boxes, one running Zookeeper, and three Solr nodes.

The first Solr node connects to Zookeeper as expected. I can access the admin and see the Cloud info there.

But when I try and connect with a second node, something goes wrong. I get the following error in the Solr log:

2016-08-30 20:12:12.292 INFO (main) [ ] o.e.j.u.log Logging initialized @490ms
2016-08-30 20:12:12.551 INFO (main) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
2016-08-30 20:12:12.574 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:///home/vagrant/solr-6.2.0/server/contexts/] at interval 0
2016-08-30 20:12:12.906 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
2016-08-30 20:12:12.923 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext@o.e.j.w.WebAppContext@5383967b{/solr,file:///home/vagrant/solr-6.2.0/server/solr-webapp/webapp/,STARTING}{/home/vagrant/solr-6.2.0/server/solr-webapp/webapp} has uncovered http methods for path: /
2016-08-30 20:12:12.936 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=1465085305@57536d79
2016-08-30 20:12:12.962 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
2016-08-30 20:12:12.962 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /home/vagrant/solr-6.2.0/server/solr
2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/home/vagrant/solr-6.2.0/server/solr'
2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
2016-08-30 20:12:12.963 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /home/vagrant/solr-6.2.0/server/solr
2016-08-30 20:12:12.987 INFO (main) [ ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
2016-08-30 20:12:13.017 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
2016-08-30 20:12:13.118 INFO (zkCallback-1-thread-1) [ ] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@25b8acce name:ZooKeeperConnection Watcher:172.28.128.3:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2016-08-30 20:12:13.119 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2016-08-30 20:12:13.119 INFO (main) [ ] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
2016-08-30 20:12:13.130 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
2016-08-30 20:12:13.133 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/vagrant/solr-6.2.0/server/solr/solr.xml
2016-08-30 20:12:13.233 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Config-defined core root directory: /home/vagrant/solr-6.2.0/server/solr
2016-08-30 20:12:13.264 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 182259421
2016-08-30 20:12:13.265 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer [instanceDir=/home/vagrant/solr-6.2.0/server/solr]
2016-08-30 20:12:13.266 WARN (main) [ ] o.a.s.c.CoreContainer Couldn't add files from /home/vagrant/solr-6.2.0/server/solr/lib to classpath: /home/vagrant/solr-6.2.0/server/solr/lib
2016-08-30 20:12:13.281 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,connectionsEvictorSleepDelay : 5000,maxConnectionIdleTime : 4,
2016-08-30 20:12:13.507 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=60&connTimeout=6&retry=true
2016-08-30 20:12:13.511 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
2016-08-30 20:12:13.512 INFO (main) [ ] o.a.s.l.LogWa
SolrCloud: A previous ephemeral live node still exists
2016-08-30 20:12:43.744 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
2016-08-30 20:12:43.762 INFO (main) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@5383967b{/solr,file:///home/vagrant/solr-6.2.0/server/solr-webapp/webapp/,AVAILABLE}{/home/vagrant/solr-6.2.0/server/solr-webapp/webapp}
2016-08-30 20:12:43.771 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector@305b7c14{HTTP/1.1,[http/1.1]}{0.0.0.0:8983}
2016-08-30 20:12:43.771 INFO (main) [ ] o.e.j.s.Server Started @31972ms

It does start in the end, but it's not working properly, I just get errors in the Solr admin.

The only reference I can find to the error:

"A previous ephemeral live node still exists. Solr cannot continue. Please ensure that no other Solr process using the same port is running already."

was this Jira ticket, but it suggests that the issue is resolved in 6.2.0:

https://issues.apache.org/jira/browse/SOLR-8777

Has anyone else experienced this issue? Any hints on resolving?

Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk
Re: SolrCloud: Collection creation failed due to timeout
Thanks Shawn, that was indeed the issue. Upgraded and everything is working as expected.

Much appreciated.

Chris

On 22/08/2016, 14:50, "Shawn Heisey" wrote:

On 8/22/2016 7:27 AM, Chris Rogers wrote:
> Solr version: 6.1.0 Zookeeper: 3.3.6

This is probably the root of the issue. Solr includes Zookeeper 3.4.6 for the client. The server you're running is significantly older than the client version.

The WARN messages I quoted previously indicate that the server dropped a packet type (number 14) that it didn't understand, which seems to indicate that the client code in Solr is trying to use functionality that it knows about, but is not provided by the 3.3.6 server version.

You should upgrade your zookeeper server, probably to the latest version, which is 3.4.8. Don't use 3.4.7 if you manage to find it -- it has an extremely serious bug, so it was removed from the mirror system.

Thanks,
Shawn
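[Editor's sketch] Before and after an upgrade like the one above, the running ZooKeeper server version can be confirmed with the `stat` four-letter command (enabled by default in these ZooKeeper releases). The host below is the one from this thread; `nc` is assumed to be installed:

```shell
# The first line of 'stat' output reports the server's version string,
# making a client/server mismatch like 3.4.x-client vs 3.3.6-server obvious.
echo stat | nc 172.28.128.3 2181 | head -n 1
```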
Re: SolrCloud: Collection creation failed due to timeout
rest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
        at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
        at org.apache.solr.common.cloud.SolrZkClient$11.execute(SolrZkClient.java:572)
        at org.apache.solr.common.cloud.SolrZkClient$11.execute(SolrZkClient.java:569)
        at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
        at org.apache.solr.common.cloud.SolrZkClient.multi(SolrZkClient.java:569)
        at org.apache.solr.cloud.ShardLeaderElectionContextBase$1.execute(ElectionContext.java:201)
        at org.apache.solr.common.util.RetryUtil.retryOnThrowable(RetryUtil.java:49)
        at org.apache.solr.common.util.RetryUtil.retryOnThrowable(RetryUtil.java:42)
        at org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:183)
        ... 86 more
2016-08-21 21:34:00.493 INFO  (qtp110456297-11) [c:mycollection s:shard1 r:core_node1 x:mycollection_shard1_replica1] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
2016-08-21 21:34:00.494 INFO  (qtp110456297-11) [c:mycollection s:shard1 r:core_node1 x:mycollection_shard1_replica1] o.a.s.c.ElectionContext Canceling election /collections/mycollection/leader_elect/shard1/election/96456851103481860-core_node1-n_14
2016-08-21 21:34:00.494 WARN  (main-SendThread(172.28.128.3:2181)) [   ] o.a.z.ClientCnxn Session 0x156aefeba2d0004 for server 172.28.128.3/172.28.128.3:2181, unexpected error, closing socket connection and attempting reconnect

Thanks again,
Chris

On 22/08/2016, 14:11, "Shawn Heisey" wrote:

On 8/22/2016 6:20 AM, Chris Rogers wrote:
> It's then that I start seeing lots of errors in the Solr logs, and lots of repetitive messages appearing in Zookeeper:
>
> These two Solr errors over and over:
>
> java.io.IOException: Xid out of order. Got Xid 1299 with err 0 expected Xid 1298 for a packet with details: clientPath:null serverPath:null finished:false header:: 1298,14 replyHeader:: 0,0,-4 request:: org.apache.zookeeper.MultiTransactionRecord@95acc4f3 response:: org.apache.zookeeper.MultiResponse@0

That appears to be one log message, but you said there were two. Also, this message is incomplete: it is missing the timestamp at the beginning and appears to have been cut off at the end too. I think the message probably had *many* more lines of output that weren't included.
> And this from Zookeeper:
>
> 2016-08-21 21:33:39,147 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@593] - Dropping packet at server of type 14
> 2016-08-21 21:33:39,154 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@634] - EndOfStreamException: Unable to read additional data from client sessionid 0x156aefeba2d0004, likely client has closed socket

That seems to be saying that Solr closed the connection to zookeeper. I have no idea what might be wrong, based just on what's been provided here. This section of logging seems to contain everything related to the specific connection from port 54548, and if that's true, then it does not appear to have been a timeout.

Is there anything in the solr.log file at or near the 21:33:39,154 timestamp (when zookeeper thought the connection was closed)?

What version of Solr? What version of zookeeper did you install on the other node?

Thanks,
Shawn
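One quick way to answer Shawn's "anything in solr.log near that timestamp" question is to grep with context around the disconnect time. A minimal sketch, demonstrated here on an inline sample built from log lines quoted in this thread; in practice point grep at the real solr.log (its path varies by install, e.g. `server/logs/solr.log` under the Solr home):

```shell
# -C 1 prints one line of context either side of each matching line, so the
# events immediately before and after the disconnect are visible.
# The here-doc stands in for solr.log; in practice run something like:
#   grep -C 1 '21:33:39' server/logs/solr.log
out=$(grep -C 1 '21:33:39' <<'EOF'
2016-08-21 21:33:39,145 - INFO - Established session 0x156aefeba2d0004 with negotiated timeout 15000
2016-08-21 21:33:39,154 - WARN - EndOfStreamException: Unable to read additional data from client sessionid 0x156aefeba2d0004
2016-08-21 21:34:00,494 - WARN - Session 0x156aefeba2d0004, unexpected error, closing socket connection and attempting reconnect
EOF
)
echo "$out"
```

Here the first two lines match the pattern directly and the third is pulled in as trailing context, which is exactly the kind of "what happened next" detail Shawn is asking for.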
SolrCloud: Collection creation failed due to timeout
Hi there,

I'm struggling with setting up a test implementation of SolrCloud using Vagrant. I've got two vagrant boxes on a private network:

* One Ubuntu machine running Zookeeper
* One Ubuntu machine running Solr

My zookeeper config is as follows:

tickTime=4000
dataDir=/home/vagrant/zookeeper-data
clientPort=2181
server.1=localhost:2888:3888

Zookeeper is running as expected. I'm starting it in foreground mode with the command below:

bin/zkServer.sh start-foreground

I'm then starting Solr in cloud mode:

bin/solr start -c -z 172.28.128.3:2181

At this point, all seems well. Nothing bad shows up in the Solr logs, and no alarming messages come through in the console for Zookeeper.

I then try to create a collection with the command below:

bin/solr create -c mycollection -d data_driven_schema_configs

It's then that I start seeing lots of errors in the Solr logs, and lots of repetitive messages appearing in Zookeeper.

These two Solr errors over and over:

java.io.IOException: Xid out of order.
Got Xid 1299 with err 0 expected Xid 1298 for a packet with details: clientPath:null serverPath:null finished:false header:: 1298,14 replyHeader:: 0,0,-4 request:: org.apache.zookeeper.MultiTransactionRecord@95acc4f3 response:: org.apache.zookeeper.MultiResponse@0

There was a problem trying to register as the leader: org.apache.solr.common.SolrException: Could not register as the leader because creating the ephemeral registration node in ZooKeeper failed

And this from Zookeeper:

[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /172.28.128.4:54548
2016-08-21 21:33:39,137 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x156aefeba2d0004 at /172.28.128.4:54548
2016-08-21 21:33:39,145 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x156aefeba2d0004 with negotiated timeout 15000 for client /172.28.128.4:54548
2016-08-21 21:33:39,147 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@593] - Dropping packet at server of type 14
2016-08-21 21:33:39,154 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@634] - EndOfStreamException: Unable to read additional data from client sessionid 0x156aefeba2d0004, likely client has closed socket
2016-08-21 21:33:39,155 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /172.28.128.4:54548 which had sessionid 0x156aefeba2d0004
2016-08-21 21:33:40,698 - INFO

Finally, the collection creation fails with the following message:

ERROR: Failed to create collection 'mycollection' due to: create the collection time out:180s

I can't find any useful references to these errors on Stack Overflow or in the docs. Can anyone help me diagnose the problem?

Many thanks,
Chris

Chris Rogers
Digital Projects Manager
Bodleian Digital Library Systems and Services
chris.rog...@bodleian.ox.ac.uk