You can migrate the ZooKeeper data manually, as follows (a full example session is sketched after step 5).

1. Connect to ZooKeeper:
        - zkCli.sh -server host:port
2. Check the old data:
        - get /collections/<collection name>/leader_initiated_recovery/<shard name>

================================================================================
[zk: localhost:3181(CONNECTED) 25] get /collections/collection1/leader_initiated_recovery/shard1
down
cZxid = 0xe4
ctime = Thu Nov 13 13:38:53 KST 2014
mZxid = 0xe4
mtime = Thu Nov 13 13:38:53 KST 2014
pZxid = 0xe4
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
================================================================================

As you can see, the node's data is just the single word "down", i.e. the old plain-text format that the upgraded node can no longer parse as JSON.

3. Delete the old data:
        - delete /collections/<collection name>/leader_initiated_recovery/<shard name>

4. Create the node again with the new data:
        - create /collections/<collection name>/leader_initiated_recovery/<shard name> {state:down}

5. Restart the server.

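Putting it together, here is roughly what the whole zkCli session looks like. This is only a sketch: collection1/shard1 are the names from the output above, so substitute your own collection and shard names; the prompt counters will differ, and the stat lines that "get" normally prints after the data are omitted.

================================================================================
$ zkCli.sh -server localhost:3181
[zk: localhost:3181(CONNECTED) 0] get /collections/collection1/leader_initiated_recovery/shard1
down
[zk: localhost:3181(CONNECTED) 1] delete /collections/collection1/leader_initiated_recovery/shard1
[zk: localhost:3181(CONNECTED) 2] create /collections/collection1/leader_initiated_recovery/shard1 {state:down}
Created /collections/collection1/leader_initiated_recovery/shard1
[zk: localhost:3181(CONNECTED) 3] get /collections/collection1/leader_initiated_recovery/shard1
{state:down}
================================================================================

The payload in the create command is the one from step 4 above; if your version insists on strict JSON you may need {"state":"down"} instead. Repeat the delete/create pair for every affected collection and shard, then restart the node as in step 5.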


On Thu, Nov 13, 2014 at 7:42 AM, Anshum Gupta <ans...@anshumgupta.net>
wrote:

> Considering the impact, I think we should put this out as an announcement
> on the 'news' section of the website warning people about this.
>
> On Wed, Nov 12, 2014 at 12:33 PM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
>
> > I opened https://issues.apache.org/jira/browse/SOLR-6732
> >
> > On Wed, Nov 12, 2014 at 12:29 PM, Shalin Shekhar Mangar <
> > shalinman...@gmail.com> wrote:
> >
> > > Hi Thomas,
> > >
> > > You're right, there's a back-compat break here. I'll open an issue.
> > >
> > > On Wed, Nov 12, 2014 at 9:37 AM, Thomas Lamy <t.l...@cytainment.de>
> > wrote:
> > >
> > >> Am 12.11.2014 um 15:29 schrieb Thomas Lamy:
> > >>
> > >>> Hi there!
> > >>>
> > >>> As we got bitten by https://issues.apache.org/jira/browse/SOLR-6530
> on
> > >>> a regular basis, we started upgrading our 7 node cloud from 4.10.1 to
> > >>> 4.10.2.
> > >>> The first node upgrade worked like a charm.
> > >>> After upgrading the second node, two cores no longer come up and we
> get
> > >>> the following error:
> > >>>
> > >>> ERROR - 2014-11-12 15:17:34.226;
> > org.apache.solr.cloud.RecoveryStrategy;
> > >>> Recovery failed - trying again... (16) core=cams_shard1_replica4
> > >>> ERROR - 2014-11-12 15:17:34.230;
> org.apache.solr.common.SolrException;
> > >>> Error while trying to recover. core=onlinelist_shard1_
> > >>> replica7rg.noggit.JSONParser$ParseException: JSON Parse Error:
> > >>> char=d,position=0 BEFORE='d' AFTER='own'
> > >>>     at org.noggit.JSONParser.err(JSONParser.java:223)
> > >>>     at org.noggit.JSONParser.next(JSONParser.java:622)
> > >>>     at org.noggit.JSONParser.nextEvent(JSONParser.java:663)
> > >>>     at org.noggit.ObjectBuilder.<init>(ObjectBuilder.java:44)
> > >>>     at org.noggit.ObjectBuilder.getVal(ObjectBuilder.java:37)
> > >>>     at org.apache.solr.common.cloud.ZkStateReader.fromJSON(
> > >>> ZkStateReader.java:129)
> > >>>     at
> > org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStat
> > >>> eObject(ZkController.java:1925)
> > >>>     at
> > org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStat
> > >>> e(ZkController.java:1890)
> > >>>     at org.apache.solr.cloud.ZkController.publish(
> > >>> ZkController.java:1071)
> > >>>     at org.apache.solr.cloud.ZkController.publish(
> > >>> ZkController.java:1041)
> > >>>     at org.apache.solr.cloud.ZkController.publish(
> > >>> ZkController.java:1037)
> > >>>     at org.apache.solr.cloud.RecoveryStrategy.doRecovery(
> > >>> RecoveryStrategy.java:355)
> > >>>     at org.apache.solr.cloud.RecoveryStrategy.run(
> > >>> RecoveryStrategy.java:235)
> > >>>
> > >>> Any hint on how to solve this? Google didn't reveal anything
> useful...
> > >>>
> > >>>
> > >>> Kind regards
> > >>> Thomas
> > >>>
> > >>>  Just switched to INFO loglevel:
> > >>
> > >> INFO  - 2014-11-12 15:30:31.563;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Publishing state of core onlinelist_shard1_replica7 as recovering,
> > leader
> > >> is http://solr-bc1-blade2:8080/solr/onlinelist_shard1_replica2/ and I
> > am
> > >> http://solr-bc1-blade3:8080/solr/onlinelist_shard1_replica7/
> > >> INFO  - 2014-11-12 15:30:31.563;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Publishing state of core cams_shard1_replica4 as recovering, leader is
> > >> http://solr-bc1-blade2:8080/solr/cams_shard1_replica2/ and I am
> > >> http://solr-bc1-blade3:8080/solr/cams_shard1_replica4/
> > >> INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController;
> > >> publishing core=onlinelist_shard1_replica7 state=recovering
> > >> collection=onlinelist
> > >> INFO  - 2014-11-12 15:30:31.563; org.apache.solr.cloud.ZkController;
> > >> publishing core=cams_shard1_replica4 state=recovering collection=cams
> > >> ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException;
> > >> Error while trying to recover. core=cams_shard1_replica4rg.
> > >> noggit.JSONParser$ParseException: JSON Parse Error: char=d,position=0
> > >> BEFORE='d' AFTER='own'
> > >> ERROR - 2014-11-12 15:30:31.564; org.apache.solr.common.SolrException;
> > >> Error while trying to recover. core=onlinelist_shard1_
> > >> replica7rg.noggit.JSONParser$ParseException: JSON Parse Error:
> > >> char=d,position=0 BEFORE='d' AFTER='own'
> > >> ERROR - 2014-11-12 15:30:31.564;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Recovery failed - trying again... (5) core=cams_shard1_replica4
> > >> ERROR - 2014-11-12 15:30:31.564;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Recovery failed - trying again... (5) core=onlinelist_shard1_replica7
> > >> INFO  - 2014-11-12 15:30:31.564;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Wait 60.0 seconds before trying to recover again (6)
> > >> INFO  - 2014-11-12 15:30:31.564;
> org.apache.solr.cloud.RecoveryStrategy;
> > >> Wait 60.0 seconds before trying to recover again (6)
> > >>
> > >> The leader for both collections (solr-bc1-blade2) is still on 4.10.1.
> > >> As no special instructions were given in the release notes and it's a
> > >> minor upgrade, we thought there should be no BC issues and planned to
> > >> upgrade one node after the other.
> > >>
> > >> Did that provide more insight?
> > >>
> > >>
> > >> --
> > >> Thomas Lamy
> > >> Cytainment AG & Co KG
> > >> Nordkanalstrasse 52
> > >> 20097 Hamburg
> > >>
> > >> Tel.:     +49 (40) 23 706-747
> > >> Fax:     +49 (40) 23 706-139
> > >>
> > >> Sitz und Registergericht Hamburg
> > >> HRA 98121
> > >> HRB 86068
> > >> Ust-ID: DE213009476
> > >>
> > >>
> > >
> > >
> > > --
> > > Regards,
> > > Shalin Shekhar Mangar.
> > >
> >
> >
> >
> > --
> > Regards,
> > Shalin Shekhar Mangar.
> >
>
>
>
> --
> Anshum Gupta
> http://about.me/anshumgupta
>



-- 
*God bless U*
