The UUID is just an external-facing id used to reference the volume from
the CS APIs. It isn't used to reference the volume on the hypervisor;
that's what the path is for.
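
If you want to see the difference, both columns live on the volumes
table. A minimal sketch, assuming the default 'cloud' database and
placeholder credentials:

    # uuid is what the CS APIs hand out; path is what the hypervisor
    # uses to locate the vhd on primary storage
    mysql -u root -p cloud -e \
      "SELECT id, name, uuid, path FROM volumes WHERE removed IS NULL;"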

Thanks,
-Nitin

On 11/08/14 7:57 PM, "Carlos Reátegui" <create...@gmail.com> wrote:

>
>On Aug 11, 2014, at 6:23 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:
>
>> This is superb Carlos.
>Thank you.
>> So you created dummy VMs to create records in the CS DB
>> and then changed the volume path to reflect the old vhds.
>Exactly.
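>
>For anyone else trying this, the edit itself was just an UPDATE against
>the volumes table. A rough sketch with placeholder values (the old
>paths came from my db backup; on XS with NFS the path column holds the
>vhd filename without the .vhd extension, as far as I can tell):
>
>    # Point the new volume record at the old vhd
>    mysql -u root -p cloud -e \
>      "UPDATE volumes SET path='<old-vhd-uuid>' WHERE id=<volume-id>;"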
>
>> Yes, I think it should be fine to delete the dummy vhds created with
>> the new instances.
>Thanks for the confirmation.  I was concerned because there is a uuid
>in the volumes table and I was not sure how, or if, it was used by the
>system.
>> 
>> Thanks,
>> -Nitin
>> 
>> On 11/08/14 5:49 PM, "Carlos Reategui" <car...@reategui.com> wrote:
>> 
>>> Hi Nitin,
>>> I created a new primary store share in NFS for the new deployment and
>>> removed the old one from exports.  The XS hosts were re-used and
>>> installed fresh, so nothing was using the old primary store.  The new
>>> CS deployment only knows about the new primary store.
>>> 
>>> I used 'vhd-util scan -f -p -v -m old/path/*.vhd' to check the vhd
>>> chains and copied over all the ones I needed from the old path to
>>> the new path.  I got the old paths from my db backup.
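>>>
>>> For reference, the chain check and copy looked roughly like this
>>> (the mount points are placeholders for my old and new NFS paths):
>>>
>>>     # Print the parent/child tree so no parent gets left behind
>>>     vhd-util scan -f -p -v -m '/mnt/old-primary/*.vhd'
>>>
>>>     # Copy each leaf together with its parent so the chain stays
>>>     # intact on the new store
>>>     cp /mnt/old-primary/<leaf>.vhd /mnt/old-primary/<parent>.vhd \
>>>        /mnt/new-primary/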
>>> 
>>> Yes, I changed the path in the volumes table to point to the 'old'
>>> vhd file.  I stopped the instances from CS before editing the db.
>>> When I started them, my old instances were back.
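>>>
>>> (I did the stop/start from the CS UI, but cloudmonkey should work
>>> too if you prefer a CLI; the vm id below is a placeholder:)
>>>
>>>     # Stop the instance before editing its volume row, start it
>>>     # again once path points at the old vhd
>>>     cloudmonkey stop virtualmachine id=<vm-uuid>
>>>     cloudmonkey start virtualmachine id=<vm-uuid>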
>>> 
>>> It was only 10 instances... any more and I probably would have
>>> scripted it.
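>>>
>>> If someone does have to script it, something like this loop over a
>>> mapping file of "volume-id old-vhd-uuid" pairs (hypothetical file,
>>> pulled from the db backup) would do it:
>>>
>>>     # restore-paths.sh -- sketch only, test against a db copy first
>>>     while read -r vol_id old_path; do
>>>         mysql -u root -p"$MYSQL_PW" cloud -e \
>>>           "UPDATE volumes SET path='$old_path' WHERE id=$vol_id;"
>>>     done < volume-map.txt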
>>> 
>>> The vhd files that were created with the new instances are still
>>> there on the primary store and I am guessing I should be ok to
>>> delete them.
>>> 
>>> Thanks,
>>> Carlos
>>> 
>>> PS. I did run into a problem with the primary store scan/cleanup
>>> process the first time I tried: it cleaned up the parent of one of
>>> the child vhds before I had a chance to copy the child over.  I
>>> stopped the management server and included all the files in the same
>>> copy command, which seemed to get me past that.
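>>>
>>> In other words, roughly this ordering (the service name may differ
>>> by version/packaging):
>>>
>>>     # Stop CS so the storage cleanup thread can't reap parent vhds
>>>     # mid-copy
>>>     service cloudstack-management stop
>>>
>>>     # Copy every vhd in the chain in one command
>>>     cp /mnt/old-primary/*.vhd /mnt/new-primary/
>>>
>>>     service cloudstack-management start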
>>> 
>>> 
>>> On Mon, Aug 11, 2014 at 5:30 PM, Nitin Mehta <nitin.me...@citrix.com>
>>> wrote:
>>> 
>>>> Carlos - This looks fine to me. I have a couple of questions, though.
>>>> So you reintroduced the old storage pool on the new MS instance? Make
>>>> sure the old instances are shut down and do not access the same
>>>> storage, or they can corrupt the volumes.
>>>> Did you mean that you changed the path in the volumes table?
>>>> 
>>>> Thanks,
>>>> -Nitin
>>>> 
>>>> On 11/08/14 11:56 AM, "Carlos Reategui" <create...@gmail.com> wrote:
>>>> 
>>>>> Hi All,
>>>>> Follow-up on my recovery process.  After failing to upgrade to 4.4 I
>>>>> did a fresh install and decided to go ahead and also do fresh
>>>>> installs of XenServer to upgrade those to XS 6.2.
>>>>>
>>>>> Instead of importing each of the vhds from all my instances as
>>>>> templates and creating instances from those, I created new instances
>>>>> matching the previous ones and then edited the volumes table with the
>>>>> vhd of the original instance from the previous deployment (I had to
>>>>> make sure to include the parent vhds).  My volumes were all on shared
>>>>> NFS storage.
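>>>>>
>>>>> A quick way to double-check the parents made it over is to ask
>>>>> each copied vhd for its parent (sketch; the mount point is a
>>>>> placeholder):
>>>>>
>>>>>     # vhd-util query -p prints the parent of a differencing vhd
>>>>>     for f in /mnt/new-primary/*.vhd; do
>>>>>         echo "$f -> $(vhd-util query -n "$f" -p)"
>>>>>     done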
>>>>> 
>>>>> Things appear to be working ok, but I want to check whether any of
>>>>> you foresee any issues with the method I followed.
>>>>> 
>>>>> thanks,
>>>>> Carlos
>>>> 
>>>> 
>> 
>
