(seems like I managed to somehow S/MIME encrypt my previous reply, sorry
about that!)
Thank you all for the quick and super helpful replies regarding this issue.
I've managed to solve it now and will just write what I did in the end.
- created a new LX branded zone, with an indestructible delegated dataset
- then, instead of creating a manual copy of the old snapshot, I used zfs clone:
  - zfs clone zones/b96a3154-85dc-4a2d-ba80-31affd525667@indestructible zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest
- this way I got an r/w clone of my old data
- proceeded to rearrange the data from the previous dataset (it was the entire zone root)
- updated the mount point of the new clone dataset, from inside the zone:
  - zfs set mountpoint=/opt/local/crashplan/manifest zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest
- installed CrashPlan, launched it and let it do its thing. Adopted the old computer configuration and so forth
- when I was satisfied with the result, I promoted my clone in the global zone to make sure the data resides in my new dataset:
  - zfs promote zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest
Now the backups are running again, with a newer LX dist and a saner zfs
dataset configuration. So hopefully it was all for the better in the end. :)
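For reference, the whole thing boils down to the three commands below (the clone and promote ran from the global zone, the mountpoint change from inside the zone against the delegated dataset):

  # global zone: clone the old snapshot underneath the new zone's delegated dataset
  zfs clone zones/b96a3154-85dc-4a2d-ba80-31affd525667@indestructible \
      zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest

  # inside the new zone: point the clone at the path CrashPlan expects
  zfs set mountpoint=/opt/local/crashplan/manifest \
      zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest

  # global zone, once satisfied: make the clone independent of the old snapshot
  zfs promote zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest
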
Some output from the clone → promote step:
[root@0c-c4-7a-69-03-66 ~]# zfs list -t all -r zones/b96a3154-85dc-4a2d-ba80-31affd525667 zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328
NAME                                                              USED  AVAIL  REFER  MOUNTPOINT
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328                       26.1G  3.06T  1.12G  /zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data                  25.3G  3.06T   921M  /zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data@indestructible    128K      -   192K  -
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest         24.4G  3.06T  2.53T  /opt/local/crashplan/manifest
zones/b96a3154-85dc-4a2d-ba80-31affd525667                       2.52T  3.06T   879M  /zones/b96a3154-85dc-4a2d-ba80-31affd525667
zones/b96a3154-85dc-4a2d-ba80-31affd525667@indestructible        2.52T      -  2.52T  -

[root@0c-c4-7a-69-03-66 ~]# zfs promote zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest

[root@0c-c4-7a-69-03-66 ~]# zfs list -t all -r zones/b96a3154-85dc-4a2d-ba80-31affd525667 zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328
NAME                                                                      USED  AVAIL  REFER  MOUNTPOINT
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328                               2.55T  3.06T  1.12G  /zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data                          2.55T  3.06T   921M  /zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data@indestructible            128K      -   192K  -
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest                 2.55T  3.06T  2.53T  /opt/local/crashplan/manifest
zones/7bdfbe09-82b5-ce5d-c50d-9f70f01f6328/data/manifest@indestructible  22.2G      -  2.52T  -
zones/b96a3154-85dc-4a2d-ba80-31affd525667                                339M  3.06T   879M  /zones/b96a3154-85dc-4a2d-ba80-31affd525667
Cheers,
Eric
On 2016 May 24 at 19:47:50, Nigel W ([email protected]) wrote:
zfs should be in /native/sbin/zfs when inside the LX zones, at least that
is where it is on the Debian LX images. If you have a new enough platform
and host it will even mount the folders correctly on boot (December 2015
iirc).
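For example, from inside the LX zone something like this should work (assuming the binary lives there on your image; <zone uuid> is a placeholder for your zone's uuid):

  /native/sbin/zfs list -t all -r zones/<zone uuid>/data
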
My install of CrashPlan stores all of the backup data on a sub-dataset of
the delegated dataset. That way I could put Samba into the same VM (with its
own sub-dataset), so the one install of CrashPlan can offsite the data that
I put on my NAS as well.
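Roughly, that layout looks like this (the dataset names here are just illustrative):

  zfs create zones/<zone uuid>/data/crashplan
  zfs create zones/<zone uuid>/data/samba
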
On Tue, May 24, 2016 at 11:16 AM, Sebastien Perreault <
[email protected]> wrote:
> Hi,
>
> Documentation says so; check in /system/bin or /system/sbin, I think the
> zfs binary is there.
>
> Seb,
>
>
> On Tuesday, May 24, 2016, Eric Ripa <[email protected]> wrote:
>
>> I didn’t think it was possible to run zfs commands in LX branded zones.
>> Is it possible?
>>
>> Cheers
>> Eric
>>
>>
>> On 24 May 2016, at 18:43, Sebastien Perreault <[email protected]>
>> wrote:
>>
>> Hi,
>>
>> Don't do step 5... do this instead in the zone:
>>
>> zfs create -omountpoint=/opt/local/crashplan/manifest zones/<zone
>> uuid>/data/manifest
>>
>> Seb,
>>
>> On Tue, May 24, 2016 at 11:56 AM, Eric Ripa <[email protected]> wrote:
>>
>>> Thanks for the steps Sebastien. Sounds like I will have to perform some
>>> cleanup as I do not currently have enough spare room to copy all data from
>>> the snapshot without touching the snapshot data.
>>>
>>> My plan is:
>>> 1) create a new LX zone with CentOS 7 with delegate_dataset=true &
>>> indestructible_delegated=true (might as well upgrade the dist; a rough
>>> vmadm payload is sketched after this list)
>>> 2) run rsync from the old snapshot to the new delegated dataset
>>> 3) shut down the old zone, move the IP etc. to the new zone
>>> 4) install the CrashPlan service on the new zone and configure it +
>>> stop the service
>>> 5) change the mount point of the delegated dataset to
>>> /opt/local/crashplan/manifest (as hinted in vmadm(1m))
>>> 6) profit!
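>>>
>>> For step 1, a minimal vmadm create payload might look roughly like the
>>> sketch below. The image uuid, kernel version and sizing are placeholders,
>>> and the exact set of required fields depends on your platform and image:
>>>
>>> {
>>>   "brand": "lx",
>>>   "image_uuid": "<centos-7 lx image uuid>",
>>>   "kernel_version": "3.10.0",
>>>   "alias": "crashplan",
>>>   "max_physical_memory": 2048,
>>>   "delegate_dataset": true,
>>>   "indestructible_delegated": true
>>> }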
>>>
>>> Sounds feasible?
>>>
>>> Cheers,
>>> Eric
>>>
>>>
>>>
>>>
>>> On 24 May 2016, at 16:58, Sebastien Perreault <[email protected]>
>>> wrote:
>>>
>>> Hi,
>>>
>>> My best guess is the following: under /checkpoint/indestructible lives
>>> the data that used to live in /. This is because /checkpoint is a ro mount
>>> of a snapshot named indestructible. Go to the global zone (or host) and do
>>> zfs list -t all | grep <zone uuid>; it should be there. If you did zfs snapshot
>>> zones/<zone uuid>@lala, you would see /checkpoint/lala in your zone now.
>>>
>>> So what you should do is copy your data back into /opt/local/crashplan
>>> (copy, not move).
>>>
>>> Also, I would highly encourage you to use the delegate_dataset property
>>> instead; it creates a filesystem /data that is kept across reboots and
>>> reprovisions.
>>>
>>> Seb,
>>>
>>> On Tue, May 24, 2016 at 10:44 AM, Eric Ripa <[email protected]>
>>> wrote:
>>>
>>>> Thanks again for the reply!
>>>>
>>>> The data used to reside under /opt/local/crashplan/manifest; after the
>>>> reboot there is no longer any data under /opt/local/crashplan/manifest. No
>>>> folder, nothing. The data still exists, but under
>>>> /checkpoints/indestructible/opt/local/crashplan/manifest.
>>>>
>>>> So my questions are 1) why did this happen? 2) how can I properly
>>>> recover to my desired state, to have the data r/w under
>>>> /opt/local/crashplan/manifest?
>>>>
>>>> BR
>>>> Eric
>>>>
>>>>
>>>> On 24 May 2016 at 16:17, Adam Števko <[email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> what do you mean by “reappearing”? The data which can be found under
>>>>> /checkpoint/indestructible is a snapshot and you can’t move it, as snapshots
>>>>> are read-only (remember that you can’t mount a snapshot read-write, since a
>>>>> snapshot is a read-only copy of a filesystem at a point in time).
>>>>>
>>>>> Cheers,
>>>>> Adam
>>>>>
>>>>>
>>>>> On May 24, 2016, at 4:09 PM, Eric Ripa <[email protected]> wrote:
>>>>>
>>>>> Thanks a lot for the quick reply Adam!
>>>>>
>>>>> The explanation makes sense, however my aim is not to remove the data
>>>>> but rather to make the data reappear (mount the latest snapshot r/w??) in the
>>>>> original location, in this case /opt/local/crashplan.
>>>>>
>>>>> How can I make sure this is done properly? I would rather not lose my
>>>>> ~3 TB of backup data. :)
>>>>>
>>>>> Thanks
>>>>> Eric
>>>>>
>>>>>
>>>>> On 24 May 2016 at 15:56, Adam Števko <[email protected]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> indestructible_zoneroot is a feature which prevents accidental deletion
>>>>>> of the VM. It is implemented by creating a snapshot called “indestructible”
>>>>>> and placing a hold on it, which causes any deletion attempts to fail.
>>>>>> /checkpoint is the place where all snapshots related to the zone get
>>>>>> mounted as lofs. There is a difference in how these are handled in SmartOS
>>>>>> and in SDC:
>>>>>>
>>>>>> In SDC, there is a component in the system which automounts these
>>>>>> snapshots when they are created.
>>>>>> In SmartOS, the snapshot is created but not automatically mounted. When
>>>>>> you reboot the zone, the actual list of snapshots is fetched and
>>>>>> lofs-mounted into /checkpoint. If you create a snapshot later, it won’t
>>>>>> appear under /checkpoint until you reboot the zone.
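>>>>>>
>>>>>> For example (from the global zone; <uuid> and the snapshot name are
>>>>>> placeholders):
>>>>>>
>>>>>> zfs snapshot zones/<uuid>@mysnap
>>>>>> vmadm reboot <uuid>   # after the reboot, /checkpoint/mysnap shows up in the zone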
>>>>>>
>>>>>> The read-only filesystem error you saw is caused by the fact that the
>>>>>> lofs mount is read-only.
>>>>>>
>>>>>> If you want to get rid of it, either delete the
>>>>>> zones/<uuid>@indestructible snapshot or run “vmadm update <uuid>
>>>>>> indestructible_zoneroot=false”, which should remove the snapshot.
>>>>>>
>>>>>> Cheers,
>>>>>> Adam
>>>>>>
>>>>>> On May 24, 2016, at 3:44 PM, Eric Ripa <[email protected]> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I have an LX branded zone running CentOS 6. In it I have a Java-based
>>>>>> service called CrashPlan.
>>>>>>
>>>>>> I have the zone protected with
>>>>>> "indestructible_zoneroot": true
>>>>>>
>>>>>> At some point roughly 3-5 days ago all of my backup archives,
>>>>>> residing under /opt/local/crashplan, have been *moved* to
>>>>>> /checkpoint/indestructible/opt/local/crashplan
>>>>>>
>>>>>> It is possible that this happened during a vmadm reboot <zoneid>, but
>>>>>> I'm not sure.
>>>>>>
>>>>>>
>>>>>> Is this some side-effect of indestructible_zoneroot that I'm unaware
>>>>>> of? I tried to 'mv' them back from the GZ, but I get this message:
>>>>>>
>>>>>> mv: cannot unlink
>>>>>> /zones/b96a3154-85dc-4a2d-ba80-31affd525667/root/checkpoints/indestructible/opt/local/crashplan/manifest/442330126173077772/cpbf0000000000000129209/442330126173077772:
>>>>>> Read-only file system
>>>>>>
>>>>>> Any pointers of how to proceed? Is it even related to SmartOS at all
>>>>>> or is it some CentOS thing?
>>>>>>
>>>>>> SunOS 0c-c4-7a-69-03-66 5.11 joyent_20160204T173339Z i86pc i386 i86p
>>>>>>
>>>>>> --
>>>>>> Thanks a lot!
>>>>>> Eric Ripa
>>>>>>
>>>>
>>>>
>>>> --
>>>> Regards
>>>>
>>>> *Eric Ripa*
>>>> Cloud Engineer
>>>>
>>>> <http://skymill.se/>
>>>>
>>>>
>>>> Phone
>>>> +46 (0) 731 800 502
>>>> Email [email protected]
>>>> Web skymill.se
>>>>
>>>
>>>
>>> --
>>> Sebastien Perreault
>>> Partner
>>> Les Technologies Alesium Inc
>>> P: 514-298-7193
>>> F: 514-221-3668
>>> E: [email protected]
>>>
>>>
>>>