Hi All:
I am running into a problem using zfs send/receive to copy a large
KVM container to another box. The send command:
$ zfs send -R zones/[email protected] | ssh -l root remote-host zfs recv -v -e zones
fails:
"cannot receive new filesystem stream: out of space"
(this is probably just a simple zfs send/receive problem, but I can't
see which parameter is tripping me up)
My json file for this zone has:
"disks": [
  {
    "boot": true,
    "model": "virtio",
    "size": 1500000
  }
]
(so the disk is defined as a 1.5TB file system)
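As a sanity check on the units (my assumption: vmadm's "size" is in
MiB, which would make 1500000 line up with the 1.43T volsize shown
further down):

```shell
# vmadm's disk "size" appears to be in MiB (my assumption);
# 1500000 MiB should come out close to the 1.43T volsize zfs reports.
awk 'BEGIN { printf "%.2f TiB\n", 1500000 / 1024 / 1024 }'
```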
The zfs properties for the sending dataset are:
$ zfs get all zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0
NAME                                              PROPERTY              VALUE                  SOURCE
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  type                  volume                 -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  creation              Wed Dec 21 23:23 2016  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  used                  2.26T                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  available             2.54T                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  referenced            841G                   -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  compressratio         1.07x                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  reservation           none                   default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  volsize               1.43T                  local
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  volblocksize          8K                     default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  checksum              on                     default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  compression           on                     inherited from zones
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  readonly              off                    default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  copies                1                      default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  refreservation        1.43T                  local
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  primarycache          all                    default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  secondarycache        all                    default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  usedbysnapshots       11.3G                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  usedbydataset         841G                   -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  usedbychildren        0                      -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  usedbyrefreservation  1.43T                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  logbias               latency                default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  dedup                 off                    default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  mlslabel              none                   default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  sync                  standard               default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  refcompressratio      1.05x                  -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  written               0                      -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  logicalused           908G                   -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  logicalreferenced     884G                   -
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  snapshot_limit        none                   default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  snapshot_count        none                   default
zones/92195ff3-c834-4c59-d20b-d713dc2aff07-disk0  redundant_metadata    all                    default
I have tried this copy a number of times, and it fails each time, but
only after it has copied the 884G of data.
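My back-of-the-envelope math (an assumption on my part: with `zfs send
-R` the refreservation is recreated on the receiving side, so the
destination pool has to cover the full volsize up front, not just the
~884G of data the stream actually carries):

```shell
#!/bin/sh
# Rough space requirement on the receiving pool, in GiB.
# Assumption: the recv must satisfy the full refreservation (1.43T)
# plus the space held by snapshots, regardless of how much data
# was actually sent.
refreserv_g=1464     # refreservation 1.43T
snapshots_g=12       # usedbysnapshots 11.3G, rounded up
needed_g=$((refreserv_g + snapshots_g))
echo "destination needs roughly ${needed_g}G free"
```

If the receiving pool has less free space than that, I'd expect the
stream to fail with exactly this error even though the data itself
fits.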
Thanks in advance for reading this and any help you can offer.
Paul
-------------------------------------------
smartos-discuss