"dpool" is another datapool created with Ubuntu 19.10 and it had the
same defaults with respect to "large-dnode" as rpool. My main problem
has been with rpool, since it took my whole nvme-SSD. By the way the
same happened in FreeBSD with zroot, during the install it also took
all space on my striped HDDs :) 

Note that FreeBSD is a 32-bit version running on an old Pentium 4 HT :)

By the way, dpool (Ubuntu) is also striped over two 450 GB partitions, one on
a 500 GB HDD and one on a 1 TB HDD. The second part of the 1 TB HDD still
held the partition/data pool created by Ubuntu 18.04 with the ZFS 0.7.x
release, and that one had no large_dnode problems.

I solved my send/receive problem by setting dnodesize=legacy on the Ubuntu
system for each dataset on rpool and dpool, and by reloading the content of
those datasets.

See the Ubuntu dnodesize overview in the attachment.

On FreeBSD, zroot has "large_dnode = active" and the dnodesize settings are
as follows:
----------------------------------------------------------------
root@freebsd:~ # zfs get dnodesize
NAME                              PROPERTY   VALUE   SOURCE
bootpool                          dnodesize  legacy  default
zroot                             dnodesize  legacy  default
zroot/ROOT                        dnodesize  legacy  default
zroot/ROOT@upgrade12-1            dnodesize  -       -

zroot/hp-data                     dnodesize  legacy  local
zroot/hp-data/ISO                 dnodesize  legacy  inherited from zroot/hp-data
-----------------------------------------------------------------
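For completeness, the feature state itself can be queried on the pools; a
quick sketch:
-----------------------------------------------------------------
# feature state is a pool property, not a dataset property
zpool get feature@large_dnode zroot bootpool
-----------------------------------------------------------------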

I have created a separate dataset on FreeBSD with the same attributes as
rpool/USERDATA on Ubuntu:

zroot/USERDATA                    dnodesize  auto    local
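For reference, creating such a dataset amounts to something like this (a
sketch, not necessarily the exact command used):
-----------------------------------------------------------------
zfs create -o dnodesize=auto zroot/USERDATA
zfs get dnodesize zroot/USERDATA
-----------------------------------------------------------------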

Sending data to this dataset had the following result:

See the send/receive results after the dnodesize overview in the
attachment. Note that at the end I tried to create a new dataset
zroot/USER.

-----------------------------------------------------------------------

And now the sends inside FreeBSD, both to a new USER dataset and to the
existing USERDATA with dnodesize=auto:

root@freebsd:~ # zfs send -c zroot/var/log@upgrade12-1 | zfs receive zroot/USER
root@freebsd:~ # zfs send -c zroot/var/log@upgrade12-1 | zfs receive -F zroot/USERDATA
root@freebsd:~ #

zroot/USER was created, and zroot/USERDATA already existed with dnodesize=auto.
The result was as expected:

zroot/USER                        dnodesize  legacy  default
zroot/USER@upgrade12-1            dnodesize  -       -
zroot/USERDATA                    dnodesize  auto    local
zroot/USERDATA@upgrade12-1        dnodesize  -       -

---------------------------------------------------------------------

And now a send from FreeBSD to Ubuntu. See the attachment at the end for the
command.

The result:

rpool/USER@upgrade12-1                   0B      -      888K  -

rpool/USER               dnodesize  auto    inherited from rpool
rpool/USER@upgrade12-1   dnodesize  -       -
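
The exact command is in the attachment; in general such a cross-host send
looks roughly like this (hypothetical hostname, assuming root ssh access to
the Ubuntu machine):
-----------------------------------------------------------------
zfs send -c zroot/var/log@upgrade12-1 | ssh ubuntu-host zfs receive rpool/USER
-----------------------------------------------------------------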

---------------------------------------------------------------------
Both systems have the large_dnode feature active!
And almost all combinations work:
- FreeBSD to FreeBSD, from dnodesize=legacy to either dnodesize=legacy or
dnodesize=auto
- Ubuntu to Ubuntu; I do not remember any problem.
- FreeBSD to Ubuntu, from dnodesize=legacy to dnodesize=auto
- Ubuntu (dnodesize=legacy) to FreeBSD 12.x (dnodesize=legacy) works, and
that is what I use now.

The combination chosen as the default by both development teams, each in
splendid isolation, did not work: Ubuntu 19.10 (dnodesize=auto) to FreeBSD
12.x (dnodesize=legacy). Sending from Ubuntu 19.10 (dnodesize=auto) to
FreeBSD 12.x (dnodesize=auto) also failed; see the test.
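
For anyone who wants to reproduce the failing direction, a sketch with
hypothetical snapshot and host names:
-----------------------------------------------------------------
# on the Ubuntu machine (rpool datasets inherit dnodesize=auto)
zfs snapshot rpool/USERDATA/user_abc123@repro
zfs send -c rpool/USERDATA/user_abc123@repro | \
    ssh freebsd-host zfs receive -F zroot/USERDATA
-----------------------------------------------------------------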

GOOD LUCK finding the error.


On Wed, 2020-01-29 at 04:29 +0000, Garrett Fields wrote:
> So these pools were created with the Ubuntu Ubiquity ZFS
> installer?  I
> missed that because the pool names are hardcoded to bpool and rpool
> and
> your message lists 'dpool/dummy' and 'zroot/hp-data/dummy'
> 
> Also, in the linked email thread, you stated "The ZFS manual advised
> auto, if also using xattr=sa, so that is why I used auto for my own
> datapools/datasets."
> 
> Now the origin of the pool is clearer to me. Yes I do see -O
> dnodesize=auto being set on (and inherited from) rpool in Ubiquity
> root
> zfs installation.  This would impact the ease of sending to non-
> large_dnode pools (or in your case FreeBSD with large_dnode
> problems).
> 
> Some simple tests to run:
> Within FreeBSD, I'd be really surprised if large_dnode=active to
> large_dnode=enabled/active zfs send/recv doesn't work, but I'd start
> there.
> 
> Next, I'd try to send from FreeBSD large_dnode=active to Linux
> large_dnode=enabled/active. If it fails, what error is returned?
> 
> Also, like rlaager stated, we should do the original Linux
> large_dnode=active to FreeBSD large_dnode=enabled/active that gave
> you
> problems. This all will give us evidence for bug reports in FreeBSD
> and/or ZOL upstreams.
> 
> I'm on a mobile device, so can build examples, if requested, at a
> later
> time.
> 


** Attachment added: "dnodesize-ubuntu-freebsd"
   
https://bugs.launchpad.net/bugs/1854982/+attachment/5323858/+files/dnodesize-ubuntu-freebsd

-- 
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibility for backup between Ubuntu 19.10 and FreeBSD 12.0
