"dpool" is another datapool created with Ubuntu 19.10 and it had the
same defaults with respect to "large-dnode" as rpool. My main problem
has been with rpool, since it took my whole nvme-SSD. By the way the
same happened in FreeBSD with zroot, during the install it also took
all space on my striped HDDs :) 

Note that FreeBSD is a 32-bit version on an old Pentium 4 HT :)

By the way, dpool (Ubuntu) is also striped over two 450GB partitions, on
a 500GB and a 1TB HDD. The second part of the 1TB HDD still had the
partition/datapool created by Ubuntu 18.04 with the ZFS 0.7.x release,
and that one had no large-dnode problems.

I solved my send/receive problem by setting dnodesize=legacy on each
dataset on rpool and dpool on the Ubuntu system and then reloading the
content of those datasets.
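In shell form, the workaround amounts to something like the sketch below. The dataset names are placeholders, not my real list, and the commands are printed in dry-run form so nothing is changed by accident; pipe the output to sh (as root) to actually apply it.

```shell
# Hedged sketch of the workaround; dataset names are examples only.
# Emits the commands instead of running them (dry run).
for ds in rpool/USERDATA/example dpool/example; do
    printf 'zfs set dnodesize=legacy %s\n' "$ds"
done
```

Note that dnodesize only takes effect for newly created dnodes, so after setting the property the existing data still carries the old dnode size; that is why the content of each dataset had to be rewritten (I reloaded mine).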

See the Ubuntu dnodesize overview in the attachment.
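To build such an overview yourself and pick out the datasets that still need fixing, a small filter over `zfs get` output works. This is a hedged sketch: the sample below stands in for real `zfs get -H -o name,value dnodesize` output, and the dataset names are examples only.

```shell
# Sketch: list datasets whose dnodesize is not legacy.
# "sample" imitates `zfs get -H -o name,value dnodesize` output.
sample='rpool           legacy
rpool/USERDATA  auto
dpool           auto
dpool/archive   legacy'
printf '%s\n' "$sample" | awk '$2 != "legacy" { print $1 }'
# prints: rpool/USERDATA and dpool
```

On a live system, pipe the real command output into the awk filter instead of the sample variable.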

On FreeBSD zroot has "large-dnode = active" and the dnodesize is as
follows:
----------------------------------------------------------------
root@freebsd:~ # zfs get dnodesize
NAME                              PROPERTY   VALUE   SOURCE
bootpool                          dnodesize  legacy  default
zroot                             dnodesize  legacy  default
zroot/ROOT                        dnodesize  legacy  default
zroot/ROOT@upgrade12-1            dnodesize  -       -

zroot/hp-data                     dnodesize  legacy  local
zroot/hp-data/ISO                 dnodesize  legacy  inherited from zroot/hp-data
-----------------------------------------------------------------

I have created a separate dataset on FreeBSD with the same attributes
as rpool/USERDATA on Ubuntu:

zroot/USERDATA                    dnodesize  auto    local

Sending data to this dataset had the following result:

See the send/receive results after the dnodesize overview in the
attachment. Note that at the end I tried to create a new dataset
zroot/USER.

-----------------------------------------------------------------------

And now the sends inside FreeBSD, both to a new USER dataset and to the
existing USERDATA with dnodesize=auto.

root@freebsd:~ # zfs send -c zroot/var/log@upgrade12-1 | zfs receive zroot/USER
root@freebsd:~ # zfs send -c zroot/var/log@upgrade12-1 | zfs receive -F zroot/USERDATA
root@freebsd:~ #

zroot/USER was newly created by the receive and zroot/USERDATA already
existed with dnodesize=auto. The result was as expected:

zroot/USER                        dnodesize  legacy  default
zroot/USER@upgrade12-1            dnodesize  -       -
zroot/USERDATA                    dnodesize  auto    local
zroot/USERDATA@upgrade12-1        dnodesize  -       -

---------------------------------------------------------------------

And now a send from FreeBSD to Ubuntu:

see the attachment at the end for the command,

and the result:

rpool/USER@upgrade12-1                   0B      -      888K  -

rpool/USER               dnodesize  auto    inherited from rpool
rpool/USER@upgrade12-1   dnodesize  -       -

---------------------------------------------------------------------
Both systems have the large-dnode feature active!
And almost all combinations work:
- FreeBSD to FreeBSD, from dnodesize=legacy to either dnodesize=legacy
or dnodesize=auto
- Ubuntu to Ubuntu; I do not remember any problem.
- FreeBSD to Ubuntu, from dnodesize=legacy to dnodesize=auto
- Ubuntu (dnodesize=legacy) to FreeBSD 12.x (dnodesize=legacy) works,
and that is what I use now.

The combination selected as default by both development teams, in
splendid isolation, did not work: Ubuntu 19.10 (dnodesize=auto) to
FreeBSD 12.x (dnodesize=legacy).
Sending from Ubuntu 19.10 (dnodesize=auto) to FreeBSD 12.x
(dnodesize=auto) also failed, see the test.

GOOD LUCK finding the error.


On Wed, 2020-01-29 at 04:29 +0000, Garrett Fields wrote:
> So these pools were created with the Ubuntu Ubiquity ZFS
> installer?  I
> missed that because the pool names are hardcoded to bpool and rpool
> and
> your message lists 'dpool/dummy' and 'zroot/hp-data/dummy'
> 
> Also, in the linked email thread, you stated "The ZFS manual advised
> auto, if also using xattr=sa, so that is why I used auto for my own
> datapools/datasets."
> 
> Now the origin of the pool is clearer to me. Yes I do see -O
> dnodesize=auto being set on (and inherited from) rpool in Ubiquity
> root
> zfs installation.  This would impact the ease of sending to non-
> large_dnode pools (or in your case FreeBSD with large_dnode
> problems).
> 
> Some simple tests to run:
> Within FreeBSD, I'd be really surprised if large_dnode=active to
> large_dnode=enabled/active zfs send/recv doesn't work, but I'd start
> there.
> 
> Next, I'd try to send from FreeBSD large_dnode=active to Linux
> large_dnode=enabled/active. If it fails, what error is returned?
> 
> Also, like rlaager stated, we should do the original Linux
> large_dnode=active to FreeBSD large_dnode=enabled/active that gave
> you
> problems. This all will give us evidence for bug reports in FreeBSD
> and/or ZOL upstreams.
> 
> I'm on a mobile device, so can build examples, if requested, at a
> later
> time.
> 


** Attachment added: "dnodesize-ubuntu-freebsd"
   
https://bugs.launchpad.net/bugs/1854982/+attachment/5323858/+files/dnodesize-ubuntu-freebsd

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibility for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back up my datapools from Ubuntu 19.10 to FreeBSD
  12.0, as I have done weekly since June, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  NVMe drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the NVMe drive or had to be located
  on 2 HDDs instead of 3. I had one datapool that still works: the
  datapool containing my archives, and it is the datapool that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties of datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD because I use an old 32-bit Pentium for backup.

  I have two complaints:
  - the Ubuntu upgrade cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset, and at the end of a long transfer it
  decides to quit, and the error messages are completely useless and
  self-contradictory.

  On the first try it says the dataset does exist, and on the second try it
  says it does NOT exist. One of the two is completely wrong. Some
  consistency and some clearer error messages would be helpful for the user.
  See the following set of strange error messages on the two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop, and that still
  works. I also compared the properties with those of my laptop, which
  still has its original datapools from the beginning of the year. I
  aligned the properties of FreeBSD with those of my laptop, but it did
  not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions
