Running 12-CURRENT (FreeBSD 12.0-CURRENT #32 r306579: Sun Oct  2 09:34:50 CEST
2016), I have a NanoBSD setup which creates an image for a router device.

The problem I face is related to ZFS. The system has a system SSD (Samsung
850 Pro, 256 GB) with a UFS filesystem. Additionally, I have a backup and a
data HDD, both WD: one 3 TB WD Red Pro and one 4 TB WD Red (the backup
device). The sources for the NanoBSD build, the object tree, and the
NANO_WORLDDIR all reside on the 3 TB data drive.

The box itself has 8 GB RAM. When it comes time to create the memory disk,
which is ~1.3 GB in size, the NanoBSD script creates the memory disk and then
installs into it. This part is abysmal in terms of speed.

The drive sounds like hell; the heads are moving rapidly. The copy speed is
slow compared to another box I usually use in the lab, which has UFS-only
HDDs.

Everything the NanoBSD is installed from and to is on a separate ZFS dataset,
but in the same pool as everything else. When I first set up the new datasets,
I switched on deduplication, but I quickly deactivated it because it had a
tremendous impact on the working speed and memory consumption of that box.
But something still seems not right: as I initially described, the
copy/initialisation bandwidth is abysmal. I also fear that I did something
wrong when I first initialised the HDD - there is the 512-byte/4k sector
discussion, and I do not know how to check whether I'm affected by that (or
even causing the problems myself), nor how to check whether deduplication is
definitely OFF (apart from the usual listing of properties via "zfs get all").
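For what it's worth, here is a sketch of how one might check both points. The
pool name "tank" and the device name "ada1" are assumptions - substitute the
actual names on the box:

```shell
# Dedup status of the pool and all datasets; "off" means new writes are
# not deduplicated (blocks written while dedup was on keep their DDT
# entries until they are rewritten or destroyed).
zfs get -r dedup tank

# Pool-wide dedup ratio; 1.00x means no deduplicated data remains.
zpool list -o name,size,dedupratio tank

# Alignment the pool was created with: ashift=9 means 512-byte,
# ashift=12 means 4k. A 4k-sector drive in an ashift=9 pool causes
# read-modify-write cycles and exactly this kind of thrashing.
zdb -C tank | grep ashift

# What the drive itself reports (sectorsize / stripesize):
diskinfo -v /dev/ada1 | egrep 'sectorsize|stripesize'
```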

As an example: the NanoBSD script takes ~1 minute to copy /boot/loader from
source to the memory disk, and the HDD makes sounds like hell, close to
losing the r/w heads. On other boxes this task is done in the blink of an
eye ...
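In case it helps with diagnosis, one way to watch what the disk is actually
doing during that copy (again, "ada1" is an assumed device name):

```shell
# Live per-device I/O statistics, filtered to the data disk; high %busy
# combined with tiny KB/s transfer rates points at seek-bound I/O rather
# than raw bandwidth problems.
gstat -f 'ada1'

# Alternatively, periodic extended statistics every second:
iostat -x -w 1 ada1
```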

Thanks for your patience,

