Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Matthias Petermann

Hi Greg and others,

On 03.08.2020 at 21:31, Greg Troxel wrote:

Other than the 1 cpu vs ? cpus, no.   I tested xen performance long ago,
in 2006 with a setup

   NetBSD dom0
   disk file in filesystem
   NetBSD domU with xbd0 from the file

  and found that reading with dd:

the dom0 raw disk was just about the same as bare metal

the file was maybe 5-10% slower (maybe not quite; it was noticeable but
not a big deal)

the xbd0d "raw disk" was also 5-10 % slower than reading the file in
the dom0

Now, this isn't what you asked, but the difference you found seems
like a bug to me.

I would definitely do dd from the raw disk in your case 1 and 2,
followed by dd of the iso.

Also, I would repeat your tests and run "systat vmstat" during each
case, and also netstat to see if the network interface is not keeping
up.   Then I would run iperf, ttcp or whatever to test network separate
from samba and disk.




many thanks for the helpful hints. First of all, it was important for me 
to get feedback on whether the performance losses I noticed are common. 
As I understand it, they are not, so a detailed investigation is 
warranted. This is what I will tackle next.


By the way, I noticed just today that even with the "pure" NetBSD 
kernel without Xen, performance is bad in some cases. The ISO file 
mentioned in my last posting is written at about 60 MByte/s, and even if 
I trigger a sync every second the throughput hardly drops. However, if I 
copy the same file back the same way after a reboot (to empty the file 
cache) - a read from NetBSD's point of view - I only get a throughput of 
about 20 MByte/s. I had expected reading to be faster than writing in 
any case.


But there is one detail I have only just become aware of: when I bought 
these NUCs a few years ago, I installed hybrid hard disks (Seagate 
FireCuda SSHD, 2 TB). This type combines mechanical/magnetic mass storage 
with a (relatively small) SSD, but from the operating system's point of 
view the device appears as a single disk; the on-disk controller decides 
when and how the SSD is used. I don't know the details either, only that 
booting from this disk is quite fast. It is quite possible that the 
blocks read first after power-on are simply moved to the SSD. To cut a 
long story short: I think I first have to switch to either a purely 
mechanical hard disk or a pure SSD to avoid measurement inaccuracies 
caused by the unpredictable behavior of the SSHD. Then I will approach 
the whole thing scientifically and follow your advice.


This week will be very busy and I can't promise that I will make quick 
progress, but I will stay on it and post my results here as soon as I 
have them.


Best regards
Matthias


Man page names

2020-08-03 Thread Todd Gruhn
I have noticed that there are many man pages with the name/form
*.conf.5

Are all man pages with the form *.* and *.*.* in section 5?

Can anyone see future problems caused by linking
a.b.c --> a_b_c  ?


Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Mike Pumford

On 03/08/2020 17:48, Sad Clouds wrote:


I believe Samba is single threaded, so can't take advantage of multiple
CPUs for a single stream. I'm not a Xen expert, however I'm not sure
running this in Dom0 is a representative test. I imagine most workloads
would be done within a DomU, while Dom0 is just a control domain and
allocates CPU and memory resources, so you may have additional
overheads + latencies.

Samba uses multiple processes for parallelism: there is a separate 
smbd process for each connection, so for a single share to a single 
client it is effectively single-threaded. Multiple clients or multiple 
shares will spawn additional smbd processes.
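
A quick way to see this on the server is to count the smbd processes
while clients are connected (a rough sketch; paths and output depend on
the pkgsrc samba package in use):

# roughly one smbd per client connection, plus the parent process
ps -ax | grep '[s]mbd' | wc -l

# smbstatus (shipped with samba) lists the connections each smbd serves
smbstatus -b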


Mike


Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Greg Troxel
Matthias Petermann  writes:

> Constellation 1 (Pure NetBSD kernel):
>
> * NetBSD/amd64 9.0 Kernel + Samba: throughput ~60 MByte/s
>
> Constellation 2 (NetBSD/Xen Domain 0):
>
> * Xen 4.11 + NetBSD/Xen Dom0 + Samba: throughput ~12 MByte/s
>
> I measured this by copying an 8 GB ISO file from a Windows host.
> In constellation 2, no guests had started and the full main memory of
> Dom0 was assigned. In my view, the only significant difference is that
> NetBSD can only use one of the two CPU cores under Xen. Since the CPU
> was idle on average at 20% during copying, that doesn't seem to be the
> bottleneck?
>
> Are such differences in I/O performance to be expected?

Other than the 1 cpu vs ? cpus, no.   I tested xen performance long ago,
in 2006 with a setup

  NetBSD dom0
  disk file in filesystem
  NetBSD domU with xbd0 from the file

 and found that reading with dd:

   the dom0 raw disk was just about the same as bare metal

   the file was maybe 5-10% slower (maybe not quite; it was noticeable but
   not a big deal)

   the xbd0d "raw disk" was also 5-10 % slower than reading the file in
   the dom0

Now, this isn't what you asked, but the difference you found seems
like a bug to me.

I would definitely do dd from the raw disk in your case 1 and 2,
followed by dd of the iso.

Also, I would repeat your tests and run "systat vmstat" during each
case, and also netstat to see if the network interface is not keeping
up.   Then I would run iperf, ttcp or whatever to test network separate
from samba and disk.
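
For example, something along these lines (a rough sketch; wd0 and
/data/test.iso are only placeholders for your actual disk and the copied
ISO file):

# raw-disk read, bypassing the filesystem (run as root)
dd if=/dev/rwd0d of=/dev/null bs=1m count=1024

# read the ISO back through the filesystem
dd if=/data/test.iso of=/dev/null bs=1m

# while a copy is running, watch disk/CPU activity and the interface
systat vmstat
netstat -i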


  


Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Sad Clouds
On Mon, 3 Aug 2020 15:08:28 +0200
Matthias Petermann  wrote:

> I measured this by copying an 8 GB ISO file from a Windows host.
> In constellation 2, no guests had started and the full main memory of 
> Dom0 was assigned. In my view, the only significant difference is
> that NetBSD can only use one of the two CPU cores under Xen. Since
> the CPU was idle on average at 20% during copying, that doesn't seem
> to be the bottleneck?

I believe Samba is single threaded, so can't take advantage of multiple
CPUs for a single stream. I'm not a Xen expert, however I'm not sure
running this in Dom0 is a representative test. I imagine most workloads
would be done within a DomU, while Dom0 is just a control domain and
allocates CPU and memory resources, so you may have additional
overheads + latencies.

It's best to start by testing simple use cases, e.g. raw disk I/O with dd
or network I/O with iperf, and narrow the issue down.
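
For the network part, something like this (a sketch using
benchmarks/iperf3 from pkgsrc; the hostname is a placeholder):

# on the NetBSD box
iperf3 -s

# on the client, e.g. the Windows host
iperf3 -c nuc.example.net -t 30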


Re: Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Mike Pumford




On 03/08/2020 11:22, Greg Troxel wrote:

Martin Husemann  writes:


On Mon, Aug 03, 2020 at 11:08:22AM +0200, Matthias Petermann wrote:

 2          32         Pri GPT table
34        2014         Unused


That part is expected...


Not to me, entirely.

I get why sectors 0, 1, and 2-33 are GPT.

I get why we don't align to 63 (because there is no good reason,
and because it doesn't line up with 4K physical sectors).

The forced choice these days is 8, because of 4K sectors.  I can see why
picking 8 for alignment isn't future-proof against the disk announced
next week with 8K or 32K physical sectors (yes, I'm making that up, but
I would not be shocked to see that over the next 10 years).

SSDs also add another complexity. The underlying flash media may have a 
sector size far larger than 4 kB. Based on the NAND flash devices I've 
used in embedded projects, sector sizes of 64 kB or larger could be what 
the SSD is actually using. So this larger alignment might be helpful in 
this scenario as well.


Mike


NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Matthias Petermann

Hello everybody,

on a small Intel NUC with a Realtek network chip I want to operate
a Xen host. My NetBSD Xen guests are supposed to host various web apps 
as well as an Asterisk PBX.


I also want to provide network drives via Samba with the best possible 
performance. To keep the overhead of hard drive access low, I thought it 
could be a good idea to let Samba operate from Dom0, although that is 
not architecturally clean.


When trying to do this, I noticed a significant difference in network speed.

Constellation 1 (Pure NetBSD kernel):

* NetBSD/amd64 9.0 Kernel + Samba: throughput ~60 MByte/s

Constellation 2 (NetBSD/Xen Domain 0):

* Xen 4.11 + NetBSD/Xen Dom0 + Samba: throughput ~12 MByte/s

I measured this by copying an 8 GB ISO file from a Windows host.
In constellation 2, no guests had started and the full main memory of 
Dom0 was assigned. In my view, the only significant difference is that 
NetBSD can only use one of the two CPU cores under Xen. Since the CPU 
was idle on average at 20% during copying, that doesn't seem to be the 
bottleneck?


Are such differences in I/O performance to be expected?

Thank you & best regards
Matthias


Boot selection of boot.cfg doesn't work as expected (with UEFI boot loader)

2020-08-03 Thread Matthias Petermann

Hello everyone,

on NetBSD/amd64 9.0 Release I am setting up a Xen host. For this purpose, 
I added the first line to my boot.cfg:


menu=Boot Xen:load /netbsd-XEN3_DOM0.gz root=NAME=root;multiboot /xen.gz 
dom0_mem=512M dom0_max_vcpus=1 dom0_vcpus_pin

menu=Boot normally:rndseed /var/db/entropy-file;boot
menu=Boot single user:rndseed /var/db/entropy-file;boot -s
menu=Drop to boot prompt:prompt
default=1
timeout=5
clear=1

(there is no line break in the "Boot Xen" line in the original file).

This results in a boot menu:

1. Boot Xen
2. Boot normally
3. Boot single user
4. Drop to boot prompt

When Xen had a (here unrelated) problem booting, I wanted to select 
option 2 (Boot normally) to boot a standard kernel instead. To my 
surprise, option 2 also tries to boot the Xen kernel. On the other hand, 
option 3 boots into the single-user environment as expected.


Have I made an obvious mistake in my boot.cfg, or does this look like 
a bug?


Kind regards
Matthias


Re: Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Matthias Petermann

Hello Martin,

On 03.08.2020 at 11:15, Martin Husemann wrote:

On Mon, Aug 03, 2020 at 11:08:22AM +0200, Matthias Petermann wrote:

 2          32         Pri GPT table
34        2014         Unused


That part is expected...


 2048  262144  1  GPT part - EFI System

   

... to align the start here.


264192        2048         Unused

   
but this sounds like a bug in the alignment code. I'll have a look.


Thanks - I will keep an eye on this. Please let me know if you want me to 
test anything.



The other part (gpt(8) being user unfriendly) I'll leave to somebody else.


I hope I have not spoken too badly of the gpt tool :-) As long as what it 
does is logical - and that is how it seems at the moment - it is 
not too unfriendly. My expectations for a low-level tool are probably 
too high.



Kind regards
Matthias


Re: How to properly resize VG/LV on LVM?

2020-08-03 Thread Bartosz Maciejewski

Thank you Dima and Greg,

Finally got it working. I hadn't noticed that the total sector count has 
to be changed in the disklabel first; after that, disklabel -e let me 
resize the partition. (A rough sketch of that edit follows after the 
recipe below.)


For the record:

1. Resize disk in DomU/XCP-NG

2. check new total sectors from dmesg

3. Alter total sectors in disklabel -e /dev/xbd3

4. Alter partitions in disklabel -e /dev/xbd3

5. lvm pvresize /dev/rxbd3a

6. lvm lvextend -L+200G /dev/mapper/varmailvg-virtuallv

7. umount /var/mail/virtual/

8. fsck -y /dev/mapper/varmailvg-virtuallv

9. resize_ffs /dev/mapper/varmailvg-virtuallv

10. mount /dev/mapper/varmailvg-virtuallv /var/mail/virtual/

Result:

/dev/mapper/varmailvg-virtuallv   1.2T   860G *261G* 76% 
/var/mail/virtual
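
For steps 3 and 4, the edit inside disklabel -e looks roughly like this
(a sketch; the old sector count is an assumption based on the previous
1000 GB size, the new one is what dmesg reports above):

disklabel -e /dev/xbd3
# before (1000 GB):
#   total sectors: 2097152000
#   a: 2097152000  0  4.2BSD ...
# after (1200 GB, from "xbd3: ... x 2516582400 sectors" in dmesg):
#   total sectors: 2516582400
#   a: 2516582400  0  4.2BSD ...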


On 03.08.2020 at 12:42, Dima Veselov wrote:

On 03.08.2020 12:10, Bartosz Maciejewski wrote:

I have a logical volume for mailboxes on LVM, for easy expansion as the 
mailboxes grow. However, I can't really resize it, or I don't know 
how to do it on NetBSD.


Actually it is done the same way everywhere: you have to enlarge every
device in the chain.


Filesystem Size   Used Avail %Cap Mounted on
/dev/mapper/varmailvg-virtuallv   984G   860G 74G  92% 
/var/mail/virtual


xbd3: 1200 GB, 512 bytes/sect x 2516582400 sectors
# lvm pvresize /dev/rxbd3a


It seems your PV is located on a NetBSD slice of xbd3, not on xbd3 
itself. That means you have to enlarge the slice using disklabel(8) first.



which looks good, but lvm pvs still shows 1000GB instead of 1200GB

# lvm pvs
   PV  VG    Fmt  Attr PSize    PFree
   /dev/rxbd3a varmailvg lvm2 a-   1000.00g 1020.00m

Probably I should "resize in place" with disklabel or fstab after 
extending in Xen, but I don't know exactly how. There is also 
resize_ffs somewhere in the process, I think.


Is there any guide for NetBSD on how to properly extend an LVM volume?


This job is always very situation-specific; I don't think one can create
documentation that covers all the possible options. However, a
comprehensive document can be found at
https://www.netbsd.org/docs/guide/en/chap-lvm.html



Re: How to properly resize VG/LV on LVM?

2020-08-03 Thread Dima Veselov

On 03.08.2020 12:10, Bartosz Maciejewski wrote:

I have a logical volume for mailboxes on LVM, for easy expansion as the 
mailboxes grow. However, I can't really resize it, or I don't know 
how to do it on NetBSD.


Actually it is done the same way everywhere: you have to enlarge every
device in the chain.


Filesystem    Size   Used Avail %Cap Mounted on
/dev/mapper/varmailvg-virtuallv   984G   860G 74G  92% 
/var/mail/virtual


xbd3: 1200 GB, 512 bytes/sect x 2516582400 sectors
# lvm pvresize /dev/rxbd3a


It seems your PV is located on a NetBSD slice of xbd3, not on xbd3 itself. 
That means you have to enlarge the slice using disklabel(8) first.



which looks good, but lvm pvs still shows 1000GB instead of 1200GB

# lvm pvs
   PV  VG    Fmt  Attr PSize    PFree
   /dev/rxbd3a varmailvg lvm2 a-   1000.00g 1020.00m

Probably I should "resize in place" with disklabel or fstab after 
extending in Xen, but I don't know exactly how. There is also resize_ffs 
somewhere in the process, I think.


Is there any guide for NetBSD on how to properly extend an LVM volume?


This job is always very situation-specific; I don't think one can create
documentation that covers all the possible options. However, a
comprehensive document can be found at
https://www.netbsd.org/docs/guide/en/chap-lvm.html

--
Sincerely yours,
Dima Veselov
Physics R Establishment of Saint-Petersburg University


Re: Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Martin Husemann
On Mon, Aug 03, 2020 at 06:22:12AM -0400, Greg Troxel wrote:
> But 2048 is 11 bits of sector address alignment, and wastes an entire
> MB.  Yes, that doesn't really matter on a 4T disk, but on a 256 MB flash
> drive it seems like a lot.  (I'm perhaps overly sensitive, having used
> Unix on a machine with 2 disks of 2.5M each.)

The alignment depends on the total disk size; it is 1 MB only for huge
disks (where it does not matter). On a 256 MB flash drive it will be
1 sector.

Martin


How to properly resize VG/LV on LVM?

2020-08-03 Thread Bartosz Maciejewski

Hello there,

I have a logical volume for mailboxes on LVM, for easy expansion as the 
mailboxes grow. However, I can't really resize it, or I don't know 
how to do it on NetBSD.


Filesystem    Size   Used Avail %Cap Mounted on
/dev/mapper/varmailvg-virtuallv   984G   860G 74G  92% 
/var/mail/virtual


I already extended the disk at the Xen level from 1000 GB to 1200 GB, and 
dmesg shows it correctly:


xbd3 at xenbus0 id 51776: Xen Virtual Block Device Interface
xbd3: using event channel 11
xbd3: 1200 GB, 512 bytes/sect x 2516582400 sectors
xbd3: WARNING: cache flush not supported by backend

Then I ran:

# lvm pvresize /dev/rxbd3a
  Physical volume "/dev/rxbd3a" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

which looks good, but lvm pvs still shows 1000GB instead of 1200GB

# lvm pvs
  PV  VG    Fmt  Attr PSize    PFree
  /dev/rxbd3a varmailvg lvm2 a-   1000.00g 1020.00m

Probably I should "resize in place" with disklabel or fstab after 
extending in Xen, but I don't know exactly how. There is also resize_ffs 
somewhere in the process, I think.


Is there any guide for NetBSD on how to properly extend an LVM volume?




Re: Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Greg Troxel
Martin Husemann  writes:

> On Mon, Aug 03, 2020 at 11:08:22AM +0200, Matthias Petermann wrote:
>>  2          32         Pri GPT table
>> 34        2014         Unused
>
> That part is expected...

Not to me, entirely.

I get why sectors 0, 1, and 2-33 are GPT.

I get why we don't align to 63 (because there is no good reason,
and because it doesn't line up with 4K physical sectors).

The forced choice these days is 8, because of 4K sectors.  I can see why
picking 8 for alignment isn't future-proof against the disk announced
next week with 8K or 32K physical sectors (yes, I'm making that up, but
I would not be shocked to see that over the next 10 years).

>> 2048  262144  1  GPT part - EFI System

But 2048 is 11 bits of sector address alignment, and wastes an entire
MB.  Yes, that doesn't really matter on a 4T disk, but on a 256 MB flash
drive it seems like a lot.  (I'm perhaps overly sensitive, having used
Unix on a machine with 2 disks of 2.5M each.)

Perhaps people are expected to have partitions with integer numbers of
MB, and thus all start/end will then line up with addresses with 11 bits
of zeroes.

If everybody else thinks that this is overly aligned but that it doesn't
hurt, that's fine -- I'm not trying to agitate to change it.  I would
just like to understand if there is a good reason to align to 2048
sectors, vs 64 or 256.
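
For reference, with 512-byte sectors the candidate alignments work out
as follows (quick shell arithmetic, nothing NetBSD-specific):

echo $((2048 * 512))   # 1048576 bytes = 1 MiB
echo $((256 * 512))    # 131072 bytes  = 128 KiB
echo $((64 * 512))     # 32768 bytes   = 32 KiB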


Re: Getting undefined reference to ___tls_get_addr when building packages on netbsd-5.2/i386

2020-08-03 Thread Greg Troxel
Kamil Rytarowski  writes:

> ___tls_get_addr is delivered on i386 in /usr/libexec/ld.elf_so and on
> amd64 in /usr/libexec/ld.elf_so-i386

ld.elf_so-i386 doesn't exist on netbsd-5.

> What's the situation with this symbol in NetBSD-5.2 is unknown to me.

Thread-local storage was proudly announced as new in NetBSD 6.0 in
October of 2012:
  https://netbsd.org/releases/formal-6/NetBSD-6.0.html

I'm sometimes trailing edge, but if you have a NetBSD 5 system, it's
been unsupported since 2015-09, and well past time to update.  I keep an
image around for portability testing of a few upstream packages (e.g.,
bup) and I can say it's been getting harder and harder to build things
on it.

I have found that 8 and 9 both work well, and I encourage you to
upgrade.



Re: Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Martin Husemann
On Mon, Aug 03, 2020 at 11:08:22AM +0200, Matthias Petermann wrote:
>  2          32         Pri GPT table
> 34        2014         Unused

That part is expected...

> 2048  262144  1  GPT part - EFI System
  

... to align the start here.

> 264192        2048         Unused
  
but this sounds like a bug in the alignment code. I'll have a look.

The other part (gpt(8) being user unfriendly) I'll leave to somebody else.

Martin


Sysinst creates gaps between GPT partitions - why?

2020-08-03 Thread Matthias Petermann

Hello,

recently I observed some sysinst behaviour which led to some confusion 
afterwards.


When using sysinst to create GPT partitions on a fresh disk, it seems to 
create gaps between the partitions:


ganymed# gpt show wd0
       start         size  index  contents
           0            1         PMBR
           1            1         Pri GPT header
           2           32         Pri GPT table
          34         2014         Unused
        2048       262144      1  GPT part - EFI System
      264192         2048         Unused
      266240     33554432      2  GPT part - NetBSD FFSv1/FFSv2
    33820672         2048         Unused
    33822720     16623616      3  GPT part - NetBSD swap
    50446336   3856582799         Unused
  3907029135           32         Sec GPT table
  3907029167            1         Sec GPT header

This leads to a particularly unexpected behavior if one then tries to 
create another partition using all the unused space:


ganymed# gpt add -t linux-lvm -l lvm wd0
/dev/rwd0: Partition 4 added: e6d6d379-f507-44c2-a23c-238f2a3df928 34 2014

Instead of using the large free space at the end of the disk, the gap 
between the primary GPT table and the EFI system partition is used:


ganymed# gpt show wd0
       start         size  index  contents
           0            1         PMBR
           1            1         Pri GPT header
           2           32         Pri GPT table
          34         2014      4  GPT part - Linux LVM
        2048       262144      1  GPT part - EFI System
      264192         2048         Unused
      266240     33554432      2  GPT part - NetBSD FFSv1/FFSv2
    33820672         2048         Unused
    33822720     16623616      3  GPT part - NetBSD swap
    50446336   3856582799         Unused
  3907029135           32         Sec GPT table
  3907029167            1         Sec GPT header

I can work around this by using the -b option of gpt to specify the 
starting block number of the partition to be created. Anyway, this does 
not seem as intuitive as it should be. Since partitioning is usually a 
one-time effort, the priority is not that high. I would still like to 
understand why the gaps exist (maybe for alignment?) and whether the 
reported behavior of the gpt command is expected.


ganymed# gpt add -b 50446336 -t linux-lvm -l lvm wd0
/dev/rwd0: Partition 4 added: e6d6d379-f507-44c2-a23c-238f2a3df928 
50446336 3856582799


ganymed# gpt show wd0
       start         size  index  contents
           0            1         PMBR
           1            1         Pri GPT header
           2           32         Pri GPT table
          34         2014         Unused
        2048       262144      1  GPT part - EFI System
      264192         2048         Unused
      266240     33554432      2  GPT part - NetBSD FFSv1/FFSv2
    33820672         2048         Unused
    33822720     16623616      3  GPT part - NetBSD swap
    50446336   3856582799      4  GPT part - Linux LVM
  3907029135           32         Sec GPT table
  3907029167            1         Sec GPT header

Kind regards
Matthias

P.S. This is on NetBSD 9.0 Release.


Re: Getting undefined reference to ___tls_get_addr when building packages on netbsd-5.2/i386

2020-08-03 Thread Kamil Rytarowski
On 03.08.2020 10:48, Brian Buhrow wrote:
>   hello.  Following up on my own thread, I've figured out that the
> symbol in question ___tls_get_addr shows up in libbfd.a if I install
> pkgsrc/devel/binutils. The question now is, how can I get my packages to
> link against that library?   And, will that library successfully load
> binaries on NetBSD-5.2/i386?
> Thanks for any ideas on this.  
> 
> BTW, I found the following page on NetBSD and thread  local storage, which
> seems to be the trouble here.  So the question is, can I work around the
> lack of native tls in NetBSD-5, which doesn't appear to have it, by using
> binutils, which does?  And, if so, how do I build packages using that
> instead of the NetBSD native tools?
> 
> -thanks
> -Brian
> 
> http://www.netbsd.org/~mjf/tls/tasks.html
> 

___tls_get_addr is delivered on i386 in /usr/libexec/ld.elf_so and on
amd64 in /usr/libexec/ld.elf_so-i386

I don't know what the situation with this symbol is in NetBSD-5.2.
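
One way to check whether a given ld.elf_so exports the symbol (a sketch;
output depends on the release and on whether the dynamic symbol table is
present):

nm -D /usr/libexec/ld.elf_so | grep tls_get_addr
# or, equivalently
objdump -T /usr/libexec/ld.elf_so | grep tls_get_addr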





Re: Getting undefined reference to ___tls_get_addr when building packages on netbsd-5.2/i386

2020-08-03 Thread Brian Buhrow
hello.  Following up on my own thread, I've figured out that the
symbol in question ___tls_get_addr shows up in libbfd.a if I install
pkgsrc/devel/binutils. The question now is, how can I get my packages to
link against that library?   And, will that library successfully load
binaries on NetBSD-5.2/i386?
Thanks for any ideas on this.  

BTW, I found the following page on NetBSD and thread  local storage, which
seems to be the trouble here.  So the question is, can I work around the
lack of native tls in NetBSD-5, which doesn't appear to have it, by using
binutils, which does?  And, if so, how do I build packages using that
instead of the NetBSD native tools?

-thanks
-Brian

http://www.netbsd.org/~mjf/tls/tasks.html