NetBSD 7.1.1 cairo update issues

2018-03-20 Thread Riccardo Mottola

Hi,

I am running 7.1.1 on x86. I updated pkgsrc, and pkg_rolling-replace tells me
that cairo needs rebuilding; the build fails because it pulls in MesaLib, whose
configure step fails:

checking for SHA1 implementation... libc
checking for LIBUDEV... no
checking for LIBDEVQ... no
Please specify at least one package name on the command line.
--print-errors: not found
checking for GLPROTO... yes
configure: error: Direct rendering requires libdrm >= 2.4.60
*** Error code 1

Stop.
make[2]: stopped in /usr/pkgsrc/graphics/MesaLib
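
The "--print-errors: not found" line suggests configure is invoking pkg-config
through an unset variable, so it is worth checking what pkg-config and libdrm
the build actually sees. A rough sketch, assuming both come from pkgsrc:

  # is pkg-config on the PATH at all?
  which pkg-config
  # which libdrm does it find, and does it pass MesaLib's version check?
  pkg-config --modversion libdrm
  pkg-config --exists --print-errors 'libdrm >= 2.4.60' && echo ok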


Where is the issue?

Riccardo


Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Robert Elz
Date: Tue, 20 Mar 2018 14:18:31 +
From: Chavdar Ivanov


  | Anyway, nothing so far explains Martin's results being just a tad below
  | those of Linux while everyone else gets speeds 5-6 times slower.

What are the file system parameters? It is easy to make ffs go slow if it is
not set up properly.
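
For example, the block and fragment sizes and the mount options can all
matter. A rough way to inspect what is in effect (assuming the test
filesystem sits on wd0e):

  # superblock parameters: block size, fragment size, optimization, ...
  dumpfs /dev/rwd0e | head -20
  # mount options currently in effect (log, async, noatime, ...)
  mount -v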

Also remember the "fast" in ffs was designed to mean fast reading, not so much
writing (on the assumption that most files are read much more often than
they're written ... consider the contents of /usr/bin for example), and that it
was designed to operate with SMD disks, where the filesystem's knowledge of,
and control over, the drive is considerably greater than with SCSI or IDE.

kre



Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Chavdar Ivanov
On Tue, 20 Mar 2018 at 12:30, Sad Clouds wrote:

> Hello, a few comments on your tests:
>
> - Reading from /dev/urandom could be a bottleneck, depending on how that
> random data is generated. Best to avoid this; if you need random data, use
> a benchmarking tool that can generate random data quickly.
>

Obviously. I pre-created the file and measured the transfer between two
filesystems on different disks.

>
> - Writing to ZFS can give all sorts of results, i.e. it may be doing
> compression, encryption, deduplication, etc. You'd need to disable all
> those features in order to get results comparable to NetBSD's local file
> system.
>

Ditto; it is included for comparison only - e.g. see the figure when reading
/dev/zero, which is almost instantaneous.
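
If anyone wants to rule those features out, they are easy to check; a
sketch, assuming the dataset from the OmniOS figures quoted below is
data/testme:

  # any of these being enabled would skew a raw dd comparison
  zfs get compression,dedup data/testme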

Subsequently I did some FreeBSD tests as well, those were in line with
NetBSD.

Anyway, nothing so far explains Martin's results being just a tad below
those of Linux while everyone else gets speeds 5-6 times slower.

>
> - I think by default, dd does not call fsync() when it closes its output
> file; with GNU dd you need the conv=fsync argument, otherwise you could
> be benchmarking writes to the OS page cache instead of the virtual disk.
>

Right.

>
>
>
>
> On Tue, Mar 20, 2018 at 9:20 AM, Chavdar Ivanov wrote:
>
>> Well, testing with a file of zeroes is not a very good benchmark - see
>> the result for OmniOS/CE below:
>> 
>> ➜  xci dd if=/dev/zero of=out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 0.685792 secs (1458168149 bytes/sec)
>> 
>>
>> So I decided to switch to a previously created file of random contents and
>> move it with dd between two different disks. Here is what I get:
>> ---
>> CentOS 7.4 - XFS
>> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes (1.0 GB) copied, 9.6948 s, 103 MB/s
>> ➜  xci dd if=rand.out of=/data/rand.out bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes (1.0 GB) copied, 2.49195 s, 401 MB/s
>> OmniOS CE - ZFS
>> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 16.982885 secs (58882812 bytes/sec)
>> ➜  xci dd if=rand.out of=/data/testme/rand.out bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 21.341659 secs (46856713 bytes/sec)
>> NetBSD-current amd64 8.99.12 --- FFS
>> ➜  sysbuild dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 32.992 secs (30310378 bytes/sec)
>> ➜  sysbuild dd if=rand.out of=/usr/pkgsrc/rand.out bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 23.535 secs (42489908 bytes/sec)
>> 
>>
>> OmniOS/ZFS and NetBSD/FFS results are comparable, the CentOS/XFS one is a
>> bit hard to explain.
>>
>> This is on the same Windows 10 host as before.
>>
>> Chavdar
>>
>> On Mon, 19 Mar 2018 at 23:16, Chavdar Ivanov wrote:
>>
>>> I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
>>> the same as, or similar enough to, the one in CentOS; there was no significant
>>> difference between the two. The fastest figure came on the system disk when
>>> it was attached to an IDE controller with an ICH6 chipset: about 180 MB/sec

Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Sad Clouds
Hello, a few comments on your tests:

- Reading from /dev/urandom could be a bottleneck, depending on how that
random data is generated. Best to avoid this; if you need random data, use
a benchmarking tool that can generate random data quickly.

- Writing to ZFS can give all sorts of results, i.e. it may be doing
compression, encryption, deduplication, etc. You'd need to disable all
those features in order to get results comparable to NetBSD's local file
system.

- I think by default, dd does not call fsync() when it closes its output
file; with GNU dd you need the conv=fsync argument, otherwise you could
be benchmarking writes to the OS page cache instead of the virtual disk.
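
For example (a sketch only; gdd is the pkgsrc name for GNU dd, and the
paths follow the earlier tests):

  # flush file data to disk before dd reports its timing
  gdd if=rand.out of=/data/rand.out bs=1M conv=fsync
  # or open the output O_SYNC - much slower, but cache-proof
  gdd if=rand.out of=/data/rand.out bs=1M oflag=sync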




On Tue, Mar 20, 2018 at 9:20 AM, Chavdar Ivanov wrote:

> Well, testing with a file of zeroes is not a very good benchmark - see the
> result for OmniOS/CE below:
> 
> ➜  xci dd if=/dev/zero of=out bs=1000000 count=1000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes transferred in 0.685792 secs (1458168149 bytes/sec)
> 
>
> So I decided to switch to a previously created file of random contents and
> move it with dd between two different disks. Here is what I get:
> ---
> CentOS 7.4 - XFS
> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes (1.0 GB) copied, 9.6948 s, 103 MB/s
> ➜  xci dd if=rand.out of=/data/rand.out bs=1000000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes (1.0 GB) copied, 2.49195 s, 401 MB/s
> OmniOS CE - ZFS
> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes transferred in 16.982885 secs (58882812 bytes/sec)
> ➜  xci dd if=rand.out of=/data/testme/rand.out bs=1000000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes transferred in 21.341659 secs (46856713 bytes/sec)
> NetBSD-current amd64 8.99.12 --- FFS
> ➜  sysbuild dd if=/dev/urandom of=rand.out bs=1000000 count=1000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes transferred in 32.992 secs (30310378 bytes/sec)
> ➜  sysbuild dd if=rand.out of=/usr/pkgsrc/rand.out bs=1000000
> 1000+0 records in
> 1000+0 records out
> 1000000000 bytes transferred in 23.535 secs (42489908 bytes/sec)
> 
>
> OmniOS/ZFS and NetBSD/FFS results are comparable, the CentOS/XFS one is a
> bit hard to explain.
>
> This is on the same Windows 10 host as before.
>
> Chavdar
>
> On Mon, 19 Mar 2018 at 23:16, Chavdar Ivanov wrote:
>
>> I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
>> the same as, or similar enough to, the one in CentOS; there was no significant
>> difference between the two. The fastest figure came on the system disk when
>> it was attached to an IDE controller with an ICH6 chipset: about 180 MB/sec.
>> All other combinations return between 110 and 160 MB/sec. I tried with and
>> without host OS cache, and also the setting that marks the disk as solid
>> state. No apparent difference.
>>
>> My host system is Windows build 17120, so that may explain something,
>> though not the difference in figures compared to CentOS.
>>
>> Chavdar
>>
>> On Mon, 19 Mar 2018 at 23:06, m...@netbsd.org wrote:
>>
>>> On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:
>>> > Any setting which influences the test and that I didn't apply?
>>>
>>> Yes, we need to figure out what it takes to make GNU dd behave the same.
>>> It has different defaults.
>>>
>>


Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Fekete Zoltán

On 2018-03-20 at 00:05, m...@netbsd.org wrote:

On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:

Any setting which influences the test and that I didn't apply?


Yes, we need to figure out what it takes to make GNU dd behave the same.
It has different defaults.


OK, I installed a precompiled binary of coreutils-8.26.

/usr/pkg/bin/gdd averaged 105 MB/sec over 3 measurements, so there is no
significant difference between BSD dd and GNU dd.


As an addition:

I've run this test on a hardware-installed NetBSD 7.1.2 with a 1 TB SATA
drive, and the result is 153 MB/sec.

Intel Core2 Duo CPU, 8GB RAM.

FeZ


Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Chavdar Ivanov
Well, testing with a file of zeroes is not a very good benchmark - see the
result for OmniOS/CE below:

➜  xci dd if=/dev/zero of=out bs=1000000 count=1000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 0.685792 secs (1458168149 bytes/sec)


So I decided to switch to a previously created file of random contents and
move it with dd between two different disks. Here is what I get:
---
CentOS 7.4 - XFS
➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
1000+0 records in
1000+0 records out
1000000000 bytes (1.0 GB) copied, 9.6948 s, 103 MB/s
➜  xci dd if=rand.out of=/data/rand.out bs=1000000
1000+0 records in
1000+0 records out
1000000000 bytes (1.0 GB) copied, 2.49195 s, 401 MB/s
OmniOS CE - ZFS
➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 16.982885 secs (58882812 bytes/sec)
➜  xci dd if=rand.out of=/data/testme/rand.out bs=1000000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 21.341659 secs (46856713 bytes/sec)
NetBSD-current amd64 8.99.12 --- FFS
➜  sysbuild dd if=/dev/urandom of=rand.out bs=1000000 count=1000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 32.992 secs (30310378 bytes/sec)
➜  sysbuild dd if=rand.out of=/usr/pkgsrc/rand.out bs=1000000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 23.535 secs (42489908 bytes/sec)


OmniOS/ZFS and NetBSD/FFS results are comparable, the CentOS/XFS one is a
bit hard to explain.

This is on the same Windows 10 host as before.

Chavdar

On Mon, 19 Mar 2018 at 23:16, Chavdar Ivanov wrote:

> I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
> the same as, or similar enough to, the one in CentOS; there was no significant
> difference between the two. The fastest figure came on the system disk when
> it was attached to an IDE controller with an ICH6 chipset: about 180 MB/sec.
> All other combinations return between 110 and 160 MB/sec. I tried with and
> without host OS cache, and also the setting that marks the disk as solid
> state. No apparent difference.
>
> My host system is Windows build 17120, so that may explain something,
> though not the difference in figures compared to CentOS.
>
> Chavdar
>
> On Mon, 19 Mar 2018 at 23:06, m...@netbsd.org wrote:
>
>> On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:
>> > Any setting which influences the test and that I didn't apply?
>>
>> Yes, we need to figure out what it takes to make GNU dd behave the same.
>> It has different defaults.
>>
>


Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Chavdar Ivanov
I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
the same as, or similar enough to, the one in CentOS; there was no significant
difference between the two. The fastest figure came on the system disk when
it was attached to an IDE controller with an ICH6 chipset: about 180 MB/sec.
All other combinations return between 110 and 160 MB/sec. I tried with and
without host OS cache, and also the setting that marks the disk as solid
state. No apparent difference.

My host system is Windows build 17120, so that may explain something,
though not the difference in figures compared to CentOS.

Chavdar

On Mon, 19 Mar 2018 at 23:06, m...@netbsd.org wrote:

> On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:
> > Any setting which influences the test and that I didn't apply?
>
> Yes, we need to figure out what it takes to make GNU dd behave the same.
> It has different defaults.
>


Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Sad Clouds
On Mon, 19 Mar 2018 22:44:44 +
Chavdar Ivanov wrote:

> I managed to get mine to about 180 MB/sec; host I/O cache didn't make
> much difference, but I switched to an ICH9 chipset and an ICH6 SATA
> controller... Hold on, I just realised my root device is on an IDE
> controller, not SATA, which must have been the default setting for
> NetBSD in VirtualBox. I'll check using SATA.
> 
> My CentOS VM returns some 600 MB/sec.
> 
> Chavdar
> 

I suspect this may be due to Linux KVM/VirtualBox integration and
optimized paravirtualized drivers.



Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Sad Clouds
On Mon, 19 Mar 2018 16:17:33 +0100
Martin Husemann wrote:

> On Mon, Mar 19, 2018 at 12:06:44PM +, Sad Clouds wrote:
> > Hello, which virtual controller do you use in VirtualBox and do you
> > have "Use Host I/O Cache" selected on that controller? If yes, then
> > you need to disable it before running I/O tests, otherwise it
> > caches loads of data in RAM instead of sending it to disk.
> 
> I am not sure it makes sense to benchmark the host IO performance in
> this context ;-)
> 
> However: I have the default settings, PIIX4. This is netbsd-8 GENERIC,
> as of a few days ago.
> 
> Turning off the host IO cache makes no measurable difference for me.
> 
> Martin

Hmm... something strange is going on here: I can't get anywhere close
to the throughput that you're getting on a NetBSD-8 VM, and I use
similar settings. I'm running VirtualBox 5.2.8, and changing "Use Host
I/O Cache" made no difference for me; max throughput is always around
50 MBytes/sec.
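
For what it's worth, that setting can also be flipped from the host side,
which makes A/B runs easier to script (a sketch, assuming a VM named
"netbsd8" with a storage controller named "SATA"):

  VBoxManage storagectl "netbsd8" --name "SATA" --hostiocache off
  VBoxManage storagectl "netbsd8" --name "SATA" --hostiocache on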



Re: NetBSD disk performance on VirtualBox

2018-03-20 Thread Chavdar Ivanov
I managed to get mine to about 180 MB/sec; host I/O cache didn't make much
difference, but I switched to an ICH9 chipset and an ICH6 SATA controller...
Hold on, I just realised my root device is on an IDE controller, not SATA,
which must have been the default setting for NetBSD in VirtualBox. I'll check
using SATA.
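
A quick way to confirm from the host which controller each disk actually
hangs off (a sketch, assuming the VM is named "netbsd8"):

  VBoxManage showvminfo "netbsd8" | grep -i "storage controller"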

My CentOS VM returns some 600 MB/sec.

Chavdar


On Mon, 19 Mar 2018 at 22:39, Sad Clouds wrote:

> On Mon, 19 Mar 2018 16:17:33 +0100
> Martin Husemann  wrote:
>
> > On Mon, Mar 19, 2018 at 12:06:44PM +, Sad Clouds wrote:
> > > Hello, which virtual controller do you use in VirtualBox and do you
> > > have "Use Host I/O Cache" selected on that controller? If yes, then
> > > you need to disable it before running I/O tests, otherwise it
> > > caches loads of data in RAM instead of sending it to disk.
> >
> > I am not sure it makes sense to benchmark the host IO performance in
> > this context ;-)
> >
> > However: I have the default settings, PIIX4. This is netbsd-8 GENERIC,
> > as of a few days ago.
> >
> > Turning off the host IO cache makes no measurable difference for me.
> >
> > Martin
>
> Hmm... something strange is going on here: I can't get anywhere close
> to the throughput that you're getting on a NetBSD-8 VM, and I use
> similar settings. I'm running VirtualBox 5.2.8, and changing "Use Host
> I/O Cache" made no difference for me; max throughput is always around
> 50 MBytes/sec.
>