Re: FOSDEM 2019 - Embedded FreeBSD on a five-core RISC-V processor using LLVM

2019-02-03 Thread Dinesh Thirumurthy
The recording of Benny's talk is at
https://video.fosdem.org/2019/K.3.401/netbsd_update.mp4

The recording of Thomas's talk has not been uploaded yet. It should show up
sometime this week at
https://video.fosdem.org/2019/K.3.401/

The schedule for the BSD track that took place is at
https://fosdem.org/2019/schedule/track/bsd/

Thanks.
Regards,
Dinesh



On 2/2/19, Benny Siegert  wrote:
> Don't forget that there are two NetBSD talks this year at FOSDEM:
>
> https://fosdem.org/2019/schedule/event/netbsd_update/ (by me, 13:00)
>
> and
>
> https://fosdem.org/2019/schedule/event/kleak/ (Thomas Barabosch, 13:25)
>
> On Fri, Feb 1, 2019 at 11:24 PM Dinesh Thirumurthy
>  wrote:
>>
>> Hi,
>>
>> This talk
>>
>> https://fosdem.org/2019/schedule/event/testing_freebsd_risc_v5/
>>
>> is being presented at 1130 UTC on Sat, Feb 2nd. You can view it via streaming.
>>
>> The BSD track is at https://fosdem.org/2019/schedule/track/bsd/
>>
>> The RISC-V track is at https://fosdem.org/2019/schedule/track/risc_v/
>>
>> Thanks.
>> Regards
>> Dinesh
>
>
>
> --
> Benny
>


Re: Video Driver for Intel - resolution stuck at 800x600

2019-02-03 Thread David Brownlee
The NetBSD kernel includes ~all the hardware drivers, network stack, drm
(Direct Rendering Manager) display code, virtual memory management and
related facilities. By dropping a current kernel onto a NetBSD-8 install
you can take advantage of any changes in the above, while still keeping all
of the libraries, programs, and installed apps from a standard NetBSD-8
install.

You don't get the advantage of any userland library and program
improvements since NetBSD-8, but it makes a good compromise.

Critically this allows you to switch back and forth between running with a
current kernel and a NetBSD-8 kernel by just rebooting - so if there is a
problem with a current kernel you can easily go back to a stock install.

You just need to add entries to /boot.cfg to allow you to switch.

The script I sent assumes /current already exists and that you're running
amd64. I've attached an updated version which should handle that, plus it
will run as root without sudo (if you do not have sudo set up). It also
notes the lines you should add to /boot.cfg.
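
(Not the script's literal output, just a sketch of the kind of entries meant,
assuming the -current kernel ends up as /current/netbsd; the default= line
makes the second entry the default:)

  menu=Boot netbsd-8:boot netbsd
  menu=Boot current:boot /current/netbsd
  default=2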

I've taken the liberty of cc'ing the list on this reply (hope you do not
mind), because
a) If anyone else was reading the original reply they may as well have the
slightly improved hacky script
b) If I'm sending something which will be run directly or indirectly as
root, it's always nice to have it available for other eyes to confirm there
is nothing nefarious :)

Thanks

David

On Sun, 3 Feb 2019 at 12:27, Ron Georgia  wrote:

> David,
> Thank you for responding, I hope you do not mind me sending you email
> directly. I have a question; please excuse my ignorance. What does the
> current kernel with NetBSD 8.0 buy me? Does that bring in some of the new
> drivers?
> If I understand correctly, I simply install NetBSD 8.0, then I follow (or
> run) the script you included, is that correct?
>
> On 2/1/19, 8:59 AM, "David Brownlee"  wrote:
>
> On Fri, 1 Feb 2019 at 12:36, Ron Georgia  wrote:
> >
> > " Why not just run NetBSD-current if that works with your card?"
> > A most excellent question, with a relatively embarrassing answer: I
> am not sure how to keep NetBSD-current, current. I am part of the
> NetBSD-current mailing list and read about different issues others are
> experiencing; however, I do not really know how to update the base OS or
> apply a particular (suggested) patch. I did read the "Tracking
> NetBSD-current" page, but it seems confusing to me.
> >
> > Thank you for responding. I'll try current again.
>
> You might want to try just running a current kernel first - I'm
> running stock netbsd-8 userland and packages and just a current kernel
> on my T530...
>
> I set up boot.cfg to default to a new option (boot '/current'), then
> have this quickly hacked-up script that I run every so often to update the
> current kernel:
>
> David
>
>
>
>


update-kernel
Description: Binary data
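
(The attached script itself is stripped by the list archive. Purely as a
sketch of the idea, not the actual attachment, something along these lines
would do the job, assuming the daily amd64 HEAD builds on nycdn.netbsd.org
and the /current layout described above:)

  #!/bin/sh
  # Fetch the latest -current GENERIC kernel and drop it in as /current/netbsd.
  set -e
  url=http://nycdn.netbsd.org/pub/NetBSD-daily/HEAD/latest/amd64/binary/kernel/netbsd-GENERIC.gz
  mkdir -p /current
  # NetBSD's ftp(1) can fetch HTTP URLs; -o names the output file
  ftp -o /current/netbsd-GENERIC.gz "$url"
  gunzip -f /current/netbsd-GENERIC.gz
  # picked up by the "boot /current/netbsd" entry in /boot.cfg
  mv /current/netbsd-GENERIC /current/netbsd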


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-03 Thread tlaronde
Hello,

On Sun, Feb 03, 2019 at 12:07:06PM +, Sad Clouds wrote:
> On Sun, 3 Feb 2019 11:27:07 +0100
> tlaro...@polynum.com wrote:
> 
> > With all your help and from this summary, I suspect that the probable
> > culprit is 3) above (linked also to 2) but mainly 3): an instance
> > of Samba, serving a 10T or a 100T request is blocking on I/O,
> 
> You must have some ancient hardware that is not capable of utilising
> full network bandwidth.
> 
> I have a Sun Ultra10 with a 440MHz CPU; it has a 1000baseT PCI network
> card, but when sending TCP data it can only manage around 13 MiB/sec.
> Looking at 'vmstat 1' output it is clear that the CPU is 100% busy, 50%
> system + 50% interrupt.
> 
> ultra10$ ifconfig wm0
> wm0: flags=0x8843 mtu 1500
> capabilities=2bf80
> capabilities=2bf80
> capabilities=2bf80
> enabled=0
> ec_capabilities=7
> ec_enabled=0
> address: 00:0e:04:b7:7f:47
> media: Ethernet autoselect (1000baseT full-duplex,flowcontrol,rxpause,txpause)
> status: active
> inet 192.168.1.3/24 broadcast 192.168.1.255 flags 0x0
> inet6 fe80::20e:4ff:feb7:7f47%wm0/64 flags 0x0 scopeid 0x2
> 
> 
> ultra10$ ./sv_net -mode=cli -ip=192.168.1.2 -port= -threads=1 -block=64k -size=100m
> Per-thread metrics:
>   T 1  connect 1.01 msec,  transfer 7255.29 msec (13.78 MiB/sec, 9899.03 Pks/sec)
> 
> Per-thread socket options:
>   T 1  rcvbuf=33580,  sndbuf=49688,  sndmaxseg=1460,  nodelay=Off
> 
> Aggregate metrics:
>   connect  1.01 msec
>   transfer 7255.29 msec (13.78 MiB/sec, 9899.03 Pks/sec)

Thank you for the data.

But there is one point that I failed to mention: on some nodes, the
connections to the server sometimes become totally unresponsive for
several seconds.

The server, while not brand new, is a decent dual-core Intel machine less
than 5 years old with plenty of RAM, upgraded to NetBSD 7.2, so it should
be able to drive the Intel gigabit card at full speed.

Thanks to your data, I see that on some hardware, despite the
capabilities of the ethernet card, I might not get the full speed.

But since (as I forgot to write) the overall performance via the Samba
shares sometimes drops, and since, from the answers, the bottleneck is not
the ethernet card of the server, and since something is affecting
everything, the problem logically seems to be on the server: it is on the
server that some process is grabbing one of the cores.

Thanks for your answer. I already learned valuable things!
-- 
Thierry Laronde 
 http://www.kergis.com/
   http://www.sbfa.fr/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-03 Thread Sad Clouds
On Sun, 3 Feb 2019 11:27:07 +0100
tlaro...@polynum.com wrote:

> With all your help and from this summary, I suspect that the probable
> culprit is 3) above (linked also to 2) but mainly 3): an instance
> of Samba, serving a 10T or a 100T request is blocking on I/O,

You must have some ancient hardware that is not capable of utilising
full network bandwidth.

I have a Sun Ultra10 with a 440MHz CPU; it has a 1000baseT PCI network
card, but when sending TCP data it can only manage around 13 MiB/sec.
Looking at 'vmstat 1' output it is clear that the CPU is 100% busy, 50%
system + 50% interrupt.

ultra10$ ifconfig wm0
wm0: flags=0x8843 mtu 1500
capabilities=2bf80
capabilities=2bf80
capabilities=2bf80
enabled=0
ec_capabilities=7
ec_enabled=0
address: 00:0e:04:b7:7f:47
media: Ethernet autoselect (1000baseT full-duplex,flowcontrol,rxpause,txpause)
status: active
inet 192.168.1.3/24 broadcast 192.168.1.255 flags 0x0
inet6 fe80::20e:4ff:feb7:7f47%wm0/64 flags 0x0 scopeid 0x2


ultra10$ ./sv_net -mode=cli -ip=192.168.1.2 -port= -threads=1 -block=64k -size=100m
Per-thread metrics:
  T 1  connect 1.01 msec,  transfer 7255.29 msec (13.78 MiB/sec, 9899.03 Pks/sec)

Per-thread socket options:
  T 1  rcvbuf=33580,  sndbuf=49688,  sndmaxseg=1460,  nodelay=Off

Aggregate metrics:
  connect  1.01 msec
  transfer 7255.29 msec (13.78 MiB/sec, 9899.03 Pks/sec)



Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-03 Thread tlaronde
Hello,

And thank you all for the answers!

So that the various pieces of information do not have to be pieced
together from several messages, I will summarize:

My initial question: I have a NetBSD server serving FFSv2 filesystems
via Samba (latest pkgsrc version) through a 1000T ethernet card to a bunch
of heterogeneous Windows clients, differing both in OS version and in
ethernet cards, which range from 10T to 1000T. All the nodes are connected
to a Cisco switch. The network performance (for Samba) seems poor, and I
wondered how the various ethernet speeds (10 to 1000) could affect it by
impacting the negotiations on the server's auto-select 1000T card.

From your answers (the details given are worth reading, and anyone reading
this should go back to the full answers; I'm just summarizing, if I'm not
mistaken):

1) The negotiations are handled by the switch; the server card does not
handle them by itself;
2) On the server, the capabilities of the disks being served should be
determined;
3) On the server, Samba is not multithreaded and spawns an instance for
each connection, so even on a multicore machine it may not use all the
cores, and even if it does, the instances still compete with each other;
4) The network performance should be measured, for example with iperf3,
available in pkgsrc.
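
(As a sketch, assuming iperf3 has been installed from pkgsrc on the server
and an iperf3 client is available on one of the nodes:)

  # on the NetBSD server
  iperf3 -s
  # on a client: measure TCP throughput towards the server for 10 seconds
  iperf3 -c <server-ip> -t 10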

From your questions asking for further details:

For the switch:
a) The switch is a 16-port Cisco gigabit switch (RV325), capable of
handling simultaneous full-duplex gigabit on all the ports;

b) The cards are correctly detected at their maximum speed by the switch:
the LEDs correctly indicate gigabit for the gigabit cards, and non-gigabit
(no distinction between 10T and 100T) for the others.

For the poor performance:
a) On a Windows node (Windows 10 if I remember correctly; I'm not on site)
with a Gigabit Ethernet card, downloading from a Samba share gives about
12 MB/s, which is the maximum throughput of a 100T card; uploading to the
server via Samba gives about 3 MB/s;
b) Testing on another Windows node with a 100T card, I get the same
throughput whether copying via Samba or using ftp.

With all your help and from this summary, I suspect that the probable
culprit is 3) above (linked also to 2) but mainly 3): an instance
of Samba, serving a 10T or a 100T request is blocking on I/O, especially
on writing (sync?), while the other instances wait for a chance to get a
slice of CPU. That is, the problem is probably with caching and syncing:
there are some Samba parameters for this in the config file, but the whole
thing is a bit cryptic... I would also like to use NFS, but with Microsoft
having allowed and then dropped it, I don't know whether the NFS client on
Windows can still be installed without having to install a full Linux-based
distribution...
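
(For reference, the sync-related knobs usually pointed at in smb.conf are of
the following kind; this is only a sketch from memory, defaults and effects
vary between Samba versions, so check smb.conf(5) before changing anything:)

  [global]
    # do not fsync on every client sync request
    strict sync = no
    # do not fsync after every write
    sync always = no
    # cheaper large sequential reads
    use sendfile = yes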

Thank you all once again!

Best regards,
-- 
Thierry Laronde 
 http://www.kergis.com/
   http://www.sbfa.fr/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C


Re: amd64 shutdown during USB rsync

2019-02-03 Thread Martin Husemann
On Sat, Feb 02, 2019 at 11:13:26PM +, Mark Carroll wrote:
> Hello. I'm trying out NetBSD 8.0 (GENERIC) #0 on an old Intel Broadwell
> NUC.

Please try 8.0_STABLE from

http://nycdn.netbsd.org/pub/NetBSD-daily/netbsd-8/latest/

Martin


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-03 Thread Sad Clouds
On Sat, 2 Feb 2019 17:01:18 +0100
tlaro...@polynum.com wrote:

> Hello,
> 
> I have a NetBSD serving FFSv2 filesystems to various Windows nodes via
> Samba.
> 
> The network efficiency seems to me underpar.

And how did you determine that? There are so many factors that can
affect performance, you need to run detailed tests and work your way
down the list. Normally, good switches are designed to handle
throughput for the number of ports they have, i.e. their switching
fabric should be able to cope with all of those ports transmitting at
the same time at the highest supported rate. So quite often it's not
the switch that causes performance issues, but disk I/O latency and
protocol overheads.

At work, I used to spend a fair amount of my time diagnosing SMB and
NFS performance issues. Below is a checklist that I would normally
run through:

- Check network speeds between various hosts using something like
ttcp or iperf. This should give you baseline performance for TCP
throughput with your hardware.

- Understand how your SMB clients are reading/writing data, i.e. what
block size they use, and whether they do long sequential or small random
reads/writes.

- Understand disk I/O latency on your SMB server. What throughput can
it sustain for the workloads from your clients? (See the rough dd/iostat
sketch after this list.)

- What SMB protocol versions are your clients using: SMB1, SMB2 or SMB3?
Later versions of the SMB protocol are more efficient. Are your clients
using the SMB signing feature? This can cause as much as a 20% performance
hit.

- Understand how many concurrent streams are running and whether they
come from a single client or from multiple clients. The Samba server is
not multithreaded, so I think it forks a single child process per client.
This means it won't scale on multicore hardware if you are running many
concurrent streams all from a single client. It is better to spread the
load across many different clients; this way multiple Samba server
processes can be scheduled to run on multiple CPU cores.

... and the list goes on.
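
(Referring back to the disk I/O point above, a rough way to get a baseline on
the server itself; the path is only an example, and throughput from /dev/zero
is merely indicative:)

  # sequential write of ~1 GiB into the exported filesystem, then read it back
  dd if=/dev/zero of=/path/to/share/testfile bs=64k count=16384
  dd if=/path/to/share/testfile of=/dev/null bs=64k
  # in another terminal, watch per-disk activity while clients copy files
  iostat -x 1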