amd64 shutdown during USB rsync

2019-02-02 Thread Mark Carroll
Hello. I'm trying out NetBSD 8.0 (GENERIC) #0 on an old Intel Broadwell
NUC. I have a couple of USB3 portable HDDs plugged into it, each with an
encrypted FFS filesystem mounted via cgd. Doing an rsync from one to the
other gets around 800GB through but can't finish: each time it copies a
little more, then abruptly shuts the machine down, leaving nothing
interesting at the end of the logs. I've not seen this machine shut down
without warning in any other circumstance. Watching CPU temperature with
envstat -i 10 shows nothing interesting there: the readings seem to be
down at around 40C just before the shutdown. In case it helps,

# usbdevs /dev/usb0
addr 0: xHCI Root Hub, vendor 8086
 addr 1: Expansion, Seagate
 addr 2: Elements 25A2, Western Digital
addr 0: xHCI Root Hub, vendor 8086

Is there anything obvious I should be looking at or trying? After the
latest fsck I might see if there are any USB BIOS settings to tweak.
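
In case it's useful, here is roughly what I plan to run alongside the next
attempt to capture more evidence (the mount points /mnt/src and /mnt/dst are
placeholders for my actual cgd-backed filesystems):

  # snapshot the kernel message buffer, then log temperatures during the copy
  dmesg > /var/tmp/dmesg.before
  envstat -i 10 > /var/tmp/envstat.log 2>&1 &
  rsync -a --partial /mnt/src/ /mnt/dst/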

-- Mark


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-02 Thread Johnny Billquist

On 2019-02-02 23:14, Michael van Elst wrote:

> tlaro...@polynum.com writes:
>
>> Is the speed adapted to each connected device? Or does the serving card
>> fix the speed, during a slice of time, for all connections to the minimum
>> speed?
>
> Autonegotiation means that the card and the switch communicate and
> agree on a speed.
>
>> What is the "cost" of switching the speed or, in other words, is
>> connecting a 10base card able to slow down the whole throughput of the
>> card even for other devices---due to the overhead of switching the speed
>> depending on connected devices?
>
> The speed isn't switched unless you disconnect or reconnect the
> ethernet cable, but if the configuration is unchanged, card and
> switch will usually agree on the same speed as before.
>
>> (The other question relates to the switch but not to NetBSD: does the
>> switch have a table for the connected devices and buffer the
>> transactions, rewriting the packets to adjust for the speed of each of
>> the devices?).
>
> Almost all switches will allow you to connect devices with different
> speeds, and packets that are received at one speed are buffered and
> sent out at possibly a different speed. The packets don't need to
> be rewritten.


Right. And I would say "all switches". That's essentially the difference
between a hub and a switch. A hub just forwards the bits in real time,
while a switch stores and forwards each packet. Packets are never
rewritten, but a switch has buffers, so it can handle different
speeds on different ports.
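
To put a rough number on it: a full-size 1500-byte frame is 12,000 bits, so
it takes about 1.2 ms to clock in on a 10 Mbit/s port but only about 12 us
to clock out on a 1000 Mbit/s port; the switch simply holds it in its buffer
in between.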



> Differences in speed between sender and receiver are usually handled
> by higher level protocols (e.g. TCP) or by lower level protocols
> (802.3x) when device and switch support it.


Well, in general, it is not "handled" at all. The packets are received
by the switch at one speed, and sent out on a different port at a
different speed. The two machines communicating will be unaware of the
fact that they are actually communicating at different speeds. The
switch just uses some memory and introduces a small delay.


The one problem is when the switch runs out of memory. For
example, if a machine on a 100 Mbit/s link is producing data at
full speed, and it needs to be forwarded out a 10 Mbit/s port. But that's
simply a case of the switch then just dropping packets.
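
For a sense of scale: in that example the switch's buffer fills at roughly
90 Mbit/s, a bit over 11 MB per second, so even several megabytes of buffer
memory are exhausted within a fraction of a second and drops begin.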


A more clever switch will use random early drops (RED), which
potentially has less impact on TCP connections than outright dropping
all packets once the switch is actually out of memory. But all of that is
just an attempt at protocol performance improvement based on knowledge of
how TCP reacts to dropped packets.


  Johnny

--
Johnny Billquist  || "I'm on a bus
  ||  on a psychedelic trip
email: b...@softjar.se ||  Reading murder books
pdp is alive! ||  tryin' to stay hip" - B. Idol


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-02 Thread Michael van Elst
tlaro...@polynum.com writes:

>Is the speed adapted to each connected device? Or does the serving card
>fix the speed, during a slice of time, for all connections to the minimum
>speed?

Autonegotiation means that the card and the switch communicate and
agree on a speed.


>What is the "cost" of switching the speed or, in other words, is
>connecting a 10base card able to slow down the whole throughput of the
>card even for other devices---due to the overhead of switching the speed
>depending on connected devices?

The speed isn't switched unless you disconnect or reconnect the
ethernet cable, but if the configuration is unchanged, card and
switch will usually agree on the same speed as before.


>(The other question relates to the switch but not to NetBSD: does the
>switch have a table for the connected devices and buffer the
>transactions, rewriting the packets to adjust for the speed of each of
>the devices?).

Almost all switches will allow you to connect devices with different
speeds, and packets that are received at one speed are buffered and
sent out at possibly a different speed. The packets don't need to
be rewritten.

Differences in speed between sender and receiver are usually handled
by higher level protocols (e.g. TCP) or by lower level protocols
(802.3x) when device and switch support it.

-- 
Michael van Elst
Internet: mlel...@serpens.de
"A potential Snark may lurk in every tree."


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-02 Thread Greg Troxel
tlaro...@polynum.com writes:

> I have a NetBSD server serving FFSv2 filesystems to various Windows nodes via
> Samba.
>
> The network efficiency seems subpar to me.

I would recommend trying to test with ttcp or some such first, to
establish packet-handling baseline performance separate from remote
filesystem issues.
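
For instance, a minimal sketch (ttcp is available in pkgsrc; exact flags can
differ a little between ttcp variants, and "fileserver" below is just a
placeholder for the NetBSD box's name):

  # on the NetBSD server: receive and discard a test stream
  ttcp -r -s
  # on a client: transmit generated data and report throughput
  ttcp -t -s fileserver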

Remember that network speeds are given in Mb/s but data throughput is
often in MB/s (bits vs bytes).

Then, there are various header overheads (ethernet, IP, TCP).  Many
interfaces cannot run at full Gb/s rate, but most 100 Mb/s ones get
close (for semi-modern or modern computers).

So you should explain what rates you are actually seeing.  80 Mb/s on
100 Mb/s is great, and 300 Mb/s on Gb/s ethernet is not unusual.
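
To make the units concrete: 100 Mb/s is 12.5 MB/s raw, and after
ethernet/IP/TCP header overhead the practical payload ceiling is somewhere
around 11-12 MB/s; a gigabit link is 125 MB/s raw, so the 300 Mb/s figure
above corresponds to roughly 37 MB/s of file data.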

Plus, seek time and disk read time can start to matter.

> There is very probably Samba tuning involved, and Windows tuning too. But a
> question arose about the varying speeds of the ethernet cards
> connecting to a card on the NetBSD server that can auto-select its
> speed between 10 and 1000 Mb/s.
>
> The Windows boxes are very heterogeneous (one might even say that no two
> run the same Windows OS version, because some hardware is quite
> old) and the cards range from 10 to 1000 Mb/s capable ethernet devices.
>
> Needless to say, there is a switch (Cisco) on which all the nodes are
> connected.

Modern (after 10base2 went away in the 90s) Ethernet is all point to
point, with switches or hubs.  These days, hubs are extremely rare.  I
believe hubs can only handle one speed.  There were certainly 10baseT
hubs, and I remember 10/100 hub/switch combos that were actually a 10
Mb/s hub and a 100 Mb/s hub with a 2-port switch between them, where each
port got connected to the right hub depending on what was plugged in.
But anything from the last 5 years, maybe even 10 years, is almost
certainly a full switch.

A switch will negotiate a speed with each connected device.  Most have
lights to show you what was negotiated, explained in the manual.

Sometimes autonegotiation is flaky (speed, and full-duplex) and it helps
to force a speed on each client.  But I suspect, without a good basis,
that this is probably not your issue.
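
Inspecting or forcing media on a NetBSD interface looks roughly like the
following (wm0 is just a placeholder for whatever your interface is called):

  # list the media types the interface supports
  ifconfig -m wm0
  # force 100 Mb/s full duplex, as an example
  ifconfig wm0 media 100baseTX mediaopt full-duplex
  # return to autonegotiation
  ifconfig wm0 media autoselect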

> When concurrent accesses to an auto-select ethernet card are made by
> ethernet cards ranging from 10 to 1000 Mb/s, how is this handled by
> the card?

Each computer's interface negotiates a speed with the switch.  With a
gigabit switch, that should be the highest speed the interface supports.

Packets are then sent from card to switch, at that card's rate, and then
from switch to the other card, at the second card's rate.  There is no
per-packet negotiation of speeds.

> What is the "cost" of switching the speed or, in other words, is
> connecting a 10base card able to slow down the whole throughput of the
> card even for other devices---due to the overhead of switching the speed
> depending on connected devices?

Broadcast packets can be an issue.  Not because the individual links
change speed, but because if you have a 10 Mb/s link then all broadcast
packets have to be sent on it.
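
As an illustrative figure: a thousand 100-byte broadcast packets per second
is only about 0.8 Mb/s, negligible on a gigabit port but already 8% of a
10 Mb/s link.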

> (The other question relates to the switch but not to NetBSD: does the
> switch have a table for the connected devices and buffer the
> transactions, rewriting the packets to adjust for the speed of each of
> the devices?).

If it is truly a switch, yes.  But not so much rewriting as clocking
them out at the right speed (with the right modulation).


Usually this sort of switch speed issue works just fine without anybody
having to pay attention.

I would try to measure the file read speed from each machine to the
NetBSD server, and make a table with machine name, local link speed, and
rate, and see if it makes any sense.  Also perhaps try the
newest/fastest windows box with the others all powered off.  Having a
machine on and idle should not really change things, and that's another
data point.

If you follow up on the list, it would be good to give the switch
model, and to find out what speed each device is connected at.  On
NetBSD, 'ifconfig' will show you.  An example line:

media: Ethernet autoselect (100baseTX full-duplex)

Probably there is some way to find this out on Windows.


Re: Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-02 Thread Andy Ruhl
On Sat, Feb 2, 2019 at 10:18 AM  wrote:
>
> Hello,
>
> I have a NetBSD server serving FFSv2 filesystems to various Windows nodes via
> Samba.
>
> The network efficiency seems subpar to me.
>
> There is very probably Samba tuning involved, and Windows tuning too. But a
> question arose about the varying speeds of the ethernet cards
> connecting to a card on the NetBSD server that can auto-select its
> speed between 10 and 1000 Mb/s.
>
> The Windows boxes are very heterogeneous (one might even say that no two
> run the same Windows OS version, because some hardware is quite
> old) and the cards range from 10 to 1000 Mb/s capable ethernet devices.
>
> Needless to say, there is a switch (Cisco) on which all the nodes are
> connected.
>
> When concurrent accesses to an auto-select ethernet card are made by
> ethernet cards ranging from 10 to 1000 Mb/s, how is this handled by
> the card?
>
> Is the speed adapted to each connected device? Or does the serving card
> fix the speed, during a slice of time, for all connections to the minimum
> speed?
>
> What is the "cost" of switching the speed or, in other words, is
> connecting a 10base card able to slow down the whole throughput of the
> card even for other devices---due to the overhead of switching the speed
> depending on connected devices?
>
> (The other question relates to the switch but not to NetBSD: does the
> switch have a table for the connected devices and buffer the
> transactions, rewriting the packets to adjust for the speed of each of
> the devices?).
>
> If someone has any clue on the subject, I will be very thankful to
> learn!

As you probably suspect, this isn't a NetBSD issue, and it's something
you can read up on extensively on the internet. Maybe you just need a place
to start, which is often where I find myself on many subjects. I'll probably
miss something.

The switch negotiates connections on a port by port basis. So if one
device is 1 gig, it will negotiate 1 gig. If another device is 10
megabit, it will negotiate that. Each port is a separate entity. Then
you have half vs. full duplex. So what happens when they talk?

The switch can do something at layer 2 called RED or WRED (Weighted
Random Early Detection) to manage its queues when one port is being fed
faster than it can drain. It's not really an ideal place to be, and it
usually happens when either you have different adapter speeds or you have
a whole lot of machines on lots of ports trying to overrun 1 port (like an
uplink port).

But you're hoping it doesn't come to that. It's best if TCP just does
its thing and sets the window size to one that both sides can handle
nicely, so that things "just work". RED or WRED will still happen, but
hopefully less.
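
For example, if the receiver advertises a 64 KB window, the sender can have
at most 64 KB of unacknowledged data in flight, which naturally paces a
gigabit sender down toward what a slower receiver can actually drain.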

I'd love someone to correct me if I'm wrong on this.

If you're asking if using a 10 megabit adapter is the best way to do
traffic shaping, it isn't, and that's a whole different subject that
probably doesn't belong here.

Andy


Ethernet auto-select and concurrent 10, 100 and 1000 connections

2019-02-02 Thread tlaronde
Hello,

I have a NetBSD server serving FFSv2 filesystems to various Windows nodes via
Samba.

The network efficiency seems subpar to me.

There is very probably Samba tuning involved, and Windows tuning too. But a
question arose about the varying speeds of the ethernet cards
connecting to a card on the NetBSD server that can auto-select its
speed between 10 and 1000 Mb/s.

The Windows boxes are very heterogeneous (one might even say that no two
run the same Windows OS version, because some hardware is quite
old) and the cards range from 10 to 1000 Mb/s capable ethernet devices.

Needless to say, there is a switch (Cisco) on which all the nodes are
connected.

When concurrent accesses to an auto-select ethernet card are made by
ethernet cards ranging from 10 to 1000 Mb/s, how is this handled by
the card?

Is the speed adapted to each connected device? Or does the serving card
fix the speed, during a slice of time, for all connections to the minimum
speed?

What is the "cost" of switching the speed or, in other words, is
connecting a 10base card able to slow down the whole throughput of the
card even for other devices---due to the overhead of switching the speed
depending on connected devices?

(The other question relates to the switch but not to NetBSD: does the
switch have a table for the connected devices and buffer the
transactions, rewriting the packets to adjust for the speed of each of
the devices?).

If someone has any clue on the subject, I will be very thankful to
learn!

TIA,
-- 
Thierry Laronde 
 http://www.kergis.com/
   http://www.sbfa.fr/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C


Re: FOSDEM 2019 - Embedded FreeBSD on a five-core RISC-V processor using LLVM

2019-02-02 Thread Benny Siegert
Don't forget that there are two NetBSD talks this year at FOSDEM:

https://fosdem.org/2019/schedule/event/netbsd_update/ (by me, 13:00)

and

https://fosdem.org/2019/schedule/event/kleak/ (Thomas Barabosch, 13:25)

On Fri, Feb 1, 2019 at 11:24 PM Dinesh Thirumurthy
 wrote:
>
> Hi,
>
> This talk
>
> https://fosdem.org/2019/schedule/event/testing_freebsd_risc_v5/
>
> is being presented at 1130 UTC Sat Feb 2nd. You can view via streaming.
>
> The BSD track is at https://fosdem.org/2019/schedule/track/bsd/
>
> The RISC-V track is at https://fosdem.org/2019/schedule/track/risc_v/
>
> Thanks.
> Regards
> Dinesh



-- 
Benny