Advice for Users trying to help Driver Developers

2019-08-02 Thread Tom Smyth
Hello all,

I was wondering, as a user, what sort of testing and feedback
I can give to driver developers that would be useful / helpful
in improving driver functionality and performance in OpenBSD.

I'm particularly interested in OpenBSD network performance:
what tools / tests provide useful feedback to developers,
so as to make it a (little) easier for them to improve
the drivers (if and when that is necessary)?

An example problem is where a driver running on OpenBSD
is not performing as fast as a driver developed for
another OS, which might have used non-public (non-open-source)
information when initializing / setting up the hardware.

As far as I know, I need to provide developers with:
the output of the dmesg command,
debug output in the event of a crash (sendbug),

and performance comparisons under identical hardware conditions with
different operating systems, e.g. using iperf3 / tcpbench.

Is it useful to collect the output of pcidump -xxx
and compare it with the equivalent PCI config space dump
from other operating systems, e.g. lspci -vv on Linux?


Is it useful for devs to have users collect and diff this data
and present it to them?

Are there other tools / methodologies that would help users help
driver developers?

I'm interested in testing and helping improve network drivers, as
I'm using some of these interfaces in production (or want to run
them in production):

em
ix
vio
ixl
iavf
tap
vlan
egre
eoip
etherip
vxlan
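
A minimal collection script along the lines described above might look like this (a sketch, not an official procedure: the output directory is an arbitrary example, and each command is guarded in case it is unavailable on the system at hand):

```shell
#!/bin/sh
# Sketch: gather the data a driver developer is likely to ask for.
# /tmp/driver-report is an arbitrary example path; commands that are
# missing or fail on this system print a note instead of aborting.
out=/tmp/driver-report
mkdir -p "$out"

dmesg > "$out/dmesg.txt" 2>/dev/null || echo "dmesg unavailable"
pcidump -xxx > "$out/pcidump.txt" 2>/dev/null \
    || echo "pcidump unavailable (OpenBSD only)"
ifconfig -A > "$out/ifconfig.txt" 2>/dev/null \
    || echo "ifconfig -A unavailable"

echo "report written to $out"
```

The resulting pcidump.txt can then be compared, with plain diff(1), against the lspci -vv / lspci -xxx output captured under Linux on the same machine.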


Thanks for your time and suggestions

Tom Smyth


Re: Best 1Gbe NIC

2019-08-02 Thread Brian Brombacher
I find cheap PCI-Express and PCI-X em(4) cards suffice for my needs.  990-992 
Mbps with tcpbench.


> On Aug 2, 2019, at 11:26 AM, Claudio Jeker  wrote:
> 
>> On Fri, Aug 02, 2019 at 12:28:58PM +0100, Andy Lemin wrote:
>> Ahhh, thank you!
>> 
>> I didn’t realise this had changed and now the drivers are written with
>> full knowledge of the interface.
> 
> That is an overstatement, but we know for sure a lot more about these cards
> than many other, less open ones.
> 
>> So that would make Intel Server NICs (i350 for example) some of the best
>> 1Gbe cards nowadays then?
> 
> They are well supported by OpenBSD, as are many other server NICs like bge
> and bnx. I would not call them best; when it comes to network cards it
> seems to be a race to the bottom. All chips have stuff in them that is
> just not great. em(4), for example, needs a major workaround because the
> buffer size is specified by a bitfield.
> 
> My view is more pessimistic: all network cards are shit, there are just
> some that are less shitty. Also, I prefer to use em(4) over most other
> gigabit cards.
> 
> -- 
> :wq Claudio
> 
>> 
>> Sent from a teeny tiny keyboard, so please excuse typos
>> 
>>> On 2 Aug 2019, at 09:52, Jonathan Gray  wrote:
>>> 
>>>> On Fri, Aug 02, 2019 at 09:19:09AM +0100, Andy Lemin wrote:
>>>> Hi list,
>>>> 
>>>> I know this is a rather classic question, but I have searched a lot on
>>>> this again recently, and I just cannot find any conclusive up to date
>>>> information?
>>>> 
>>>> I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only
>>>> official comments I can find relate to 3COM for ISA, or community
>>>> consensus towards Chelsio for 10Gbe.
>>>> 
>>>> I know Intel works ok and I've used the i350's before, but my
>>>> understanding is that Intel still doesn't provide the documentation for
>>>> their NICs and so the emX driver is reverse engineered.
>>> 
>>> This is incorrect.  Intel provides datasheets for Ethernet parts.
>>> em(4) is derived from Intel authored code for FreeBSD supplied under a
>>> permissive license.
>>> 
>>>> 
>>>> And if I remember correctly some offload features were also disabled in
>>>> the emX driver a while back as some functions were found to be insecure
>>>> on die and so it was deemed safer to bring the logic back on CPU.
>>>> 
>>>> So I'm looking for the best 1Gbe NIC that supports the most
>>>> offloading/best driver support/performance etc.
>>>> 
>>>> Thanks, Andy.
>>>> 
>>>> PS; could we update the official supported hardware lists? ;)
>>>> All the best.
>>>> 
>>>> 
>>>> Sent from a teeny tiny keyboard, so please excuse typos
>>>> 
>> 
> 



Re: Kerberos SSH routing tables problem

2019-08-02 Thread Stuart Henderson
On 2019-07-29, Predrag Punosevac  wrote:
> Hi Misc,
>
> I am using an EdgeRouter Lite as a firewall/DNS caching resolver for one of
> our remote locations.
>
> ubnt1# uname -mrsv
> OpenBSD 6.5 GENERIC.MP#0 octeon
>
> The desktops behind the firewall have to use Kerberised SSH to perform
> some work on one of .mil servers. I opened egress ports kerberos,
> klogin, kshell TCP protocol as well as kerberos UDP. After the work is
> finished and desktops are "logged out" routing tables (dns) are in a bad
> state on the firewall. A simple
>
> pfctl -F all -f /etc/pf.conf
>
> fixes the problem and desktops can again do DNS resolving and surfing
> the Internet. 
>
> Could somebody give me a head start on how to go about further
> troubleshooting and fixing the problem? Obviously flushing states is not
> very convenient.
>
> Most Kind Regards,
> Predrag Punosevac
>
>

Can you go into some more details about what the "bad state" is?

"routing tables (dns) are in a bad state on the firewall" doesn't
explain much (and doesn't really make sense, dns has nothing to do with
routing tables..)
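
As an aside, if flushing does turn out to be the workaround, pfctl can flush just the state table rather than everything. A minimal sketch (guarded so it merely prints a note on systems without pfctl; running it for real requires root on the firewall):

```shell
#!/bin/sh
# Sketch: flush only the state table instead of "pfctl -F all",
# which also drops rules, tables, and statistics.
if command -v pfctl >/dev/null 2>&1; then
    pfctl -F states        # flush states only; loaded rules stay in place
else
    echo "pfctl not available on this system"
fi
status=ok
```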




Re: problem to copy a (possibly large) file over a network device

2019-08-02 Thread Stuart Henderson
On 2019-08-01, Rudolf Sykora  wrote:
> (The problem stays with scp, or anything, once a network device
> (localhost is enough) participates.)

(For your tar | nc, you'll need nc -N or nc -w1 or something to shut
down the connection after the end of file, though this is beside
the point if other methods are failing too.)

What happens if you "tcpbench -s" in one terminal, and "tcpbench
localhost" in another? (i.e. is the problem just with network access,
or does it only happen when also reading and/or writing files?)
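
Concretely, that check might look like the following sketch (tcpbench(1) ships in the OpenBSD base system; the block skips itself where the tool is absent, and the 5-second duration is just an example):

```shell
#!/bin/sh
# Sketch: loopback throughput check with tcpbench(1).
# Skipped gracefully on systems where tcpbench is not installed.
if command -v tcpbench >/dev/null 2>&1; then
    tcpbench -s &             # server side, listens on the default port
    server=$!
    sleep 1
    tcpbench -t 5 localhost   # client side: 5-second run over loopback
    kill "$server"
else
    echo "tcpbench not available on this system"
fi
status=done
```

If loopback throughput is fine but file transfers stall, the problem is more likely in the disk or filesystem path than in the network stack.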

> Nobody spoke about "deliberately", or working in a "binary" way.
> In the follow-up I added the information that the size of files can be
> larger than 10 GB. In some systems (e.g. plan9) there are quite a few
> limits/constants which lead, or can lead, to problems when
> exceeded. That is what was meant by the settings.

Should be no issues with FFS or NFSv3. NFSv2 and FAT do have smaller
limits.

I think others will have trouble figuring out what's going on unless
you can determine what's unusual about your system that is causing
this.




Re: su - root => segmentation fault

2019-08-02 Thread dmitry.sensei
But we have some bug in Heimdal's su.

Fri, Aug 2, 2019, 20:27 dmitry.sensei :

> Ok. Thanks.
>
> Fri, Aug 2, 2019, 20:25 Stuart Henderson :
>
>> On 2019-08-02, dmitry.sensei  wrote:
>> > Lol!
>> > ORLOV-NB$ kdump -f ktrace.out
>> >  58118 ktrace   RET   ktrace 0
>> >  58118 ktrace   CALL
>> execve(0x7f7d9100,0x7f7d9710,0x7f7d9730)
>> >  58118 ktrace   NAMI  "*/usr/local/heimdal/bin/su*"
>> >  58118 ktrace   ARGS
>> > [0] = "su"
>> > [1] = "-"
>> > [2] = "root"
>> > ORLOV-NB$ whereis su
>> > /usr/bin/su
>>
>> whereis isn't terribly useful, it doesn't use $PATH, instead uses a
>> fixed path of common directories.
>>
>> The "type" builtin in most Bourne-style shells is usually more helpful.
>>
>>
>>


Re: su - root => segmentation fault

2019-08-02 Thread dmitry.sensei
Ok. Thanks.

Fri, Aug 2, 2019, 20:25 Stuart Henderson :

> On 2019-08-02, dmitry.sensei  wrote:
> > Lol!
> > ORLOV-NB$ kdump -f ktrace.out
> >  58118 ktrace   RET   ktrace 0
> >  58118 ktrace   CALL
> execve(0x7f7d9100,0x7f7d9710,0x7f7d9730)
> >  58118 ktrace   NAMI  "*/usr/local/heimdal/bin/su*"
> >  58118 ktrace   ARGS
> > [0] = "su"
> > [1] = "-"
> > [2] = "root"
> > ORLOV-NB$ whereis su
> > /usr/bin/su
>
> whereis isn't terribly useful, it doesn't use $PATH, instead uses a
> fixed path of common directories.
>
> The "type" builtin in most Bourne-style shells is usually more helpful.
>
>
>


Re: Best 1Gbe NIC

2019-08-02 Thread Claudio Jeker
On Fri, Aug 02, 2019 at 12:28:58PM +0100, Andy Lemin wrote:
> Ahhh, thank you!
> 
> I didn’t realise this had changed and now the drivers are written with
> full knowledge of the interface.

That is an overstatement, but we know for sure a lot more about these cards
than many other, less open ones.

> So that would make Intel Server NICs (i350 for example) some of the best
> 1Gbe cards nowadays then?

They are well supported by OpenBSD, as are many other server NICs like bge
and bnx. I would not call them best; when it comes to network cards it
seems to be a race to the bottom. All chips have stuff in them that is
just not great. em(4), for example, needs a major workaround because the
buffer size is specified by a bitfield.

My view is more pessimistic: all network cards are shit, there are just
some that are less shitty. Also, I prefer to use em(4) over most other
gigabit cards.

-- 
:wq Claudio

> 
> Sent from a teeny tiny keyboard, so please excuse typos
> 
> > On 2 Aug 2019, at 09:52, Jonathan Gray  wrote:
> > 
> >> On Fri, Aug 02, 2019 at 09:19:09AM +0100, Andy Lemin wrote:
> >> Hi list,
> >> 
> >> I know this is a rather classic question, but I have searched a lot on 
> >> this again recently, and I just cannot find any conclusive up to date 
> >> information?
> >> 
> >> I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only 
> >> official comments I can find relate to 3COM for ISA, or community 
> >> consensus towards Chelsio for 10Gbe.
> >> 
> >> I know Intel works ok and I've used the i350's before, but my 
> >> understanding is that Intel still doesn't provide the documentation for 
> >> their NICs and so the emX driver is reverse engineered.
> > 
> > This is incorrect.  Intel provides datasheets for Ethernet parts.
> > em(4) is derived from Intel authored code for FreeBSD supplied under a
> > permissive license.
> > 
> >> 
> >> And if I remember correctly some offload features were also disabled in 
> >> the emX driver a while back as some functions were found to be insecure 
> >> on die and so it was deemed safer to bring the logic back on CPU.
> >> 
> >> So I'm looking for the best 1Gbe NIC that supports the most 
> >> offloading/best driver support/performance etc.
> >> 
> >> Thanks, Andy.
> >> 
> >> PS; could we update the official supported hardware lists? ;)
> >> All the best.
> >> 
> >> 
> >> Sent from a teeny tiny keyboard, so please excuse typos
> >> 
> 



Re: su - root => segmentation fault

2019-08-02 Thread Stuart Henderson
On 2019-08-02, dmitry.sensei  wrote:
> Lol!
> ORLOV-NB$ kdump -f ktrace.out
>  58118 ktrace   RET   ktrace 0
>  58118 ktrace   CALL  execve(0x7f7d9100,0x7f7d9710,0x7f7d9730)
>  58118 ktrace   NAMI  "*/usr/local/heimdal/bin/su*"
>  58118 ktrace   ARGS
> [0] = "su"
> [1] = "-"
> [2] = "root"
> ORLOV-NB$ whereis su
> /usr/bin/su

whereis isn't terribly useful, it doesn't use $PATH, instead uses a
fixed path of common directories.

The "type" builtin in most Bourne-style shells is usually more helpful.
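
For example (a sketch; the path printed for su will differ per system, and the whereis call is guarded since it is not present everywhere):

```shell
#!/bin/sh
# 'type' resolves a name the way the shell itself would:
# via $PATH, and it also reports builtins, functions and aliases.
type su

# whereis, by contrast, scans a fixed list of common directories,
# so it can report a binary that would never actually be executed.
command -v whereis >/dev/null 2>&1 && whereis su

status=ok
```

In the ktrace above, type would have shown that the su actually being run was the Heimdal one, not /usr/bin/su.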




Re: AMDGPU in current issue

2019-08-02 Thread Oriol Demaria


I've been using a custom kernel with amdgpu since yesterday, on a Ryzen 5
PRO 2500U, as I saw many commits. For me the display is now usable and
stable, with still some minor issues, so I can use the laptop with an
external monitor.

On the other hand, I tried to hibernate, which was working with the UEFI
vesa driver, and it seems that it doesn't work (tried only once); suspend
is still not working either.

I will keep using the amdgpu driver from now on unless I see major issues,
because being able to use two screens is important.

Regards.

Jonathan Gray  writes:

> On Fri, Aug 02, 2019 at 03:11:54AM -0500, Charlie Burnett wrote:
>> Hey-
>> I'd been messing around with the AMDGPU on current (which I'm aware is very
>> experimental) and had very few issues with it using a Vega 56 GPU. I
>> recently swapped to another Vega GPU (Radeon VII) and have issues with the
>> display not showing anything. Still boots fine, in that I can still enter
>> commands (i.e. reboot) so it has to be a display issue. I tried searching
>> for the diff where the firmware was added which I'm certain I saw (for Vega
>> 20) but can't seem to find it in the commit history. Anyone have a fix for
>> it, and if not, who should I talk to if I wanted to help get it working? I
>> saw most of the AMDGPU commits have been by @jonathangray if he would be
>> the best option.
>> Thanks!
>
> vega20 firmware was added when ports/sysutils/firmware/amdgpu was
> updated to 20190312.
>
> vega20 is marked as experimental in the version of drm we have, but we
> don't currently check the flag on probe like linux does.
>
> The following diff will prevent amdgpu from matching on devices
> in the amdgpu_pciidlist table with the AMD_EXP_HW_SUPPORT flag
> (currently these are all vega20 ids).
>
> Index: sys/dev/pci/drm/include/drm/drm_drv.h
> ===
> RCS file: /cvs/src/sys/dev/pci/drm/include/drm/drm_drv.h,v
> retrieving revision 1.2
> diff -u -p -r1.2 drm_drv.h
> --- sys/dev/pci/drm/include/drm/drm_drv.h 25 Jul 2019 05:48:16 -  
> 1.2
> +++ sys/dev/pci/drm/include/drm/drm_drv.h 2 Aug 2019 03:29:58 -
> @@ -291,5 +291,7 @@ static inline bool drm_drv_uses_atomic_m
>  int  drm_dev_register(struct drm_device *, unsigned long);
>  void drm_dev_unregister(struct drm_device *);
>  int  drm_getpciinfo(struct drm_device *, void *, struct drm_file *);
> +const struct drm_pcidev  *drm_find_description(int, int,
> +const struct drm_pcidev *);
>  
>  #endif
> Index: sys/dev/pci/drm/amd/amdgpu/amdgpu_kms.c
> ===
> RCS file: /cvs/src/sys/dev/pci/drm/amd/amdgpu/amdgpu_kms.c,v
> retrieving revision 1.3
> diff -u -p -r1.3 amdgpu_kms.c
> --- sys/dev/pci/drm/amd/amdgpu/amdgpu_kms.c   4 Jul 2019 03:39:07 -   
> 1.3
> +++ sys/dev/pci/drm/amd/amdgpu/amdgpu_kms.c   2 Aug 2019 03:35:35 -
> @@ -1337,10 +1337,23 @@ int amdgpu_debugfs_firmware_init(struct 
>  int
>  amdgpu_probe(struct device *parent, void *match, void *aux)
>  {
> + struct pci_attach_args *pa = aux;
> + const struct drm_pcidev *id_entry;
> + unsigned long flags = 0;
> +
>   if (amdgpu_fatal_error)
>   return 0;
> - if (drm_pciprobe(aux, amdgpu_pciidlist))
> - return 20;
> +
> + id_entry = drm_find_description(PCI_VENDOR(pa->pa_id),
> + PCI_PRODUCT(pa->pa_id), amdgpu_pciidlist);
> + if (id_entry != NULL) {
> + flags = id_entry->driver_data;
> + if (flags & AMD_EXP_HW_SUPPORT)
> + return 0;
> + else
> + return 20;  
> + }
> + 
>   return 0;
>  }
>  


-- 
Oriol Demaria
2FFED630C16E4FF8



Re: Best 1Gbe NIC

2019-08-02 Thread Andy Lemin
Ahhh, thank you!

I didn’t realise this had changed and now the drivers are written with full 
knowledge of the interface.

So that would make Intel Server NICs (i350 for example) some of the best 1Gbe 
cards nowadays then?

Thanks :)
Andy


Sent from a teeny tiny keyboard, so please excuse typos

> On 2 Aug 2019, at 09:52, Jonathan Gray  wrote:
> 
>> On Fri, Aug 02, 2019 at 09:19:09AM +0100, Andy Lemin wrote:
>> Hi list,
>> 
>> I know this is a rather classic question, but I have searched a lot on this 
>> again recently, and I just cannot find any conclusive up to date information?
>> 
>> I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only 
>> official comments I can find relate to 3COM for ISA, or community consensus 
>> towards Chelsio for 10Gbe.
>> 
>> I know Intel works ok and I've used the i350's before, but my 
>> understanding is that Intel still doesn't provide the documentation for 
>> their NICs and so the emX driver is reverse engineered.
> 
> This is incorrect.  Intel provides datasheets for Ethernet parts.
> em(4) is derived from Intel authored code for FreeBSD supplied under a
> permissive license.
> 
>> 
>> And if I remember correctly some offload features were also disabled in the 
>> emX driver a while back as some functions were found to be insecure on die 
>> and so it was deemed safer to bring the logic back on CPU.
>> 
>> So I'm looking for the best 1Gbe NIC that supports the most 
>> offloading/best driver support/performance etc.
>> 
>> Thanks, Andy.
>> 
>> PS; could we update the official supported hardware lists? ;)
>> All the best.
>> 
>> 
>> Sent from a teeny tiny keyboard, so please excuse typos
>> 



AMDGPU + Radeon RX570

2019-08-02 Thread Jens Griepentrog

Dear Listeners,

Three weeks ago I compiled a current kernel on top of

OpenBSD 6.5-current (GENERIC.MP) #128: Fri Jul 12 09:59:59 MDT 2019
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

using the kernel configuration file

$ more AMDGPU
#   $OpenBSD: GENERIC.MP,v 1.14 2018/07/13 05:25:24 tb Exp $

include "arch/amd64/conf/GENERIC"

option  MULTIPROCESSOR
#option MP_LOCKDEBUG
#option WITNESS

cpu*    at mainbus?

amdgpu* at pci?
drm0    at amdgpu? primary 1
drm*    at amdgpu?
wsdisplay0  at amdgpu? primary 1
wsdisplay*  at amdgpu? mux -1

since I was curious to see whether the amdgpu driver would
work on my RX570 graphics card. After the boot process
I end up with a blank screen. It would be nice to know
whether something is wrong with this kernel configuration.

OpenBSD 6.5-current (AMDGPU) #0: Mon Jul 15 21:00:07 CEST 2019
r...@server.system.info:/usr/src/sys/arch/amd64/compile/AMDGPU
real mem = 274826579968 (262095MB)
avail mem = 266484862976 (254139MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xec0c0 (136 entries)
bios0: vendor American Megatrends Inc. version "3.3" date 07/12/2018
bios0: Supermicro X9DR3-F
acpi0 at bios0: ACPI 5.0
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP APIC FPDT MCFG SRAT SLIT HPET PRAD SPMI SSDT 
EINJ ERST HEST BERT DMAR
acpi0: wakeup devices P0P9(S1) EUSB(S4) USBE(S4) PEX0(S4) PEX1(S1) 
PEX2(S1) PEX3(S1) PEX4(S1) PEX5(S1) PEX6(S1) PEX7(S1) NPE1(S1) NPE2(S1) 
GBE_(S4) I350(S4) NPE3(S1) [...]

acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.33 MHz, 06-2d-07
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
cpu0: apic clock running at 100MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.1.2, IBE
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.02 MHz, 06-2d-07
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.02 MHz, 06-2d-07
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu2: 256KB 64b/line 8-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 6 (application processor)
cpu3: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.02 MHz, 06-2d-07
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu3: 256KB 64b/line 8-way L2 cache
cpu3: smt 0, core 3, package 0
cpu4 at mainbus0: apid 8 (application processor)
cpu4: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.01 MHz, 06-2d-07
cpu4: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu4: 256KB 64b/line 8-way L2 cache
cpu4: smt 0, core 4, package 0
cpu5 at mainbus0: apid 10 (application processor)
cpu5: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 2200.02 MHz, 06-2d-07
cpu5: 

Re: Best 1Gbe NIC

2019-08-02 Thread Jonathan Gray
On Fri, Aug 02, 2019 at 09:19:09AM +0100, Andy Lemin wrote:
> Hi list,
> 
> I know this is a rather classic question, but I have searched a lot on this 
> again recently, and I just cannot find any conclusive up to date information?
> 
> I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only 
> official comments I can find relate to 3COM for ISA, or community consensus 
> towards Chelsio for 10Gbe.
> 
> I know Intel works ok and I've used the i350's before, but my 
> understanding is that Intel still doesn't provide the documentation for 
> their NICs and so the emX driver is reverse engineered.

This is incorrect.  Intel provides datasheets for Ethernet parts.
em(4) is derived from Intel authored code for FreeBSD supplied under a
permissive license.

> 
> And if I remember correctly some offload features were also disabled in the 
> emX driver a while back as some functions were found to be insecure on die 
> and so it was deemed safer to bring the logic back on CPU.
> 
> So I'm looking for the best 1Gbe NIC that supports the most offloading/best 
> driver support/performance etc.
> 
> Thanks, Andy.
> 
> PS; could we update the official supported hardware lists? ;)
> All the best.
> 
> 
> Sent from a teeny tiny keyboard, so please excuse typos
> 



Re: Dell PE R740, Intel X710 QuadPort & LACP not working

2019-08-02 Thread Joerg Streckfuss

On 01.08.19 at 14:55, Joerg Streckfuss wrote:

Hi Misc,

we bought two new Dell PowerEdges R740. Each System has 3 intel X770
based quadport sfp+ nics. Onboard are two further intel i350 based
sfp+ ports.


Correction - Of course I mean 3 intel X710 based quadport sfp+ nics
and two intel x520 based sfp+ ports.

Sorry





Best 1Gbe NIC

2019-08-02 Thread Andy Lemin
Hi list,

I know this is a rather classic question, but I have searched a lot on this 
again recently, and I just cannot find any conclusive, up-to-date information.

I am looking to buy the best 1Gbe NIC possible for OpenBSD and the only 
official comments I can find relate to 3COM for ISA, or community consensus 
towards Chelsio for 10Gbe.

I know Intel works ok and I’ve used the i350’s before, but my understanding is 
that Intel still doesn’t provide the documentation for their NICs and so the 
emX driver is reverse engineered.

And if I remember correctly some offload features were also disabled in the emX 
driver a while back as some functions were found to be insecure on die and so 
it was deemed safer to bring the logic back on CPU.

So I’m looking for the best 1Gbe NIC that supports the most offloading/best 
driver support/performance etc.

Thanks, Andy.

PS; could we update the official supported hardware lists? ;)
All the best.


Sent from a teeny tiny keyboard, so please excuse typos



Re: su - root => segmentation fault

2019-08-02 Thread dmitry.sensei
Lol!
ORLOV-NB$ kdump -f ktrace.out
 58118 ktrace   RET   ktrace 0
 58118 ktrace   CALL  execve(0x7f7d9100,0x7f7d9710,0x7f7d9730)
 58118 ktrace   NAMI  "*/usr/local/heimdal/bin/su*"
 58118 ktrace   ARGS
[0] = "su"
[1] = "-"
[2] = "root"
ORLOV-NB$ whereis su
/usr/bin/su
ORLOV-NB$

Fri, Aug 2, 2019 at 04:15, dmitry.sensei :

> amd64, from 30 Jul. What does "your kernel does not match the
> userspace" mean?
>
> Wed, Jul 31, 2019, 19:22 Gregory Edigarov :
>
>> On 31.07.19 17:00, Solene Rapenne wrote:
>> > On Wed, Jul 31, 2019 at 04:49:54PM +0500, dmitry.sensei wrote:
>> >> Hi!
>> >> why did it happen?
>> >>
>> >> OpenBSD 6.5 current
>> >> $su - root
>> >> root's password:
>> >> Segmentation fault
>> >> $ doas su - root
>> >> #
>> >>
>> >> --
>> >> Dmitry Orlov
>> > what current? What arch?
>> >
>> > works for me©
>> > OpenBSD 6.5-current (GENERIC.MP) #153: Sun Jul 28 20:33:09 MDT 2019
>> usually it means that your kernel does not match the userspace
>>
>>

-- 
Dmitry Orlov