NetBSD-10.0RC

2023-11-19 Thread Todd Gruhn
At the bottom of nv(4) it says "GeForce GTX".

I have a GTX-1660 -- will NetBSD now work with this?


Re: iscsid - lfs and ipv6 issues

2023-11-19 Thread Ede Wolf

On 19.11.23 at 13:42, Ede Wolf wrote:


Secondly: The iscsi target is ctld on FreeBSD. Currently even without
any authentication.
Now I doubt that an offline report is the case, because relabelling the
lun to 4.2BSD and mounting it with ffs does work, without anything else
being done on either side.


Now this probably has to be the most embarrassing post ever: not being 
able to mount LFS via iSCSI is not an issue of NetBSD, nor of its 
alpha port, nor of iscsi: I had simply taken lfs support out of the 
kernel, and of course alpha does not support loading kernel modules by 
default.




Just for the record: after compiling lfs support into the kernel again, 
all works. Sorry again for the noise.
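
For anyone who hits the same wall, a rough sketch of the two ways out
(stock source paths assumed): on ports that support modules, lfs can be
loaded by hand; on alpha it has to be compiled in.

  # load the module where modules work, and confirm it is present:
  modload lfs
  modstat | grep lfs
  # otherwise, make sure the kernel config enables LFS before
  # rebuilding, e.g.:
  grep 'file-system.*LFS' /usr/src/sys/arch/alpha/conf/GENERIC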


Re: iscsid - lfs and ipv6 issues

2023-11-19 Thread Ede Wolf

On 19.11.23 at 13:23, Michael van Elst wrote:

On Sun, Nov 19, 2023 at 12:34:13PM +0100, Ede Wolf wrote:

Hello,

first of all, a very big thanks to all of you. Since it seems I am the only
one using ipv6 with iscsi, I do not need a fix. I can live (as I am
doing now) with ipv4.
At least now I know I do not have to waste any more time trying to figure
out the syntax.


I think I've fixed IPv6 handling now (needs patches to iscsictl and
iscsid), at least it works here.


Wow, that was fast. Thanks very much!



Secondly: The iscsi target is ctld on FreeBSD. Currently even without any
authentication.
Now I doubt that an offline report is the case, because relabelling the
lun to 4.2BSD and mounting it with ffs does work, without anything else
being done on either side.


Now this probably has to be the most embarrassing post ever: not being 
able to mount LFS via iSCSI is not an issue of NetBSD, nor of its 
alpha port, nor of iscsi: I had simply taken lfs support out of the 
kernel, and of course alpha does not support loading kernel modules by 
default.


I completely forgot about this. Sorry for the time you wasted on this one.

I am now officially going back to my room to play with my Bob the 
Builder toys.


Ede


Re: Trouble with re driver

2023-11-19 Thread Martin Husemann
On Sun, Nov 19, 2023 at 12:48:47PM +0100, BERTRAND Joël wrote:
>   I made a mistake this morning. I replaced the /netbsd kernel and
> copied /netbsd into /netbsd.old (thus, I have deleted my -10-beta
> kernel), but I'm pretty sure my -10-beta was built less than three
> months ago.

I reviewed the CHANGES-10 entries from that period again and still found
nothing plausible. All pcidevs changes were only cosmetic (name changes) or
additions, and no files relevant to PCI resource mapping have been changed.

Martin


Re: iscsid - lfs and ipv6 issues

2023-11-19 Thread Michael van Elst
On Sun, Nov 19, 2023 at 12:34:13PM +0100, Ede Wolf wrote:
> Hello,
> 
> first of all, a very big thanks to all of you. Since it seems I am the only
> one using ipv6 with iscsi, I do not need a fix. I can live (as I am
> doing now) with ipv4.
> At least now I know I do not have to waste any more time trying to figure
> out the syntax.

I think I've fixed IPv6 handling now (needs patches to iscsictl and
iscsid), at least it works here.
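
For reference, the target setup from the initiator side would look
roughly like the sketch below; the IQN and address are made up, and the
exact syntax for a literal IPv6 address is precisely what the patches
address:

  iscsictl add_target -n iqn.2023-11.de.example:target0 -a fd00::1
  iscsictl list_targets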


> Secondly: The iscsi target is ctld on FreeBSD. Currently even without any
> authentication.
> Now I doubt that an offline report is the case, because relabelling the
> lun to 4.2BSD and mounting it with ffs does work, without anything else
> being done on either side.

I can currently only test against istgt (from pkgsrc) on NetBSD and
a QNAP NAS.

The error message (ENODEV) is only generated by LFS on failures to mount
a root filesystem. So no idea yet what error is triggered.

On the other hand, attempting to use LFS showed errors in block-size
handling, and running the filesystem tester (fsx) at some point panicked
the system. Whatever was "fixed" in LFS was not good enough :-/
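
(For reference, fsx is pointed at a scratch file on the filesystem under
test; a minimal run, assuming the classic fsx options, would be
something like "fsx -N 100000 /mnt/scratchfile", where -N bounds the
number of operations.)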



Greetings,
-- 
Michael van Elst
Internet: mlel...@serpens.de
"A potential Snark may lurk in every tree."


Re: Trouble with re driver

2023-11-19 Thread BERTRAND Joël
Martin Husemann wrote:
> On Sat, Nov 18, 2023 at 08:14:32PM +0100, BERTRAND Joël wrote:
>>  If I restart this server with a -10.0-Beta kernel, the faulty ethernet
>> adapter is autoconfigured without trouble.
> 
> Can you give more details of that kernel? Ideally source update time,
> or kernel build time? That way we can narrow down the range of pullups
> in between the broken and the non-broken kernel.

I made a mistake this morning. I replaced the /netbsd kernel and
copied /netbsd into /netbsd.old (thus, I have deleted my -10-beta
kernel), but I'm pretty sure my -10-beta was built less than three
months ago.

> There are no obvious changes at first glance, so this is a bit of a riddle.
> It would be best if you could bisect the breakage to an individual pullup
> (as we have no other reports of broken re(4) so far and as you noticed it
> seems to be pretty hardware dependent).

Hardware configuration: Asus motherboard with 16 GB of RAM and an
i7 4470 (Z97 chipset).

legendre# lspci
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM
Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core
Processor PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core
Processor PCI Express x8 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th
Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core
Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB
xHCI Controller
00:16.0 Communication controller: Intel Corporation 9 Series Chipset
Family ME Interface #1
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2)
I218-V
00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB
EHCI Controller #2
00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio
Controller
00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI
Express Root Port 1 (rev d0)
00:1c.3 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
00:1c.4 PCI bridge: Intel Corporation 9 Series Chipset Family PCI
Express Root Port 5 (rev d0)
00:1c.5 PCI bridge: Intel Corporation 9 Series Chipset Family PCI
Express Root Port 6 (rev d0)
00:1c.7 PCI bridge: Intel Corporation 9 Series Chipset Family PCI
Express Root Port 8 (rev d0)
00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB
EHCI Controller #1
00:1f.0 ISA bridge: Intel Corporation Z97 Chipset LPC Controller
00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA
Controller [AHCI Mode]
00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
02:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network
Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network
Connection (rev 01)
04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI
Bridge (rev 04)
05:00.0 Ethernet controller: D-Link System Inc DGE-528T Gigabit Ethernet
Adapter (rev 10)
06:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network
Connection
07:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network
Connection
08:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA
6Gb/s Controller (rev 12)
08:00.1 IDE interface: Marvell Technology Group Ltd. 88SE912x IDE
Controller (rev 12)

I can revert some patches against the -10 kernel, but I cannot do extensive
tests. When this server is down, all workstations on the LAN side must be
down too...

Best regards,

JB





Re: iscsid - lfs and ipv6 issues

2023-11-19 Thread Ede Wolf

On 17.11.23 at 23:22, Michael van Elst wrote:

lis...@nebelschwaden.de (Ede Wolf) writes:


I am having two issues with iscsid/iscsictl. First, it seems I cannot
mount an lfs-formatted iscsi lun, no matter whether this drive is
gpt/wedge or plain disklabelled:



# mount -t lfs /dev/dk0 /import/
mount_lfs: /dev/dk0 on /import: Operation not supported by device


This works here (as far as lfs works), but the error message is
rare. One possibility is that the SCSI driver reports an offline
unit. Can you tell what the iSCSI server is?


Hello,

first of all, a very big thanks to all of you. Since it seems I am the 
only one using ipv6 with iscsi, I do not need a fix. I can live (as I 
am doing now) with ipv4.
At least now I know I do not have to waste any more time trying to 
figure out the syntax.


Secondly: The iscsi target is ctld on FreeBSD. Currently even without 
any authentication.
Now I doubt that an offline report is the case, because relabelling 
the lun to 4.2BSD and mounting it with ffs does work, without anything 
else being done on either side.


Now, the initiator is running on alpha, so if it works for you, this may 
be port-related and we can leave it here. I will retest on x64 as soon 
as the ipv4 routing is in place.


It was just for benchmarking and playing around with lfs anyway, as I 
have never used it and the NetBSD 9 release notes mentioned some fixes. 
Again, I am running 10.0 RC1.


It will eventually end up being ffs anyway, for reliability reasons.

Thanks again, that helped quite a lot.



Re: Trouble with re driver

2023-11-19 Thread Martin Husemann
On Sat, Nov 18, 2023 at 10:59:14PM +0100, Tobias Nygren wrote:
> I suspect the regression originates with an acpica update and some
> firmware bug might be a prerequisite to trigger it.

There have been no acpica updates on the -10 branch.

Martin


Re: Trouble with re driver

2023-11-19 Thread Martin Husemann
On Sat, Nov 18, 2023 at 08:14:32PM +0100, BERTRAND Joël wrote:
>   If I restart this server with a -10.0-Beta kernel, the faulty ethernet
> adapter is autoconfigured without trouble.

Can you give more details of that kernel? Ideally source update time,
or kernel build time? That way we can narrow down the range of pullups
in between the broken and the non-broken kernel.
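
(The build stamp of the currently running kernel can be read with e.g.
"sysctl kern.version", which prints the version string including the
build date and source directory; for the overwritten kernel file that
information is unfortunately gone.)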

There are no obvious changes at first glance, so this is a bit of a riddle.
It would be best if you could bisect the breakage to an individual pullup
(as we have no other reports of broken re(4) so far and as you noticed it
seems to be pretty hardware dependent).

Martin


Re: Trouble with re driver

2023-11-19 Thread BERTRAND Joël
Tobias Nygren wrote:
> On Sat, 18 Nov 2023 20:14:32 +0100
> BERTRAND Joël  wrote:
> 
>>  Maybe something is broken in recent changes in
>> src/sys/dev/pci/pci_resource.c, pcidevs_data.h or pcidevs.h.
> 
> Since the attach function runs, it does not seem to be a pcidevs problem.
> pci_resource.c is only used on ARM platforms. You can find the equivalent
> x86 code in pci_map.c. It has not been changed recently from what I can tell.
> 
>> pci_mem_find: expected mem type 0004, found 
> 
> This error means we expected a 64-bit mem range assigned to the card
> by the ACPI firmware but found a 32-bit range. It is a strange error
> to get in this context because, according to the config space dump,
> the card clearly only has a 32-bit BAR, so it has to use 32-bit bus
> space. We can reach that situation if the re driver incorrectly tried
> to map the BAR with PCI_MAPREG_MEM_TYPE_64BIT.
> 
> I suspect the regression originates with an acpica update and some
> firmware bug might be a prerequisite to trigger it.
> 
> You'll need to figure out why the expected mem type check fires.

Unfortunately, I cannot quickly test on this server, and I don't have
another server in the same configuration. I will test as soon as possible,
but I have to power off all diskless workstations on the LAN side (they
run complex simulations).

> A good place to start digging is to dissect this code and find
> out what the value returned by pci_mapreg_type is:
> https://github.com/NetBSD/src/blob/d7465f61f231e4328d26a5628c5ccb266f168f3a/sys/dev/pci/if_re_pci.c#L210
> 
> Since you mentioned you have lots of other network adapters it might
> also be that the system has run out of 32-bit bus space and
> is attempting to use 64-bit as a last resort.

Strange. -10-beta ran fine in the same configuration: wm[0-4], re0,
bridge0, lagg0, npflog0.
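
When testing becomes possible, one low-impact first check is to dump
the card's config space from userland and look at the BAR at 0x10; a
sketch, with the pci bus number guessed from the lspci output above
(pcictl list should confirm it):

  pcictl /dev/pci5 list
  pcictl /dev/pci5 dump -d 0 -f 0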

Best regards,

JB





Re: CGD - unable to open after closing, using '-V gpt', argon2id and adiantum

2023-11-19 Thread Michael van Elst
luisvmen...@yandex.com (Luis Mendes) writes:

>== Now, trying to open the container again:
>cgdconfig -V gpt cgd0 NAME=nvme-crypt /etc/cgd/nvme-crypt
>After entering the four zeroes password, there's the message:
>"cgdconfig: verification failed, please reenter passphrase".

>What is wrong with this setup?

With this invocation, cgdconfig looks inside the container for a GPT
label for validation. Did you create one?
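
If not, a minimal sketch of creating one inside the configured
container (options per gpt(8); alignment, sizes and labels omitted):

  gpt create cgd0
  gpt add -t ffs cgd0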

If you don't need to partition the container, you could format an ffs
filesystem on the whole disk (cgd0) and use the 'ffs' verification
method, which checks for an ffs superblock.
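
A sketch of that flow, reusing the params file from above (untested; the
raw whole-disk partition letter varies by port, 'd' on i386/amd64 and
'c' on most others):

  # first-time setup: verify the passphrase by typing it twice
  cgdconfig -V re-enter cgd0 NAME=nvme-crypt /etc/cgd/nvme-crypt
  newfs /dev/rcgd0d
  cgdconfig -u cgd0
  # thereafter, the ffs superblock catches a mistyped passphrase
  cgdconfig -V ffs cgd0 NAME=nvme-crypt /etc/cgd/nvme-crypt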

For other filesystems, you need to partition (disklabel, MBR, or GPT)
if you want the validation step, and validate using the partition
information. But validation is not strictly necessary: -V none will
accept a wrong passphrase, but then e.g. a mount will likely see
garbage and fail.