Re: pci-arbiter + rumpdisk boots!

2021-03-01 Thread Almudena Garcia
> This is really not supposed to happen: a guest is not supposed to be
> able to crash qemu :)
> So it could be a bug in qemu's ahci support actually.

It could be interesting to try it on real hardware ;)

On Mon, 1 Mar 2021 at 14:29, Samuel Thibault () wrote:

> Damien Zammit, on Tue. 02 Mar 2021 00:14:29 +1100, wrote:
> > I got the arbiter to play nice with rumpdisk!
>
> Yay!!
>
> > But as soon as you log in it crashes the disk and qemu, I think probably
> > because there is a mismatched libpciaccess.so in userspace versus the
> > statically linked ones in the pci/disk servers?
>
> Well, I'd rather first make sure there are no concurrent pci-arbiter
> processes.
>
> Then:
>
> > ahcisata0 channel 0: setting WDCTL_RST failed for drive 0
> > ./noide: line 4: 535871 Segmentation fault  (core dumped) /extra/qemu/bin/qemu-system-i386
>
> This is really not supposed to happen: a guest is not supposed to be
> able to crash qemu :)
> So it could be a bug in qemu's ahci support actually.
>
> Samuel
>
>


Re: pci-arbiter + rumpdisk boots!

2021-03-01 Thread Samuel Thibault
Damien Zammit, on Tue. 02 Mar 2021 00:14:29 +1100, wrote:
> I got the arbiter to play nice with rumpdisk!

Yay!!

> But as soon as you log in it crashes the disk and qemu, I think probably
> because there is a mismatched libpciaccess.so in userspace versus the
> statically linked ones in the pci/disk servers?

Well, I'd rather first make sure there are no concurrent pci-arbiter
processes.

Then:

> ahcisata0 channel 0: setting WDCTL_RST failed for drive 0
> ./noide: line 4: 535871 Segmentation fault  (core dumped) /extra/qemu/bin/qemu-system-i386

This is really not supposed to happen: a guest is not supposed to be
able to crash qemu :)
So it could be a bug in qemu's ahci support actually.

Samuel



Re: pci-arbiter + rumpdisk

2020-11-21 Thread Samuel Thibault
Hello,

Damien Zammit, on Tue. 17 Nov 2020 20:56:07 +1100, wrote:
> Somehow I was able to boot / via rumpdisk and then the arbiter
> still worked afterwards, so networking via netdde started working.
> This is the first time I've had a rumpdisk / with network access!
> 
> Alas, I cannot seem to make it work via the arbiter though.

Did you make libpciaccess's hurd backend try to device_open("pci") in
order to get access to pci-arbiter?

Since at that point the FS does not exist yet it can't open
/servers/bus/pci, that is not surprising.
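
For illustration, a rough sketch of what such a fallback in the hurd
backend could look like (this is not the actual libpciaccess code; the
helper name pci_access_port() and the error handling are just assumptions):

/* Sketch only: fall back from /servers/bus/pci to the "pci" device
   exposed through the master device port during bootstrap.  */
#include <hurd.h>
#include <mach.h>
#include <fcntl.h>
#include <device/device.h>

static mach_port_t
pci_access_port (void)
{
  /* Normal case: the arbiter is already sitting on /servers/bus/pci.  */
  mach_port_t pci = file_name_lookup ("/servers/bus/pci", O_RDONLY, 0);
  if (pci != MACH_PORT_NULL)
    return pci;

  /* Bootstrap case: the filesystem does not exist yet, so try the
     "pci" device on the master device port instead.  */
  mach_port_t master;
  if (get_privileged_ports (NULL, &master))
    return MACH_PORT_NULL;

  device_t dev;
  if (device_open (master, D_READ | D_WRITE, "pci", &dev))
    dev = MACH_PORT_NULL;

  mach_port_deallocate (mach_task_self (), master);
  return dev;
}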

Samuel



Re: pci-arbiter + rumpdisk

2020-11-17 Thread Damien Zammit
Somehow I was able to boot / via rumpdisk and then the arbiter
still worked afterwards, so networking via netdde started working.
This is the first time I've had a rumpdisk / with network access!

Alas, I cannot seem to make it work via the arbiter though.
In my latest attempt it looks like the arbiter started but could not give
rumpdisk access to the I/O ports:

start pci-arbiter: PCI start
PCI machdev start
Hurd bootstrap pci Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016
The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.

NetBSD 7.99.34 (RUMP-ROAST)
total memory = unlimited (host limit)
timecounter: Timecounters tick every 10.000 msec
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
cpu0 at thinair0: rump virtual cpu
root file system type: rumpfs
kern.module.path=/stand/i386/7.99.34/modules
mainbus0 (root)
pci: I/O space init error 22, I/O space not available
pci0 at mainbus0 bus 0
pci0: memory space enabled, rd/line, rd/mult, wr/inv ok

Damien



Re: pci-arbiter + rumpdisk

2020-11-16 Thread Damien Zammit
Hi,

On 16/11/20 9:02 pm, Samuel Thibault wrote:
> ? Like rumpdisk does?

start pci-arbiter: Hurd bootstrap pci pci-arbiter: Must be started as a translator
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016
The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.

NetBSD 7.99.34 (RUMP-ROAST)
total memory = unlimited (host limit)
timecounter: Timecounters tick every 10.000 msec
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
cpu0 at thinair0: rump virtual cpu
root file system type: rumpfs
kern.module.path=/stand/i386/7.99.34/modules
mainbus0 (root)
pci: I/O space init error 22, I/O space not available
pci0 at mainbus0 bus 0
pci0: memory space enabled, rd/line, rd/mult, wr/inv ok

I'm hitting netfs_startup(bootstrap, O_READ) where bootstrap is being passed
in as MACH_PORT_NULL, therefore netfs cannot start up.
What should bootstrap be set to in pci-arbiter/main.c?
I have modified it with a patch to introduce machdev, but machdev overwrites
this port to MACH_PORT_NULL at the end of its initialisation.
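
To make the failure mode concrete, roughly this is the sequence (a
stripped-down sketch, not the actual pci-arbiter/main.c):

/* Sketch of the problem: netfs_startup() needs the bootstrap port the
   translator was started with, so once that port has been cleared to
   MACH_PORT_NULL there is nothing to hand to the parent filesystem.  */
#include <hurd.h>
#include <hurd/netfs.h>
#include <mach.h>
#include <fcntl.h>
#include <error.h>

int
main (void)
{
  mach_port_t bootstrap;

  task_get_bootstrap_port (mach_task_self (), &bootstrap);

  /* ... machdev bootstrap code runs here and, as described above,
     leaves us with bootstrap == MACH_PORT_NULL ... */

  if (bootstrap == MACH_PORT_NULL)
    /* This is the error the log above shows.  */
    error (1, 0, "must be started as a translator");

  /* Registers the translator with its parent; impossible without a
     valid bootstrap port.  */
  netfs_startup (bootstrap, O_READ);

  /* ... netfs_server_loop () etc. ... */
  return 0;
}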

Damien



Re: pci-arbiter + rumpdisk

2020-11-16 Thread Samuel Thibault
Damien Zammit, on Mon. 16 Nov 2020 20:54:27 +1100, wrote:
> How do I expose the hurdish pci subsystem, which has no underlying node for
> a netfs to attach to, in pci-arbiter during bootstrap?

? Like rumpdisk does?

When rumpdisk receives the fsys_init call, it installs itself as
translator.

Just like diskfs_S_fsys_init does it: trivfs_S_fsys_init should call
fsys_init on its bootstrap port. That way rumpdisk's trivfs_S_fsys_init
will call pci-arbiter's trivfs_S_fsys_init which will be able to install
itself.
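
Roughly like this (just a sketch of the idea, not the exact libmachdev
code; the saved bootstrap port and its handling are assumptions):

/* Sketch: each bootstrap translator's trivfs_S_fsys_init forwards
   fsys_init to the port it was bootstrapped from, so the next server
   up the chain gets its turn to install itself.  */
#include <hurd/trivfs.h>
#include <hurd/fsys.h>
#include <mach.h>

/* Assumed to have been saved with task_get_bootstrap_port() at startup.  */
static mach_port_t our_bootstrap;

kern_return_t
trivfs_S_fsys_init (struct trivfs_control *fsys,
                    mach_port_t reply, mach_msg_type_name_t replytype,
                    mach_port_t procserver, mach_port_t authhandle)
{
  /* Let the previous server in the bootstrap chain initialise first
     (e.g. rumpdisk forwarding to pci-arbiter).  */
  if (our_bootstrap != MACH_PORT_NULL)
    fsys_init (our_bootstrap, MACH_PORT_NULL, MACH_MSG_TYPE_MAKE_SEND_ONCE,
               procserver, authhandle);

  /* ... then install this translator on its node, the way
     diskfs_S_fsys_init does ... */
  return 0;
}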

Samuel



Re: pci-arbiter + rumpdisk

2020-11-16 Thread Damien Zammit
Hi,

How do I expose the hurdish pci subsystem, which has no underlying node for a
netfs to attach to, in pci-arbiter during bootstrap?

I have almost completed the loop with rumpdisk and the arbiter, but I cannot
start pci-arbiter as it has no underlying node to attach to, and the specific
implementation seems to rely on having a kind of filesystem to traverse and
access the nodes...

Damien



Re: pci-arbiter + rumpdisk

2020-11-14 Thread Samuel Thibault
Damien Zammit, on Sat. 14 Nov 2020 17:59:30 +1100, wrote:
> > youpi: pci-arbiter could be exposed as a device name in the master device
> > port and the userland pci-arbiter running on /servers/bus/pci can try to
> > open that
> > youpi: just like netdde tries to open eth0 to check whether there are
> > already device drivers in gnumach, in which case it shouldn't handle
> > network cards
> > youpi: in the end, when we know for sure that pci-arbiter is run as a
> > bootstrap translator, we can make /servers/bus/pci a mere device node
> 
> How do I make a bootstrap translator such as pci-arbiter expose a device
> name in the master device port?

By registering it with machdev_register, like you already did for
rumpdisk. The idea is that the "master device port" that applications
see is just a chain of master ports. Applications get it from the /
ext2fs, but ext2fs got it from rumpdisk, whose ds_device_open catches
device_open calls on wd* and otherwise calls device_open on its own
master port, gotten from the previous translator in the bootstrap
chain. That will now be pci-arbiter, which itself will catch
device_open calls on "pci" and otherwise calls device_open on its own
master port, gotten from the kernel.
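
As a sketch of that dispatch (not the real libmachdev code; the saved
master port and the helper returning the arbiter's own device port are
assumptions):

/* pci-arbiter answers device_open("pci") itself and forwards every
   other name to the master port it got from the previous server in
   the bootstrap chain (ultimately the kernel).  */
#include <mach.h>
#include <device/device.h>
#include <string.h>

/* Assumed: saved when the translator picked up its master port.  */
static mach_port_t chain_master;

/* Hypothetical helper returning a port to the arbiter's pci device.  */
extern mach_port_t pci_arbiter_device_port (void);

kern_return_t
ds_device_open (mach_port_t open_port, mach_port_t reply_port,
                mach_msg_type_name_t reply_type, dev_mode_t mode,
                dev_name_t name, mach_port_t *devp,
                mach_msg_type_name_t *devp_type)
{
  if (!strcmp (name, "pci"))
    {
      /* This is ours: hand out the arbiter's pci device port.  */
      *devp = pci_arbiter_device_port ();
      *devp_type = MACH_MSG_TYPE_MAKE_SEND;
      return KERN_SUCCESS;
    }

  /* Not ours: forward down the chain, just like rumpdisk does for
     anything that is not wd*.  */
  *devp_type = MACH_MSG_TYPE_MOVE_SEND;
  return device_open (chain_master, mode, name, devp);
}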

Samuel