On 10/9/20 at 0:29, Samuel Thibault wrote:
> Now fixed in libpciaccess 0.16-1+hurd.6 and upstream.
>
Then should I merge jlledom-pciaccess-map into master?
http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=jlledom-pciaccess-map
On 6/9/20 11:17 pm, Samuel Thibault wrote:
>> I have uploaded libpciaccess_0.16-1+hurd.5 with the latest upstream
>> version.
Thanks!
> One issue remains, however: Xorg's vesa driver produces
>
> [1669282.478] (II) VESA(0): initializing int10
> [1669282.478] (EE) VESA(0): Cannot read int vect
>
Samuel Thibault, on Sun, 06 Sep 2020 15:14:27 +0200, wrote:
> Thanks for working on this!
>
> I have uploaded libpciaccess_0.16-1+hurd.5 with the latest upstream
> version.
One issue remains, however: Xorg's vesa driver produces
[1669282.478] (II) VESA(0): initializing int10
[1669282.478] (EE) VESA(0): Cannot read int vect
Hello,
Thanks for working on this!
I have uploaded libpciaccess_0.16-1+hurd.5 with the latest upstream
version.
Samuel
Hi,
On 26/8/20 at 11:13, Damien Zammit wrote:
> If you think everything is okay with this, I will squash the last patch and
> submit patches upstream.
Yes, it's OK for me.
Hi,
On 23/8/20 8:47 pm, Joan Lledó wrote:
> http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=jlledom-pciaccess-map
Thanks for doing this. I tried it locally and fixed two bugs in my libpciaccess
patches:
diff --git a/src/x86_pci.c b/src/x86_pci.c
index 1614729..1e70f35 100644
---
Hi, I made my changes on the arbiter and it works fine; you can check my
code at
http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=jlledom-pciaccess-map
On the other hand, I found a couple of issues in your patch.
In map_dev_mem():
+memfd = open("/dev/mem", flags | O_CLOEXEC);
+if
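For context, the usual shape of such a helper is open then mmap; roughly like
this (a sketch with invented names and error handling, not the actual patch):

#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch only: open /dev/mem, map the requested physical range,
 * and hand back the mapping. Names are illustrative. */
static int
map_dev_mem_sketch (off_t base, size_t size, int writable, void **addr)
{
  int flags = writable ? O_RDWR : O_RDONLY;
  int prot = PROT_READ | (writable ? PROT_WRITE : 0);
  int memfd = open ("/dev/mem", flags | O_CLOEXEC);

  if (memfd == -1)
    return errno;

  *addr = mmap (NULL, size, prot, MAP_SHARED, memfd, base);
  close (memfd);                /* the mapping outlives the fd */

  return *addr == MAP_FAILED ? errno : 0;
}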
Hi Joan,
I found another probe() call in hurd_pci.c that should not be there.
(So I dropped a second incorrect patch).
Can you please confirm this final branch looks correct?
http://git.zammit.org/libpciaccess.git/log/?h=rumpdisk-upstream
Thanks,
Damien
On 22/8/20 8:38 pm, Joan Lledó wrote:
> However, I think the problem here is the x86 backend, not the common
> interface. If we take a look at all other backends we'll see that:
>
> 1.- Neither of them calls its probe() from its create(). So it's the
> client who must call pci_device_probe(), it's
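For reference, point 1 means the client drives probing itself, roughly like
this (a sketch using only public libpciaccess calls; slot 00:02.0 is just the
device from the hexdump reproducer in this thread):

#include <pciaccess.h>
#include <stdio.h>

int
main (void)
{
  struct pci_device *dev;
  int err = pci_system_init ();   /* backend create() runs here, probe() does not */

  if (err)
    return err;

  dev = pci_device_find_by_slot (0, 0, 2, 0);       /* domain 0, 00:02.0 */
  if (dev != NULL && pci_device_probe (dev) == 0)   /* client calls probe() */
    printf ("region 0 size: %llu\n",
            (unsigned long long) dev->regions[0].size);

  pci_system_cleanup ();
  return 0;
}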
Hi,
> I have removed my latest patch from my upstream merge request and
> replaced it with a patch that fixes the problem:
I took a look at your patch.
> mappings[devp->num_mappings].flags = map_flags;
> mappings[devp->num_mappings].memory = NULL;
>
> -if
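For context, those lines follow the usual libpciaccess pattern: append an
entry to the device's mapping table and leave memory NULL for the backend's
map hook to fill in. A self-contained sketch of that pattern (the types and
names here are invented; the real ones are private to libpciaccess):

#include <errno.h>
#include <stdlib.h>

/* Illustrative type only; the real one lives in pciaccess_private.h. */
struct mapping_sketch {
  unsigned long base, size;
  unsigned flags;
  void *memory;                 /* NULL until the backend maps it */
};

/* Grow the table and record the request, mirroring the quoted diff. */
static int
record_mapping (struct mapping_sketch **mappings, unsigned *num_mappings,
                unsigned long base, unsigned long size, unsigned map_flags)
{
  struct mapping_sketch *grown =
    realloc (*mappings, (*num_mappings + 1) * sizeof **mappings);

  if (grown == NULL)
    return ENOMEM;

  grown[*num_mappings].base = base;
  grown[*num_mappings].size = size;
  grown[*num_mappings].flags = map_flags;
  grown[*num_mappings].memory = NULL;
  *mappings = grown;
  (*num_mappings)++;

  return 0;
}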
Joan,
On 18/8/20 6:51 am, Joan Lledó wrote:
> On 17/8/20 at 1:51, Damien Zammit wrote:
>> Perhaps a better way to fix the mapping problem I encountered
>> is by removing the check for previous mappings when trying to map regions,
I have removed my latest patch from my upstream merge request and replaced
it with a patch that fixes the problem.
On 17/8/20 at 1:51, Damien Zammit wrote:
> It's probably due to this patch:
It's surely due to that.
> Perhaps a better way to fix the mapping problem I encountered
> is by removing the check for previous mappings when trying to map regions,
I could check the pointer before reading from it at func_files.c:201.
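Something like the following guard would avoid the segfault (a sketch; the
function and argument names are invented, not the real func_files.c code):

#include <errno.h>
#include <string.h>

/* Refuse to read through a region pointer that libpciaccess never
 * filled in, instead of crashing. */
static int
read_region_memory (void *region_memory, char *buf,
                    size_t len, size_t offset)
{
  if (region_memory == NULL)
    return EOPNOTSUPP;          /* region was never mapped */

  memcpy (buf, (char *) region_memory + offset, len);
  return 0;
}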
Hi there,
On 17/8/20 1:04 am, Joan Lledó wrote:
> I found the same issue, investigating a bit more I found that in
> func_files.c:201[1], the value of region->memory is 0x0, so reading from
> there raises a segfault. That pointer should be filled in libpciaccess,
> at x86_pci.c:601[2] during the
Hi there,
On 15/8/20 9:49 pm, Joan Lledó wrote:
> I downloaded and tried the last qemu image "debian-hurd-20200731.img".
> When I try to read the memory mapped content of region files in the
> arbiter, it crashes and shows the message "Real-time signal 0".
I am also getting this on my latest hurd
Hello,
I downloaded and tried the latest qemu image "debian-hurd-20200731.img".
When I try to read the memory mapped content of region files in the
arbiter, it crashes and shows the message "Real-time signal 0".
This happens when executing "hexdump -Cn 256
/servers/bus/pci//00/02/0/region0"