The commit that introduced the new symbol should also have bumped the
kernel version... That's how we keep modules and kernel in sync...
On Tue, 28 May 2019, m...@netbsd.org wrote:
Found the commit - looks like newer modules than kernel.
https://v4.freshbsd.org/commit/netbsd/src/IH8Jag0YCI3N6boB
Petr,
your kernel is older than your ZFS module.
Please update to a current kernel and try again.
--
J. Hannken-Illjes - hann...@eis.cs.tu-bs.de - TU Braunschweig
> On 28. May 2019, at 20:27, Petr Topiarz wrote:
>
> Hi Tech-kern,
>
> I run two machines with NetBSD amd64 with ZFS, one is
On Tue, May 28, 2019 at 08:27:20PM +0200, Petr Topiarz wrote:
> May 28 18:55:46 poweredge /netbsd: [ 236.3881944] kobj_checksyms, 988:
> [zfs]: linker error: symbol `disk_rename' not found
Usually this happens if kernel and modules are mismatched.
I'm not sure what happened to cause it, but
On 28.05.2019 20:08, Martin Husemann wrote:
> On Tue, May 28, 2019 at 07:50:34PM +0200, Michał Górny wrote:
>> Well, if we are only to consider new registers, then we're talking about
>> 16 'pure' ymm registers + 32 zmm registers + 8 kN registers + 1 state
>> register, multiply by two... 114 PT_*
Hi Tech-kern,
I run two machines with NetBSD amd64 with ZFS: one with an 8.99.34
kernel from February,
the other with the latest as of today, 201905260520Z.
It all runs fine on the first one, but as I upgraded the other, ZFS
does not load and tells me:
modload: zfs: Exec format error
and to
> On May 28, 2019, at 11:16 AM, Michał Górny wrote:
>
>> We already have very strange ones (XMMREGS and VECREGS). Maybe we should just
>> have one ALLREGS thing (identical to the core note) and then discuss how
>> to properly make that sanely versioned and self-describing?
>>
>
> That is
On Tue, 2019-05-28 at 20:08 +0200, Martin Husemann wrote:
> On Tue, May 28, 2019 at 07:50:34PM +0200, Michał Górny wrote:
> > Well, if we are only to consider new registers, then we're talking about
> > 16 'pure' ymm registers + 32 zmm registers + 8 kN registers + 1 state
> > register, multiply by
On Tue, May 28, 2019 at 07:50:34PM +0200, Michał Górny wrote:
> Well, if we are only to consider new registers, then we're talking about
> 16 'pure' ymm registers + 32 zmm registers + 8 kN registers + 1 state
> register, multiply by two... 114 PT_* requests?
Integers are plenty, but the core file
On Tue, May 28, 2019 at 10:54:45AM -0700, Jason Thorpe wrote:
> The registers are dumped in an ELF note in the same format that
> ptrace gets. We don't currently handle anything other than integer
> registers and basic FP registers in core files at the moment. Look for
> "coredump_note" in
> On May 28, 2019, at 10:48 AM, Martin Husemann wrote:
>> It would make things a bit awkward for core files.
>
> Please excuse my ignorance, but how is ptrace(2) related to core files?
The registers are dumped in an ELF note in the same format that ptrace gets.
We don't currently handle
On Tue, 2019-05-28 at 19:37 +0200, Martin Husemann wrote:
> Stupid question: since this is all very rare and non-performance critical,
> why isn't it done as a single register per call? Adding more registers
> when they arrive in newer cpu variants, and not worrying about how they
> are saved
On Tue, May 28, 2019 at 10:46:44AM -0700, Jason Thorpe wrote:
>
> > On May 28, 2019, at 10:37 AM, Martin Husemann wrote:
> >
> > Stupid question: since this is all very rare and non-performance critical,
> > why isn't it done as a single register per call? Adding more registers
> > when they
> On May 28, 2019, at 10:37 AM, Martin Husemann wrote:
>
> Stupid question: since this is all very rare and non-performance critical,
> why isn't it done as a single register per call? Adding more registers
> when they arrive in newer cpu variants, and not worrying about how they
> are saved
Stupid question: since this is all very rare and non-performance critical,
why isn't it done as a single register per call? Adding more registers
when they arrive in newer cpu variants, and not worrying about how they
are saved (XSAVE or similar) nor what format is used in the kernel?
So a
On Tue, 2019-05-28 at 19:26 +0200, Kamil Rytarowski wrote:
> On 28.05.2019 18:34, Michał Górny wrote:
> > There is no difference in internal layout or logic between b. and c.
> > In either case, we need to perform XSAVE, process it and copy the data
> > into internal structure. The only
On 28.05.2019 18:34, Michał Górny wrote:
> There is no difference in internal layout or logic between b. and c.
> In either case, we need to perform XSAVE, process it and copy the data
> into internal structure. The only difference is that in b. we handle it
> all in one request, and in c. we do
I'm hoping that whatever solution is arrived at, it does not introduce
any new #ifdef ... #endif variations of structures that might get passed
between kernel and module code. Such variations create dependencies in
the modules which are at best "difficult" to deal with at run-time.
(We
On Tue, 2019-05-28 at 18:08 +0200, Kamil Rytarowski wrote:
> On 28.05.2019 15:20, Michał Górny wrote:
> > Hi,
> >
> > After implementing most of PT_GETXSTATE/PT_SETXSTATE and getting some
> > comments requiring major changes anyway, I'm starting to wonder whether
> > the approach I've followed is
On 27.05.2019 21:03, Michał Górny wrote:
> Currently, the compat32 passes PT_* request values to kernel functions
> without translation. This works fine for low PT_* requests that happen
> to have the same values both on i386 and amd64. However, for requests
> higher than PT_SETFPREGS, the value
On 28.05.2019 15:20, Michał Górny wrote:
> Hi,
>
> After implementing most of PT_GETXSTATE/PT_SETXSTATE and getting some
> comments requiring major changes anyway, I'm starting to wonder whether
> the approach I've followed is actually the best one. This is especially
> important now that I'm
Hi,
After implementing most of PT_GETXSTATE/PT_SETXSTATE and getting some
comments requiring major changes anyway, I'm starting to wonder whether
the approach I've followed is actually the best one. This is especially
important now that I'm pretty sure that we can't rely on fixed offsets
in
Hi all,
> attach always 'succeeds' in the sense that after attach has been called,
> detach will always be called. The detach routine should tear down
> everything that needs tearing down and not do things that will fail.
> Perhaps the init could simply be done before the attach routine gets