On Mon, Nov 11, 2019 at 01:22:09PM +0100, Hans Petter Selasky wrote:
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index a6e0a16ae..0697d70f4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++
On Wed, Nov 13, 2019 at 04:22:19PM +0100, Hans Petter Selasky wrote:
On 2019-11-13 15:52, Steve Kargl wrote:
at /usr/src/sys/amd64/amd64/trap.c:743
#7 0x808b0468 in trap (frame=0xfe00b460e0c0)
at /usr/src/sys/amd64/amd64/trap.c:407
#8
#9 0x in ?? ()
#10 0x817d2c0f in radeon_ttm_tt_to_gtt (ttm=0xf80061eeb248)
On Wed, Nov 13, 2019 at 09:10:06AM +0100, Hans Petter Selasky wrote:
> On 2019-11-13 01:30, Steve Kargl wrote:
> >
> > I installed the 2nd seqlock.diff, rebuilt drm-current-kmod-4.16.g20191023,
> > rebooted, and have been pounding on the system with workloads that are
> > similar to what the
On 2019-11-12 18:31, Steve Kargl wrote:
Can you open the radeonkms.ko in gdb83 from ports and type:
l *(radeon_gem_busy_ioctl+0x30)
% /boot/modules/radeonkms.ko
(gdb) l *(radeon_gem_busy_ioctl+0x30)
0xa12b0 is in radeon_gem_busy_ioctl
On Mon, Nov 11, 2019 at 10:34:23AM +0100, Hans Petter Selasky wrote:
> Hi,
>
> Can you open the radeonkms.ko in gdb83 from ports and type:
>
> l *(radeon_gem_busy_ioctl+0x30)
>
On Mon, Nov 11, 2019 at 02:22:55PM +0100, Hans Petter Selasky wrote:
> On 2019-11-08 23:09, Steve Kargl wrote:
> > Here's 'procstat -kk' for the stuck process with the long line wrapped.
>
> Can you run this command a couple of times and see if the backtrace changes?
>
> --HPS
I was AFK for a
___
freebsd-current@freebsd.org mailing list
On 2019-11-11 11:44, Hans Petter Selasky wrote:
Seems like we can optimise away one more write memory barrier.
If you are building from ports, simply:
cd work/kms-drm*
cat seqlock.diff | patch -p1
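[Editor's note: the `-p1` in the command above strips the leading `a/`/`b/` path component that git-style diffs carry, so the hunks land on the working tree. A tiny self-contained illustration — the file names here are invented, not from the thread:]

```shell
set -e
mkdir -p demo && cd demo
printf 'hello\n' > file.txt

# A git-style diff: paths carry the a/ and b/ prefixes seen in the
# posted patches; -p1 strips that first component when applying.
cat > fix.diff <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+hello, patched
EOF

cat fix.diff | patch -p1
cat file.txt    # now reads "hello, patched"
```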
Hi,
Here is one more debug patch you can try. See if you get that print added in the patch
--HPS
diff --git a/linuxkpi/gplv2/include/linux/reservation.h b/linuxkpi/gplv2/include/linux/reservation.h
index b975f792c..0ce922a0e 100644
---
Hi,
I suspect there is a memory race in the seqlock framework. Can you try
the attached patch and re-build?
Is this issue easily reproducible?
--HPS
On Thu, Nov 07, 2019 at 03:32:23PM -0500, Mark Johnston wrote:
> On Thu, Nov 07, 2019 at 12:29:19PM -0800, Steve Kargl wrote:
> > I haven't seen anyone post about an unkillable process
> > (even by root), which consumes 100% cpu.
> >
> > last pid: 4592; load averages: 1.24, 1.08, 0.74 up
On Thu, Nov 07, 2019 at 12:29:19PM -0800, Steve Kargl wrote:
> I haven't seen anyone post about an unkillable process
> (even by root), which consumes 100% cpu.
>
> last pid: 4592; load averages: 1.24, 1.08, 0.74 up 13+20:21:20 12:26:29
> 68 processes: 2 running, 66 sleeping
> CPU: