Hi
Am 17.11.20 um 17:22 schrieb Ville Syrjälä:
> On Mon, Nov 16, 2020 at 09:04:28PM +0100, Thomas Zimmermann wrote:
>> If fbdev uses a shadow framebuffer, call the damage handler. Otherwise
>> the update might not make it to the screen.
>>
>> Signed-off-by: Thomas Zimmermann
>> Fixes: 222ec45f4c6
On 2020/11/18 2:57 PM, Mike Christie wrote:
On 11/17/20 11:17 PM, Jason Wang wrote:
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
The following kernel patches were made over Michael's vhost branch:
https://urldefense.com/v3/__http
On 2020/10/26 10:59 AM, Jason Wang wrote:
On 2020/10/23 11:34 PM, Michael S. Tsirkin wrote:
On Fri, Oct 23, 2020 at 03:08:53PM +0300, Dan Carpenter wrote:
The copy_to/from_user() functions return the number of bytes that we
weren't able to copy, but the ioctl should return -EFAULT if they fail.
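The convention Dan points out can be sketched in plain C. This is a minimal userspace model, not kernel code: `copy_from_user_model()` stubs the kernel's `copy_from_user()` semantics, and `demo_ioctl()` is a hypothetical handler showing the required conversion of a nonzero "bytes left" count into `-EFAULT`.

```c
#include <errno.h>
#include <string.h>

/* Userspace model of copy_from_user(): returns the number of bytes
 * that could NOT be copied (0 on success) -- not an error code. */
static unsigned long copy_from_user_model(void *to, const void *from,
					  unsigned long n)
{
	if (!from)		/* model a faulting user pointer */
		return n;	/* nothing was copied */
	memcpy(to, from, n);
	return 0;
}

/* Hypothetical ioctl handler: a nonzero return from the copy helper
 * must be converted to -EFAULT, not passed through to userspace. */
static long demo_ioctl(const void *user_arg)
{
	int val;

	if (copy_from_user_model(&val, user_arg, sizeof(val)))
		return -EFAULT;
	(void)val;		/* a real handler would use the value */
	return 0;
}
```

Returning the leftover byte count directly would look like a small positive "success" value to the caller, which is exactly the bug class this review flags.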
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
The following kernel patches were made over Michael's vhost branch:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
and the vhost-scsi bug fix patchset:
htt
On Thu, 12 Nov 2020 14:38:13 +0100 David Hildenbrand wrote:
> virtio-mem soon wants to use offline_and_remove_memory() on memory that
> exceeds a single Linux memory block (memory_block_size_bytes()). Let's
> remove that restriction.
>
> Let's remember the old state and try to restore that if anyth
On 2020/11/18 11:15 AM, Sergey Senozhatsky wrote:
On (20/11/18 11:46), Sergey Senozhatsky wrote:
[..]
Because I'm not sure where the xmit_lock is taken while holding the
target_list_lock.
I don't see where this happens. It seems to me that the report
is not about broken locking order, but m
On (20/11/18 11:46), Sergey Senozhatsky wrote:
[..]
> > Because I'm not sure where the xmit_lock is taken while holding the
> > target_list_lock.
>
> I don't see where this happens. It seems to me that the report
> is not about broken locking order, but more about:
> - soft-irq can be preempte
On (20/11/17 09:33), Steven Rostedt wrote:
> > [ 21.149601] IN-HARDIRQ-W at:
> > [ 21.149602] __lock_acquire+0xa78/0x1a94
> > [ 21.149603] lock_acquire.part.0+0x170/0x360
> > [ 21.149604] lock_acquire+0x68/0x8c
>
> From: Jakub Kicinski
> Sent: Tuesday, November 17, 2020 3:53 AM
>
> On Thu, 12 Nov 2020 08:39:58 +0200 Parav Pandit wrote:
> > FAQs:
> > -
> > 1. Where does userspace vdpa tool reside which users can use?
> > Ans: vdpa tool can possibly reside in iproute2 [1] as it enables users
> > to cr
> From: Stefan Hajnoczi
> Sent: Monday, November 16, 2020 3:11 PM
> Great! A few questions and comments:
>
> How are configuration parameters passed in during device creation (e.g.
> MAC address, number of queues)?
More parameters will be added at device creation time.
>
> Can configuration
On Tue, Nov 17, 2020 at 09:33:25AM -0500, Steven Rostedt wrote:
> On Tue, 17 Nov 2020 12:23:41 +0200
> Leon Romanovsky wrote:
>
> > Hi,
> >
> > Approximately two weeks ago, our regression team started to experience those
> > netconsole splats. The tested code is Linus's master (-rc4) + netdev
> >
On Tue, Nov 17, 2020 at 04:43:42PM +, Stefan Hajnoczi wrote:
On Tue, Nov 17, 2020 at 03:16:20PM +0100, Stefano Garzarella wrote:
On Tue, Nov 17, 2020 at 11:11:21AM +, Stefan Hajnoczi wrote:
> On Fri, Nov 13, 2020 at 02:47:04PM +0100, Stefano Garzarella wrote:
> > +static void vdpasim_blk
On Tue, Nov 17, 2020 at 03:16:20PM +0100, Stefano Garzarella wrote:
> On Tue, Nov 17, 2020 at 11:11:21AM +, Stefan Hajnoczi wrote:
> > On Fri, Nov 13, 2020 at 02:47:04PM +0100, Stefano Garzarella wrote:
> > > +static void vdpasim_blk_work(struct work_struct *work)
> > > +{
> > > + struct vdpasi
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> The following kernel patches were made over Michael's vhost branch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>
> and the vhost-scsi bug fix patchset:
>
> https://lore.kernel.org/linux-scsi/2020
On Mon, Nov 16, 2020 at 09:04:28PM +0100, Thomas Zimmermann wrote:
> If fbdev uses a shadow framebuffer, call the damage handler. Otherwise
> the update might not make it to the screen.
>
> Signed-off-by: Thomas Zimmermann
> Fixes: 222ec45f4c69 ("drm/fb_helper: Support framebuffers in I/O memory"
On Thu, Nov 12, 2020 at 05:19:09PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 2f98b81..e953031 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -1736,6 +1736,28 @@ static long vhost_vring_set_num_addr(struct vhost_dev
> *
On Thu, Nov 12, 2020 at 05:19:08PM -0600, Mike Christie wrote:
> The next patch adds a callout so drivers can perform some action when we
> get a VHOST_SET_VRING_ENABLE, so this patch moves the msg_handler callout
> to a new vhost_dev_ops struct just to keep all the callouts better
> organized.
>
On Thu, Nov 12, 2020 at 05:19:07PM -0600, Mike Christie wrote:
> With one worker we will always send the scsi cmd responses then send the
> TMF rsp, because LIO will always complete the scsi cmds first which
> calls vhost_scsi_release_cmd to add them to the work queue.
>
> When the next patch adds
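The ordering Mike describes follows from having a single FIFO work queue: with one worker, whatever is queued first completes first, and LIO queues the SCSI cmd responses before the TMF response. A minimal sketch of that invariant (names hypothetical, not the vhost-scsi API):

```c
/* Single-worker FIFO model: one work queue, one thread draining it
 * in order. */
enum work_item { CMD_RESP, TMF_RESP };

#define QLEN 8
static enum work_item queue[QLEN];
static int head, tail;

static void queue_work(enum work_item it) { queue[tail++ % QLEN] = it; }
static enum work_item run_one(void)       { return queue[head++ % QLEN]; }

/* LIO completes the SCSI cmds first, so their responses are queued
 * before the TMF response; the lone worker drains in FIFO order, so
 * the TMF response necessarily goes out last. */
static int tmf_sent_last(void)
{
	queue_work(CMD_RESP);
	queue_work(CMD_RESP);
	queue_work(TMF_RESP);
	return run_one() == CMD_RESP && run_one() == CMD_RESP &&
	       run_one() == TMF_RESP;
}
```

With one worker per IO vq (as the series introduces), this global ordering guarantee no longer holds, which is the motivation for the follow-up patch.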
On Thu, Nov 12, 2020 at 05:19:06PM -0600, Mike Christie wrote:
> In the last patches we are going to have a worker thread per IO vq.
> This patch separates the scsi cmd completion code paths so we can
> complete cmds based on their vq instead of having all cmds complete
> on the same worker thread.
On Thu, Nov 12, 2020 at 05:19:05PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index d229515..9eeb8c7 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -187,13 +187,15 @@ void vhost_work_init(struct vhost_work *work,
> vhost_work
On Tue, Nov 17, 2020 at 10:57:09AM +, Stefan Hajnoczi wrote:
On Fri, Nov 13, 2020 at 02:47:01PM +0100, Stefano Garzarella wrote:
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 2754f3069738..fb0411594963 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -22,6 +22
On Tue, 17 Nov 2020 12:23:41 +0200
Leon Romanovsky wrote:
> Hi,
>
> Approximately two weeks ago, our regression team started to experience those
> netconsole splats. The tested code is Linus's master (-rc4) + netdev net-next
> + netdev net-rc.
>
> Such splats are random and we can't bisect beca
On Tue, Nov 17, 2020 at 11:36:36AM +, Stefan Hajnoczi wrote:
On Fri, Nov 13, 2020 at 02:47:12PM +0100, Stefano Garzarella wrote:
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
index 8e41b3ab98d5..68e74383322f 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_
On Tue, Nov 17, 2020 at 11:23:05AM +, Stefan Hajnoczi wrote:
On Fri, Nov 13, 2020 at 02:47:06PM +0100, Stefano Garzarella wrote:
Move device properties used during the entire life cycle into a new
structure to simplify the copy of these fields during the vdpasim
initialization.
Signed-off-by:
On Tue, Nov 17, 2020 at 11:11:21AM +, Stefan Hajnoczi wrote:
On Fri, Nov 13, 2020 at 02:47:04PM +0100, Stefano Garzarella wrote:
+static void vdpasim_blk_work(struct work_struct *work)
+{
+ struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
+ u8 status = VIRTIO_B
On Tue, Nov 17, 2020 at 03:00:32PM +0100, Arnaud POULIQUEN wrote:
> dma_declare_coherent_memory() allows associating the vdev0buffer memory
> region with the remoteproc virtio device (the vdev parent). This region
> is used to allocate the rpmsg buffers.
> The memory for the rpmsg buffer is allocated
On 11/16/20 5:39 PM, Christoph Hellwig wrote:
> Btw, I also still don't understand why remoteproc is using
> dma_declare_coherent_memory to start with. The virtio code has exactly
> one call to dma_alloc_coherent vring_alloc_queue, a function that
> already switches between two different alloca
SHMEM-buffer backing storage is allocated from system memory; which is
typically cachable. The default mode for SHMEM objects is writecombine
though.
Unify SHMEM semantics by defaulting to cached mappings. The exception
is pages imported via dma-buf. DMA memory is usually not cached.
DRM drivers
By default, SHMEM GEM helpers map pages using writecombine. Only a few
drivers require this setting. Others revert it to the default mapping
flags. Some could benefit from caching, but don't care.
Unify the behaviour by switching the SHMEM GEM code to use cached
mappings (i.e., PAGE_KERNEL actually);
Cached page mappings are now the default for SHMEM GEM objects. Remove
the obsolete create function for cached mappings.
Signed-off-by: Thomas Zimmermann
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 26 --
drivers/gpu/drm/mgag200/mgag200_drv.c | 1 -
drivers/gpu/drm/udl
On Thu, Nov 12, 2020 at 05:19:02PM -0600, Mike Christie wrote:
> The vhost work flush function was flushing the entire work queue, so
> there is no need for the double vhost_work_dev_flush calls in
> vhost_scsi_flush.
>
> And we do not need to call vhost_poll_flush for each poller because
> that c
On Thu, Nov 12, 2020 at 05:19:03PM -0600, Mike Christie wrote:
> We use like 3 coding styles in this struct. Switch to just tabs.
>
> Signed-off-by: Mike Christie
> Reviewed-by: Chaitanya Kulkarni
> ---
> drivers/vhost/vhost.h | 12 ++--
> 1 file changed, 6 insertions(+), 6 deletions(-)
On Thu, Nov 12, 2020 at 05:19:01PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index f22fce5..8795fd3 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1468,8 +1468,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> /*
On Thu, Nov 12, 2020 at 05:19:00PM -0600, Mike Christie wrote:
> +static int vhost_kernel_set_vring_enable(struct vhost_dev *dev, int enable)
> +{
> +struct vhost_vring_state s;
> +int i, ret;
> +
> +s.num = 1;
> +for (i = 0; i < dev->nvqs; ++i) {
> +s.index = i;
> +
> +
On Fri, Nov 13, 2020 at 02:47:12PM +0100, Stefano Garzarella wrote:
> diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
> b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
> index 8e41b3ab98d5..68e74383322f 100644
> --- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
> +++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
> @@
On Fri, Nov 13, 2020 at 02:47:10PM +0100, Stefano Garzarella wrote:
> vringh_getdesc_iotlb() manages 2 iovs for writable and readable
> descriptors. This is very useful for the block device, where for
> each request we have both types of descriptor.
>
> Let's split the vdpasim_virtqueue's iov fiel
On Fri, Nov 13, 2020 at 02:47:06PM +0100, Stefano Garzarella wrote:
> Move device properties used during the entire life cycle into a new
> structure to simplify the copy of these fields during the vdpasim
> initialization.
>
> Signed-off-by: Stefano Garzarella
> ---
> drivers/vdpa/vdpa_sim/vdpa_s
On Fri, Nov 13, 2020 at 02:47:04PM +0100, Stefano Garzarella wrote:
> +static void vdpasim_blk_work(struct work_struct *work)
> +{
> + struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
> + u8 status = VIRTIO_BLK_S_OK;
> + int i;
> +
> + spin_lock(&vdpasim->lock);
>
On Fri, Nov 13, 2020 at 02:47:01PM +0100, Stefano Garzarella wrote:
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 2754f3069738..fb0411594963 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -22,6 +22,7 @@
> #include
> #include
> #include
> +#include
Hi,
Approximately two weeks ago, our regression team started to experience those
netconsole splats. The tested code is Linus's master (-rc4) + netdev net-next
+ netdev net-rc.
Such splats are random and we can't bisect because there is no stable
reproducer.
Any idea, what is the root cause?
[
On Thu, Oct 29, 2020 at 02:33:47PM +0100, Daniel Vetter wrote:
> These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
> emulation code").
>
> v2: Somehow these structs provided the struct qxl_device pre-decl,
> reorder the header to not anger compilers.
>
> Acked-by: Gerd Hoffma
On Mon, Nov 16, 2020 at 04:22:57PM +0100, Juergen Gross wrote:
> Eliminate the usergs_sysret64 paravirt call completely and switch
> the swapgs one to use ALTERNATIVE instead. This requires fixing the
> IST-based exception entries for Xen PV to use the same mechanism as
> NMI and debug exception al
On 16.11.20 17:28, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross wrote:
USERGS_SYSRET64 is used to return from a syscall via sysret, but
a Xen PV guest will nevertheless use the iret hypercall, as there
is no sysret PV hypercall defined.
So instead of testing all the pr