WorldCIST'2016 Call for Papers - Deadline: November 15, 2015

2015-10-30 Thread ML
*
** We apologize if you receive multiple copies of this email or if its content is 
irrelevant to you.
** Please forward it to your contacts. Thank you very much!
*


-
WorldCIST'16 - 4th World Conference on Information Systems and Technologies 
Recife, PE, Brazil
22nd-24th of March 2016
http://www.aisti.eu/worldcist16/
---


SCOPE

The WorldCist'16 - 4th World Conference on Information Systems and Technologies 
( http://www.aisti.eu/worldcist16/ ), to be held in Recife, PE, Brazil, 22-24 
March 2016, is a global forum for researchers and practitioners to present and 
discuss the most recent innovations, trends, results, experiences and concerns 
across the several perspectives of Information Systems and Technologies.

We are pleased to invite you to submit your papers to WorldCist'16. All 
submissions will be reviewed on the basis of relevance, originality, importance 
and clarity.


THEMES

Submitted papers should be related to one or more of the main themes proposed 
for the Conference:

A) Information and Knowledge Management (IKM);
B) Organizational Models and Information Systems (OMIS);
C) Software and Systems Modeling (SSM);
D) Software Systems, Architectures, Applications and Tools (SSAAT);
E) Multimedia Systems and Applications (MSA);
F) Computer Networks, Mobility and Pervasive Systems (CNMPS);
G) Intelligent and Decision Support Systems (IDSS);
H) Big Data Analytics and Applications (BDAA);
I) Human-Computer Interaction (HCI);
J) Health Informatics (HIS);
K) Information Technologies in Education (ITE);
L) Information Technologies in Radiocommunications (ITR).


TYPES OF SUBMISSIONS AND DECISIONS

Four types of papers can be submitted:

- Full paper: Finished or consolidated R&D works, to be included in one of the 
Conference themes. These papers are assigned a 10-page limit.

- Short paper: Ongoing works with relevant preliminary results, open to 
discussion. These papers are assigned a 7-page limit.

- Poster paper: Initial work with relevant ideas, open to discussion. These 
papers are assigned a 4-page limit.

- Company paper: Companies' papers that show practical experience, R & D, 
tools, etc., focused on some of the conference topics. These papers are 
assigned a 4-page limit.

Submitted papers must comply with the format of the Advances in Intelligent 
Systems and Computing series (see the Instructions for Authors at the Springer 
website or download a DOC example), be written in English, must not have been 
published before, must not be under review for any other conference or 
publication, and must not include any information leading to the authors’ 
identification. Therefore, the authors’ names, affiliations and bibliographic 
references should not be included in the version submitted for evaluation by 
the Program Committee. This information should only be included in the 
camera-ready version, saved in Word or LaTeX format and also in PDF format. 
These files must be accompanied by the completed Consent to Publication form, 
in a ZIP file, and uploaded to the conference management system.

All papers will be subjected to a “double-blind review” by at least two members 
of the Program Committee.

Based on the Program Committee's evaluation, a paper can be rejected or 
accepted by the Conference Chairs. In the latter case, it can be accepted as 
the type originally submitted or as another type. Thus, full papers can be 
accepted as short papers or poster papers only. Similarly, short papers can be 
accepted as poster papers only. In these cases, the authors will be allowed to 
maintain the original number of pages in the camera-ready version.

The authors of accepted poster papers must also build and print a poster to be 
exhibited during the Conference. This poster must follow an A1 or A2 vertical 
format. The Conference may include Work Sessions where these posters are 
presented and orally discussed, with a 5-minute limit per poster.

The authors of accepted full papers will have 15 minutes to present their work 
in a Conference Work Session; approximately 5 minutes of discussion will follow 
each presentation. The authors of accepted short papers and company papers will 
have 11 minutes to present their work in a Conference Work Session; 
approximately 4 minutes of discussion will follow each presentation.


PUBLICATION AND INDEXING

To ensure that a full paper, short paper, poster paper or company paper is 
published in the Proceedings, at least one of the authors must be fully 
registered by the 27th of December 2015, and the paper must comply with the 
suggested layout and page-limit. Additionally, all recommended changes must be 
addressed by the authors before they submit the camera-ready version.

No more than one paper per registration will be published in the Conference 
Proceedings. An extra fee must be paid for publication of additional papers, 
with a maximum of one additional paper per registration.

Full and short papers will be published in the Conference Proceedings by 
Springer, in a volume of the Advances in Intelligent Systems and Computing 
series.

[PATCH] vhost: move is_le setup to the backend

2015-10-30 Thread Greg Kurz
The vq->is_le field is used to fix endianness when accessing the vring via
the cpu_to_vhost16() and vhost16_to_cpu() helpers in the following cases:

1) host is big endian and device is modern virtio

2) host has cross-endian support and device is legacy virtio with a different
   endianness than the host

Both cases rely on the VHOST_SET_FEATURES ioctl, but 2) also needs the
VHOST_SET_VRING_ENDIAN ioctl to be called by userspace. Since vq->is_le
is only needed when the backend is active, it was decided to set it at
backend start.

This is currently done in vhost_init_used()->vhost_init_is_le() but it
obfuscates the core vhost code. This patch moves the is_le setup to a
dedicated function that is called from the backend code.

Note vhost_net is the only backend that can pass vq->private_data == NULL to
vhost_init_used(), hence the "if (sock)" branch.

No behaviour change.

Signed-off-by: Greg Kurz 
---
 drivers/vhost/net.c   |6 ++
 drivers/vhost/scsi.c  |3 +++
 drivers/vhost/test.c  |2 ++
 drivers/vhost/vhost.c |   12 +++-
 drivers/vhost/vhost.h |1 +
 5 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 9eda69e40678..d6319cb2664c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -917,6 +917,12 @@ static long vhost_net_set_backend(struct vhost_net *n, 
unsigned index, int fd)
 
vhost_net_disable_vq(n, vq);
vq->private_data = sock;
+
+   if (sock)
+   vhost_set_is_le(vq);
+   else
+   vq->is_le = virtio_legacy_is_little_endian();
+
r = vhost_init_used(vq);
if (r)
goto err_used;
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index e25a23692822..e2644a301fa5 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1276,6 +1276,9 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
vq = &vs->vqs[i].vq;
mutex_lock(&vq->mutex);
vq->private_data = vs_tpg;
+
+   vhost_set_is_le(vq);
+
vhost_init_used(vq);
mutex_unlock(&vq->mutex);
}
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index f2882ac98726..b1c7df502211 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -196,6 +196,8 @@ static long vhost_test_run(struct vhost_test *n, int test)
oldpriv = vq->private_data;
vq->private_data = priv;
 
+   vhost_set_is_le(vq);
+
r = vhost_init_used(&n->vqs[index]);
 
mutex_unlock(&vq->mutex);
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index eec2f11809ff..6be863dcbd13 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -113,6 +113,12 @@ static void vhost_init_is_le(struct vhost_virtqueue *vq)
 }
 #endif /* CONFIG_VHOST_CROSS_ENDIAN_LEGACY */
 
+void vhost_set_is_le(struct vhost_virtqueue *vq)
+{
+   vhost_init_is_le(vq);
+}
+EXPORT_SYMBOL_GPL(vhost_set_is_le);
+
 static void vhost_poll_func(struct file *file, wait_queue_head_t *wqh,
poll_table *pt)
 {
@@ -1156,12 +1162,8 @@ int vhost_init_used(struct vhost_virtqueue *vq)
 {
__virtio16 last_used_idx;
int r;
-   if (!vq->private_data) {
-   vq->is_le = virtio_legacy_is_little_endian();
+   if (!vq->private_data)
return 0;
-   }
-
-   vhost_init_is_le(vq);
 
r = vhost_update_used_flags(vq);
if (r)
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 4772862b71a7..8a62041959fe 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -162,6 +162,7 @@ bool vhost_enable_notify(struct vhost_dev *, struct 
vhost_virtqueue *);
 
 int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
unsigned int log_num, u64 len);
+void vhost_set_is_le(struct vhost_virtqueue *vq);
 
 #define vq_err(vq, fmt, ...) do {  \
pr_debug(pr_fmt(fmt), ##__VA_ARGS__);   \

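For context, a hedged sketch of the endianness decision that vhost_init_is_le()
/ vhost_set_is_le() in the patch above encode, reconstructed from the two cases
listed in the commit message (the VIRTIO_F_VERSION_1 check and the 'user_be'
field name are assumptions, not verbatim kernel code):

/* Sketch only: how vq->is_le ends up set for the two cases above. */
static void sketch_init_is_le(struct vhost_virtqueue *vq)
{
	/* Case 1: modern (VIRTIO_F_VERSION_1) devices are always little
	 * endian, so a big-endian host must byte-swap. */
	if (vhost_has_feature(vq, VIRTIO_F_VERSION_1)) {
		vq->is_le = true;
		return;
	}
	/* Case 2: with CONFIG_VHOST_CROSS_ENDIAN_LEGACY, a legacy device
	 * follows the vring endianness chosen by userspace via
	 * VHOST_SET_VRING_ENDIAN; 'user_be' stands for that choice here. */
	vq->is_le = !vq->user_be;
}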


Re: [PATCH net-next rfc V2 0/2] basic busy polling support for vhost_net

2015-10-30 Thread Jason Wang


On 10/29/2015 04:45 PM, Jason Wang wrote:
> Hi all:
>
> This series tries to add basic busy polling for vhost net. The idea is
> simple: at the end of tx processing, busy poll for newly added tx
> descriptors and for rx socket data for a while. The maximum amount of
> time (in us) that may be spent on busy polling is specified through a
> module parameter.
>
> Tests were done with:
>
> - 50 us as busy loop timeout
> - Netperf 2.6
> - Two machines with back to back connected mlx4
> - Guest with 8 vcpus and 1 queue
>
> Results show a very large improvement on both tx (at most 158%) and rr
> (at most 53%), while rx is unchanged from before. In most cases the cpu
> utilization is also improved:
>

Just noticed there's something wrong in the setup, so the numbers above are
incorrect. I will re-run and post the correct numbers here.

Sorry.
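For context, a minimal sketch of the busy-polling idea from the quoted cover
letter (purely illustrative; busy_clock(), sk_has_rx_data() and
tx_ring_has_new_descs() are assumed helper names, and the 50 us default only
mirrors the timeout used in the test, this is not the posted code):

/* Spin for up to busyloop_timeout microseconds looking for new tx
 * descriptors or pending rx socket data before going back to sleep. */
static unsigned int busyloop_timeout = 50;	/* us, module parameter in the RFC */

static void sketch_busy_poll(struct vhost_net_virtqueue *tnvq, struct sock *sk)
{
	unsigned long endtime = busy_clock() + busyloop_timeout;

	while (!need_resched() && busy_clock() < endtime) {
		if (sk_has_rx_data(sk) || tx_ring_has_new_descs(tnvq))
			break;		/* new work found: go process it */
		cpu_relax();		/* otherwise keep spinning politely */
	}
}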


Re: [PATCH v4 2/6] virtio_ring: Support DMA APIs

2015-10-30 Thread Christian Borntraeger
On 30.10.2015 at 13:01, Cornelia Huck wrote:
> On Thu, 29 Oct 2015 18:09:47 -0700
> Andy Lutomirski  wrote:
> 
>> virtio_ring currently sends the device (usually a hypervisor)
>> physical addresses of its I/O buffers.  This is okay when DMA
>> addresses and physical addresses are the same thing, but this isn't
>> always the case.  For example, this never works on Xen guests, and
>> it is likely to fail if a physical "virtio" device ever ends up
>> behind an IOMMU or swiotlb.
>>
>> The immediate use case for me is to enable virtio on Xen guests.
>> For that to work, we need vring to support DMA address translation
>> as well as a corresponding change to virtio_pci or to another
>> driver.
>>
>> With this patch, if enabled, virtfs survives kmemleak and
>> CONFIG_DMA_API_DEBUG.
>>
>> Signed-off-by: Andy Lutomirski 
>> ---
>>  drivers/virtio/Kconfig   |   2 +-
>>  drivers/virtio/virtio_ring.c | 190 
>> +++
>>  tools/virtio/linux/dma-mapping.h |  17 
>>  3 files changed, 172 insertions(+), 37 deletions(-)
>>  create mode 100644 tools/virtio/linux/dma-mapping.h
> 
>>  static void detach_buf(struct vring_virtqueue *vq, unsigned int head)
>>  {
>> -unsigned int i;
>> +unsigned int i, j;
>> +u16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
>>
>>  /* Clear data ptr. */
>> -vq->data[head] = NULL;
>> +vq->desc_state[head].data = NULL;
>>
>> -/* Put back on free list: find end */
>> +/* Put back on free list: unmap first-level descriptors and find end */
>>  i = head;
>>
>> -/* Free the indirect table */
>> -if (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, 
>> VRING_DESC_F_INDIRECT))
>> -kfree(phys_to_virt(virtio64_to_cpu(vq->vq.vdev, 
>> vq->vring.desc[i].addr)));
>> -
>> -while (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, 
>> VRING_DESC_F_NEXT)) {
>> +while (vq->vring.desc[i].flags & nextflag) {
>> +vring_unmap_one(vq, &vq->vring.desc[i]);
>>  i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
>>  vq->vq.num_free++;
>>  }
>>
>> +vring_unmap_one(vq, &vq->vring.desc[i]);
>>  vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
>>  vq->free_head = head;
>> +
>>  /* Plus final descriptor */
>>  vq->vq.num_free++;
>> +
>> +/* Free the indirect table, if any, now that it's unmapped. */
>> +if (vq->desc_state[head].indir_desc) {
>> +struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
>> +u32 len = vq->vring.desc[head].len;
> 
> This one needs to be virtio32_to_cpu(...) as well.

Yes, just did the exact same change
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f269e1c..f2249df 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -556,7 +556,7 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned 
int head)
/* Free the indirect table, if any, now that it's unmapped. */
if (vq->desc_state[head].indir_desc) {
struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
-   u32 len = vq->vring.desc[head].len;
+   u32 len = virtio32_to_cpu(vq->vq.vdev, 
vq->vring.desc[head].len);
 
BUG_ON(!(vq->vring.desc[head].flags &
 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));


now it boots.
> 
>> +
>> +BUG_ON(!(vq->vring.desc[head].flags &
>> + cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
>> +BUG_ON(len == 0 || len % sizeof(struct vring_desc));
>> +
>> +for (j = 0; j < len / sizeof(struct vring_desc); j++)
>> +vring_unmap_one(vq, &indir_desc[j]);
>> +
>> +kfree(vq->desc_state[head].indir_desc);
>> +vq->desc_state[head].indir_desc = NULL;
>> +}
>>  }
> 
> With that change on top of your current branch, I can boot (root on
> virtio-blk, either virtio-1 or legacy virtio) on current qemu master
> with kvm enabled on s390. Haven't tried anything further.
> 
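For readers less familiar with the virtio endianness rules behind the
virtio32_to_cpu() fix above, a hedged illustration (values made up): vring
fields are stored in the device's byte order, which is little-endian for
modern virtio, so a raw load on a big-endian host such as s390 comes back
byte-swapped.

	__virtio32 raw = vq->vring.desc[head].len;	  /* 256 stored LE reads back as 0x00010000 */
	u32 len = virtio32_to_cpu(vq->vq.vdev, raw);	  /* converts it back to 256 */
	unsigned int n = len / sizeof(struct vring_desc); /* 256 / 16 = 16 indirect entries to unmap */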



Re: [PATCH v4 1/6] virtio-net: Stop doing DMA from the stack

2015-10-30 Thread Andy Lutomirski
On Fri, Oct 30, 2015 at 6:55 AM, Christian Borntraeger wrote:
> On 30.10.2015 at 02:09, Andy Lutomirski wrote:
>> From: "Michael S. Tsirkin" 
>>
>> Once virtio starts using the DMA API, we won't be able to safely DMA
>> from the stack.  virtio-net does a couple of config DMA requests
>> from small stack buffers -- switch to using dynamically-allocated
>> memory.
>>
>> This should have no effect on any performance-critical code paths.
>>
>> [I wrote the subject and commit message.  mst wrote the code. --luto]
>>
>> Signed-off-by: Andy Lutomirski 
>> signed-off-by: Michael S. Tsirkin 
>
> I still get an error when using multiqueue:
>
> #  ethtool -L eth0 combined 4
> [   33.534686] virtio_ccw 0.0.000d: DMA-API: device driver maps memory from 
> stack [addr=629e7c06]

Fixed in my branch, I think.

--Andy


[PATCHv2 0/3] dma ops and virtio

2015-10-30 Thread Christian Borntraeger
Here is the 2nd version of providing a DMA API for s390.

There are some attempts to unify the dma ops (Christoph) as well
as some attempts to make virtio use the dma API (Andy).

At kernel summit we concluded that we want to use the same code on all
platforms, wherever possible, so having a dummy dma_op might be the
easiest solution to keep virtio-ccw as similar as possible to
virtio-pci. Together with a fixed-up patch set from Andy Lutomirski
this seems to work.

We will also need a fixup for powerpc and QEMU changes to make virtio
work with an iommu on power and x86.

TODO:
- future add-on patches to also fold in x86 no iommu
- dma_mask
- checking?
- make compilation of dma-noop dependent on something

v1->v2:
- initial testing
- always use dma_noop_ops if device has no private dma_ops
- get rid of setup in virtio_ccw,kvm_virtio
- set CONFIG_HAS_DMA(ATTRS) for virtio (fixes compile for !PCI)
- rename s390_dma_ops to s390_pci_dma_ops

Christian Borntraeger (3):
  Provide simple noop dma ops
  alpha: use common noop dma ops
  s390/dma: Allow per device dma ops

 arch/alpha/kernel/pci-noop.c| 46 ++
 arch/s390/Kconfig   |  3 +-
 arch/s390/include/asm/device.h  |  6 ++-
 arch/s390/include/asm/dma-mapping.h |  6 ++-
 arch/s390/pci/pci.c |  1 +
 arch/s390/pci/pci_dma.c |  4 +-
 include/linux/dma-mapping.h |  2 +
 lib/Makefile|  2 +-
 lib/dma-noop.c  | 77 +
 9 files changed, 98 insertions(+), 49 deletions(-)
 create mode 100644 lib/dma-noop.c

-- 
2.4.3



[PATCH 1/3] Provide simple noop dma ops

2015-10-30 Thread Christian Borntraeger
We are going to require dma_ops for several common drivers, even for
systems that do have an identity mapping. Let's provide some minimal
no-op dma_ops that can be used for that purpose.

Signed-off-by: Christian Borntraeger 
---
 include/linux/dma-mapping.h |  2 ++
 lib/Makefile|  2 +-
 lib/dma-noop.c  | 77 +
 3 files changed, 80 insertions(+), 1 deletion(-)
 create mode 100644 lib/dma-noop.c

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index ac07ff0..7912f54 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -66,6 +66,8 @@ struct dma_map_ops {
int is_phys;
 };
 
+extern struct dma_map_ops dma_noop_ops;
+
 #define DMA_BIT_MASK(n)(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
 
 #define DMA_MASK_NONE  0x0ULL
diff --git a/lib/Makefile b/lib/Makefile
index 13a7c6a..b04ba71 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -13,7 +13,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 sha1.o md5.o irq_regs.o argv_split.o \
 proportions.o flex_proportions.o ratelimit.o show_mem.o \
 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-earlycpio.o seq_buf.o nmi_backtrace.o
+earlycpio.o seq_buf.o nmi_backtrace.o dma-noop.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
 lib-$(CONFIG_MMU) += ioremap.o
diff --git a/lib/dma-noop.c b/lib/dma-noop.c
new file mode 100644
index 000..3ce31302
--- /dev/null
+++ b/lib/dma-noop.c
@@ -0,0 +1,77 @@
+/*
+ * lib/dma-noop.c
+ *
+ * Stub DMA noop-ops
+ */
+#include 
+#include 
+#include 
+#include 
+
+static void *dma_noop_alloc(struct device *dev, size_t size,
+   dma_addr_t *dma_handle, gfp_t gfp,
+   struct dma_attrs *attrs)
+{
+   void *ret;
+
+   ret = (void *)__get_free_pages(gfp, get_order(size));
+   if (ret) {
+   memset(ret, 0, size);
+   *dma_handle = virt_to_phys(ret);
+   }
+   return ret;
+}
+
+static void dma_noop_free(struct device *dev, size_t size,
+ void *cpu_addr, dma_addr_t dma_addr,
+ struct dma_attrs *attrs)
+{
+   free_pages((unsigned long)cpu_addr, get_order(size));
+}
+
+static dma_addr_t dma_noop_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+   return page_to_phys(page) + offset;
+}
+
+static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int 
nents,
+enum dma_data_direction dir, struct dma_attrs 
*attrs)
+{
+   int i;
+   struct scatterlist *sg;
+
+   for_each_sg(sgl, sg, nents, i) {
+   void *va;
+
+   BUG_ON(!sg_page(sg));
+   va = sg_virt(sg);
+   sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
+   sg_dma_len(sg) = sg->length;
+   }
+
+   return nents;
+}
+
+static int dma_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+   return 0;
+}
+
+static int dma_noop_supported(struct device *dev, u64 mask)
+{
+   return 1;
+}
+
+struct dma_map_ops dma_noop_ops = {
+   .alloc  = dma_noop_alloc,
+   .free   = dma_noop_free,
+   .map_page   = dma_noop_map_page,
+   .map_sg = dma_noop_map_sg,
+   .mapping_error  = dma_noop_mapping_error,
+   .dma_supported  = dma_noop_supported,
+};
+
+EXPORT_SYMBOL(dma_noop_ops);
-- 
2.4.3



[PATCH 3/3] s390/dma: Allow per device dma ops

2015-10-30 Thread Christian Borntraeger
As virtio-ccw now has dma ops, we can no longer default to the PCI ones.
Make use of dev_archdata to keep the dma_ops per device. The pci devices
now use that to override the default, and the default is changed to use
the noop ops for everything that is not PCI. To compile without PCI
support we also have to enable the DMA api with virtio.

Signed-off-by: Christian Borntraeger 
---
 arch/s390/Kconfig   | 3 ++-
 arch/s390/include/asm/device.h  | 6 +-
 arch/s390/include/asm/dma-mapping.h | 6 --
 arch/s390/pci/pci.c | 1 +
 arch/s390/pci/pci_dma.c | 4 ++--
 5 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 1d57000..04f0e02 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -113,6 +113,7 @@ config S390
select GENERIC_FIND_FIRST_BIT
select GENERIC_SMP_IDLE_THREAD
select GENERIC_TIME_VSYSCALL
+   select HAS_DMA
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_EARLY_PFN_TO_NID
@@ -124,6 +125,7 @@ config S390
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
select HAVE_DEBUG_KMEMLEAK
+   select HAVE_DMA_ATTRS
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_FTRACE_MCOUNT_RECORD
@@ -580,7 +582,6 @@ config QDIO
 
 menuconfig PCI
bool "PCI support"
-   select HAVE_DMA_ATTRS
select PCI_MSI
help
  Enable PCI support.
diff --git a/arch/s390/include/asm/device.h b/arch/s390/include/asm/device.h
index d8f9872..4a9f35e 100644
--- a/arch/s390/include/asm/device.h
+++ b/arch/s390/include/asm/device.h
@@ -3,5 +3,9 @@
  *
  * This file is released under the GPLv2
  */
-#include 
+struct dev_archdata {
+   struct dma_map_ops *dma_ops;
+};
 
+struct pdev_archdata {
+};
diff --git a/arch/s390/include/asm/dma-mapping.h 
b/arch/s390/include/asm/dma-mapping.h
index b3fd54d..cb05f5c 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -11,11 +11,13 @@
 
 #define DMA_ERROR_CODE (~(dma_addr_t) 0x0)
 
-extern struct dma_map_ops s390_dma_ops;
+extern struct dma_map_ops s390_pci_dma_ops;
 
 static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-   return &s390_dma_ops;
+   if (dev && dev->archdata.dma_ops)
+   return dev->archdata.dma_ops;
+   return &dma_noop_ops;
 }
 
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 7ef12a3..fa41605 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -649,6 +649,7 @@ int pcibios_add_device(struct pci_dev *pdev)
 
zdev->pdev = pdev;
pdev->dev.groups = zpci_attr_groups;
+   pdev->dev.archdata.dma_ops = &s390_pci_dma_ops;
zpci_map_resources(pdev);
 
for (i = 0; i < PCI_BAR_COUNT; i++) {
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 37505b8..ea39c3f 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -495,7 +495,7 @@ static int __init dma_debug_do_init(void)
 }
 fs_initcall(dma_debug_do_init);
 
-struct dma_map_ops s390_dma_ops = {
+struct dma_map_ops s390_pci_dma_ops = {
.alloc  = s390_dma_alloc,
.free   = s390_dma_free,
.map_sg = s390_dma_map_sg,
@@ -506,7 +506,7 @@ struct dma_map_ops s390_dma_ops = {
.is_phys= 0,
/* dma_supported is unconditionally true without a callback */
 };
-EXPORT_SYMBOL_GPL(s390_dma_ops);
+EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
 
 static int __init s390_iommu_setup(char *str)
 {
-- 
2.4.3



[PATCH 2/3] alpha: use common noop dma ops

2015-10-30 Thread Christian Borntraeger
Some of the alpha pci noop dma ops are identical to the common ones.
Use them.

Signed-off-by: Christian Borntraeger 
---
 arch/alpha/kernel/pci-noop.c | 46 
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
index 2b1f4a1..8e735b5e 100644
--- a/arch/alpha/kernel/pci-noop.c
+++ b/arch/alpha/kernel/pci-noop.c
@@ -123,44 +123,6 @@ static void *alpha_noop_alloc_coherent(struct device *dev, 
size_t size,
return ret;
 }
 
-static void alpha_noop_free_coherent(struct device *dev, size_t size,
-void *cpu_addr, dma_addr_t dma_addr,
-struct dma_attrs *attrs)
-{
-   free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-static dma_addr_t alpha_noop_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size,
- enum dma_data_direction dir,
- struct dma_attrs *attrs)
-{
-   return page_to_pa(page) + offset;
-}
-
-static int alpha_noop_map_sg(struct device *dev, struct scatterlist *sgl, int 
nents,
-enum dma_data_direction dir, struct dma_attrs 
*attrs)
-{
-   int i;
-   struct scatterlist *sg;
-
-   for_each_sg(sgl, sg, nents, i) {
-   void *va;
-
-   BUG_ON(!sg_page(sg));
-   va = sg_virt(sg);
-   sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
-   sg_dma_len(sg) = sg->length;
-   }
-
-   return nents;
-}
-
-static int alpha_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-   return 0;
-}
-
 static int alpha_noop_supported(struct device *dev, u64 mask)
 {
return mask < 0x00ffUL ? 0 : 1;
@@ -168,10 +130,10 @@ static int alpha_noop_supported(struct device *dev, u64 
mask)
 
 struct dma_map_ops alpha_noop_ops = {
.alloc  = alpha_noop_alloc_coherent,
-   .free   = alpha_noop_free_coherent,
-   .map_page   = alpha_noop_map_page,
-   .map_sg = alpha_noop_map_sg,
-   .mapping_error  = alpha_noop_mapping_error,
+   .free   = dma_noop_free_coherent,
+   .map_page   = dma_noop_map_page,
+   .map_sg = dma_noop_map_sg,
+   .mapping_error  = dma_noop_mapping_error,
.dma_supported  = alpha_noop_supported,
 };
 
-- 
2.4.3



Re: [PATCH v4 1/6] virtio-net: Stop doing DMA from the stack

2015-10-30 Thread Christian Borntraeger
On 30.10.2015 at 02:09, Andy Lutomirski wrote:
> From: "Michael S. Tsirkin" 
> 
> Once virtio starts using the DMA API, we won't be able to safely DMA
> from the stack.  virtio-net does a couple of config DMA requests
> from small stack buffers -- switch to using dynamically-allocated
> memory.
> 
> This should have no effect on any performance-critical code paths.
> 
> [I wrote the subject and commit message.  mst wrote the code. --luto]
> 
> Signed-off-by: Andy Lutomirski 
> signed-off-by: Michael S. Tsirkin 

I still get an error when using multiqueue:

#  ethtool -L eth0 combined 4
[   33.534686] virtio_ccw 0.0.000d: DMA-API: device driver maps memory from 
stack [addr=629e7c06]
[   33.534704] [ cut here ]
[   33.534705] WARNING: at lib/dma-debug.c:1169
[   33.534706] Modules linked in: dm_multipath
[   33.534709] CPU: 1 PID: 1087 Comm: ethtool Not tainted 4.3.0-rc3+ #269
[   33.534710] task: 616f9978 ti: 629e4000 task.ti: 
629e4000
[   33.534712] Krnl PSW : 0704d0018000 005869d2 
(check_for_stack+0xb2/0x118)
[   33.534716]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:1 PM:0 
EA:3
Krnl GPRS: 006a 00d60f44 005a 64ee0870
[   33.534718]005869ce  0001 
629e7c06
[   33.534719] 0c06 0002 
6467f800
[   33.534720]64673428 629e7c06 005869ce 
629e7928
[   33.534726] Krnl Code: 005869c2: c0200024ad4elarl
%r2,a1c45e
   005869c8: c0e5ffe6d6fc   brasl   %r14,2617c0
  #005869ce: a7f40001   brc 15,5869d0
  >005869d2: c010003465eb   larl%r1,c135a8
   005869d8: e3101012   lt  %r1,0(%r1)
   005869de: a784000a   brc 8,5869f2
   005869e2: e340f0b4   lg  %r4,176(%r15)
   005869e8: ebcff0a4   lmg %r12,%r15,160(%r15)
[   33.534736] Call Trace:
[   33.534737] ([<005869ce>] check_for_stack+0xae/0x118)
[   33.534738]  [<00586e3c>] debug_dma_map_page+0x114/0x160
[   33.534740]  [<005a31f8>] vring_map_one_sg.isra.7+0x98/0xc0
[   33.534742]  [<005a3b72>] virtqueue_add_sgs+0x1e2/0x788
[   33.534744]  [<00618afc>] virtnet_send_command+0xcc/0x140
[   33.534745]  [<00618c0c>] virtnet_set_queues+0x9c/0x110
[   33.534747]  [<00619928>] virtnet_set_channels+0x78/0xe0
[   33.534748]  [<006f63ea>] ethtool_set_channels+0x62/0x88
[   33.534750]  [<006f8900>] dev_ethtool+0x10d8/0x1a48
[   33.534752]  [<0070c540>] dev_ioctl+0x190/0x510
[   33.534754]  [<006cf2da>] sock_do_ioctl+0x7a/0x90
[   33.534755]  [<006cf840>] sock_ioctl+0x1e8/0x2d0
[   33.534758]  [<002e6c78>] do_vfs_ioctl+0x3a8/0x508
[   33.534759]  [<002e6e7c>] SyS_ioctl+0xa4/0xb8
[   33.534762]  [<008231ec>] system_call+0x244/0x264
[   33.534763]  [<03ff922026d2>] 0x3ff922026d2
[   33.534764] Last Breaking-Event-Address:
[   33.534765]  [<005869ce>] check_for_stack+0xae/0x118
[   33.534766] ---[ end trace 2379df65f4decfc4 ]---


> ---
>  drivers/net/virtio_net.c | 34 +++---
>  1 file changed, 19 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d8838dedb7a4..f94ab786088f 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -140,6 +140,12 @@ struct virtnet_info {
> 
>   /* CPU hot plug notifier */
>   struct notifier_block nb;
> +
> + /* Control VQ buffers: protected by the rtnl lock */
> + struct virtio_net_ctrl_hdr ctrl_hdr;
> + virtio_net_ctrl_ack ctrl_status;
> + u8 ctrl_promisc;
> + u8 ctrl_allmulti;
>  };
> 
>  struct padded_vnet_hdr {
> @@ -976,31 +982,30 @@ static bool virtnet_send_command(struct virtnet_info 
> *vi, u8 class, u8 cmd,
>struct scatterlist *out)
>  {
>   struct scatterlist *sgs[4], hdr, stat;
> - struct virtio_net_ctrl_hdr ctrl;
> - virtio_net_ctrl_ack status = ~0;
>   unsigned out_num = 0, tmp;
> 
>   /* Caller should know better */
>   BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ));
> 
> - ctrl.class = class;
> - ctrl.cmd = cmd;
> + vi->ctrl_status = ~0;
> + vi->ctrl_hdr.class = class;
> + vi->ctrl_hdr.cmd = cmd;
>   /* Add header */
> - sg_init_one(&hdr, &ctrl, sizeof(ctrl));
> + sg_init_one(&hdr, &vi->ctrl_hdr, sizeof(vi->ctrl_hdr));
>   sgs[out_num++] = &hdr;
> 
>   if (out)
>   sgs[out_num++] = out;
> 
>   /* Add return status. */
> - sg_init_one(&stat, &status, sizeof(status));
> + sg_init_one(&stat, &vi->ctrl_status, sizeof(vi->ctrl_status));
>   sgs[out_num] = &stat;
> 
>  

Re: [PATCH v3 0/3] virtio DMA API core stuff

2015-10-30 Thread David Woodhouse
(Sorry, missed part of this before).

On Thu, 2015-10-29 at 11:01 +0200, Michael S. Tsirkin wrote:
> Isn't this specified by the hypervisor? I don't think this is a good
> way to do this: guest security should be up to guest.

And it is. When the guest sees an IOMMU, it can choose to use it, or
choose not to (or choose to put it in passthrough mode). But as Jörg
says, we don't have a way for an individual device driver to *request*
passthrough mode or not yet; the choice is made by the core IOMMU code
(iommu=pt on the command line) — or by the platform simply stating that
a given device isn't *covered* by an IOMMU, if that is indeed the case.

In *no* circumstance is it sane for a device driver just to "opt out"
of using the correct DMA API function calls, and expect that to
*magically* cause the IOMMU to be bypassed.

> > Everyone seems to agree that x86's emulated Q35 thing
> > is just buggy right now and should be taught to use the existing ACPI
> > mechanism for enumerating passthrough devices.
> 
> I'm not sure what ACPI has to do with it.
> It's about a way for guest users to specify whether
> they want to bypass an IOMMU for a given device.

No, it absolutely isn't. You might want that — and see the discussion
about DMA_ATTR_IOMMU_BYPASS if you do. But that is *utterly* irrelevant
to *this* discussion, in which you seem to be advocating that the
virtio drivers should remain buggy by just unilaterally not using the
DMA API.

> By the way, a bunch of code is missing on the QEMU side
> to make this useful:
> 1. virtio ignores the iommu
> 2. vhost user ignores the iommu
> 3. dataplane ignores the iommu
> 4. vhost-net ignores the iommu
> 5. VFIO ignores the iommu

No, those things are not useful for fixing the virtio driver bug under
discussion here. All we need to do is make the virtio drivers correctly
use the DMA API. They should never have passed review and been accepted
into the Linux kernel without that.
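As a hedged illustration of what "correctly use the DMA API" means in practice
(a driver-agnostic fragment; dev, buf, len and desc are assumed names, not
virtio code):

	/* Buggy pattern: assumes a DMA address equals a physical address. */
	desc->addr = virt_to_phys(buf);

	/* DMA-API pattern: let the platform (IOMMU, swiotlb, Xen, ...) translate. */
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		goto fail;
	desc->addr = dma;
	/* ... and unmap once the device is done with the buffer ... */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);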

All we need to do first is make sure that the bug we have in the
PowerPC IOMMU code (and potentially ARM and/or SPARC?) is fixed, and
that it doesn't attempt to use an IOMMU that doesn't exist. And ensure
that the virtualised IOMMU on qemu/x86 isn't lying and claiming that it
translates for the virtio devices when it doesn't.

There are other things we might want to do — like fixing the IOMMU that
qemu can emulate, and actually making it work with real assigned
devices (currently it's totally hosed because it doesn't handle that
case at all). And potentially making the virtualised IOMMU actually
*do* translation for virtio devices (as opposed to just admitting
correctly that it doesn't). But those aren't strictly relevant here,
yet.

It's not clear what specific uses of the IOMMU you had in mind in your
above list — could you elucidate?

-- 
dwmw2




Re: [PATCH v4 0/6] virtio core DMA API conversion

2015-10-30 Thread Christian Borntraeger
On 30.10.2015 at 02:09, Andy Lutomirski wrote:
> This switches virtio to use the DMA API unconditionally.  I'm sure
> it breaks things, but it seems to work on x86 using virtio-pci, with
> and without Xen, and using both the modern 1.0 variant and the
> legacy variant.
> 
> This appears to work on native and Xen x86_64 using both modern and
> legacy virtio-pci.  It also appears to work on arm and arm64.
> 
> It definitely won't work as-is on s390x, and I haven't been able to
> test Christian's patches because I can't get virtio-ccw to work in
> QEMU at all.  I don't know what I'm doing wrong.


[...]
>   virtio-net: Stop doing DMA from the stack
> 
>  drivers/net/virtio_net.c   |  34 ++--
>  drivers/virtio/Kconfig |   2 +-
>  drivers/virtio/virtio_mmio.c   |  67 ++-
>  drivers/virtio/virtio_pci_common.h |   6 -
>  drivers/virtio/virtio_pci_legacy.c |  42 ++---
>  drivers/virtio/virtio_pci_modern.c |  61 ++-
>  drivers/virtio/virtio_ring.c   | 348 
> ++---

Do you also have an untested patch for drivers/s390/virtio/* ?


