Re: BUG in copy_page_to_iter() when iscsi sets ENABLE_CLUSTERING

2018-12-07 Thread Christoph Hellwig
Note that, independent of what we do in the Linux iSCSI initiator,
this is a network DoS, so we'll have to fix it.

On Wed, Dec 05, 2018 at 12:09:40PM -0800, Lee Duncan wrote:
> I recently found what I believe is a bug, and I'd appreciate feedback
> on whether that is correct and, if so, how to proceed.
> 
> BACKGROUND
> 
> Recently Christoph Hellwig sent an email to driver maintainers for
> drivers that set ".use_clustering" to DISABLE_CLUSTERING in their SCSI
> Host templates, asking if the setting could be changed to
> ENABLE_CLUSTERING.
> 
> As part of answering that question, I set ENABLE_CLUSTERING in
> drivers/scsi/iscsi_tcp.c and tested both the iscsi initiator and
> target.
> 
> As a reminder, setting ENABLE_CLUSTERING means that adjacent bios can
> be merged. This can make IO faster, but it means that drivers must be
> able to deal with IOs that cross page boundaries, since bio merges can
> create such IOs.
> 
> RESULTS
> 
> The iscsi initiator code can handle ENABLE_CLUSTERING just fine, but
> the iscsi target code fails. It seems to assume that IOs do *NOT*
> cross a page boundary.
> 
> The problem shows up in lib/iov_iter.c, in the functions
> copy_page_to_iter() and page_copy_sane() (see below for how to
> reproduce):
> 
> >> static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
> >> {
> >>         struct page *head = compound_head(page);
> >>         size_t v = n + offset + page_address(page) - page_address(head);
> >> 
> >>         if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
> >>                 return true;
> >>         WARN_ON(1);
> >>         return false;
> >> }
> >> 
> >> size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
> >>                          struct iov_iter *i)
> >> {
> >>         if (unlikely(!page_copy_sane(page, offset, bytes)))
> >>                 return 0;
> >>         if (i->type & (ITER_BVEC|ITER_KVEC)) {
> >>                 void *kaddr = kmap_atomic(page);
> >>                 size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
> >>                 kunmap_atomic(kaddr);
> >>                 return wanted;
> >>         } else if (unlikely(iov_iter_is_discard(i)))
> >>                 return bytes;
> >>         else if (likely(!iov_iter_is_pipe(i)))
> >>                 return copy_page_to_iter_iovec(page, offset, bytes, i);
> >>         else
> >>                 return copy_page_to_iter_pipe(page, offset, bytes, i);
> >> }
> 
> This causes the following WARN_ON stack trace (repeatedly):
> 
> >> ...
> >> [   78.644559] WARNING: CPU: 0 PID: 2192 at lib/iov_iter.c:830 
> >> copy_page_to_iter+0x1a6/0x2e0
> >> [   78.644561] Modules linked in: iscsi_tcp(E) libiscsi_tcp(E) libiscsi(E) 
> >> scsi_transport_iscsi(E) rfcomm(E) iscsi_target_mod(E) target_core_pscsi(E) 
> >> target_core_file(E) target_core_iblock(E) target_core_user(E) uio(E) 
> >> target_core_mod(E) configfs(E) af_packet(E) iscsi_ibft(E) 
> >> iscsi_boot_sysfs(E) vmw_vsock_vmci_transport(E) vsock(E) bnep(E) fuse(E) 
> >> crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) xfs(E) 
> >> aesni_intel(E) snd_ens1371(E) aes_x86_64(E) snd_ac97_codec(E) 
> >> crypto_simd(E) cryptd(E) ac97_bus(E) glue_helper(E) snd_rawmidi(E) 
> >> vmw_balloon(E) snd_seq_device(E) snd_pcm(E) pcspkr(E) snd_timer(E) snd(E) 
> >> uvcvideo(E) btusb(E) videobuf2_vmalloc(E) btrtl(E) videobuf2_memops(E) 
> >> videobuf2_v4l2(E) btbcm(E) btintel(E) videodev(E) bluetooth(E) 
> >> videobuf2_common(E) vmw_vmci(E) ecdh_generic(E) rfkill(E) soundcore(E) 
> >> mptctl(E) gameport(E) joydev(E) i2c_piix4(E) e1000(E) ac(E) button(E) 
> >> btrfs(E) libcrc32c(E) xor(E) raid6_pq(E) hid_generic(E) usbhid(E) 
> >> sr_mod(E) cdrom(E) ata_generic(E)
> >> [   78.644583]  crc32c_intel(E) serio_raw(E) mptspi(E) 
> >> scsi_transport_spi(E) mptscsih(E) ata_piix(E) uhci_hcd(E) ehci_pci(E) 
> >> ehci_hcd(E) vmwgfx(E) drm_kms_helper(E) syscopyarea(E) sysfillrect(E) 
> >> sysimgblt(E) fb_sys_fops(E) usbcore(E) ttm(E) mptbase(E) drm(E) sg(E) 
> >> dm_multipath(E) dm_mod(E) scsi_dh_rdac(E) scsi_dh_emc(E) scsi_dh_alua(E)
> >> [   78.644593] CPU: 0 PID: 2192 Comm: iscsi_trx Tainted: GE
> >>  4.20.0-rc4-1-default+ #1 openSUSE Tumbleweed (unreleased)
> >> [   78.644593] Hardware name: VMware, Inc. VMware Virtual Platform/440BX 
> >> Desktop Reference P

[PATCH 09/10] gdth: remove interrupt coalescing support

2018-12-06 Thread Christoph Hellwig
This code has been under a never-defined ifdef since the beginning
of time (or at least of recorded history), and has just bitrotted.  Nuke it.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c | 151 
 1 file changed, 12 insertions(+), 139 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index b8a033f18d7d..14f1d15cb6eb 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -89,10 +89,6 @@
  * phase:   unused
  */
 
-
-/* interrupt coalescing */
-/* #define INT_COAL */
-
 /* statistics */
 #define GDTH_STATISTICS
 
@@ -192,9 +188,6 @@ static u8   DebugState = DEBUG_GDTH;
 
 #ifdef GDTH_STATISTICS
 static u32 max_rq=0, max_index=0, max_sg=0;
-#ifdef INT_COAL
-static u32 max_int_coal=0;
-#endif
 static u32 act_ints=0, act_ios=0, act_stats=0, act_rq=0;
 static struct timer_list gdth_timer;
 #endif
@@ -1189,9 +1182,6 @@ static int gdth_search_drives(gdth_ha_str *ha)
 gdth_arcdl_str *alst;
 gdth_alist_str *alst2;
 gdth_oem_str_ioctl *oemstr;
-#ifdef INT_COAL
-gdth_perf_modes *pmod;
-#endif
 
 TRACE(("gdth_search_drives() hanum %d\n", ha->hanum));
 ok = 0;
@@ -1234,35 +1224,6 @@ static int gdth_search_drives(gdth_ha_str *ha)
 cdev_cnt = (u16)ha->info;
 ha->fw_vers = ha->service;
 
-#ifdef INT_COAL
-if (ha->type == GDT_PCIMPR) {
-/* set perf. modes */
-pmod = (gdth_perf_modes *)ha->pscratch;
-pmod->version  = 1;
-pmod->st_mode  = 1;/* enable one status buffer */
-*((u64 *)&pmod->st_buff_addr1) = ha->coal_stat_phys;
-pmod->st_buff_indx1= COALINDEX;
-pmod->st_buff_addr2= 0;
-pmod->st_buff_u_addr2  = 0;
-pmod->st_buff_indx2= 0;
-pmod->st_buff_size = sizeof(gdth_coal_status) * MAXOFFSETS;
-pmod->cmd_mode = 0;// disable all cmd buffers
-pmod->cmd_buff_addr1   = 0;
-pmod->cmd_buff_u_addr1 = 0;
-pmod->cmd_buff_indx1   = 0;
-pmod->cmd_buff_addr2   = 0;
-pmod->cmd_buff_u_addr2 = 0;
-pmod->cmd_buff_indx2   = 0;
-pmod->cmd_buff_size= 0;
-pmod->reserved1= 0;
-pmod->reserved2= 0;
-if (gdth_internal_cmd(ha, CACHESERVICE, GDT_IOCTL, SET_PERF_MODES,
-  INVALID_CHANNEL,sizeof(gdth_perf_modes))) {
-printk("GDT-HA %d: Interrupt coalescing activated\n", ha->hanum);
-}
-}
-#endif
-
 /* detect number of buses - try new IOCTL */
 iocr = (gdth_raw_iochan_str *)ha->pscratch;
 iocr->hdr.version = 0xffffffff;
@@ -2538,12 +2499,6 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
 u8 IStatus;
 u16 Service;
 unsigned long flags = 0;
-#ifdef INT_COAL
-int coalesced = FALSE;
-int next = FALSE;
-gdth_coal_status *pcs = NULL;
-int act_int_coal = 0;   
-#endif
 
 TRACE(("gdth_interrupt() IRQ %d\n", ha->irq));
 
@@ -2570,24 +2525,6 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
 ++act_ints;
 #endif
 
-#ifdef INT_COAL
-/* See if the fw is returning coalesced status */
-if (IStatus == COALINDEX) {
-/* Coalesced status.  Setup the initial status 
-   buffer pointer and flags */
-pcs = ha->coal_stat;
-coalesced = TRUE;
-next = TRUE;
-}
-
-do {
-if (coalesced) {
-/* For coalesced requests all status
-   information is found in the status buffer */
-IStatus = (u8)(pcs->status & 0xff);
-}
-#endif
-
 if (ha->type == GDT_PCI) {
 dp6_ptr = ha->brd;
 if (IStatus & 0x80) {   /* error flag */
@@ -2620,28 +2557,15 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
 dp6m_ptr = ha->brd;
 if (IStatus & 0x80) {   /* error flag */
 IStatus &= ~0x80;
-#ifdef INT_COAL
-if (coalesced)
-ha->status = pcs->ext_status & 0xffff;
-else 
-#endif
-ha->status = readw(&dp6m_ptr->i960r.status);
+ha->status = readw(&dp6m_ptr->i960r.status);
 TRACE2(("gdth_interrupt() error %d/%d\n",IStatus,ha->status));
 } else  /* no error */
 ha->status = S_OK;
-#ifdef INT_COAL
-/* get information */
-if (coalesced) {
-ha->info = pcs->info0;
-ha->info2 = pcs->info1;
-ha->service = (pcs->ext_status >> 16) & 0x;
-} else
-#endif
-{
-ha->info = readl(&dp6m_ptr->i960r.info[0]);
- 

[PATCH 04/10] gdth: remove ISA and EISA support

2018-12-06 Thread Christoph Hellwig
The non-PCI code has bitrotted for quite a while and will just oops
on load because it passes a NULL pointer to the PCI DMA routines.

Let's kill it for good - if someone really wants to use one of these
cards, I'll help mentor them to write proper driver glue.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/Kconfig |   2 +-
 drivers/scsi/gdth.c  | 709 ++-
 drivers/scsi/gdth.h  |  30 --
 3 files changed, 24 insertions(+), 717 deletions(-)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index f07444d30b21..0cfa385625d8 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -676,7 +676,7 @@ config SCSI_DMX3191D
 
 config SCSI_GDTH
tristate "Intel/ICP (former GDT SCSI Disk Array) RAID Controller 
support"
-   depends on (ISA || EISA || PCI) && SCSI && ISA_DMA_API
+   depends on PCI && SCSI
---help---
  Formerly called GDT SCSI Disk Array Controller Support.
 
diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 7174e7a88da2..45ddecd0284c 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -1,6 +1,6 @@
 /
  * Linux driver for *  
- * ICP vortex GmbH:GDT ISA/EISA/PCI Disk Array Controllers  *
+ * ICP vortex GmbH:GDT PCI Disk Array Controllers   *
  * Intel Corporation:  Storage RAID Controllers *
  *  *
  * gdth.c   *
@@ -32,15 +32,10 @@
  /
 
 /* All GDT Disk Array Controllers are fully supported by this driver.
- * This includes the PCI/EISA/ISA SCSI Disk Array Controllers and the
+ * This includes the PCI SCSI Disk Array Controllers and the
  * PCI Fibre Channel Disk Array Controllers. See gdth.h for a complete
  * list of all controller types.
  * 
- * If you have one or more GDT3000/3020 EISA controllers with 
- * controller BIOS disabled, you have to set the IRQ values with the 
- * command line option "gdth=irq1,irq2,...", where the irq1,irq2,... are
- * the IRQ values for the EISA controllers.
- * 
  * After the optional list of IRQ values, other possible 
  * command line options are:
  * disable:Ydisable driver
@@ -61,14 +56,12 @@
  *  access a shared resource from several nodes, 
  *  appropriate controller firmware required
  * shared_access:N  enable driver reserve/release protocol
- * probe_eisa_isa:Y scan for EISA/ISA controllers
- * probe_eisa_isa:N do not scan for EISA/ISA controllers
  * force_dma32:Yuse only 32 bit DMA mode
  * force_dma32:Nuse 64 bit DMA mode, if supported
  *
  * The default values are: "gdth=disable:N,reserve_mode:1,reverse_scan:N,
  *  max_ids:127,rescan:N,hdr_channel:0,
- *  shared_access:Y,probe_eisa_isa:N,force_dma32:N".
+ *  shared_access:Y,force_dma32:N".
  * Here is another example: "gdth=reserve_list:0,1,2,0,0,1,3,0,rescan:Y".
  * 
  * When loading the gdth driver as a module, the same options are available. 
@@ -79,7 +72,7 @@
  * 
  * Default: "modprobe gdth disable=0 reserve_mode=1 reverse_scan=0
  *   max_ids=127 rescan=0 hdr_channel=0 shared_access=0
- *   probe_eisa_isa=0 force_dma32=0"
+ *   force_dma32=0"
  * The other example: "modprobe gdth reserve_list=0,1,2,0,0,1,3,0 rescan=1".
  */
 
@@ -286,12 +279,6 @@ static struct timer_list gdth_timer;
 
 #define BUS_L2P(a,b)((b)>(a)->virt_bus ? (b-1):(b))
 
-#ifdef CONFIG_ISA
-static u8   gdth_drq_tab[4] = {5,6,7,7};/* DRQ table */
-#endif
-#if defined(CONFIG_EISA) || defined(CONFIG_ISA)
-static u8   gdth_irq_tab[6] = {0,10,11,12,14,0};/* IRQ table */
-#endif
 static u8   gdth_polling;   /* polling if TRUE */
 static int  gdth_ctr_count  = 0;/* controller count */
 static LIST_HEAD(gdth_instances);   /* controller list */
@@ -325,10 +312,6 @@ static u8 gdth_direction_tab[0x100] = {
 };
 
 /* LILO and modprobe/insmod parameters */
-/* IRQ list for GDT3000/3020 EISA controllers */
-static int irq[MAXHA] __initdata = 
-{0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
- 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff};
 /* disable driver flag */
 static int disable __initdata = 0;
 /* reserve flag */
@@ -348,13 +331,10 @@ static int max_ids = MAXID;
 static int rescan = 0;
 /* shared access */
 static int shared_access = 1;
-/* enable support for EISA and ISA controllers */
-static int probe_eisa_isa = 0;
 /* 64 bit DMA 

[PATCH 03/10] gdth: remove gdth_{alloc,free}_ioctl

2018-12-06 Thread Christoph Hellwig
Out of the three callers one insists on the scratch buffer, and the
others are fine with a new allocation.  Switch those two to just use
pci_alloc_consistent directly, and open code the scratch buffer
allocation in the remaining one.  This avoids a case where we might
be doing a memory allocation under a spinlock with irqs disabled.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c  |  7 ++--
 drivers/scsi/gdth_proc.c | 71 
 drivers/scsi/gdth_proc.h |  3 --
 3 files changed, 25 insertions(+), 56 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 45e67d4cb3af..7174e7a88da2 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -4232,7 +4232,7 @@ static int ioc_general(void __user *arg, char *cmnd)
gdth_ioctl_general gen;
gdth_ha_str *ha;
char *buf = NULL;
-   u64 paddr;
+   dma_addr_t paddr;
int rval;
 
	if (copy_from_user(&gen, arg, sizeof(gdth_ioctl_general)))
@@ -4251,7 +4251,8 @@ static int ioc_general(void __user *arg, char *cmnd)
if (gen.data_len + gen.sense_len == 0)
goto execute;
 
-   buf = gdth_ioctl_alloc(ha, gen.data_len + gen.sense_len, FALSE, &paddr);
+buf = pci_alloc_consistent(ha->pdev, gen.data_len + gen.sense_len,
+   &paddr);
if (!buf)
return -EFAULT;
 
@@ -4286,7 +4287,7 @@ static int ioc_general(void __user *arg, char *cmnd)
 
rval = 0;
 out_free_buf:
-   gdth_ioctl_free(ha, gen.data_len+gen.sense_len, buf, paddr);
+   pci_free_consistent(ha->pdev, gen.data_len + gen.sense_len, buf, paddr);
return rval;
 }
  
diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index bd5532a80b0e..8e77f8fd8641 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -31,7 +31,6 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 int i, found;
 gdth_cmd_strgdtcmd;
 gdth_cpar_str   *pcpar;
-u64 paddr;
 
 charcmnd[MAX_COMMAND_SIZE];
 memset(cmnd, 0xff, 12);
@@ -113,13 +112,23 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 }
 
 if (wb_mode) {
-if (!gdth_ioctl_alloc(ha, sizeof(gdth_cpar_str), TRUE, &paddr))
-return(-EBUSY);
+   unsigned long flags;
+
+   BUILD_BUG_ON(sizeof(gdth_cpar_str) > GDTH_SCRATCH);
+
+   spin_lock_irqsave(&ha->smp_lock, flags);
+   if (ha->scratch_busy) {
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+return -EBUSY;
+   }
+   ha->scratch_busy = TRUE;
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+
 pcpar = (gdth_cpar_str *)ha->pscratch;
 memcpy( pcpar, &ha->cpar, sizeof(gdth_cpar_str) );
 gdtcmd.Service = CACHESERVICE;
 gdtcmd.OpCode = GDT_IOCTL;
-gdtcmd.u.ioctl.p_param = paddr;
+gdtcmd.u.ioctl.p_param = ha->scratch_phys;
 gdtcmd.u.ioctl.param_size = sizeof(gdth_cpar_str);
 gdtcmd.u.ioctl.subfunc = CACHE_CONFIG;
 gdtcmd.u.ioctl.channel = INVALID_CHANNEL;
@@ -127,7 +136,10 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 
 gdth_execute(host, &gdtcmd, cmnd, 30, NULL);
 
-gdth_ioctl_free(ha, GDTH_SCRATCH, ha->pscratch, paddr);
+   spin_lock_irqsave(&ha->smp_lock, flags);
+   ha->scratch_busy = FALSE;
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+
 printk("Done.\n");
 return(orig_length);
 }
@@ -143,7 +155,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 int id, i, j, k, sec, flag;
 int no_mdrv = 0, drv_no, is_mirr;
 u32 cnt;
-u64 paddr;
+dma_addr_t paddr;
 int rc = -ENOMEM;
 
 gdth_cmd_str *gdtcmd;
@@ -232,7 +244,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nPhysical Devices:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, size, FALSE, &paddr);
+buf = pci_alloc_consistent(ha->pdev, size, &paddr);
 if (!buf) 
 goto stop_output;
 for (i = 0; i < ha->bus_cnt; ++i) {
@@ -406,7 +418,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_printf(m,
" To Array Drv.:\t%s\n", hrec);
 }   
-
+
 if (!flag)
 seq_puts(m, "\n --\n");
 
@@ -500,7 +512,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, size, buf, paddr);
+   pci_free_consistent(ha->pdev, size, buf, paddr);
 
 for (i = 0; i < MAX_HDRIVES; ++i) {
 if (!(ha->hdr[i].present))
@@ -553,47 +565,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 return rc;
 }
 
-static char *gdth_ioctl_alloc(gdth_ha_str *ha, int size, int scratch,
-   

[PATCH 02/10] gdth: reuse dma coherent allocation in gdth_show_info

2018-12-06 Thread Christoph Hellwig
gdth_show_info currently allocs and frees a dma buffer four times,
which isn't very efficient. Reuse a single allocation instead.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth_proc.c | 20 +---
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index 3a9751a80225..bd5532a80b0e 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -226,11 +226,13 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 #endif
 
 if (ha->more_proc) {
+size_t size = max_t(size_t, GDTH_SCRATCH, sizeof(gdth_hget_str));
+
 /* more information: 2. about physical devices */
 seq_puts(m, "\nPhysical Devices:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
+buf = gdth_ioctl_alloc(ha, size, FALSE, &paddr);
 if (!buf) 
 goto stop_output;
 for (i = 0; i < ha->bus_cnt; ++i) {
@@ -323,7 +325,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
 if (!flag)
 seq_puts(m, "\n --\n");
@@ -332,9 +333,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nLogical Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
-if (!buf) 
-goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
 if (!ha->hdr[i].is_logdrv)
 continue;
@@ -408,7 +406,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_printf(m,
" To Array Drv.:\t%s\n", hrec);
 }   
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
 if (!flag)
 seq_puts(m, "\n --\n");
@@ -417,9 +414,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nArray Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
-if (!buf) 
-goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
 if (!(ha->hdr[i].is_arraydrv && ha->hdr[i].is_master))
 continue;
@@ -468,8 +462,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
hrec);
 }
 }
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
-
+
 if (!flag)
 seq_puts(m, "\n --\n");
 
@@ -477,9 +470,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nHost Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, sizeof(gdth_hget_str), FALSE, &paddr);
-if (!buf) 
-goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
 if (!ha->hdr[i].is_logdrv || 
 (ha->hdr[i].is_arraydrv && !ha->hdr[i].is_master))
@@ -510,7 +500,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, sizeof(gdth_hget_str), buf, paddr);
+gdth_ioctl_free(ha, size, buf, paddr);
 
 for (i = 0; i < MAX_HDRIVES; ++i) {
 if (!(ha->hdr[i].present))
-- 
2.19.1



[PATCH 01/10] gdth: refactor ioc_general

2018-12-06 Thread Christoph Hellwig
This function is a huge mess with duplicated error handling.  Split out
a few useful helpers and use goto labels to untangle the error handling
and no-data ioctl handling.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c | 244 +++-
 1 file changed, 126 insertions(+), 118 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 16709735b546..45e67d4cb3af 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -4155,131 +4155,139 @@ static int ioc_resetdrv(void __user *arg, char *cmnd)
 return 0;
 }
 
-static int ioc_general(void __user *arg, char *cmnd)
+static void gdth_ioc_addr32(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
 {
-gdth_ioctl_general gen;
-char *buf = NULL;
-u64 paddr; 
-gdth_ha_str *ha;
-int rval;
+   if (ha->cache_feat & SCATTER_GATHER) {
+   gen->command.u.cache.DestAddr = 0xffffffff;
+   gen->command.u.cache.sg_canz = 1;
+   gen->command.u.cache.sg_lst[0].sg_ptr = (u32)paddr;
+   gen->command.u.cache.sg_lst[0].sg_len = gen->data_len;
+   gen->command.u.cache.sg_lst[1].sg_len = 0;
+   } else {
+   gen->command.u.cache.DestAddr = paddr;
+   gen->command.u.cache.sg_canz = 0;
+   }
+}
 
-if (copy_from_user(&gen, arg, sizeof(gdth_ioctl_general)))
-return -EFAULT;
-ha = gdth_find_ha(gen.ionode);
-if (!ha)
-return -EFAULT;
+static void gdth_ioc_addr64(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
+{
+   if (ha->cache_feat & SCATTER_GATHER) {
+   gen->command.u.cache64.DestAddr = (u64)-1;
+   gen->command.u.cache64.sg_canz = 1;
+   gen->command.u.cache64.sg_lst[0].sg_ptr = paddr;
+   gen->command.u.cache64.sg_lst[0].sg_len = gen->data_len;
+   gen->command.u.cache64.sg_lst[1].sg_len = 0;
+   } else {
+   gen->command.u.cache64.DestAddr = paddr;
+   gen->command.u.cache64.sg_canz = 0;
+   }
+}
 
-if (gen.data_len > INT_MAX)
-return -EINVAL;
-if (gen.sense_len > INT_MAX)
-return -EINVAL;
-if (gen.data_len + gen.sense_len > INT_MAX)
-return -EINVAL;
+static void gdth_ioc_cacheservice(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
+{
+   if (ha->cache_feat & GDT_64BIT) {
+   /* copy elements from 32-bit IOCTL structure */
+   gen->command.u.cache64.BlockCnt = gen->command.u.cache.BlockCnt;
+   gen->command.u.cache64.BlockNo = gen->command.u.cache.BlockNo;
+   gen->command.u.cache64.DeviceNo = gen->command.u.cache.DeviceNo;
 
-if (gen.data_len + gen.sense_len != 0) {
-if (!(buf = gdth_ioctl_alloc(ha, gen.data_len + gen.sense_len,
- FALSE, &paddr)))
-return -EFAULT;
-if (copy_from_user(buf, arg + sizeof(gdth_ioctl_general),  
-   gen.data_len + gen.sense_len)) {
-gdth_ioctl_free(ha, gen.data_len+gen.sense_len, buf, paddr);
-return -EFAULT;
-}
+   gdth_ioc_addr64(ha, gen, paddr);
+   } else {
+   gdth_ioc_addr32(ha, gen, paddr);
+   }
+}
 
-if (gen.command.OpCode == GDT_IOCTL) {
-gen.command.u.ioctl.p_param = paddr;
-} else if (gen.command.Service == CACHESERVICE) {
-if (ha->cache_feat & GDT_64BIT) {
-/* copy elements from 32-bit IOCTL structure */
-gen.command.u.cache64.BlockCnt = gen.command.u.cache.BlockCnt;
-gen.command.u.cache64.BlockNo = gen.command.u.cache.BlockNo;
-gen.command.u.cache64.DeviceNo = gen.command.u.cache.DeviceNo;
-/* addresses */
-if (ha->cache_feat & SCATTER_GATHER) {
-gen.command.u.cache64.DestAddr = (u64)-1;
-gen.command.u.cache64.sg_canz = 1;
-gen.command.u.cache64.sg_lst[0].sg_ptr = paddr;
-gen.command.u.cache64.sg_lst[0].sg_len = gen.data_len;
-gen.command.u.cache64.sg_lst[1].sg_len = 0;
-} else {
-gen.command.u.cache64.DestAddr = paddr;
-gen.command.u.cache64.sg_canz = 0;
-}
-} else {
-if (ha->cache_feat & SCATTER_GATHER) {
-gen.command.u.cache.DestAddr = 0xffffffff;
-gen.command.u.cache.sg_canz = 1;
-gen.command.u.cache.sg_lst[0].sg_ptr = (u32)paddr;
-gen.command.u.cache.sg_lst[0].sg_len = gen.data_len;
-gen.command.u.cache.sg_lst[1].sg_len = 0;
-} else {
-gen.command.u.cache.Dest

[PATCH 10/10] gdth: use generic DMA API

2018-12-06 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.  Also switch
to dma_map_single from pci_map_page in one case where this makes the code
simpler.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c  | 59 +++-
 drivers/scsi/gdth_proc.c |  4 +--
 2 files changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 14f1d15cb6eb..e1071fc2249b 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -2077,9 +2077,9 @@ static int gdth_fill_cache_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp,
 
 if (scsi_bufflen(scp)) {
 cmndinfo->dma_dir = (read_write == 1 ?
-PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE);   
-sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
-   cmndinfo->dma_dir);
+DMA_TO_DEVICE : DMA_FROM_DEVICE);   
+sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
+  scsi_sg_count(scp), cmndinfo->dma_dir);
 if (mode64) {
 struct scatterlist *sl;
 
@@ -2153,8 +2153,6 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 dma_addr_t sense_paddr;
 int cmd_index, sgcnt, mode64;
 u8 t,l;
-struct page *page;
-unsigned long offset;
 struct gdth_cmndinfo *cmndinfo;
 
 t = scp->device->id;
@@ -2196,10 +2194,8 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 }
 
 } else {
-page = virt_to_page(scp->sense_buffer);
-offset = (unsigned long)scp->sense_buffer & ~PAGE_MASK;
-sense_paddr = pci_map_page(ha->pdev,page,offset,
-   16,PCI_DMA_FROMDEVICE);
+sense_paddr = dma_map_single(&ha->pdev->dev, scp->sense_buffer, 16,
+DMA_FROM_DEVICE);
 
cmndinfo->sense_paddr  = sense_paddr;
 cmdp->OpCode   = GDT_WRITE; /* always */
@@ -2240,9 +2236,9 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 }
 
 if (scsi_bufflen(scp)) {
-cmndinfo->dma_dir = PCI_DMA_BIDIRECTIONAL;
-sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
-   cmndinfo->dma_dir);
+cmndinfo->dma_dir = DMA_BIDIRECTIONAL;
+sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
+  scsi_sg_count(scp), cmndinfo->dma_dir);
 if (mode64) {
 struct scatterlist *sl;
 
@@ -2750,12 +2746,12 @@ static int gdth_sync_event(gdth_ha_str *ha, int 
service, u8 index,
 return 2;
 }
 if (scsi_bufflen(scp))
-pci_unmap_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
+dma_unmap_sg(&ha->pdev->dev, scsi_sglist(scp), scsi_sg_count(scp),
  cmndinfo->dma_dir);
 
 if (cmndinfo->sense_paddr)
-pci_unmap_page(ha->pdev, cmndinfo->sense_paddr, 16,
-   PCI_DMA_FROMDEVICE);
+dma_unmap_page(&ha->pdev->dev, cmndinfo->sense_paddr, 16,
+  DMA_FROM_DEVICE);
 
 if (ha->status == S_OK) {
 cmndinfo->status = S_OK;
@@ -3659,8 +3655,8 @@ static int ioc_general(void __user *arg, char *cmnd)
if (gen.data_len + gen.sense_len == 0)
goto execute;
 
-buf = pci_alloc_consistent(ha->pdev, gen.data_len + gen.sense_len,
-   &paddr);
+buf = dma_alloc_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len,
+   &paddr, GFP_KERNEL);
if (!buf)
return -EFAULT;
 
@@ -3695,7 +3691,8 @@ static int ioc_general(void __user *arg, char *cmnd)
 
rval = 0;
 out_free_buf:
-   pci_free_consistent(ha->pdev, gen.data_len + gen.sense_len, buf, paddr);
+   dma_free_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len, buf,
+   paddr);
return rval;
 }
  
@@ -4140,14 +4137,14 @@ static int gdth_pci_probe_one(gdth_pci_str *pcistr, 
gdth_ha_str **ha_out)
 
error = -ENOMEM;
 
-   ha->pscratch = pci_alloc_consistent(ha->pdev, GDTH_SCRATCH,
-   &scratch_dma_handle);
+   ha->pscratch = dma_alloc_coherent(&ha->pdev->dev, GDTH_SCRATCH,
+   &scratch_dma_handle, GFP_KERNEL);
if (!ha->pscratch)
goto out_free_irq;
ha->scratch_phys = scratch_dma_handle;
 
-   ha->pmsg = pci_alloc_consistent(ha->pdev, sizeof(gdth_msg_str),
-   &scratch_dma_handle);
+   ha->pmsg = dma_alloc_coherent(&ha->pdev->dev, sizeof(gdth_msg_str),
+   &scratch_dma_handle, GFP_KERNE

[PATCH 07/10] gdth: remove dead dma statistics code

2018-12-06 Thread Christoph Hellwig
This code can't be built into the kernel without editing the source
file and is not generally useful.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c  | 18 --
 drivers/scsi/gdth_proc.c |  8 
 2 files changed, 26 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 63d704301875..b8a033f18d7d 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -2126,12 +2126,6 @@ static int gdth_fill_cache_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp,
 cmdp->u.cache64.sg_canz = sgcnt;
 scsi_for_each_sg(scp, sl, sgcnt, i) {
 cmdp->u.cache64.sg_lst[i].sg_ptr = sg_dma_address(sl);
-#ifdef GDTH_DMA_STATISTICS
-if (cmdp->u.cache64.sg_lst[i].sg_ptr > (u64)0xffffffff)
-ha->dma64_cnt++;
-else
-ha->dma32_cnt++;
-#endif
 cmdp->u.cache64.sg_lst[i].sg_len = sg_dma_len(sl);
 }
 } else {
@@ -2141,9 +2135,6 @@ static int gdth_fill_cache_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp,
 cmdp->u.cache.sg_canz = sgcnt;
 scsi_for_each_sg(scp, sl, sgcnt, i) {
 cmdp->u.cache.sg_lst[i].sg_ptr = sg_dma_address(sl);
-#ifdef GDTH_DMA_STATISTICS
-ha->dma32_cnt++;
-#endif
 cmdp->u.cache.sg_lst[i].sg_len = sg_dma_len(sl);
 }
 }
@@ -2298,12 +2289,6 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 cmdp->u.raw64.sg_ranz = sgcnt;
 scsi_for_each_sg(scp, sl, sgcnt, i) {
 cmdp->u.raw64.sg_lst[i].sg_ptr = sg_dma_address(sl);
-#ifdef GDTH_DMA_STATISTICS
-if (cmdp->u.raw64.sg_lst[i].sg_ptr > (u64)0xffffffff)
-ha->dma64_cnt++;
-else
-ha->dma32_cnt++;
-#endif
 cmdp->u.raw64.sg_lst[i].sg_len = sg_dma_len(sl);
 }
 } else {
@@ -2313,9 +2298,6 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 cmdp->u.raw.sg_ranz = sgcnt;
 scsi_for_each_sg(scp, sl, sgcnt, i) {
 cmdp->u.raw.sg_lst[i].sg_ptr = sg_dma_address(sl);
-#ifdef GDTH_DMA_STATISTICS
-ha->dma32_cnt++;
-#endif
 cmdp->u.raw.sg_lst[i].sg_len = sg_dma_len(sl);
 }
 }
diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index 8e77f8fd8641..fc36c49a5334 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -229,14 +229,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
" Serial No.:   \t0x%8X\tCache RAM size:\t%d KB\n",
ha->binfo.ser_no, ha->binfo.memsize / 1024);
 
-#ifdef GDTH_DMA_STATISTICS
-/* controller statistics */
-seq_puts(m, "\nController Statistics:\n");
-seq_printf(m,
-   " 32-bit DMA buffer:\t%lu\t64-bit DMA buffer:\t%lu\n",
-   ha->dma32_cnt, ha->dma64_cnt);
-#endif
-
 if (ha->more_proc) {
 size_t size = max_t(size_t, GDTH_SCRATCH, sizeof(gdth_hget_str));
 
-- 
2.19.1



[PATCH 08/10] gdth: remove dead code under #ifdef GDTH_IOCTL_PROC

2018-12-06 Thread Christoph Hellwig
This can't ever be compiled into the kernel, so remove it.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth_ioctl.h | 89 ---
 drivers/scsi/gdth_proc.c  | 18 
 2 files changed, 107 deletions(-)

diff --git a/drivers/scsi/gdth_ioctl.h b/drivers/scsi/gdth_ioctl.h
index 4c91894ac244..ee4c9bf1022a 100644
--- a/drivers/scsi/gdth_ioctl.h
+++ b/drivers/scsi/gdth_ioctl.h
@@ -27,11 +27,7 @@
 #define GDTH_MAXSG  32  /* max. s/g elements */
 
 #define MAX_LDRIVES 255 /* max. log. drive count */
-#ifdef GDTH_IOCTL_PROC
-#define MAX_HDRIVES 100 /* max. host drive count */
-#else
 #define MAX_HDRIVES MAX_LDRIVES /* max. host drive count */
-#endif
 
 /* scatter/gather element */
 typedef struct {
@@ -178,91 +174,6 @@ typedef struct {
 gdth_evt_data   event_data;
 } __attribute__((packed)) gdth_evt_str;
 
-
-#ifdef GDTH_IOCTL_PROC
-/* IOCTL structure (write) */
-typedef struct {
-u32 magic;  /* IOCTL magic */
-u16  ioctl;  /* IOCTL */
-u16  ionode; /* controller number */
-u16  service;/* controller service */
-u16  timeout;/* timeout */
-union {
-struct {
-u8  command[512];   /* controller command */
-u8  data[1];/* add. data */
-} general;
-struct {
-u8  lock;   /* lock/unlock */
-u8  drive_cnt;  /* drive count */
-u16  drives[MAX_HDRIVES];/* drives */
-} lockdrv;
-struct {
-u8  lock;   /* lock/unlock */
-u8  channel;/* channel */
-} lockchn;
-struct {
-int erase;  /* erase event ? */
-int handle;
-u8  evt[EVENT_SIZE];/* event structure */
-} event;
-struct {
-u8  bus;/* SCSI bus */
-u8  target; /* target ID */
-u8  lun;/* LUN */
-u8  cmd_len;/* command length */
-u8  cmd[12];/* SCSI command */
-} scsi;
-struct {
-u16  hdr_no; /* host drive number */
-u8  flag;   /* old meth./add/remove */
-} rescan;
-} iu;
-} gdth_iowr_str;
-
-/* IOCTL structure (read) */
-typedef struct {
-u32 size;   /* buffer size */
-u32 status; /* IOCTL error code */
-union {
-struct {
-u8  data[1];/* data */
-} general;
-struct {
-u16  version;/* driver version */
-} drvers;
-struct {
-u8  type;   /* controller type */
-u16  info;   /* slot etc. */
-u16  oem_id; /* OEM ID */
-u16  bios_ver;   /* not used */
-u16  access; /* not used */
-u16  ext_type;   /* extended type */
-u16  device_id;  /* device ID */
-u16  sub_device_id;  /* sub device ID */
-} ctrtype;
-struct {
-u8  version;/* OS version */
-u8  subversion; /* OS subversion */
-u16  revision;   /* revision */
-} osvers;
-struct {
-u16  count;  /* controller count */
-} ctrcnt;
-struct {
-int handle;
-u8  evt[EVENT_SIZE];/* event structure */
-} event;
-struct {
-u8  bus;/* SCSI bus, 0xff: invalid */
-u8  target; /* target ID */
-u8  lun;/* LUN */
-u8  cluster_type;   /* cluster properties */
-} hdr_list[MAX_HDRIVES];/* index is host drive number 
*/
-} iu;
-} gdth_iord_str;
-#endif
-
 /* GDTIOCTL_GENERAL */
 typedef struct {
 u16 ionode;  /* controller number */
diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index fc36c49a5334..5a13ccac8dee 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -557,24 +557,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 return rc;
 }
 
-#ifdef GDTH_IOCTL_PROC
-static int gdth_ioctl_check_bin(gdth_ha_str *ha, u16 size)
-{
-unsigned long flags;
-int ret_val

[PATCH 05/10] gdth: remove direct serial port access

2018-12-06 Thread Christoph Hellwig
Remove the never-compiled-in support for sending debug traces straight
to the serial port using direct port access.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c | 70 -
 1 file changed, 70 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 45ddecd0284c..e8121b80233c 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -185,79 +185,9 @@ static void gdth_scsi_done(struct scsi_cmnd *scp);
 
 #ifdef DEBUG_GDTH
 static u8   DebugState = DEBUG_GDTH;
-
-#ifdef __SERIAL__
-#define MAX_SERBUF 160
-static void ser_init(void);
-static void ser_puts(char *str);
-static void ser_putc(char c);
-static int  ser_printk(const char *fmt, ...);
-static char strbuf[MAX_SERBUF+1];
-#ifdef __COM2__
-#define COM_BASE 0x2f8
-#else
-#define COM_BASE 0x3f8
-#endif
-static void ser_init()
-{
-unsigned port=COM_BASE;
-
-outb(0x80,port+3);
-outb(0,port+1);
-/* 19200 Baud, if 9600: outb(12,port) */
-outb(6, port);
-outb(3,port+3);
-outb(0,port+1);
-/*
-ser_putc('I');
-ser_putc(' ');
-*/
-}
-
-static void ser_puts(char *str)
-{
-char *ptr;
-
-ser_init();
-for (ptr=str;*ptr;++ptr)
-ser_putc(*ptr);
-}
-
-static void ser_putc(char c)
-{
-unsigned port=COM_BASE;
-
-while ((inb(port+5) & 0x20)==0);
-outb(c,port);
-if (c==0x0a)
-{
-while ((inb(port+5) & 0x20)==0);
-outb(0x0d,port);
-}
-}
-
-static int ser_printk(const char *fmt, ...)
-{
-va_list args;
-int i;
-
-va_start(args,fmt);
-i = vsprintf(strbuf,fmt,args);
-ser_puts(strbuf);
-va_end(args);
-return i;
-}
-
-#define TRACE(a){if (DebugState==1) {ser_printk a;}}
-#define TRACE2(a)   {if (DebugState==1 || DebugState==2) {ser_printk a;}}
-#define TRACE3(a)   {if (DebugState!=0) {ser_printk a;}}
-
-#else /* !__SERIAL__ */
 #define TRACE(a){if (DebugState==1) {printk a;}}
 #define TRACE2(a)   {if (DebugState==1 || DebugState==2) {printk a;}}
 #define TRACE3(a)   {if (DebugState!=0) {printk a;}}
-#endif
-
 #else /* !DEBUG */
 #define TRACE(a)
 #define TRACE2(a)
-- 
2.19.1



[PATCH 06/10] gdth: remove dead rtc code

2018-12-06 Thread Christoph Hellwig
This code has been under the never-defined GDTH_RTC ifdef forever;
nuke it.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c | 32 
 1 file changed, 32 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index e8121b80233c..63d704301875 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -115,10 +115,6 @@
 #include 
 #include 
 #include 
-
-#ifdef GDTH_RTC
-#include 
-#endif
 #include 
 
 #include 
@@ -1197,11 +1193,6 @@ static int gdth_search_drives(gdth_ha_str *ha)
 gdth_perf_modes *pmod;
 #endif
 
-#ifdef GDTH_RTC
-u8 rtc[12];
-unsigned long flags;
-#endif 
-   
 TRACE(("gdth_search_drives() hanum %d\n", ha->hanum));
 ok = 0;
 
@@ -1221,29 +1212,6 @@ static int gdth_search_drives(gdth_ha_str *ha)
 }
 TRACE2(("gdth_search_drives(): SCREENSERVICE initialized\n"));
 
-#ifdef GDTH_RTC
-/* read realtime clock info, send to controller */
-/* 1. wait for the falling edge of update flag */
-spin_lock_irqsave(&rtc_lock, flags);
-for (j = 0; j < 100; ++j)
-if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP)
-break;
-for (j = 0; j < 100; ++j)
-if (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
-break;
-/* 2. read info */
-do {
-for (j = 0; j < 12; ++j) 
-rtc[j] = CMOS_READ(j);
-} while (rtc[0] != CMOS_READ(0));
-spin_unlock_irqrestore(&rtc_lock, flags);
-TRACE2(("gdth_search_drives(): RTC: %x/%x/%x\n",*(u32 *)&rtc[0],
-*(u32 *)&rtc[4], *(u32 *)&rtc[8]));
-/* 3. send to controller firmware */
-gdth_internal_cmd(ha, SCREENSERVICE, GDT_REALTIME, *(u32 *)&rtc[0],
-  *(u32 *)&rtc[4], *(u32 *)&rtc[8]);
-#endif  
- 
 /* unfreeze all IOs */
 gdth_internal_cmd(ha, CACHESERVICE, GDT_UNFREEZE_IO, 0, 0, 0);
  
-- 
2.19.1



various fixups for gdth

2018-12-06 Thread Christoph Hellwig
Cleans up various oddities found during a code audit, and drops the
legacy ISA support which hasn't had a chance to actually work for a
long time.


Re: DISABLE_CLUSTERING in scsi drivers

2018-11-25 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 10:11:33AM +1300, Michael Schmitz wrote:
> Christoph,
> for Atari SCSI, commands can only be merged if the physical addresses
> of all buffers are contiguous (limitation of the Falcon DMA engine).
> Documentation/scsi/scsi_mid_low_api.tx does not spell out whether that
> is the case.
> 
> Atari SCSI disables scatter/gather, so if that's sufficient to cue
> midlevel or bio to not undertake any merging, the flag is no longer
> needed.

Yes, if scatter/gather is disabled (sg_tablesize == 1 or 0), there will
just be a single, contiguous segment up to .max_sectors, which might
straddle a page boundary if it is larger than PAGE_SIZE.  If that is
ok for the Atari SCSI hardware we can remove the DISABLE_CLUSTERING
setting.
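
As a concrete illustration (a minimal sketch only -- the template name,
numbers and stub are made up, not the real Atari driver settings), a host
template without scatter/gather looks roughly like this; the single segment
is bounded by .max_sectors, not by any page-boundary rule:

#include <linux/module.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Stub just to keep the sketch self-contained. */
static int example_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
{
    cmd->result = DID_NO_CONNECT << 16;
    cmd->scsi_done(cmd);
    return 0;
}

/* sg_tablesize == 1: the block layer builds exactly one contiguous segment
 * per command, capped by max_sectors (512-byte units) rather than by any
 * page-boundary rule, so a segment larger than PAGE_SIZE may still
 * straddle a page boundary. */
static struct scsi_host_template example_falcon_template = {
    .module        = THIS_MODULE,
    .name          = "example-falcon-dma",
    .queuecommand  = example_queuecommand,
    .this_id       = 7,
    .sg_tablesize  = 1,     /* no scatter/gather */
    .max_sectors   = 16,    /* at most 8 KB per command */
    .cmd_per_lun   = 1,
};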


Re: DISABLE_CLUSTERING in scsi drivers

2018-11-25 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 09:02:13AM +1100, Finn Thain wrote:
> > you in the To list maintain or wrote SCSI drivers that set the
> > DISABLE_CLUSTERING flag, which basically disables merging of any
> > bio segments.  We already have the actual max_segment_size limit
> > to say which length a segment should have, independent of whether it
> > was merged or originally created that way, so this limit generally
> > should rarely if ever be required, and is mostly an old cut and paste error.
> > 
> 
> Are you referring to
>   blk_queue_max_segment_size(q, dma_get_max_seg_size(dev));
> in drivers/scsi/scsi_lib.c?
> 
> Is the segment size limitation of the DMA controller the only reason to 
> want DISABLE_CLUSTERING?

DISABLE_CLUSTERING mixes up two not really related things:

 1) limit the size of each segment to a single page size
 2) limit each segment to not actually span a page boundary.

Both could be valid limits for DMA engines, but they might also be
particularly relevant for PIO: if you e.g. kmap each page of a
scatterlist to do PIO, you'd want to see both limits.
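
If it helps, the two limits can also be stated directly as request-queue
limits; this is only a sketch of the idea (the slave_configure hook is just
a convenient place to show the stock block layer helpers):

#include <linux/blkdev.h>
#include <scsi/scsi_device.h>

/* Sketch: state the two limits explicitly instead of relying on
 * DISABLE_CLUSTERING to imply both of them. */
static int example_slave_configure(struct scsi_device *sdev)
{
    struct request_queue *q = sdev->request_queue;

    /* 1) no segment longer than one page */
    blk_queue_max_segment_size(q, PAGE_SIZE);
    /* 2) no segment crossing a page boundary */
    blk_queue_segment_boundary(q, PAGE_SIZE - 1);
    return 0;
}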


Re: DISABLE_CLUSTERING in scsi drivers

2018-11-25 Thread Christoph Hellwig
On Fri, Nov 23, 2018 at 09:09:49AM +0100, Juergen Gross wrote:
> On 21/11/2018 10:41, Christoph Hellwig wrote:
> > Hi all,
> > 
> > you in the To list maintain or wrote SCSI drivers that set the
> > DISABLE_CLUSTERING flag, which basically disables merging of any
> > bio segments.  We already have the actual max_segment_size limit
> > to say which length a segment should have, independent of whether it
> > was merged or originally created that way, so this limit generally
> > should rarely if ever be required, and is mostly an old cut and paste error.
> > 
> > Can you go over your drivers and check if it could be removed?
> > 
> 
> xen-scsifront.c doesn't need it. Do you want me to remove it at once
> or are you doing it when removing the support for it?

If it is a plain removal please queue it up yourself.  Eventually I
plan to do a bulk replacement, but that will take a while.


Re: [PATCH 3/3] target: replace fabric_ops.name with fabric_alias

2018-11-25 Thread Christoph Hellwig
On Fri, Nov 23, 2018 at 06:36:13PM +0100, David Disseldorp wrote:
> iscsi_target_mod is the only LIO fabric where fabric_ops.name differs
> from the fabric_ops.fabric_name string.
> fabric_ops.name is used when matching target/$fabric ConfigFS create
> paths, so rename it .fabric_alias and fallback to target/$fabric vs
> .fabric_name comparison if .fabric_alias isn't initialised.
> iscsi_target_mod is the only fabric module to set .fabric_alias . All
> other fabric modules rely on .fabric_name matching and can drop the
> duplicate string.

Looks fine:

Reviewed-by: Christoph Hellwig 


Re: [PATCH 2/3] target: drop unnecessary get_fabric_name() accessor from fabric_ops

2018-11-25 Thread Christoph Hellwig
Looks good,

Reviewed-by: Christoph Hellwig 


Re: [PATCH 1/3] target: drop unused pi_prot_format attribute storage

2018-11-25 Thread Christoph Hellwig
On Fri, Nov 23, 2018 at 06:36:11PM +0100, David Disseldorp wrote:
> On write, the pi_prot_format configfs attribute invokes the device
> format_prot() callback if present. Read dumps the contents of
> se_dev_attrib.pi_prot_format , which is always zero.
> Make the configfs attribute write-only, and drop the always zero
> se_dev_attrib.pi_prot_format storage.

Looks good,

Reviewed-by: Christoph Hellwig 


Re: [PATCH] target: drop unnecessary get_fabric_name() accessor from fabric_ops

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 03:16:23PM +0100, David Disseldorp wrote:
> All fabrics return a const string. In all cases *except* iSCSI the
> get_fabric_name() string matches fabric_ops.name.
>
> Both fabric_ops.get_fabric_name() and fabric_ops.name are user facing,
> with the former being used for PR/ALUA state and the latter for configFS
> (config/target/$name), so we unfortunately need to keep both strings
> around for now.

Would it make sense to just use .name unless .fabric_name is set
to mostly avoid the duplication?
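
Something like the following, i.e. a small fallback accessor instead of two
mandatory strings (purely a sketch of the suggestion; the helper name is
made up and the field names follow the patch description):

#include <target/target_core_fabric.h>

/* Sketch: report .fabric_name when a fabric provides it, otherwise fall
 * back to .name, so most fabrics only need to set one string. */
static inline const char *example_fabric_name(const struct target_core_fabric_ops *tfo)
{
    return tfo->fabric_name ? tfo->fabric_name : tfo->name;
}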


DISABLE_CLUSTERING in scsi drivers

2018-11-21 Thread Christoph Hellwig
Hi all,

you in the To list maintain or wrote SCSI drivers that set the
DISABLE_CLUSTERING flag, which basically disables merging of any
bio segments.  We already have the actual max_segment_size limit
to say which length a segment should have, independent of whether it
was merged or originally created that way, so this limit generally
should rarely if ever be required, and is mostly an old cut and paste error.

Can you go over your drivers and check if it could be removed?


Re: [PATCH v5] target: add emulate_pr backstore attr to toggle PR support

2018-11-21 Thread Christoph Hellwig
Looks good,

Reviewed-by: Christoph Hellwig 


Re: [PATCH v3 2/4] target: don't assume t10_wwn.vendor is null terminated

2018-11-20 Thread Christoph Hellwig
This could use a little more explanation: the code doesn't just
add a little if, but also changes the existing case.  Also, where
can't it be null currently?


Re: [PATCH v3 1/4] target: use consistent left-aligned ASCII INQUIRY data

2018-11-20 Thread Christoph Hellwig
Looks good,

Reviewed-by: Christoph Hellwig 


[PATCH v3] aha1542: convert to DMA mapping API

2018-11-10 Thread Christoph Hellwig
aha1542 is one of the last users of the legacy isa_*_to_bus APIs, which
aren't portable either.  Convert it to the proper DMA mapping API.

Thanks to Ondrej Zary for testing and finding and fixing a crucial
bug.
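
Conceptually the conversion replaces bus addresses computed from kernel
virtual addresses with handles returned by the DMA API; a sketch of the
pattern (variable names follow the patch, the exact context is abbreviated):

/* Before: the device was handed a bus address derived from a kernel
 * virtual address, which only ever worked on classic x86 ISA:
 *
 *     any2scsi(mb[mbo].ccbptr, isa_virt_to_bus(&ccb[mbo]));
 *
 * After: the CCB array lives in a coherent DMA buffer, and the device is
 * handed an offset from the dma_addr_t that allocation returned:
 *
 *     any2scsi(mb[mbo].ccbptr,
 *              aha1542->ccb_handle + mbo * sizeof(struct ccb));
 *
 * Streaming data buffers are handled the same way via scsi_dma_map() and
 * dma_map_single() instead of pointer arithmetic on virtual addresses.
 */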

Signed-off-by: Christoph Hellwig 
---

Changes since v2:
 - fix another sizeof of the pointer instead of the pointed to type

Changes since v1:
 - fix a sizeof of the pointer instead of the pointed to type

 drivers/scsi/aha1542.c | 126 +
 1 file changed, 91 insertions(+), 35 deletions(-)

diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c
index 41add33e3f1f..a9c29757172f 100644
--- a/drivers/scsi/aha1542.c
+++ b/drivers/scsi/aha1542.c
@@ -58,8 +58,15 @@ struct aha1542_hostdata {
int aha1542_last_mbi_used;
int aha1542_last_mbo_used;
struct scsi_cmnd *int_cmds[AHA1542_MAILBOXES];
-   struct mailbox mb[2 * AHA1542_MAILBOXES];
-   struct ccb ccb[AHA1542_MAILBOXES];
+   struct mailbox *mb;
+   dma_addr_t mb_handle;
+   struct ccb *ccb;
+   dma_addr_t ccb_handle;
+};
+
+struct aha1542_cmd {
+   struct chain *chain;
+   dma_addr_t chain_handle;
 };
 
 static inline void aha1542_intr_reset(u16 base)
@@ -233,6 +240,21 @@ static int aha1542_test_port(struct Scsi_Host *sh)
return 1;
 }
 
+static void aha1542_free_cmd(struct scsi_cmnd *cmd)
+{
+   struct aha1542_cmd *acmd = scsi_cmd_priv(cmd);
+   struct device *dev = cmd->device->host->dma_dev;
+   size_t len = scsi_sg_count(cmd) * sizeof(struct chain);
+
+   if (acmd->chain) {
+   dma_unmap_single(dev, acmd->chain_handle, len, DMA_TO_DEVICE);
+   kfree(acmd->chain);
+   }
+
+   acmd->chain = NULL;
+   scsi_dma_unmap(cmd);
+}
+
 static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
 {
struct Scsi_Host *sh = dev_id;
@@ -303,7 +325,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
};
 
-   mbo = (scsi2int(mb[mbi].ccbptr) - (isa_virt_to_bus(&ccb[0]))) / 
sizeof(struct ccb);
+   mbo = (scsi2int(mb[mbi].ccbptr) - aha1542->ccb_handle) / 
sizeof(struct ccb);
mbistatus = mb[mbi].status;
mb[mbi].status = 0;
aha1542->aha1542_last_mbi_used = mbi;
@@ -331,8 +353,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}
my_done = tmp_cmd->scsi_done;
-   kfree(tmp_cmd->host_scribble);
-   tmp_cmd->host_scribble = NULL;
+   aha1542_free_cmd(tmp_cmd);
/* Fetch the sense data, and tuck it away, in the required 
slot.  The
   Adaptec automatically fetches it, and there is no guarantee 
that
   we will still have it in the cdb when we come back */
@@ -369,6 +390,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
 
 static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 {
+   struct aha1542_cmd *acmd = scsi_cmd_priv(cmd);
struct aha1542_hostdata *aha1542 = shost_priv(sh);
u8 direction;
u8 target = cmd->device->id;
@@ -378,7 +400,6 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
int mbo, sg_count;
struct mailbox *mb = aha1542->mb;
struct ccb *ccb = aha1542->ccb;
-   struct chain *cptr;
 
if (*cmd->cmnd == REQUEST_SENSE) {
/* Don't do the command - we have the sense data already */
@@ -398,15 +419,17 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
print_hex_dump_bytes("command: ", DUMP_PREFIX_NONE, cmd->cmnd, 
cmd->cmd_len);
}
 #endif
-   if (bufflen) {  /* allocate memory before taking host_lock */
-   sg_count = scsi_sg_count(cmd);
-   cptr = kmalloc_array(sg_count, sizeof(*cptr),
-GFP_KERNEL | GFP_DMA);
-   if (!cptr)
-   return SCSI_MLQUEUE_HOST_BUSY;
-   } else {
-   sg_count = 0;
-   cptr = NULL;
+   sg_count = scsi_dma_map(cmd);
+   if (sg_count) {
+   size_t len = sg_count * sizeof(struct chain);
+
+   acmd->chain = kmalloc(len, GFP_DMA);
+   if (!acmd->chain)
+   goto out_unmap;
+   acmd->chain_handle = dma_map_single(sh->dma_dev, acmd->chain,
+   len, DMA_TO_DEVICE);
+   if (dma_mapping_error(sh->dma_dev, acmd->chain_handle))
+   goto out_free_chain;
}
 
/* Use the outgoing mailboxes in a round-robin fashion, because this
@@ -437,7 +460,8 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)

Re: [PATCH] aha1542: convert to DMA mapping API

2018-11-10 Thread Christoph Hellwig
> > @@ -826,7 +881,8 @@ static int aha1542_dev_reset(struct scsi_cmnd *cmd)
> >  
> > aha1542->aha1542_last_mbo_used = mbo;
> >  
> > -   any2scsi(mb[mbo].ccbptr, isa_virt_to_bus(&ccb[mbo]));   /* This gets 
> > trashed for some reason */
> > +   /* This gets trashed for some reason */
> > +   any2scsi(mb[mbo].ccbptr, aha1542->ccb_handle + mbo * sizeof(ccb));
> ^^^
> This looks wrong too. It's the same code as in aha1542_queuecommand.

Indeed, I'll resend.


Re: [PATCH 4/4] gdth: use generic DMA API

2018-11-09 Thread Christoph Hellwig
On Fri, Oct 19, 2018 at 09:42:28AM +1100, Finn Thain wrote:
> On Thu, 18 Oct 2018, Christoph Hellwig wrote:
> 
> > Switch from the legacy PCI DMA API to the generic DMA API.  Also switch
> > to dma_map_single from pci_map_page in one case where this makes the code
> > simpler.
> > 
> > Signed-off-by: Christoph Hellwig 
> > ---
> >  drivers/scsi/gdth.c  | 111 +++
> >  drivers/scsi/gdth_proc.c |   4 +-
> >  2 files changed, 56 insertions(+), 59 deletions(-)
> > 
> > diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
> > index 7274d09b2a6c..3d856554b1b1 100644
> > --- a/drivers/scsi/gdth.c
> > +++ b/drivers/scsi/gdth.c
> > @@ -2518,9 +2518,9 @@ static int gdth_fill_cache_cmd(gdth_ha_str *ha, 
> > struct scsi_cmnd *scp,
> >  
> >  if (scsi_bufflen(scp)) {
> >  cmndinfo->dma_dir = (read_write == 1 ?
> > -PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE);   
> > -sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), 
> > scsi_sg_count(scp),
> > -   cmndinfo->dma_dir);
> > +DMA_TO_DEVICE : DMA_FROM_DEVICE);   
> > +sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
> > +  scsi_sg_count(scp), cmndinfo->dma_dir);
> >  if (mode64) {
> >  struct scatterlist *sl;
> >  
> > @@ -2603,8 +2603,6 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
> > scsi_cmnd *scp, u8 b)
> >  dma_addr_t sense_paddr;
> >  int cmd_index, sgcnt, mode64;
> >  u8 t,l;
> > -struct page *page;
> > -unsigned long offset;
> >  struct gdth_cmndinfo *cmndinfo;
> >  
> >  t = scp->device->id;
> > @@ -2649,10 +2647,8 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
> > scsi_cmnd *scp, u8 b)
> >  }
> >  
> >  } else {
> > -page = virt_to_page(scp->sense_buffer);
> > -offset = (unsigned long)scp->sense_buffer & ~PAGE_MASK;
> > -sense_paddr = pci_map_page(ha->pdev,page,offset,
> > -   16,PCI_DMA_FROMDEVICE);
> > +sense_paddr = dma_map_single(&ha->pdev->dev, scp->sense_buffer, 16,
> > +DMA_FROM_DEVICE);
> >  
> > cmndinfo->sense_paddr  = sense_paddr;
> >  cmdp->OpCode   = GDT_WRITE; /* always */
> > @@ -2693,9 +2689,9 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
> > scsi_cmnd *scp, u8 b)
> >  }
> >  
> >  if (scsi_bufflen(scp)) {
> > -cmndinfo->dma_dir = PCI_DMA_BIDIRECTIONAL;
> > -sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), 
> > scsi_sg_count(scp),
> > -   cmndinfo->dma_dir);
> > +cmndinfo->dma_dir = DMA_BIDIRECTIONAL;
> > +sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
> > +  scsi_sg_count(scp), cmndinfo->dma_dir);
> >  if (mode64) {
> >  struct scatterlist *sl;
> >  
> > @@ -3313,12 +3309,12 @@ static int gdth_sync_event(gdth_ha_str *ha, int 
> > service, u8 index,
> >  return 2;
> >  }
> >  if (scsi_bufflen(scp))
> > -pci_unmap_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
> > +dma_unmap_sg(&ha->pdev->dev, scsi_sglist(scp), 
> > scsi_sg_count(scp),
> >   cmndinfo->dma_dir);
> >  
> >  if (cmndinfo->sense_paddr)
> > -pci_unmap_page(ha->pdev, cmndinfo->sense_paddr, 16,
> > -   
> > PCI_DMA_FROMDEVICE);
> > +dma_unmap_page(&ha->pdev->dev, cmndinfo->sense_paddr, 16,
> > +  DMA_FROM_DEVICE);
> >  
> >  if (ha->status == S_OK) {
> >  cmndinfo->status = S_OK;
> > @@ -4251,8 +4247,8 @@ static int ioc_general(void __user *arg, char *cmnd)
> > if (gen.data_len + gen.sense_len == 0)
> > goto execute;
> >  
> > -buf = pci_alloc_consistent(ha->pdev, gen.data_len + gen.sense_len,
> > -   &paddr);
> > +buf = dma_alloc_coherent(&ha->pdev->dev, gen.data_len + 
> > gen.sense_len,
> > +   &paddr, GFP_KERNEL);
> > if (!buf)
> > return -EFAULT;
> >  
> > @@ -4292,7 +4288,8 @@ static in

Re: [PATCH 2/4] gdth: reuse dma coherent allocation in gdth_show_info

2018-11-09 Thread Christoph Hellwig
> >  
> > -buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
> >  if (!buf) 
> >  goto stop_output;
> 
> I think this !buf test is redundant.

Thanks,

fixed.


Re: [PATCH 1/4] gdth: refactor ioc_general

2018-11-09 Thread Christoph Hellwig
> > +   switch (gen.command.OpCode) {
> > +   case GDT_IOCTL:
> > +   gen.command.u.ioctl.p_param = paddr;
> > +   break;
> > +   case CACHESERVICE:
> > +   gdth_ioc_cacheservice(ha, &gen, paddr);
> > +   break;
> > +   case SCSIRAWSERVICE:
> > +   gdth_ioc_scsiraw(ha, &gen, paddr);
> > +   break;
> > +   default:
> > +   goto out_free_buf;
> >  }
> > -}
> 
> AFAICT, CACHESERVICE never gets assigned to command.OpCode.

Thanks, fixed.

> > -}
> > -gdth_ioctl_free(ha, gen.data_len+gen.sense_len, buf, paddr);
> > -return 0;
> > +   return 0;
> 
> This appears to be wrong also. I think you wanted,
>   return rval;

Also fixed.


[PATCH 2/3] wd719x: use per-command private data

2018-11-09 Thread Christoph Hellwig
Add the SCB onto the scsi command allocation and use dma streaming
mappings for it only when in use.  This avoid possibly calling
dma_alloc_coherent under a lock or even in irq context, while also
making the code simpler.

Thanks to Ondrej Zary for testing and various bug fixes.
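
For context, the mechanism this relies on works roughly as follows (a
generic sketch, not the actual wd719x template): the host template declares
.cmd_size, the midlayer then allocates that much extra space with every
scsi_cmnd, and scsi_cmd_priv() returns it, so no separate per-command
allocation (and no dma_alloc_coherent under a lock) is needed.

#include <linux/dma-mapping.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Hypothetical per-command private data. */
struct example_priv {
    dma_addr_t dma_handle;  /* streaming mapping set up in queuecommand */
};

static int example_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
{
    /* no allocation here: the private area already sits behind the cmd */
    struct example_priv *priv = scsi_cmd_priv(cmd);

    priv->dma_handle = 0;
    /* ... map buffers with the streaming DMA API and fire the command ... */
    return 0;
}

static struct scsi_host_template example_template = {
    .name         = "example",
    .queuecommand = example_queuecommand,
    /* the midlayer reserves this much extra space per scsi_cmnd */
    .cmd_size     = sizeof(struct example_priv),
};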

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 98 +++
 drivers/scsi/wd719x.h |  1 -
 2 files changed, 42 insertions(+), 57 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index 7b05bbcfb186..b73e7f24a1c4 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -153,8 +153,6 @@ static int wd719x_direct_cmd(struct wd719x *wd, u8 opcode, 
u8 dev, u8 lun,
 
 static void wd719x_destroy(struct wd719x *wd)
 {
-   struct wd719x_scb *scb;
-
/* stop the RISC */
if (wd719x_direct_cmd(wd, WD719X_CMD_SLEEP, 0, 0, 0, 0,
  WD719X_WAIT_FOR_RISC))
@@ -164,10 +162,6 @@ static void wd719x_destroy(struct wd719x *wd)
 
	WARN_ON_ONCE(!list_empty(&wd->active_scbs));
 
-   /* free all SCBs */
-   list_for_each_entry(scb, >free_scbs, list)
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
-   scb->phys);
/* free internal buffers */
pci_free_consistent(wd->pdev, wd->fw_size, wd->fw_virt, wd->fw_phys);
wd->fw_virt = NULL;
@@ -180,18 +174,20 @@ static void wd719x_destroy(struct wd719x *wd)
free_irq(wd->pdev->irq, wd);
 }
 
-/* finish a SCSI command, mark SCB (if any) as free, unmap buffers */
-static void wd719x_finish_cmd(struct scsi_cmnd *cmd, int result)
+/* finish a SCSI command, unmap buffers */
+static void wd719x_finish_cmd(struct wd719x_scb *scb, int result)
 {
+   struct scsi_cmnd *cmd = scb->cmd;
struct wd719x *wd = shost_priv(cmd->device->host);
-   struct wd719x_scb *scb = (struct wd719x_scb *) cmd->host_scribble;
 
-   if (scb) {
-   list_move(&scb->list, &wd->free_scbs);
-   dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle,
-SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
-   scsi_dma_unmap(cmd);
-   }
+   list_del(&scb->list);
+
+   dma_unmap_single(&wd->pdev->dev, scb->phys,
+   sizeof(struct wd719x_scb), DMA_BIDIRECTIONAL);
+   scsi_dma_unmap(cmd);
+   dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle,
+SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+
cmd->result = result << 16;
cmd->scsi_done(cmd);
 }
@@ -201,36 +197,10 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
 {
int i, count_sg;
unsigned long flags;
-   struct wd719x_scb *scb;
+   struct wd719x_scb *scb = scsi_cmd_priv(cmd);
struct wd719x *wd = shost_priv(sh);
-   dma_addr_t phys;
-
-   cmd->host_scribble = NULL;
-
-   /* get a free SCB - either from existing ones or allocate a new one */
-   spin_lock_irqsave(wd->sh->host_lock, flags);
-   scb = list_first_entry_or_null(&wd->free_scbs, struct wd719x_scb, list);
-   if (scb) {
-   list_del(&scb->list);
-   phys = scb->phys;
-   } else {
-   spin_unlock_irqrestore(wd->sh->host_lock, flags);
-   scb = pci_alloc_consistent(wd->pdev, sizeof(struct wd719x_scb),
-  &phys);
-   spin_lock_irqsave(wd->sh->host_lock, flags);
-   if (!scb) {
-   dev_err(&wd->pdev->dev, "unable to allocate SCB\n");
-   wd719x_finish_cmd(cmd, DID_ERROR);
-   spin_unlock_irqrestore(wd->sh->host_lock, flags);
-   return 0;
-   }
-   }
-   memset(scb, 0, sizeof(struct wd719x_scb));
-   list_add(&scb->list, &wd->active_scbs);
 
-   scb->phys = phys;
scb->cmd = cmd;
-   cmd->host_scribble = (char *) scb;
 
scb->CDB_tag = 0;   /* Tagged queueing not supported yet */
scb->devid = cmd->device->id;
@@ -239,10 +209,19 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
/* copy the command */
memcpy(scb->CDB, cmd->cmnd, cmd->cmd_len);
 
+   /* map SCB */
+   scb->phys = dma_map_single(&wd->pdev->dev, scb, sizeof(*scb),
+  DMA_BIDIRECTIONAL);
+
+   if (dma_mapping_error(&wd->pdev->dev, scb->phys))
+   goto out_error;
+
	/* map sense buffer */
	scb->sense_buf_length = SCSI_SENSE_BUFFERSIZE;
	cmd->SCp.dma_handle = dma_map_single(&wd->pdev->dev, cmd->sense_buffer,
	SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+   if (dma_mapping_error(&wd->pdev->dev, 

[PATCH 3/3] wd719x: always use generic DMA API

2018-11-09 Thread Christoph Hellwig
The wd719x driver currently uses a mix of the legacy PCI DMA and
the generic DMA APIs.  Switch it over to the generic DMA API entirely.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 32 +---
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index b73e7f24a1c4..808ba8e952db 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -163,13 +163,14 @@ static void wd719x_destroy(struct wd719x *wd)
WARN_ON_ONCE(!list_empty(&wd->active_scbs));
 
/* free internal buffers */
-   pci_free_consistent(wd->pdev, wd->fw_size, wd->fw_virt, wd->fw_phys);
+   dma_free_coherent(&wd->pdev->dev, wd->fw_size, wd->fw_virt,
+ wd->fw_phys);
wd->fw_virt = NULL;
-   pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
-   wd->hash_phys);
+   dma_free_coherent(&wd->pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
+ wd->hash_phys);
wd->hash_virt = NULL;
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param),
-   wd->params, wd->params_phys);
+   dma_free_coherent(&wd->pdev->dev, sizeof(struct wd719x_host_param),
+ wd->params, wd->params_phys);
wd->params = NULL;
free_irq(wd->pdev->irq, wd);
 }
@@ -316,8 +317,8 @@ static int wd719x_chip_init(struct wd719x *wd)
wd->fw_size = ALIGN(fw_wcs->size, 4) + fw_risc->size;
 
if (!wd->fw_virt)
-   wd->fw_virt = pci_alloc_consistent(wd->pdev, wd->fw_size,
-  &wd->fw_phys);
+   wd->fw_virt = dma_alloc_coherent(&wd->pdev->dev, wd->fw_size,
+&wd->fw_phys, GFP_KERNEL);
if (!wd->fw_virt) {
ret = -ENOMEM;
goto wd719x_init_end;
@@ -804,17 +805,18 @@ static int wd719x_board_found(struct Scsi_Host *sh)
wd->fw_virt = NULL;
 
/* memory area for host (EEPROM) parameters */
-   wd->params = pci_alloc_consistent(wd->pdev,
- sizeof(struct wd719x_host_param),
- &wd->params_phys);
+   wd->params = dma_alloc_coherent(&wd->pdev->dev,
+   sizeof(struct wd719x_host_param),
+   &wd->params_phys, GFP_KERNEL);
if (!wd->params) {
dev_warn(&wd->pdev->dev, "unable to allocate parameter buffer\n");
return -ENOMEM;
}
 
/* memory area for the RISC for hash table of outstanding requests */
-   wd->hash_virt = pci_alloc_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE,
-  &wd->hash_phys);
+   wd->hash_virt = dma_alloc_coherent(&wd->pdev->dev,
+  WD719X_HASH_TABLE_SIZE,
+  &wd->hash_phys, GFP_KERNEL);
if (!wd->hash_virt) {
dev_warn(&wd->pdev->dev, "unable to allocate hash buffer\n");
ret = -ENOMEM;
@@ -846,10 +848,10 @@ static int wd719x_board_found(struct Scsi_Host *sh)
 fail_free_irq:
free_irq(wd->pdev->irq, wd);
 fail_free_hash:
-   pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
+   dma_free_coherent(&wd->pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
wd->hash_phys);
 fail_free_params:
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param),
+   dma_free_coherent(&wd->pdev->dev, sizeof(struct wd719x_host_param),
wd->params, wd->params_phys);
 
return ret;
@@ -882,7 +884,7 @@ static int wd719x_pci_probe(struct pci_dev *pdev, const 
struct pci_device_id *d)
if (err)
goto fail;
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
dev_warn(&pdev->dev, "Unable to set 32-bit DMA mask\n");
goto disable_device;
}
-- 
2.19.1
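
For readers who have not done one of these conversions, a minimal sketch of
the mechanical part (a made-up "foo" driver, not wd719x): pci_alloc_consistent()
was simply dma_alloc_coherent() with a hard-coded GFP_ATOMIC, so the generic
calls take the struct device and make the gfp flags explicit.

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static void *foo_alloc_buf(struct pci_dev *pdev, size_t size, dma_addr_t *dma)
{
	/* was: pci_alloc_consistent(pdev, size, dma); */
	return dma_alloc_coherent(&pdev->dev, size, dma, GFP_KERNEL);
}

static void foo_free_buf(struct pci_dev *pdev, size_t size, void *virt,
			 dma_addr_t dma)
{
	/* was: pci_free_consistent(pdev, size, virt, dma); */
	dma_free_coherent(&pdev->dev, size, virt, dma);
}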



[PATCH 1/3] wd719x: there should be no active SCBs on removal

2018-11-09 Thread Christoph Hellwig
So warn on that case instead of trying to free them, which would be fatal
in case we actually had active ones.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index 974bfb3f30f4..7b05bbcfb186 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -162,10 +162,9 @@ static void wd719x_destroy(struct wd719x *wd)
/* disable RISC */
wd719x_writeb(wd, WD719X_PCI_MODE_SELECT, 0);
 
+   WARN_ON_ONCE(!list_empty(&wd->active_scbs));
+
 /* free all SCBs */
-   list_for_each_entry(scb, &wd->active_scbs, list)
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
-   scb->phys);
list_for_each_entry(scb, &wd->free_scbs, list)
pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
scb->phys);
-- 
2.19.1



dma related cleanups for wd719x v2

2018-11-09 Thread Christoph Hellwig
Various DMA-related cleanups.

Changes since v1:
 - include important fixes from Ondrej


[PATCH] aha1542: convert to DMA mapping API

2018-11-09 Thread Christoph Hellwig
aha1542 is one of the last users of the legacy isa_*_to_bus APIs, which
also aren't portable enough.  Convert it to the proper DMA mapping API.

Thanks to Ondrej Zary for testing and finding and fixing a crucial
bug.

Signed-off-by: Christoph Hellwig 
Tested-by: Ondrej Zary 
---
 drivers/scsi/aha1542.c | 126 +
 1 file changed, 91 insertions(+), 35 deletions(-)

diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c
index 41add33e3f1f..398fcdc6f4c9 100644
--- a/drivers/scsi/aha1542.c
+++ b/drivers/scsi/aha1542.c
@@ -58,8 +58,15 @@ struct aha1542_hostdata {
int aha1542_last_mbi_used;
int aha1542_last_mbo_used;
struct scsi_cmnd *int_cmds[AHA1542_MAILBOXES];
-   struct mailbox mb[2 * AHA1542_MAILBOXES];
-   struct ccb ccb[AHA1542_MAILBOXES];
+   struct mailbox *mb;
+   dma_addr_t mb_handle;
+   struct ccb *ccb;
+   dma_addr_t ccb_handle;
+};
+
+struct aha1542_cmd {
+   struct chain *chain;
+   dma_addr_t chain_handle;
 };
 
 static inline void aha1542_intr_reset(u16 base)
@@ -233,6 +240,21 @@ static int aha1542_test_port(struct Scsi_Host *sh)
return 1;
 }
 
+static void aha1542_free_cmd(struct scsi_cmnd *cmd)
+{
+   struct aha1542_cmd *acmd = scsi_cmd_priv(cmd);
+   struct device *dev = cmd->device->host->dma_dev;
+   size_t len = scsi_sg_count(cmd) * sizeof(struct chain);
+
+   if (acmd->chain) {
+   dma_unmap_single(dev, acmd->chain_handle, len, DMA_TO_DEVICE);
+   kfree(acmd->chain);
+   }
+
+   acmd->chain = NULL;
+   scsi_dma_unmap(cmd);
+}
+
 static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
 {
struct Scsi_Host *sh = dev_id;
@@ -303,7 +325,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
};
 
-   mbo = (scsi2int(mb[mbi].ccbptr) - (isa_virt_to_bus(&ccb[0]))) / 
sizeof(struct ccb);
+   mbo = (scsi2int(mb[mbi].ccbptr) - aha1542->ccb_handle) / 
sizeof(struct ccb);
mbistatus = mb[mbi].status;
mb[mbi].status = 0;
aha1542->aha1542_last_mbi_used = mbi;
@@ -331,8 +353,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}
my_done = tmp_cmd->scsi_done;
-   kfree(tmp_cmd->host_scribble);
-   tmp_cmd->host_scribble = NULL;
+   aha1542_free_cmd(tmp_cmd);
/* Fetch the sense data, and tuck it away, in the required 
slot.  The
   Adaptec automatically fetches it, and there is no guarantee 
that
   we will still have it in the cdb when we come back */
@@ -369,6 +390,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id)
 
 static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 {
+   struct aha1542_cmd *acmd = scsi_cmd_priv(cmd);
struct aha1542_hostdata *aha1542 = shost_priv(sh);
u8 direction;
u8 target = cmd->device->id;
@@ -378,7 +400,6 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
int mbo, sg_count;
struct mailbox *mb = aha1542->mb;
struct ccb *ccb = aha1542->ccb;
-   struct chain *cptr;
 
if (*cmd->cmnd == REQUEST_SENSE) {
/* Don't do the command - we have the sense data already */
@@ -398,15 +419,17 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
print_hex_dump_bytes("command: ", DUMP_PREFIX_NONE, cmd->cmnd, 
cmd->cmd_len);
}
 #endif
-   if (bufflen) {  /* allocate memory before taking host_lock */
-   sg_count = scsi_sg_count(cmd);
-   cptr = kmalloc_array(sg_count, sizeof(*cptr),
-GFP_KERNEL | GFP_DMA);
-   if (!cptr)
-   return SCSI_MLQUEUE_HOST_BUSY;
-   } else {
-   sg_count = 0;
-   cptr = NULL;
+   sg_count = scsi_dma_map(cmd);
+   if (sg_count) {
+   size_t len = sg_count * sizeof(struct chain);
+
+   acmd->chain = kmalloc(len, GFP_DMA);
+   if (!acmd->chain)
+   goto out_unmap;
+   acmd->chain_handle = dma_map_single(sh->dma_dev, acmd->chain,
+   len, DMA_TO_DEVICE);
+   if (dma_mapping_error(sh->dma_dev, acmd->chain_handle))
+   goto out_free_chain;
}
 
/* Use the outgoing mailboxes in a round-robin fashion, because this
@@ -437,7 +460,8 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
shost_printk(KERN_DEBUG, sh, "Sending command (%d %p)...", mbo, 
cmd->scsi_done);
 #endif
 
-   any2scsi(mb[mbo].ccb
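
Roughly, the streaming-DMA pattern the patch applies to the driver-built
scatter/gather chain looks like this (sketch only; "demo_chain" and
"demo_map_chain" are invented names, and aha1542 additionally needs GFP_DMA
because of the ISA addressing limit):

#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct demo_chain {		/* adapter-visible S/G element, made up */
	u32 addr;
	u32 len;
};

static int demo_map_chain(struct device *dev, int sg_count,
			  struct demo_chain **chainp, dma_addr_t *handle)
{
	size_t len = sg_count * sizeof(struct demo_chain);
	struct demo_chain *chain;

	chain = kmalloc(len, GFP_KERNEL);	/* aha1542 uses GFP_DMA here */
	if (!chain)
		return -ENOMEM;

	/* the adapter only reads the chain, hence DMA_TO_DEVICE */
	*handle = dma_map_single(dev, chain, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *handle)) {
		kfree(chain);
		return -ENOMEM;
	}
	*chainp = chain;
	return 0;
}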

Re: [PATCH 5/5] qla2xxx: use lower_32_bits and upper_32_bits instead of reinventing them

2018-10-26 Thread Christoph Hellwig
On Thu, Oct 18, 2018 at 09:43:25PM -0700, Bart Van Assche wrote:
> Hi Christoph,
>
> Have you considered to use put_unaligned_le64() instead of storing the 
> lower and upper 32 bits separately?

I really don't want to touch this old driver all that much, just
get rid of the buggy existing helpers.


Submit Proposals to the 2019 Linux Storage and Filesystems Conference!

2018-10-25 Thread Christoph Hellwig
After a one-year hiatus, the Linux Storage and Filesystems Conference (Vault) 
returns in 2019, under the sponsorship and organization of the USENIX 
Association. Vault brings together practitioners, implementers, users, and 
researchers working on storage in open source and related projects.

We welcome creators and users of open source storage, file systems, and related 
technologies to submit their work and to join us for Vault '19, which will take 
place on February 25 - 26, 2019, in Boston, MA, USA, and will be co-located 
with the 17th USENIX Conference on File and Storage Technologies (FAST '19).

Learn More about Vault '19:
https://www.usenix.org/conference/vault19

Learn More about FAST '19:
https://www.usenix.org/conference/fast19

We are looking for proposals on a diverse range of topics related to storage, 
Linux, and open source. The best talks will share your or your team's 
experience with a new technology, a new idea, a new approach, or inspire the 
audience to think beyond the ways they have always done things. We are also 
accepting proposals for a limited number of workshop sessions, where content 
can be more like a tutorial in nature or include hands-on participation by 
attendees. We encourage new speakers to submit talks as some of the most 
insightful talks often come from people with new experiences to share.

Previous Vault events have drawn multiple hundreds of attendees from a range of 
companies, with backgrounds ranging from individual open source contributors, 
to new startups, through teams within the technology and storage giants, or 
storage end users.

Talk and workshop proposals are due on Thursday, November 15, 2018. Please read 
through the Call for Participation for additional details, including topics of 
interest, and submission instructions.

View the Vault '19 Call for Participation:
https://www.usenix.org/conference/vault19/call-for-participation

We look forward to receiving your proposals!

Christoph Hellwig
Erik Riedel
Ric Wheeler, Red Hat
vault19cha...@usenix.org


Re: [PATCH] bsg: convert to use blk-mq

2018-10-24 Thread Christoph Hellwig
On Mon, Oct 22, 2018 at 03:23:30AM -0600, Jens Axboe wrote:
> JFYI, I also reordered the series to make it correct. You can apply
> this one:
> 
> http://git.kernel.dk/cgit/linux-block/commit/?h=mq-conversions=2b2ffa16193e9a69a076595ed64429b8cc9b42aa
> 
> before the bsg patch, and it should be fine. Or just use the above branch,
> of course.

Hell no on that one.  The behavior of having methods right on the
request_queue which can be changed any time is something we absolutely
must not introduce into blk-mq.

Just passing a timeout handler to bsg_register_queue, which is then called
from the bsg ->timeout handler, is a much better way to sort out your
problem.  It can also easily be turned into an independent prep patch.


use dma_set_mask and dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
Various SCSI drivers that otherwise use the generic DMA API
still use pci_set_dma_mask, so switch them over to dma_set_mask
and dma_set_mask_and_coherent.
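
For reference, the shape of the conversion in each of these patches is the
same; a minimal sketch (hypothetical "foo_probe", not any particular driver):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int err;

	/* was: pci_set_dma_mask() + pci_set_consistent_dma_mask() */
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err)
		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
	if (err) {
		dev_warn(&pdev->dev, "no suitable DMA mask available\n");
		return err;
	}

	/* ... rest of probe ... */
	return 0;
}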


[PATCH 03/12] dpt_i2o: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Also move the dma_get_required_mask check
before actually setting the dma mask, so that we don't end up with
inconsistent settings in corner cases.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/dpt_i2o.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c
index 37de8fb186d7..d5a474d1434f 100644
--- a/drivers/scsi/dpt_i2o.c
+++ b/drivers/scsi/dpt_i2o.c
@@ -934,15 +934,15 @@ static int adpt_install_hba(struct scsi_host_template* 
sht, struct pci_dev* pDev
 *  See if we should enable dma64 mode.
 */
if (sizeof(dma_addr_t) > 4 &&
-   pci_set_dma_mask(pDev, DMA_BIT_MASK(64)) == 0) {
-   if (dma_get_required_mask(&pDev->dev) > DMA_BIT_MASK(32))
-   dma64 = 1;
-   }
-   if (!dma64 && pci_set_dma_mask(pDev, DMA_BIT_MASK(32)) != 0)
+   dma_get_required_mask(&pDev->dev) > DMA_BIT_MASK(32) &&
+   dma_set_mask(&pDev->dev, DMA_BIT_MASK(64)) == 0)
+   dma64 = 1;
+
+   if (!dma64 && dma_set_mask(&pDev->dev, DMA_BIT_MASK(32)) != 0)
return -EINVAL;
 
/* adapter only supports message blocks below 4GB */
-   pci_set_consistent_dma_mask(pDev, DMA_BIT_MASK(32));
+   dma_set_coherent_mask(&pDev->dev, DMA_BIT_MASK(32));
 
base_addr0_phys = pci_resource_start(pDev,0);
hba_map0_area_size = pci_resource_len(pDev,0);
-- 
2.19.1
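
The reordering described above boils down to something like the following
(illustrative only, names made up): decide whether 64-bit addressing is worth
enabling before any mask has been set, so a failed 64-bit attempt can no
longer leave the streaming and coherent masks out of sync.

#include <linux/dma-mapping.h>

/* returns true if a 64-bit mask was set; the caller falls back to 32-bit */
static bool foo_try_dma64(struct device *dev)
{
	return sizeof(dma_addr_t) > 4 &&
	       dma_get_required_mask(dev) > DMA_BIT_MASK(32) &&
	       dma_set_mask(dev, DMA_BIT_MASK(64)) == 0;
}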



[PATCH 12/12] sym53c8xx: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/sym53c8xx_2/sym_glue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c 
b/drivers/scsi/sym53c8xx_2/sym_glue.c
index 5f10aa9bad9b..6e9b54061d7e 100644
--- a/drivers/scsi/sym53c8xx_2/sym_glue.c
+++ b/drivers/scsi/sym53c8xx_2/sym_glue.c
@@ -1312,9 +1312,9 @@ static struct Scsi_Host *sym_attach(struct 
scsi_host_template *tpnt, int unit,
sprintf(np->s.inst_name, "sym%d", np->s.unit);
 
if ((SYM_CONF_DMA_ADDRESSING_MODE > 0) && (np->features & FE_DAC) &&
-   !pci_set_dma_mask(pdev, DMA_DAC_MASK)) {
+   !dma_set_mask(&pdev->dev, DMA_DAC_MASK)) {
set_dac(np);
-   } else if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   } else if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
printf_warning("%s: No suitable DMA available\n", sym_name(np));
goto attach_failed;
}
-- 
2.19.1



[PATCH 01/12] arcmsr: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/arcmsr/arcmsr_hba.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
index d4404eea24fb..11e8e6df50b1 100644
--- a/drivers/scsi/arcmsr/arcmsr_hba.c
+++ b/drivers/scsi/arcmsr/arcmsr_hba.c
@@ -903,9 +903,9 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct 
pci_device_id *id)
if(!host){
goto pci_disable_dev;
}
-   error = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+   error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
if(error){
-   error = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+   error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
if(error){
printk(KERN_WARNING
   "scsi%d: No suitable DMA mask available\n",
@@ -1049,9 +1049,9 @@ static int arcmsr_resume(struct pci_dev *pdev)
pr_warn("%s: pci_enable_device error\n", __func__);
return -ENODEV;
}
-   error = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+   error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
if (error) {
-   error = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+   error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
if (error) {
pr_warn("scsi%d: No suitable DMA mask available\n",
   host->host_no);
-- 
2.19.1



[PATCH 02/12] bfa: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Switch it over to the better generic DMA API
helper and also ensure we set the coherent mask as well in the resume
path.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/bfa/bfad.c | 18 +++---
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
index bd7e6a6fc1f1..8ebaf0693098 100644
--- a/drivers/scsi/bfa/bfad.c
+++ b/drivers/scsi/bfa/bfad.c
@@ -739,14 +739,10 @@ bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad)
 
pci_set_master(pdev);
 
-
-   if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) ||
-   (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0)) {
-   if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) ||
-  (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0)) {
-   printk(KERN_ERR "pci_set_dma_mask fail %p\n", pdev);
-   goto out_release_region;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   printk(KERN_ERR "dma_set_mask_and_coherent fail %p\n", pdev);
+   goto out_release_region;
}
 
/* Enable PCIE Advanced Error Recovery (AER) if kernel supports */
@@ -1565,9 +1561,9 @@ bfad_pci_slot_reset(struct pci_dev *pdev)
pci_save_state(pdev);
pci_set_master(pdev);
 
-   if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(64)) != 0)
-   if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0)
-   goto out_disable_device;
+   if (dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(32)))
+   goto out_disable_device;
 
pci_cleanup_aer_uncorrect_error_status(pdev);
 
-- 
2.19.1



[PATCH 11/12] stex: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Switch it over to the better generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/stex.c | 17 +++--
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
index 9b20643ab49d..95f370ad05e0 100644
--- a/drivers/scsi/stex.c
+++ b/drivers/scsi/stex.c
@@ -1617,19 +1617,6 @@ static struct st_card_info stex_card_info[] = {
},
 };
 
-static int stex_set_dma_mask(struct pci_dev * pdev)
-{
-   int ret;
-
-   if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))
-   && !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
-   return 0;
-   ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
-   if (!ret)
-   ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
-   return ret;
-}
-
 static int stex_request_irq(struct st_hba *hba)
 {
struct pci_dev *pdev = hba->pdev;
@@ -1710,7 +1697,9 @@ static int stex_probe(struct pci_dev *pdev, const struct 
pci_device_id *id)
goto out_release_regions;
}
 
-   err = stex_set_dma_mask(pdev);
+   err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+   if (err)
+   err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (err) {
printk(KERN_ERR DRV_NAME "(%s): set dma mask failed\n",
pci_name(pdev));
-- 
2.19.1



[PATCH 10/12] mvumi: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/mvumi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index 2458974d1af6..3d2d026d1ccf 100644
--- a/drivers/scsi/mvumi.c
+++ b/drivers/scsi/mvumi.c
@@ -2620,7 +2620,7 @@ static int __maybe_unused mvumi_resume(struct pci_dev 
*pdev)
}
 
ret = mvumi_pci_set_master(pdev);
-   ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+   ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
if (ret)
goto fail;
ret = pci_request_regions(mhba->pdev, MV_DRIVER_NAME);
-- 
2.19.1



[PATCH 05/12] hisi_sas: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Switch it over to the better generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c 
b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index bd4ce38b98d2..73bf45e52a0a 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -2201,14 +2201,11 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct 
pci_device_id *id)
if (rc)
goto err_out_disable_device;
 
-   if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) ||
-   (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0)) {
-   if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) ||
-  (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0)) {
-   dev_err(dev, "No usable DMA addressing method\n");
-   rc = -EIO;
-   goto err_out_regions;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   dev_err(dev, "No usable DMA addressing method\n");
+   rc = -EIO;
+   goto err_out_regions;
}
 
shost = hisi_sas_shost_alloc_pci(pdev);
-- 
2.19.1



[PATCH 08/12] isci: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Switch it over to the better generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/isci/init.c | 19 ---
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/isci/init.c b/drivers/scsi/isci/init.c
index 08c7b1e25fe4..d72edbcbb7c6 100644
--- a/drivers/scsi/isci/init.c
+++ b/drivers/scsi/isci/init.c
@@ -304,21 +304,10 @@ static int isci_pci_init(struct pci_dev *pdev)
 
pci_set_master(pdev);
 
-   err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
-   if (err) {
-   err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
-   if (err)
-   return err;
-   }
-
-   err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
-   if (err) {
-   err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
-   if (err)
-   return err;
-   }
-
-   return 0;
+   err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+   if (err)
+   err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+   return err;
 }
 
 static int num_controllers(struct pci_dev *pdev)
-- 
2.19.1



[PATCH 06/12] hptiop: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/hptiop.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
index 2fad7f03aa02..dc52b37a0df8 100644
--- a/drivers/scsi/hptiop.c
+++ b/drivers/scsi/hptiop.c
@@ -1309,11 +1309,11 @@ static int hptiop_probe(struct pci_dev *pcidev, const 
struct pci_device_id *id)
 
/* Enable 64bit DMA if possible */
iop_ops = (struct hptiop_adapter_ops *)id->driver_data;
-   if (pci_set_dma_mask(pcidev, DMA_BIT_MASK(iop_ops->hw_dma_bit_mask))) {
-   if (pci_set_dma_mask(pcidev, DMA_BIT_MASK(32))) {
-   printk(KERN_ERR "hptiop: fail to set dma_mask\n");
-   goto disable_pci_device;
-   }
+   if (dma_set_mask(&pcidev->dev,
+DMA_BIT_MASK(iop_ops->hw_dma_bit_mask)) ||
+   dma_set_mask(&pcidev->dev, DMA_BIT_MASK(32))) {
+   printk(KERN_ERR "hptiop: fail to set dma_mask\n");
+   goto disable_pci_device;
}
 
if (pci_request_regions(pcidev, driver_name)) {
-- 
2.19.1



[PATCH 04/12] esas2r: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Also move the dma_get_required_mask check
before actually setting the dma mask, so that we don't end up with
inconsistent settings in corner cases.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/esas2r/esas2r_init.c | 49 +--
 1 file changed, 14 insertions(+), 35 deletions(-)

diff --git a/drivers/scsi/esas2r/esas2r_init.c 
b/drivers/scsi/esas2r/esas2r_init.c
index bbe77db8938d..46b2c83ba21f 100644
--- a/drivers/scsi/esas2r/esas2r_init.c
+++ b/drivers/scsi/esas2r/esas2r_init.c
@@ -266,6 +266,7 @@ int esas2r_init_adapter(struct Scsi_Host *host, struct 
pci_dev *pcid,
int i;
void *next_uncached;
struct esas2r_request *first_request, *last_request;
+   bool dma64 = false;
 
if (index >= MAX_ADAPTERS) {
esas2r_log(ESAS2R_LOG_CRIT,
@@ -286,42 +287,20 @@ int esas2r_init_adapter(struct Scsi_Host *host, struct 
pci_dev *pcid,
a->pcid = pcid;
a->host = host;
 
-   if (sizeof(dma_addr_t) > 4) {
-   const uint64_t required_mask = dma_get_required_mask
-  (&pcid->dev);
-   if (required_mask > DMA_BIT_MASK(32)
-   && !pci_set_dma_mask(pcid, DMA_BIT_MASK(64))
-   && !pci_set_consistent_dma_mask(pcid,
-   DMA_BIT_MASK(64))) {
-   esas2r_log_dev(ESAS2R_LOG_INFO,
-  &(a->pcid->dev),
-  "64-bit PCI addressing enabled\n");
-   } else if (!pci_set_dma_mask(pcid, DMA_BIT_MASK(32))
-  && !pci_set_consistent_dma_mask(pcid,
-  DMA_BIT_MASK(32))) {
-   esas2r_log_dev(ESAS2R_LOG_INFO,
-  &(a->pcid->dev),
-  "32-bit PCI addressing enabled\n");
-   } else {
-   esas2r_log(ESAS2R_LOG_CRIT,
-  "failed to set DMA mask");
-   esas2r_kill_adapter(index);
-   return 0;
-   }
-   } else {
-   if (!pci_set_dma_mask(pcid, DMA_BIT_MASK(32))
-   && !pci_set_consistent_dma_mask(pcid,
-   DMA_BIT_MASK(32))) {
-   esas2r_log_dev(ESAS2R_LOG_INFO,
-  &(a->pcid->dev),
-  "32-bit PCI addressing enabled\n");
-   } else {
-   esas2r_log(ESAS2R_LOG_CRIT,
-  "failed to set DMA mask");
-   esas2r_kill_adapter(index);
-   return 0;
-   }
+   if (sizeof(dma_addr_t) > 4 &&
+   dma_get_required_mask(&pcid->dev) > DMA_BIT_MASK(32) &&
+   !dma_set_mask_and_coherent(&pcid->dev, DMA_BIT_MASK(64)))
+   dma64 = true;
+
+   if (!dma64 && dma_set_mask_and_coherent(&pcid->dev, DMA_BIT_MASK(32))) {
+   esas2r_log(ESAS2R_LOG_CRIT, "failed to set DMA mask");
+   esas2r_kill_adapter(index);
+   return 0;
}
+
+   esas2r_log_dev(ESAS2R_LOG_INFO, &pcid->dev,
+  "%s-bit PCI addressing enabled\n", dma64 ? "64" : "32");
+
esas2r_adapters[index] = a;
sprintf(a->name, ESAS2R_DRVR_NAME "_%02d", index);
esas2r_debug("new adapter %p, name %s", a, a->name);
-- 
2.19.1



[PATCH 09/12] lpfc: use dma_set_mask_and_coherent

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.  Switch it over to the better generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/lpfc/lpfc_init.c | 34 ++
 1 file changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 323a32e87258..21e4aaf55170 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7177,26 +7177,19 @@ lpfc_post_init_setup(struct lpfc_hba *phba)
 static int
 lpfc_sli_pci_mem_setup(struct lpfc_hba *phba)
 {
-   struct pci_dev *pdev;
+   struct pci_dev *pdev = phba->pcidev;
unsigned long bar0map_len, bar2map_len;
int i, hbq_count;
void *ptr;
int error = -ENODEV;
 
-   /* Obtain PCI device reference */
-   if (!phba->pcidev)
+   if (!pdev)
return error;
-   else
-   pdev = phba->pcidev;
 
/* Set the device DMA mask size */
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0
-|| pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(64)) != 0) {
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0
-|| pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(32)) != 0) {
-   return error;
-   }
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
+   return error;
 
/* Get the bus address of Bar0 and Bar2 and the number of bytes
 * required by each mapping.
@@ -9558,25 +9551,18 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
 static int
 lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
 {
-   struct pci_dev *pdev;
+   struct pci_dev *pdev = phba->pcidev;
unsigned long bar0map_len, bar1map_len, bar2map_len;
int error = -ENODEV;
uint32_t if_type;
 
-   /* Obtain PCI device reference */
-   if (!phba->pcidev)
+   if (!pdev)
return error;
-   else
-   pdev = phba->pcidev;
 
/* Set the device DMA mask size */
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0
-|| pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(64)) != 0) {
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0
-|| pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(32)) != 0) {
-   return error;
-   }
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
+   return error;
 
/*
 * The BARs and register set definitions and offset locations are
-- 
2.19.1



[PATCH 07/12] initio: use dma_set_mask

2018-10-18 Thread Christoph Hellwig
The driver currently uses pci_set_dma_mask despite otherwise using
the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/initio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/initio.c b/drivers/scsi/initio.c
index 7a91cf3ff173..0a8d786c84ed 100644
--- a/drivers/scsi/initio.c
+++ b/drivers/scsi/initio.c
@@ -2840,7 +2840,7 @@ static int initio_probe_one(struct pci_dev *pdev,
reg = 0;
bios_seg = (bios_seg << 8) + ((u16) ((reg & 0xFF00) >> 8));
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING  "i91u: Could not set 32 bit DMA mask\n");
error = -ENODEV;
goto out_disable_device;
-- 
2.19.1



[PATCH 3/4] gdth: remove gdth_{alloc,free}_ioctl

2018-10-18 Thread Christoph Hellwig
Out of the three callers one insists on the scratch buffer, and the
others are fine with a new allocation.  Switch those two to just use
pci_alloc_consistent directly, and open code the scratch buffer
allocation in the remaining one.  This avoids a case where we might
be doing a memory allocation under a spinlock with irqs disabled.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c  |  7 ++--
 drivers/scsi/gdth_proc.c | 71 
 drivers/scsi/gdth_proc.h |  3 --
 3 files changed, 25 insertions(+), 56 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 2bec840018ad..7274d09b2a6c 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -4232,7 +4232,7 @@ static int ioc_general(void __user *arg, char *cmnd)
gdth_ioctl_general gen;
gdth_ha_str *ha;
char *buf = NULL;
-   u64 paddr;
+   dma_addr_t paddr;
int rval;
 
if (copy_from_user(&gen, arg, sizeof(gdth_ioctl_general)))
@@ -4251,7 +4251,8 @@ static int ioc_general(void __user *arg, char *cmnd)
if (gen.data_len + gen.sense_len == 0)
goto execute;
 
-   buf = gdth_ioctl_alloc(ha, gen.data_len + gen.sense_len, FALSE, &paddr);
+buf = pci_alloc_consistent(ha->pdev, gen.data_len + gen.sense_len,
+   &paddr);
if (!buf)
return -EFAULT;
 
@@ -4291,7 +4292,7 @@ static int ioc_general(void __user *arg, char *cmnd)
 
rval = 0;
 out_free_buf:
-   gdth_ioctl_free(ha, gen.data_len+gen.sense_len, buf, paddr);
+   pci_free_consistent(ha->pdev, gen.data_len + gen.sense_len, buf, paddr);
return 0;
 }
  
diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index 63d851398e38..6a6bdab748df 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -31,7 +31,6 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 int i, found;
gdth_cmd_str    gdtcmd;
 gdth_cpar_str   *pcpar;
-u64 paddr;
 
 charcmnd[MAX_COMMAND_SIZE];
 memset(cmnd, 0xff, 12);
@@ -113,13 +112,23 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 }
 
 if (wb_mode) {
-if (!gdth_ioctl_alloc(ha, sizeof(gdth_cpar_str), TRUE, &paddr))
-return(-EBUSY);
+   unsigned long flags;
+
+   BUILD_BUG_ON(sizeof(gdth_cpar_str) > GDTH_SCRATCH);
+
+   spin_lock_irqsave(&ha->smp_lock, flags);
+   if (ha->scratch_busy) {
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+return -EBUSY;
+   }
+   ha->scratch_busy = TRUE;
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+
 pcpar = (gdth_cpar_str *)ha->pscratch;
 memcpy( pcpar, &ha->cpar, sizeof(gdth_cpar_str) );
 gdtcmd.Service = CACHESERVICE;
 gdtcmd.OpCode = GDT_IOCTL;
-gdtcmd.u.ioctl.p_param = paddr;
+gdtcmd.u.ioctl.p_param = ha->scratch_phys;
 gdtcmd.u.ioctl.param_size = sizeof(gdth_cpar_str);
 gdtcmd.u.ioctl.subfunc = CACHE_CONFIG;
 gdtcmd.u.ioctl.channel = INVALID_CHANNEL;
@@ -127,7 +136,10 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char 
*buffer,
 
 gdth_execute(host, &gdtcmd, cmnd, 30, NULL);
 
-gdth_ioctl_free(ha, GDTH_SCRATCH, ha->pscratch, paddr);
+   spin_lock_irqsave(&ha->smp_lock, flags);
+   ha->scratch_busy = FALSE;
+   spin_unlock_irqrestore(&ha->smp_lock, flags);
+
 printk("Done.\n");
 return(orig_length);
 }
@@ -143,7 +155,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 int id, i, j, k, sec, flag;
 int no_mdrv = 0, drv_no, is_mirr;
 u32 cnt;
-u64 paddr;
+dma_addr_t paddr;
 int rc = -ENOMEM;
 
 gdth_cmd_str *gdtcmd;
@@ -232,7 +244,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nPhysical Devices:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, size, FALSE, &paddr);
+buf = pci_alloc_consistent(ha->pdev, size, &paddr);
 if (!buf) 
 goto stop_output;
 for (i = 0; i < ha->bus_cnt; ++i) {
@@ -408,7 +420,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_printf(m,
" To Array Drv.:\t%s\n", hrec);
 }   
-
+
 if (!flag)
 seq_puts(m, "\n --\n");
 
@@ -502,7 +514,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, size, buf, paddr);
+   pci_free_consistent(ha->pdev, size, buf, paddr);
 
 for (i = 0; i < MAX_HDRIVES; ++i) {
 if (!(ha->hdr[i].present))
@@ -555,47 +567,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 return rc;
 }
 
-static char *gdth_ioctl_alloc(gdth_ha_str *ha, int size, int scratch,
-

[PATCH 4/4] gdth: use generic DMA API

2018-10-18 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.  Also switch
to dma_map_single from pci_map_page in one case where this makes the code
simpler.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c  | 111 +++
 drivers/scsi/gdth_proc.c |   4 +-
 2 files changed, 56 insertions(+), 59 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 7274d09b2a6c..3d856554b1b1 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -2518,9 +2518,9 @@ static int gdth_fill_cache_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp,
 
 if (scsi_bufflen(scp)) {
 cmndinfo->dma_dir = (read_write == 1 ?
-PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE);   
-sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
-   cmndinfo->dma_dir);
+DMA_TO_DEVICE : DMA_FROM_DEVICE);   
+sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
+  scsi_sg_count(scp), cmndinfo->dma_dir);
 if (mode64) {
 struct scatterlist *sl;
 
@@ -2603,8 +2603,6 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 dma_addr_t sense_paddr;
 int cmd_index, sgcnt, mode64;
 u8 t,l;
-struct page *page;
-unsigned long offset;
 struct gdth_cmndinfo *cmndinfo;
 
 t = scp->device->id;
@@ -2649,10 +2647,8 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 }
 
 } else {
-page = virt_to_page(scp->sense_buffer);
-offset = (unsigned long)scp->sense_buffer & ~PAGE_MASK;
-sense_paddr = pci_map_page(ha->pdev,page,offset,
-   16,PCI_DMA_FROMDEVICE);
+sense_paddr = dma_map_single(&ha->pdev->dev, scp->sense_buffer, 16,
+DMA_FROM_DEVICE);
 
cmndinfo->sense_paddr  = sense_paddr;
 cmdp->OpCode   = GDT_WRITE; /* always */
@@ -2693,9 +2689,9 @@ static int gdth_fill_raw_cmd(gdth_ha_str *ha, struct 
scsi_cmnd *scp, u8 b)
 }
 
 if (scsi_bufflen(scp)) {
-cmndinfo->dma_dir = PCI_DMA_BIDIRECTIONAL;
-sgcnt = pci_map_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
-   cmndinfo->dma_dir);
+cmndinfo->dma_dir = DMA_BIDIRECTIONAL;
+sgcnt = dma_map_sg(&ha->pdev->dev, scsi_sglist(scp),
+  scsi_sg_count(scp), cmndinfo->dma_dir);
 if (mode64) {
 struct scatterlist *sl;
 
@@ -3313,12 +3309,12 @@ static int gdth_sync_event(gdth_ha_str *ha, int 
service, u8 index,
 return 2;
 }
 if (scsi_bufflen(scp))
-pci_unmap_sg(ha->pdev, scsi_sglist(scp), scsi_sg_count(scp),
+dma_unmap_sg(&ha->pdev->dev, scsi_sglist(scp), scsi_sg_count(scp),
  cmndinfo->dma_dir);
 
 if (cmndinfo->sense_paddr)
-pci_unmap_page(ha->pdev, cmndinfo->sense_paddr, 16,
-   PCI_DMA_FROMDEVICE);
+dma_unmap_page(&ha->pdev->dev, cmndinfo->sense_paddr, 16,
+  DMA_FROM_DEVICE);
 
 if (ha->status == S_OK) {
 cmndinfo->status = S_OK;
@@ -4251,8 +4247,8 @@ static int ioc_general(void __user *arg, char *cmnd)
if (gen.data_len + gen.sense_len == 0)
goto execute;
 
-buf = pci_alloc_consistent(ha->pdev, gen.data_len + gen.sense_len,
-   &paddr);
+buf = dma_alloc_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len,
+   &paddr, GFP_KERNEL);
if (!buf)
return -EFAULT;
 
@@ -4292,7 +4288,8 @@ static int ioc_general(void __user *arg, char *cmnd)
 
rval = 0;
 out_free_buf:
-   pci_free_consistent(ha->pdev, gen.data_len + gen.sense_len, buf, paddr);
+   dma_free_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len, buf,
+   paddr);
return 0;
 }
  
@@ -4749,22 +4746,22 @@ static int __init gdth_isa_probe_one(u32 isa_bios)
 
error = -ENOMEM;
 
-   ha->pscratch = pci_alloc_consistent(ha->pdev, GDTH_SCRATCH,
-   &scratch_dma_handle);
+   ha->pscratch = dma_alloc_coherent(&ha->pdev->dev, GDTH_SCRATCH,
+   &scratch_dma_handle, GFP_KERNEL);
if (!ha->pscratch)
goto out_dec_counters;
ha->scratch_phys = scratch_dma_handle;
 
-   ha->pmsg = pci_alloc_consistent(ha->pdev, sizeof(gdth_msg_str),
-   &scratch_dma_handle);
+   ha->pmsg = dma_alloc_coherent(&ha->pdev->dev, sizeof(gdth_msg_str),
+   &scratch_dma_handle, GFP_KERNEL);
i

[PATCH 1/4] gdth: refactor ioc_general

2018-10-18 Thread Christoph Hellwig
This function is a huge mess with duplicated error handling.  Split out
a few useful helpers and use goto labels to untangle the error handling
and no-data ioctl handling.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth.c | 247 +++-
 1 file changed, 130 insertions(+), 117 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index 16709735b546..2bec840018ad 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -4155,131 +4155,144 @@ static int ioc_resetdrv(void __user *arg, char *cmnd)
 return 0;
 }
 
-static int ioc_general(void __user *arg, char *cmnd)
+static void gdth_ioc_addr32(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
 {
-gdth_ioctl_general gen;
-char *buf = NULL;
-u64 paddr; 
-gdth_ha_str *ha;
-int rval;
+   if (ha->cache_feat & SCATTER_GATHER) {
+   gen->command.u.cache.DestAddr = 0xffffffff;
+   gen->command.u.cache.sg_canz = 1;
+   gen->command.u.cache.sg_lst[0].sg_ptr = (u32)paddr;
+   gen->command.u.cache.sg_lst[0].sg_len = gen->data_len;
+   gen->command.u.cache.sg_lst[1].sg_len = 0;
+   } else {
+   gen->command.u.cache.DestAddr = paddr;
+   gen->command.u.cache.sg_canz = 0;
+   }
+}
 
-if (copy_from_user(&gen, arg, sizeof(gdth_ioctl_general)))
-return -EFAULT;
-ha = gdth_find_ha(gen.ionode);
-if (!ha)
-return -EFAULT;
+static void gdth_ioc_addr64(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
+{
+   if (ha->cache_feat & SCATTER_GATHER) {
+   gen->command.u.cache64.DestAddr = (u64)-1;
+   gen->command.u.cache64.sg_canz = 1;
+   gen->command.u.cache64.sg_lst[0].sg_ptr = paddr;
+   gen->command.u.cache64.sg_lst[0].sg_len = gen->data_len;
+   gen->command.u.cache64.sg_lst[1].sg_len = 0;
+   } else {
+   gen->command.u.cache64.DestAddr = paddr;
+   gen->command.u.cache64.sg_canz = 0;
+   }
+}
 
-if (gen.data_len > INT_MAX)
-return -EINVAL;
-if (gen.sense_len > INT_MAX)
-return -EINVAL;
-if (gen.data_len + gen.sense_len > INT_MAX)
-return -EINVAL;
+static void gdth_ioc_cacheservice(gdth_ha_str *ha, gdth_ioctl_general *gen,
+   u64 paddr)
+{
+   if (ha->cache_feat & GDT_64BIT) {
+   /* copy elements from 32-bit IOCTL structure */
+   gen->command.u.cache64.BlockCnt = gen->command.u.cache.BlockCnt;
+   gen->command.u.cache64.BlockNo = gen->command.u.cache.BlockNo;
+   gen->command.u.cache64.DeviceNo = gen->command.u.cache.DeviceNo;
 
-if (gen.data_len + gen.sense_len != 0) {
-if (!(buf = gdth_ioctl_alloc(ha, gen.data_len + gen.sense_len,
- FALSE, &paddr)))
-return -EFAULT;
-if (copy_from_user(buf, arg + sizeof(gdth_ioctl_general),  
-   gen.data_len + gen.sense_len)) {
-gdth_ioctl_free(ha, gen.data_len+gen.sense_len, buf, paddr);
-return -EFAULT;
-}
+   gdth_ioc_addr64(ha, gen, paddr);
+   } else {
+   gdth_ioc_addr32(ha, gen, paddr);
+   }
+}
 
-if (gen.command.OpCode == GDT_IOCTL) {
-gen.command.u.ioctl.p_param = paddr;
-} else if (gen.command.Service == CACHESERVICE) {
-if (ha->cache_feat & GDT_64BIT) {
-/* copy elements from 32-bit IOCTL structure */
-gen.command.u.cache64.BlockCnt = gen.command.u.cache.BlockCnt;
-gen.command.u.cache64.BlockNo = gen.command.u.cache.BlockNo;
-gen.command.u.cache64.DeviceNo = gen.command.u.cache.DeviceNo;
-/* addresses */
-if (ha->cache_feat & SCATTER_GATHER) {
-gen.command.u.cache64.DestAddr = (u64)-1;
-gen.command.u.cache64.sg_canz = 1;
-gen.command.u.cache64.sg_lst[0].sg_ptr = paddr;
-gen.command.u.cache64.sg_lst[0].sg_len = gen.data_len;
-gen.command.u.cache64.sg_lst[1].sg_len = 0;
-} else {
-gen.command.u.cache64.DestAddr = paddr;
-gen.command.u.cache64.sg_canz = 0;
-}
-} else {
-if (ha->cache_feat & SCATTER_GATHER) {
-gen.command.u.cache.DestAddr = 0xffffffff;
-gen.command.u.cache.sg_canz = 1;
-gen.command.u.cache.sg_lst[0].sg_ptr = (u32)paddr;
-gen.command.u.cache.sg_lst[0].sg_len = gen.data_len;
-gen.command.u.cache.sg_lst[1].sg_len = 0;
-} else {
-gen.command.u.cache.Dest

dma related cleanups for gdth

2018-10-18 Thread Christoph Hellwig
Cleans up various oddities found during a code audit.


[PATCH 2/4] gdth: reuse dma coherent allocation in gdth_show_info

2018-10-18 Thread Christoph Hellwig
gdth_show_info currently allocs and frees a dma buffer four times,
which isn't very efficient. Reuse a single allocation instead.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/gdth_proc.c | 18 +-
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/gdth_proc.c b/drivers/scsi/gdth_proc.c
index 3a9751a80225..63d851398e38 100644
--- a/drivers/scsi/gdth_proc.c
+++ b/drivers/scsi/gdth_proc.c
@@ -226,11 +226,13 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 #endif
 
 if (ha->more_proc) {
+size_t size = max(GDTH_SCRATCH, sizeof(gdth_hget_str));
+
 /* more information: 2. about physical devices */
 seq_puts(m, "\nPhysical Devices:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
+buf = gdth_ioctl_alloc(ha, size, FALSE, &paddr);
 if (!buf) 
 goto stop_output;
 for (i = 0; i < ha->bus_cnt; ++i) {
@@ -323,7 +325,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
 if (!flag)
 seq_puts(m, "\n --\n");
@@ -332,7 +333,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nLogical Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
 if (!buf) 
 goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
@@ -408,7 +408,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_printf(m,
" To Array Drv.:\t%s\n", hrec);
 }   
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
 if (!flag)
 seq_puts(m, "\n --\n");
@@ -417,9 +416,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nArray Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
-if (!buf) 
-goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
 if (!(ha->hdr[i].is_arraydrv && ha->hdr[i].is_master))
 continue;
@@ -468,8 +464,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
hrec);
 }
 }
-gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
-
+
 if (!flag)
 seq_puts(m, "\n --\n");
 
@@ -477,9 +472,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 seq_puts(m, "\nHost Drives:");
 flag = FALSE;
 
-buf = gdth_ioctl_alloc(ha, sizeof(gdth_hget_str), FALSE, &paddr);
-if (!buf) 
-goto stop_output;
 for (i = 0; i < MAX_LDRIVES; ++i) {
 if (!ha->hdr[i].is_logdrv || 
 (ha->hdr[i].is_arraydrv && !ha->hdr[i].is_master))
@@ -510,7 +502,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host 
*host)
 }
 }
 }
-gdth_ioctl_free(ha, sizeof(gdth_hget_str), buf, paddr);
+gdth_ioctl_free(ha, size, buf, paddr);
 
 for (i = 0; i < MAX_HDRIVES; ++i) {
 if (!(ha->hdr[i].present))
-- 
2.19.1



dma related cleanups for pmcraid

2018-10-18 Thread Christoph Hellwig
Cleans up various oddities found during a code audit.


[PATCH 3/3] pmcraid: use generic DMA API

2018-10-18 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/pmcraid.c | 79 +++---
 1 file changed, 36 insertions(+), 43 deletions(-)

diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index 401e543f1723..707d766c1ee9 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi/pmcraid.c
@@ -3514,7 +3514,7 @@ static int pmcraid_build_passthrough_ioadls(
return -ENOMEM;
}
 
-   sglist->num_dma_sg = pci_map_sg(cmd->drv_inst->pdev,
+   sglist->num_dma_sg = dma_map_sg(&cmd->drv_inst->pdev->dev,
sglist->scatterlist,
sglist->num_sg, direction);
 
@@ -3563,7 +3563,7 @@ static void pmcraid_release_passthrough_ioadls(
struct pmcraid_sglist *sglist = cmd->sglist;
 
if (buflen > 0) {
-   pci_unmap_sg(cmd->drv_inst->pdev,
+   dma_unmap_sg(&cmd->drv_inst->pdev->dev,
 sglist->scatterlist,
 sglist->num_sg,
 direction);
@@ -4699,9 +4699,9 @@ static void
 pmcraid_release_host_rrqs(struct pmcraid_instance *pinstance, int maxindex)
 {
int i;
-   for (i = 0; i < maxindex; i++) {
 
-   pci_free_consistent(pinstance->pdev,
+   for (i = 0; i < maxindex; i++) {
+   dma_free_coherent(&pinstance->pdev->dev,
HRRQ_ENTRY_SIZE * PMCRAID_MAX_CMD,
pinstance->hrrq_start[i],
pinstance->hrrq_start_bus_addr[i]);
@@ -4728,11 +4728,9 @@ static int pmcraid_allocate_host_rrqs(struct 
pmcraid_instance *pinstance)
 
for (i = 0; i < pinstance->num_hrrq; i++) {
pinstance->hrrq_start[i] =
-   pci_alloc_consistent(
-   pinstance->pdev,
-   buffer_size,
-   &(pinstance->hrrq_start_bus_addr[i]));
-
+   dma_alloc_coherent(&pinstance->pdev->dev, buffer_size,
+  &pinstance->hrrq_start_bus_addr[i],
+  GFP_KERNEL);
if (!pinstance->hrrq_start[i]) {
pmcraid_err("pci_alloc failed for hrrq vector : %d\n",
i);
@@ -4761,7 +4759,7 @@ static int pmcraid_allocate_host_rrqs(struct 
pmcraid_instance *pinstance)
 static void pmcraid_release_hcams(struct pmcraid_instance *pinstance)
 {
if (pinstance->ccn.msg != NULL) {
-   pci_free_consistent(pinstance->pdev,
+   dma_free_coherent(&pinstance->pdev->dev,
PMCRAID_AEN_HDR_SIZE +
sizeof(struct pmcraid_hcam_ccn_ext),
pinstance->ccn.msg,
@@ -4773,7 +4771,7 @@ static void pmcraid_release_hcams(struct pmcraid_instance 
*pinstance)
}
 
if (pinstance->ldn.msg != NULL) {
-   pci_free_consistent(pinstance->pdev,
+   dma_free_coherent(&pinstance->pdev->dev,
PMCRAID_AEN_HDR_SIZE +
sizeof(struct pmcraid_hcam_ldn),
pinstance->ldn.msg,
@@ -4794,17 +4792,15 @@ static void pmcraid_release_hcams(struct 
pmcraid_instance *pinstance)
  */
 static int pmcraid_allocate_hcams(struct pmcraid_instance *pinstance)
 {
-   pinstance->ccn.msg = pci_alloc_consistent(
-   pinstance->pdev,
+   pinstance->ccn.msg = dma_alloc_coherent(&pinstance->pdev->dev,
PMCRAID_AEN_HDR_SIZE +
sizeof(struct pmcraid_hcam_ccn_ext),
-   &(pinstance->ccn.baddr));
+   &pinstance->ccn.baddr, GFP_KERNEL);
 
-   pinstance->ldn.msg = pci_alloc_consistent(
-   pinstance->pdev,
+   pinstance->ldn.msg = dma_alloc_coherent(&pinstance->pdev->dev,
PMCRAID_AEN_HDR_SIZE +
sizeof(struct pmcraid_hcam_ldn),
-   &(pinstance->ldn.baddr));
+   &pinstance->ldn.baddr, GFP_KERNEL);
 
if (pinstance->ldn.msg == NULL || pinstance->ccn.msg == NULL) {
pmcraid_release_hcams(pinstance);
@@ -4832,7 +4828,7 @@ static void pmcraid_release_config_buffers(struct 
pmcraid_instance *pinstance)
 {
if (pinstance->cfg_table != NULL &&
pinstance->cfg_table_bus_addr != 0) {
-   

[PATCH 1/3] pmcraid: simplify pmcraid_cancel_all a bit

2018-10-18 Thread Christoph Hellwig
No need for a local cmd_done variable, and pass boolean values as bool
type instead of u32.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/pmcraid.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index 4e86994e10e8..3ba606420247 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi/pmcraid.c
@@ -2491,17 +2491,15 @@ static void pmcraid_request_sense(struct pmcraid_cmd 
*cmd)
 /**
  * pmcraid_cancel_all - cancel all outstanding IOARCBs as part of error 
recovery
  * @cmd: command that failed
- * @sense: true if request_sense is required after cancel all
+ * @need_sense: true if request_sense is required after cancel all
  *
  * This function sends a cancel all to a device to clear the queue.
  */
-static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, u32 sense)
+static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, bool need_sense)
 {
struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb;
struct pmcraid_resource_entry *res = scsi_cmd->device->hostdata;
-   void (*cmd_done) (struct pmcraid_cmd *) = sense ? pmcraid_erp_done
-   : pmcraid_request_sense;
 
memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN);
ioarcb->request_flags0 = SYNC_OVERRIDE;
@@ -2519,7 +2517,8 @@ static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, 
u32 sense)
/* writing to IOARRIN must be protected by host_lock, as mid-layer
 * schedule queuecommand while we are doing this
 */
-   pmcraid_send_cmd(cmd, cmd_done,
+   pmcraid_send_cmd(cmd, need_sense ?
+pmcraid_erp_done : pmcraid_request_sense,
 PMCRAID_REQUEST_SENSE_TIMEOUT,
 pmcraid_timeout_handler);
 }
@@ -2612,7 +2611,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd)
struct pmcraid_ioasa *ioasa = &cmd->ioa_cb->ioasa;
u32 ioasc = le32_to_cpu(ioasa->ioasc);
u32 masked_ioasc = ioasc & PMCRAID_IOASC_SENSE_MASK;
-   u32 sense_copied = 0;
+   bool sense_copied = false;
 
if (!res) {
pmcraid_info("resource pointer is NULL\n");
@@ -2684,7 +2683,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd)
memcpy(scsi_cmd->sense_buffer,
   ioasa->sense_data,
   data_size);
-   sense_copied = 1;
+   sense_copied = true;
}
 
if (RES_IS_GSCSI(res->cfg_entry))
-- 
2.19.1



[PATCH 2/3] pmcraid: don't allocate a dma coherent buffer for sense data

2018-10-18 Thread Christoph Hellwig
We can just dma map the sense buffer passed with the scsi command,
and that gets us out of the nasty business of doing dma coherent
allocations from irq context.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/pmcraid.c | 24 
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index 3ba606420247..401e543f1723 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi/pmcraid.c
@@ -846,16 +846,9 @@ static void pmcraid_erp_done(struct pmcraid_cmd *cmd)
cmd->ioa_cb->ioarcb.cdb[0], ioasc);
}
 
-   /* if we had allocated sense buffers for request sense, copy the sense
-* release the buffers
-*/
-   if (cmd->sense_buffer != NULL) {
-   memcpy(scsi_cmd->sense_buffer,
-  cmd->sense_buffer,
-  SCSI_SENSE_BUFFERSIZE);
-   pci_free_consistent(pinstance->pdev,
-   SCSI_SENSE_BUFFERSIZE,
-   cmd->sense_buffer, cmd->sense_buffer_dma);
+   if (cmd->sense_buffer) {
+   dma_unmap_single(&pinstance->pdev->dev, cmd->sense_buffer_dma,
+SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
cmd->sense_buffer = NULL;
cmd->sense_buffer_dma = 0;
}
@@ -2444,13 +2437,12 @@ static void pmcraid_request_sense(struct pmcraid_cmd 
*cmd)
 {
struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb;
struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl;
+   struct device *dev = &cmd->drv_inst->pdev->dev;
 
-   /* allocate DMAable memory for sense buffers */
-   cmd->sense_buffer = pci_alloc_consistent(cmd->drv_inst->pdev,
-SCSI_SENSE_BUFFERSIZE,
-&cmd->sense_buffer_dma);
-
-   if (cmd->sense_buffer == NULL) {
+   cmd->sense_buffer = cmd->scsi_cmd->sense_buffer;
+   cmd->sense_buffer_dma = dma_map_single(dev, cmd->sense_buffer,
+   SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+   if (dma_mapping_error(dev, cmd->sense_buffer_dma)) {
pmcraid_err
("couldn't allocate sense buffer for request sense\n");
pmcraid_erp_done(cmd);
-- 
2.19.1
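
The replacement pattern, reduced to its core (a sketch with invented names;
pmcraid keeps the handle in its own per-command structure):

#include <linux/dma-mapping.h>
#include <scsi/scsi_cmnd.h>

/* map the midlayer-provided sense buffer for a REQUEST SENSE */
static int demo_map_sense(struct device *dev, struct scsi_cmnd *scmd,
			  dma_addr_t *handle)
{
	*handle = dma_map_single(dev, scmd->sense_buffer,
				 SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *handle))
		return -ENOMEM;	/* no irq-context coherent allocation needed */
	return 0;
}

static void demo_unmap_sense(struct device *dev, dma_addr_t handle)
{
	/* the sense data lands directly in scmd->sense_buffer */
	dma_unmap_single(dev, handle, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
}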



[PATCH 5/5] qla2xxx: use lower_32_bits and upper_32_bits instead of reinventing them

2018-10-18 Thread Christoph Hellwig
This also moves the optimization for builds with 32-bit dma_addr_t to
the compiler (where it belongs) instead of opencoding it based on
incorrect assumptions.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/qla2xxx/qla_target.c | 8 
 drivers/scsi/qla2xxx/qla_target.h | 8 
 2 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_target.c 
b/drivers/scsi/qla2xxx/qla_target.c
index 39828207bc1d..443711238c0e 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -2660,9 +2660,9 @@ static void qlt_load_cont_data_segments(struct 
qla_tgt_prm *prm)
cnt < QLA_TGT_DATASEGS_PER_CONT_24XX && prm->seg_cnt;
cnt++, prm->seg_cnt--) {
*dword_ptr++ =
-   cpu_to_le32(pci_dma_lo32
+   cpu_to_le32(lower_32_bits
(sg_dma_address(prm->sg)));
-   *dword_ptr++ = cpu_to_le32(pci_dma_hi32
+   *dword_ptr++ = cpu_to_le32(upper_32_bits
(sg_dma_address(prm->sg)));
*dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg));
 
@@ -2704,9 +2704,9 @@ static void qlt_load_data_segments(struct qla_tgt_prm 
*prm)
(cnt < QLA_TGT_DATASEGS_PER_CMD_24XX) && prm->seg_cnt;
cnt++, prm->seg_cnt--) {
*dword_ptr++ =
-   cpu_to_le32(pci_dma_lo32(sg_dma_address(prm->sg)));
+   cpu_to_le32(lower_32_bits(sg_dma_address(prm->sg)));
 
-   *dword_ptr++ = cpu_to_le32(pci_dma_hi32(
+   *dword_ptr++ = cpu_to_le32(upper_32_bits(
sg_dma_address(prm->sg)));
 
*dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg));
diff --git a/drivers/scsi/qla2xxx/qla_target.h 
b/drivers/scsi/qla2xxx/qla_target.h
index 91403269b204..085782db911c 100644
--- a/drivers/scsi/qla2xxx/qla_target.h
+++ b/drivers/scsi/qla2xxx/qla_target.h
@@ -771,14 +771,6 @@ int qla2x00_wait_for_hba_online(struct scsi_qla_host *);
 #defineFC_TM_REJECT4
 #define FC_TM_FAILED5
 
-#if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G)
-#define pci_dma_lo32(a) (a & 0xffffffff)
-#define pci_dma_hi32(a) ((((a) >> 16) >> 16) & 0xffffffff)
-#else
-#define pci_dma_lo32(a) (a & 0xffffffff)
-#define pci_dma_hi32(a) 0
-#endif
-
 #define QLA_TGT_SENSE_VALID(sense)  ((sense != NULL) && \
(((const uint8_t *)(sense))[0] & 0x70) == 0x70)
 
-- 
2.19.1
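
For illustration (made-up descriptor helper): lower_32_bits()/upper_32_bits()
from <linux/kernel.h> express the same split, and when dma_addr_t is only
32 bits wide the compiler already reduces upper_32_bits() to 0, which is what
the removed #ifdef tried to do by hand.

#include <linux/kernel.h>
#include <linux/types.h>

static void demo_fill_descriptor(u32 *dword_ptr, dma_addr_t addr, u32 len)
{
	dword_ptr[0] = lower_32_bits(addr);	/* bits 31..0 */
	dword_ptr[1] = upper_32_bits(addr);	/* 0 if dma_addr_t is 32-bit */
	dword_ptr[2] = len;
	/* real hardware descriptors would also cpu_to_le32() these values */
}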



[PATCH 4/5] qla1280: properly handle 64-bit DMA

2018-10-18 Thread Christoph Hellwig
CONFIG_HIGHMEM is not in fact an indicator for > 32-bit dma addressing.
Given that the driver is a bit weird and wants a compile time selection,
switch to checking CONFIG_ARCH_DMA_ADDR_T_64BIT instead.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/qla1280.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
index f19e8d192d36..9c5b67304a76 100644
--- a/drivers/scsi/qla1280.c
+++ b/drivers/scsi/qla1280.c
@@ -383,10 +383,7 @@
 
 #include "qla1280.h"
 
-#ifndef BITS_PER_LONG
-#error "BITS_PER_LONG not defined!"
-#endif
-#if (BITS_PER_LONG == 64) || defined CONFIG_HIGHMEM
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 #define QLA_64BIT_PTR  1
 #endif
 
-- 
2.19.1



fix up a few drivers for 64-bit dma addresses

2018-10-18 Thread Christoph Hellwig
Some drivers make very odd decisions on when to use support for
64-bit addressing.  Fix this up a bit.


[PATCH 2/5] ips: properly handle 64-bit DMA

2018-10-18 Thread Christoph Hellwig
CONFIG_HIGHMEM64G is only one (and these days unusual) way to indicate
that > 32-bit dma addresses are possible.  Replace it with a check of the
dma_addr_t size.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/ips.c | 2 +-
 drivers/scsi/ips.h | 6 --
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index 679321e96a86..70a776dc0a02 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -6926,7 +6926,7 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
 * it!  Also, don't use 64bit addressing if dma addresses
 * are guaranteed to be < 4G.
 */
-   if (IPS_ENABLE_DMA64 && IPS_HAS_ENH_SGLIST(ha) &&
+   if (sizeof(dma_addr_t) > 4 && IPS_HAS_ENH_SGLIST(ha) &&
!dma_set_mask(&ha->pcidev->dev, DMA_BIT_MASK(64))) {
(ha)->flags |= IPS_HA_ENH_SG;
} else {
diff --git a/drivers/scsi/ips.h b/drivers/scsi/ips.h
index 42c180e3938b..6c0678fb9a67 100644
--- a/drivers/scsi/ips.h
+++ b/drivers/scsi/ips.h
@@ -96,12 +96,6 @@
   #define __iomem
#endif
 
-   #if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G)
-  #define IPS_ENABLE_DMA64(1)
-   #else
-  #define IPS_ENABLE_DMA64(0)
-   #endif
-
/*
 * Adapter address map equates
 */
-- 
2.19.1



[PATCH 1/5] ips: use lower_32_bits and upper_32_bits instead of reinventing them

2018-10-18 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/ips.c | 6 +++---
 drivers/scsi/ips.h | 3 ---
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index ee8a1ecd58fd..679321e96a86 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -1801,13 +1801,13 @@ ips_fill_scb_sg_single(ips_ha_t * ha, dma_addr_t 
busaddr,
}
if (IPS_USE_ENH_SGLIST(ha)) {
scb->sg_list.enh_list[indx].address_lo =
-   cpu_to_le32(pci_dma_lo32(busaddr));
+   cpu_to_le32(lower_32_bits(busaddr));
scb->sg_list.enh_list[indx].address_hi =
-   cpu_to_le32(pci_dma_hi32(busaddr));
+   cpu_to_le32(upper_32_bits(busaddr));
scb->sg_list.enh_list[indx].length = cpu_to_le32(e_len);
} else {
scb->sg_list.std_list[indx].address =
-   cpu_to_le32(pci_dma_lo32(busaddr));
+   cpu_to_le32(lower_32_bits(busaddr));
scb->sg_list.std_list[indx].length = cpu_to_le32(e_len);
}
 
diff --git a/drivers/scsi/ips.h b/drivers/scsi/ips.h
index db546171e97f..42c180e3938b 100644
--- a/drivers/scsi/ips.h
+++ b/drivers/scsi/ips.h
@@ -96,9 +96,6 @@
   #define __iomem
#endif
 
-   #define pci_dma_hi32(a) ((a >> 16) >> 16)
-   #define pci_dma_lo32(a) (a & 0xffffffff)
-
#if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G)
   #define IPS_ENABLE_DMA64        (1)
#else
-- 
2.19.1
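
[Editor's illustration of the helpers this patch switches to: lower_32_bits()
and upper_32_bits() come from <linux/kernel.h>.  The small userspace
re-implementation below is a paraphrase, not the kernel header itself; it shows
what the helpers do and why upper_32_bits() shifts by 16 twice rather than by
32 once (a single 32-bit shift of a 32-bit value would be undefined).]

#include <stdint.h>
#include <stdio.h>

/* paraphrased from <linux/kernel.h> */
#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))

int main(void)
{
        uint64_t busaddr = 0x123456789abcdef0ULL;

        /* split a 64-bit bus address into the two 32-bit halves a
         * hardware descriptor typically wants */
        printf("hi=0x%08x lo=0x%08x\n",
               (unsigned)upper_32_bits(busaddr),
               (unsigned)lower_32_bits(busaddr));
        return 0;
}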



[PATCH 3/5] qla1280: use lower_32_bits and upper_32_bits instead of reinventing them

2018-10-18 Thread Christoph Hellwig
This also moves the optimization for builds with 32-bit dma_addr_t to
the compiler (where it belongs) instead of opencoding it.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/qla1280.c | 47 ++
 1 file changed, 20 insertions(+), 27 deletions(-)

diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
index 15a50cc7e4b3..f19e8d192d36 100644
--- a/drivers/scsi/qla1280.c
+++ b/drivers/scsi/qla1280.c
@@ -390,13 +390,6 @@
 #define QLA_64BIT_PTR  1
 #endif
 
-#ifdef QLA_64BIT_PTR
-#define pci_dma_hi32(a)        ((a >> 16) >> 16)
-#else
-#define pci_dma_hi32(a)        0
-#endif
-#define pci_dma_lo32(a)        (a & 0xffffffff)
-
 #define NVRAM_DELAY()  udelay(500) /* 2 microseconds */
 
 #if defined(__ia64__) && !defined(ia64_platform_is)
@@ -1790,8 +1783,8 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
mb[4] = cnt;
mb[3] = ha->request_dma & 0xffff;
mb[2] = (ha->request_dma >> 16) & 0xffff;
-   mb[7] = pci_dma_hi32(ha->request_dma) & 0xffff;
-   mb[6] = pci_dma_hi32(ha->request_dma) >> 16;
+   mb[7] = upper_32_bits(ha->request_dma) & 0xffff;
+   mb[6] = upper_32_bits(ha->request_dma) >> 16;
dprintk(2, "%s: op=%d  0x%p = 0x%4x,0x%4x,0x%4x,0x%4x\n",
__func__, mb[0],
(void *)(long)ha->request_dma,
@@ -1810,8 +1803,8 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
mb[4] = cnt;
mb[3] = p_tbuf & 0xffff;
mb[2] = (p_tbuf >> 16) & 0xffff;
-   mb[7] = pci_dma_hi32(p_tbuf) & 0xffff;
-   mb[6] = pci_dma_hi32(p_tbuf) >> 16;
+   mb[7] = upper_32_bits(p_tbuf) & 0xffff;
+   mb[6] = upper_32_bits(p_tbuf) >> 16;
 
err = qla1280_mailbox_command(ha, BIT_4 | BIT_3 | BIT_2 |
BIT_1 | BIT_0, mb);
@@ -1933,8 +1926,8 @@ qla1280_init_rings(struct scsi_qla_host *ha)
mb[3] = ha->request_dma & 0xffff;
mb[2] = (ha->request_dma >> 16) & 0xffff;
mb[4] = 0;
-   mb[7] = pci_dma_hi32(ha->request_dma) & 0xffff;
-   mb[6] = pci_dma_hi32(ha->request_dma) >> 16;
+   mb[7] = upper_32_bits(ha->request_dma) & 0xffff;
+   mb[6] = upper_32_bits(ha->request_dma) >> 16;
if (!(status = qla1280_mailbox_command(ha, BIT_7 | BIT_6 | BIT_4 |
   BIT_3 | BIT_2 | BIT_1 | BIT_0,
   &mb[0]))) {
@@ -1947,8 +1940,8 @@ qla1280_init_rings(struct scsi_qla_host *ha)
mb[3] = ha->response_dma & 0xffff;
mb[2] = (ha->response_dma >> 16) & 0xffff;
mb[5] = 0;
-   mb[7] = pci_dma_hi32(ha->response_dma) & 0xffff;
-   mb[6] = pci_dma_hi32(ha->response_dma) >> 16;
+   mb[7] = upper_32_bits(ha->response_dma) & 0xffff;
+   mb[6] = upper_32_bits(ha->response_dma) >> 16;
status = qla1280_mailbox_command(ha, BIT_7 | BIT_6 | BIT_5 |
 BIT_3 | BIT_2 | BIT_1 | BIT_0,
 &mb[0]);
@@ -2914,13 +2907,13 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, 
struct srb * sp)
 SCSI_BUS_32(cmd));
 #endif
*dword_ptr++ =
-   cpu_to_le32(pci_dma_lo32(dma_handle));
+   cpu_to_le32(lower_32_bits(dma_handle));
*dword_ptr++ =
-   cpu_to_le32(pci_dma_hi32(dma_handle));
+   cpu_to_le32(upper_32_bits(dma_handle));
*dword_ptr++ = cpu_to_le32(sg_dma_len(s));
dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n",
-   cpu_to_le32(pci_dma_hi32(dma_handle)),
-   cpu_to_le32(pci_dma_lo32(dma_handle)),
+   cpu_to_le32(upper_32_bits(dma_handle)),
+   cpu_to_le32(lower_32_bits(dma_handle)),
cpu_to_le32(sg_dma_len(sg_next(s))));
remseg--;
}
@@ -2976,14 +2969,14 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, 
struct srb * sp)
 SCSI_BUS_32(cmd));
 #endif
*dword_ptr++ =
-   cpu_to_le32(pci_dma_lo32(dma_handle));
+   cpu_t
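
[Editor's note on "moves the optimization ... to the compiler": with a 32-bit
dma_addr_t, upper_32_bits(a) expands to ((a >> 16) >> 16), which the compiler
can prove is always zero and eliminate, so the old "#define pci_dma_hi32(a) 0"
special case becomes unnecessary.  A hedged two-line illustration built on the
mailbox setup from the patch above:]

mb[7] = upper_32_bits(ha->request_dma) & 0xffff; /* folds to 0 on 32-bit dma_addr_t */
mb[6] = upper_32_bits(ha->request_dma) >> 16;    /* likewise 0 */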

[PATCH 2/3] wd719x: use per-command private data

2018-10-18 Thread Christoph Hellwig
Add the SCB onto the scsi command allocation and use dma streaming
mappings for it only when in use.  This avoids possibly calling
dma_alloc_coherent under a lock or even in irq context, while also
making the code simpler.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 95 ++-
 drivers/scsi/wd719x.h |  1 -
 2 files changed, 39 insertions(+), 57 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index 7b05bbcfb186..d47190f08ed6 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -153,8 +153,6 @@ static int wd719x_direct_cmd(struct wd719x *wd, u8 opcode, 
u8 dev, u8 lun,
 
 static void wd719x_destroy(struct wd719x *wd)
 {
-   struct wd719x_scb *scb;
-
/* stop the RISC */
if (wd719x_direct_cmd(wd, WD719X_CMD_SLEEP, 0, 0, 0, 0,
  WD719X_WAIT_FOR_RISC))
@@ -164,10 +162,6 @@ static void wd719x_destroy(struct wd719x *wd)
 
WARN_ON_ONCE(!list_empty(&wd->active_scbs));
 
-   /* free all SCBs */
-   list_for_each_entry(scb, >free_scbs, list)
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
-   scb->phys);
/* free internal buffers */
pci_free_consistent(wd->pdev, wd->fw_size, wd->fw_virt, wd->fw_phys);
wd->fw_virt = NULL;
@@ -180,18 +174,20 @@ static void wd719x_destroy(struct wd719x *wd)
free_irq(wd->pdev->irq, wd);
 }
 
-/* finish a SCSI command, mark SCB (if any) as free, unmap buffers */
-static void wd719x_finish_cmd(struct scsi_cmnd *cmd, int result)
+/* finish a SCSI command, unmap buffers */
+static void wd719x_finish_cmd(struct wd719x_scb *scb, int result)
 {
+   struct scsi_cmnd *cmd = scb->cmd;
struct wd719x *wd = shost_priv(cmd->device->host);
-   struct wd719x_scb *scb = (struct wd719x_scb *) cmd->host_scribble;
 
-   if (scb) {
-   list_move(&scb->list, &wd->free_scbs);
-   dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle,
-SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
-   scsi_dma_unmap(cmd);
-   }
+   list_del(&scb->list);
+
+   dma_unmap_single(&wd->pdev->dev, scb->phys,
+   sizeof(struct wd719x_scb), DMA_TO_DEVICE);
+   scsi_dma_unmap(cmd);
+   dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle,
+SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+
cmd->result = result << 16;
cmd->scsi_done(cmd);
 }
@@ -201,36 +197,10 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
 {
int i, count_sg;
unsigned long flags;
-   struct wd719x_scb *scb;
+   struct wd719x_scb *scb = scsi_cmd_priv(cmd);
struct wd719x *wd = shost_priv(sh);
-   dma_addr_t phys;
-
-   cmd->host_scribble = NULL;
 
-   /* get a free SCB - either from existing ones or allocate a new one */
-   spin_lock_irqsave(wd->sh->host_lock, flags);
-   scb = list_first_entry_or_null(&wd->free_scbs, struct wd719x_scb, list);
-   if (scb) {
-   list_del(&scb->list);
-   phys = scb->phys;
-   } else {
-   spin_unlock_irqrestore(wd->sh->host_lock, flags);
-   scb = pci_alloc_consistent(wd->pdev, sizeof(struct wd719x_scb),
-  &phys);
-   spin_lock_irqsave(wd->sh->host_lock, flags);
-   if (!scb) {
-   dev_err(&wd->pdev->dev, "unable to allocate SCB\n");
-   wd719x_finish_cmd(cmd, DID_ERROR);
-   spin_unlock_irqrestore(wd->sh->host_lock, flags);
-   return 0;
-   }
-   }
-   memset(scb, 0, sizeof(struct wd719x_scb));
-   list_add(&scb->list, &wd->active_scbs);
-
-   scb->phys = phys;
scb->cmd = cmd;
-   cmd->host_scribble = (char *) scb;
 
scb->CDB_tag = 0;   /* Tagged queueing not supported yet */
scb->devid = cmd->device->id;
@@ -243,6 +213,8 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, struct 
scsi_cmnd *cmd)
scb->sense_buf_length = SCSI_SENSE_BUFFERSIZE;
cmd->SCp.dma_handle = dma_map_single(&wd->pdev->dev, cmd->sense_buffer,
SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+   if (dma_mapping_error(&wd->pdev->dev, cmd->SCp.dma_handle))
+   goto out_error;
scb->sense_buf = cpu_to_le32(cmd->SCp.dma_handle);
 
/* request autosense */
@@ -257,11 +229,8 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, 
struct scsi_cmnd *cmd)
 
/* Scather/gather */
count_sg = scsi_dma_map(cmd);
-   if (count_sg < 0) {
-   wd719x_finish_cmd(cmd, DID_ERROR);
-   spin_unlock_irqr
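
[Editor's sketch of the mechanism this patch relies on (not a verbatim excerpt
from it): setting .cmd_size in the scsi_host_template makes the midlayer
allocate that many extra bytes behind every struct scsi_cmnd, and
scsi_cmd_priv() returns a pointer to that area, so the driver no longer needs
its own SCB pool or coherent allocations.  The wd719x names below follow the
driver; error handling and the rest of queuecommand are elided.]

static int wd719x_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
{
        struct wd719x *wd = shost_priv(sh);
        struct wd719x_scb *scb = scsi_cmd_priv(cmd);    /* per-command SCB */

        /* stream-map the SCB only for the lifetime of this command */
        scb->phys = dma_map_single(&wd->pdev->dev, scb, sizeof(*scb),
                                   DMA_TO_DEVICE);
        if (dma_mapping_error(&wd->pdev->dev, scb->phys))
                return SCSI_MLQUEUE_HOST_BUSY;
        /* ... build and submit the SCB ... */
        return 0;
}

static struct scsi_host_template wd719x_template = {
        /* ... */
        .queuecommand   = wd719x_queuecommand,
        .cmd_size       = sizeof(struct wd719x_scb),
};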

[PATCH 3/3] wd719x: always use generic DMA API

2018-10-18 Thread Christoph Hellwig
The wd719x driver currently uses a mix of the legacy PCI DMA and
the generic DMA APIs.  Switch it over to the generic DMA API entirely.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 32 +---
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index d47190f08ed6..76442abef2e3 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -163,13 +163,14 @@ static void wd719x_destroy(struct wd719x *wd)
WARN_ON_ONCE(!list_empty(>active_scbs));
 
/* free internal buffers */
-   pci_free_consistent(wd->pdev, wd->fw_size, wd->fw_virt, wd->fw_phys);
+   dma_free_coherent(>pdev->dev, wd->fw_size, wd->fw_virt,
+ wd->fw_phys);
wd->fw_virt = NULL;
-   pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
-   wd->hash_phys);
+   dma_free_coherent(>pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
+ wd->hash_phys);
wd->hash_virt = NULL;
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param),
-   wd->params, wd->params_phys);
+   dma_free_coherent(>pdev->dev, sizeof(struct wd719x_host_param),
+ wd->params, wd->params_phys);
wd->params = NULL;
free_irq(wd->pdev->irq, wd);
 }
@@ -313,8 +314,8 @@ static int wd719x_chip_init(struct wd719x *wd)
wd->fw_size = ALIGN(fw_wcs->size, 4) + fw_risc->size;
 
if (!wd->fw_virt)
-   wd->fw_virt = pci_alloc_consistent(wd->pdev, wd->fw_size,
-  >fw_phys);
+   wd->fw_virt = dma_alloc_coherent(>pdev->dev, wd->fw_size,
+>fw_phys, GFP_KERNEL);
if (!wd->fw_virt) {
ret = -ENOMEM;
goto wd719x_init_end;
@@ -801,17 +802,18 @@ static int wd719x_board_found(struct Scsi_Host *sh)
wd->fw_virt = NULL;
 
/* memory area for host (EEPROM) parameters */
-   wd->params = pci_alloc_consistent(wd->pdev,
- sizeof(struct wd719x_host_param),
- >params_phys);
+   wd->params = dma_alloc_coherent(>pdev->dev,
+   sizeof(struct wd719x_host_param),
+   >params_phys, GFP_KERNEL);
if (!wd->params) {
dev_warn(>pdev->dev, "unable to allocate parameter 
buffer\n");
return -ENOMEM;
}
 
/* memory area for the RISC for hash table of outstanding requests */
-   wd->hash_virt = pci_alloc_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE,
->hash_phys);
+   wd->hash_virt = dma_alloc_coherent(>pdev->dev,
+  WD719X_HASH_TABLE_SIZE,
+  >hash_phys, GFP_KERNEL);
if (!wd->hash_virt) {
dev_warn(>pdev->dev, "unable to allocate hash buffer\n");
ret = -ENOMEM;
@@ -843,10 +845,10 @@ static int wd719x_board_found(struct Scsi_Host *sh)
 fail_free_irq:
free_irq(wd->pdev->irq, wd);
 fail_free_hash:
-   pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
+   dma_free_coherent(>pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt,
wd->hash_phys);
 fail_free_params:
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param),
+   dma_free_coherent(>pdev->dev, sizeof(struct wd719x_host_param),
wd->params, wd->params_phys);
 
return ret;
@@ -879,7 +881,7 @@ static int wd719x_pci_probe(struct pci_dev *pdev, const 
struct pci_device_id *d)
if (err)
goto fail;
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(>dev, DMA_BIT_MASK(32))) {
dev_warn(>dev, "Unable to set 32-bit DMA mask\n");
goto disable_device;
}
-- 
2.19.1
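
[Editor's reference, hedged: the legacy PCI wrappers are thin shims over the
generic API (see include/linux/pci-dma-compat.h), so conversions like this one
are mostly mechanical.  The one behavioral nuance is that pci_alloc_consistent()
implies GFP_ATOMIC, which is why converted call sites can often relax to
GFP_KERNEL, as this patch does in the probe path.]

/*
 * pci_set_dma_mask(pdev, m)           ->  dma_set_mask(&pdev->dev, m)
 * pci_alloc_consistent(pdev, sz, &h)  ->  dma_alloc_coherent(&pdev->dev, sz, &h, GFP_ATOMIC)
 * pci_free_consistent(pdev, sz, v, h) ->  dma_free_coherent(&pdev->dev, sz, v, h)
 * pci_map_single(pdev, p, sz, dir)    ->  dma_map_single(&pdev->dev, p, sz, dir)
 * pci_unmap_single(pdev, a, sz, dir)  ->  dma_unmap_single(&pdev->dev, a, sz, dir)
 */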



[PATCH 1/3] wd719x: there should be no active SCBs on removal

2018-10-18 Thread Christoph Hellwig
So warn on that case instead of trying to free them, which would be fatal
in case we actually had active ones.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/wd719x.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c
index 974bfb3f30f4..7b05bbcfb186 100644
--- a/drivers/scsi/wd719x.c
+++ b/drivers/scsi/wd719x.c
@@ -162,10 +162,9 @@ static void wd719x_destroy(struct wd719x *wd)
/* disable RISC */
wd719x_writeb(wd, WD719X_PCI_MODE_SELECT, 0);
 
+   WARN_ON_ONCE(!list_empty(&wd->active_scbs));
+
/* free all SCBs */
-   list_for_each_entry(scb, >active_scbs, list)
-   pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
-   scb->phys);
list_for_each_entry(scb, >free_scbs, list)
pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb,
scb->phys);
-- 
2.19.1



dma related cleanups for wd719x

2018-10-18 Thread Christoph Hellwig
Hi Ondrej,

can you look over this series, which cleans up a few dma-related
bits in the wd719x driver?


Re: aacraid: latest driver results in Host adapter abort request. / Outstanding commands on (0,0,0,0):

2018-10-17 Thread Christoph Hellwig
On Tue, Oct 16, 2018 at 07:33:53PM +0200, Stefan Priebe - Profihost AG wrote:
> Hi David,
> 
> can you give as any hint? We're running aroud 120 Adaptec Controllers
> and i don't want to replace them all...

4.15 had a fair amount of aacraid changes.  You can't bisect them by
any chance?


Re: [PATCH] ib_srp: Remove WARN_ON in srp_terminate_io()

2018-10-17 Thread Christoph Hellwig
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index e52b9d3c0bd6..c777b36ba62a 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -483,6 +483,8 @@ nvme_fc_signal_discovery_scan(struct nvme_fc_lport *lport,
>   char hostaddr[FCNVME_TRADDR_LENGTH];/* NVMEFC_HOST_TRADDR=...*/
>   char tgtaddr[FCNVME_TRADDR_LENGTH]; /* NVMEFC_TRADDR=...*/
>   char *envp[4] = { "FC_EVENT=nvmediscovery", hostaddr, tgtaddr, NULL };
> + char *aen_envp[5] = { "NVME_EVENT=discovery", "NVME_TRTYPE=fc",
> +   hostaddr, tgtaddr, NULL };

I don't think this belongs into the patch..


[PATCH 21/28] qla1280: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/qla1280.c | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
index 390775d5c918..15a50cc7e4b3 100644
--- a/drivers/scsi/qla1280.c
+++ b/drivers/scsi/qla1280.c
@@ -1750,7 +1750,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
uint8_t *sp, *tbuf;
dma_addr_t p_tbuf;
 
-   tbuf = pci_alloc_consistent(ha->pdev, 8000, _tbuf);
+   tbuf = dma_alloc_coherent(>pdev->dev, 8000, _tbuf, GFP_KERNEL);
if (!tbuf)
return -ENOMEM;
 #endif
@@ -1841,7 +1841,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
 
  out:
 #if DUMP_IT_BACK
-   pci_free_consistent(ha->pdev, 8000, tbuf, p_tbuf);
+   dma_free_coherent(>pdev->dev, 8000, tbuf, p_tbuf);
 #endif
return err;
 }
@@ -4259,8 +4259,8 @@ qla1280_probe_one(struct pci_dev *pdev, const struct 
pci_device_id *id)
ha->devnum = devnum;/* specifies microcode load address */
 
 #ifdef QLA_64BIT_PTR
-   if (pci_set_dma_mask(ha->pdev, DMA_BIT_MASK(64))) {
-   if (pci_set_dma_mask(ha->pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(>pdev->dev, DMA_BIT_MASK(64))) {
+   if (dma_set_mask(>pdev->dev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING "scsi(%li): Unable to set a "
   "suitable DMA mask - aborting\n", ha->host_no);
error = -ENODEV;
@@ -4270,7 +4270,7 @@ qla1280_probe_one(struct pci_dev *pdev, const struct 
pci_device_id *id)
dprintk(2, "scsi(%li): 64 Bit PCI Addressing Enabled\n",
ha->host_no);
 #else
-   if (pci_set_dma_mask(ha->pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(>pdev->dev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING "scsi(%li): Unable to set a "
   "suitable DMA mask - aborting\n", ha->host_no);
error = -ENODEV;
@@ -4278,17 +4278,17 @@ qla1280_probe_one(struct pci_dev *pdev, const struct 
pci_device_id *id)
}
 #endif
 
-   ha->request_ring = pci_alloc_consistent(ha->pdev,
+   ha->request_ring = dma_alloc_coherent(>pdev->dev,
((REQUEST_ENTRY_CNT + 1) * sizeof(request_t)),
-   >request_dma);
+   >request_dma, GFP_KERNEL);
if (!ha->request_ring) {
printk(KERN_INFO "qla1280: Failed to get request memory\n");
goto error_put_host;
}
 
-   ha->response_ring = pci_alloc_consistent(ha->pdev,
+   ha->response_ring = dma_alloc_coherent(>pdev->dev,
((RESPONSE_ENTRY_CNT + 1) * sizeof(struct response)),
-   >response_dma);
+   >response_dma, GFP_KERNEL);
if (!ha->response_ring) {
printk(KERN_INFO "qla1280: Failed to get response memory\n");
goto error_free_request_ring;
@@ -4370,11 +4370,11 @@ qla1280_probe_one(struct pci_dev *pdev, const struct 
pci_device_id *id)
release_region(host->io_port, 0xff);
 #endif
  error_free_response_ring:
-   pci_free_consistent(ha->pdev,
+   dma_free_coherent(>pdev->dev,
((RESPONSE_ENTRY_CNT + 1) * sizeof(struct response)),
ha->response_ring, ha->response_dma);
  error_free_request_ring:
-   pci_free_consistent(ha->pdev,
+   dma_free_coherent(>pdev->dev,
((REQUEST_ENTRY_CNT + 1) * sizeof(request_t)),
ha->request_ring, ha->request_dma);
  error_put_host:
@@ -4404,10 +4404,10 @@ qla1280_remove_one(struct pci_dev *pdev)
release_region(host->io_port, 0xff);
 #endif
 
-   pci_free_consistent(ha->pdev,
+   dma_free_coherent(>pdev->dev,
((REQUEST_ENTRY_CNT + 1) * (sizeof(request_t))),
ha->request_ring, ha->request_dma);
-   pci_free_consistent(ha->pdev,
+   dma_free_coherent(>pdev->dev,
((RESPONSE_ENTRY_CNT + 1) * (sizeof(struct response))),
ha->response_ring, ha->response_dma);
 
-- 
2.19.1



[PATCH 24/28] snic: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/snic/snic_disc.c |  7 ---
 drivers/scsi/snic/snic_io.c   | 25 +
 drivers/scsi/snic/snic_main.c | 24 ++--
 drivers/scsi/snic/snic_scsi.c | 11 +--
 drivers/scsi/snic/vnic_dev.c  | 29 ++---
 5 files changed, 38 insertions(+), 58 deletions(-)

diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
index b106596cc0cf..e9ccfb97773f 100644
--- a/drivers/scsi/snic/snic_disc.c
+++ b/drivers/scsi/snic/snic_disc.c
@@ -111,8 +111,8 @@ snic_queue_report_tgt_req(struct snic *snic)
 
SNIC_BUG_ON((((unsigned long)buf) % SNIC_SG_DESC_ALIGN) != 0);
 
-   pa = pci_map_single(snic->pdev, buf, buf_len, PCI_DMA_FROMDEVICE);
-   if (pci_dma_mapping_error(snic->pdev, pa)) {
+   pa = dma_map_single(>pdev->dev, buf, buf_len, DMA_FROM_DEVICE);
+   if (dma_mapping_error(>pdev->dev, pa)) {
SNIC_HOST_ERR(snic->shost,
  "Rpt-tgt rspbuf %p: PCI DMA Mapping Failed\n",
  buf);
@@ -138,7 +138,8 @@ snic_queue_report_tgt_req(struct snic *snic)
 
ret = snic_queue_wq_desc(snic, rqi->req, rqi->req_len);
if (ret) {
-   pci_unmap_single(snic->pdev, pa, buf_len, PCI_DMA_FROMDEVICE);
+   dma_unmap_single(>pdev->dev, pa, buf_len,
+DMA_FROM_DEVICE);
kfree(buf);
rqi->sge_va = 0;
snic_release_untagged_req(snic, rqi);
diff --git a/drivers/scsi/snic/snic_io.c b/drivers/scsi/snic/snic_io.c
index 8e69548395b9..159ee94d2a55 100644
--- a/drivers/scsi/snic/snic_io.c
+++ b/drivers/scsi/snic/snic_io.c
@@ -102,7 +102,8 @@ snic_free_wq_buf(struct vnic_wq *wq, struct vnic_wq_buf 
*buf)
struct snic_req_info *rqi = NULL;
unsigned long flags;
 
-   pci_unmap_single(snic->pdev, buf->dma_addr, buf->len, PCI_DMA_TODEVICE);
+   dma_unmap_single(>pdev->dev, buf->dma_addr, buf->len,
+DMA_TO_DEVICE);
 
rqi = req_to_rqi(req);
spin_lock_irqsave(>spl_cmd_lock, flags);
@@ -172,8 +173,8 @@ snic_queue_wq_desc(struct snic *snic, void *os_buf, u16 len)
snic_print_desc(__func__, os_buf, len);
 
/* Map request buffer */
-   pa = pci_map_single(snic->pdev, os_buf, len, PCI_DMA_TODEVICE);
-   if (pci_dma_mapping_error(snic->pdev, pa)) {
+   pa = dma_map_single(>pdev->dev, os_buf, len, DMA_TO_DEVICE);
+   if (dma_mapping_error(>pdev->dev, pa)) {
SNIC_HOST_ERR(snic->shost, "qdesc: PCI DMA Mapping Fail.\n");
 
return -ENOMEM;
@@ -186,7 +187,7 @@ snic_queue_wq_desc(struct snic *snic, void *os_buf, u16 len)
spin_lock_irqsave(>wq_lock[q_num], flags);
desc_avail = snic_wqdesc_avail(snic, q_num, req->hdr.type);
if (desc_avail <= 0) {
-   pci_unmap_single(snic->pdev, pa, len, PCI_DMA_TODEVICE);
+   dma_unmap_single(>pdev->dev, pa, len, DMA_TO_DEVICE);
req->req_pa = 0;
spin_unlock_irqrestore(>wq_lock[q_num], flags);
atomic64_inc(>s_stats.misc.wq_alloc_fail);
@@ -350,29 +351,29 @@ snic_req_free(struct snic *snic, struct snic_req_info 
*rqi)
 
if (rqi->abort_req) {
if (rqi->abort_req->req_pa)
-   pci_unmap_single(snic->pdev,
+   dma_unmap_single(>pdev->dev,
 rqi->abort_req->req_pa,
 sizeof(struct snic_host_req),
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
 
mempool_free(rqi->abort_req, snic->req_pool[SNIC_REQ_TM_CACHE]);
}
 
if (rqi->dr_req) {
if (rqi->dr_req->req_pa)
-   pci_unmap_single(snic->pdev,
+   dma_unmap_single(>pdev->dev,
 rqi->dr_req->req_pa,
 sizeof(struct snic_host_req),
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
 
mempool_free(rqi->dr_req, snic->req_pool[SNIC_REQ_TM_CACHE]);
}
 
if (rqi->req->req_pa)
-   pci_unmap_single(snic->pdev,
+   dma_unmap_single(>pdev->dev,
 rqi->req->req_pa,
 rqi->req_len,
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
 
mempool_free(rq

[PATCH 26/28] smartpqi: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Tested-by: Don Brace 
Acked-by: Don Brace 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/smartpqi/smartpqi_init.c | 100 +++---
 drivers/scsi/smartpqi/smartpqi_sis.c  |  11 ++-
 2 files changed, 47 insertions(+), 64 deletions(-)

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c 
b/drivers/scsi/smartpqi/smartpqi_init.c
index 2112ea6723c6..a25a07a0b7f0 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -349,16 +349,16 @@ static inline u32 pqi_read_heartbeat_counter(struct 
pqi_ctrl_info *ctrl_info)
 
 static int pqi_map_single(struct pci_dev *pci_dev,
struct pqi_sg_descriptor *sg_descriptor, void *buffer,
-   size_t buffer_length, int data_direction)
+   size_t buffer_length, enum dma_data_direction data_direction)
 {
dma_addr_t bus_address;
 
-   if (!buffer || buffer_length == 0 || data_direction == PCI_DMA_NONE)
+   if (!buffer || buffer_length == 0 || data_direction == DMA_NONE)
return 0;
 
-   bus_address = pci_map_single(pci_dev, buffer, buffer_length,
+   bus_address = dma_map_single(&pci_dev->dev, buffer, buffer_length,
data_direction);
-   if (pci_dma_mapping_error(pci_dev, bus_address))
+   if (dma_mapping_error(&pci_dev->dev, bus_address))
return -ENOMEM;
 
put_unaligned_le64((u64)bus_address, &sg_descriptor->address);
@@ -370,15 +370,15 @@ static int pqi_map_single(struct pci_dev *pci_dev,
 
 static void pqi_pci_unmap(struct pci_dev *pci_dev,
struct pqi_sg_descriptor *descriptors, int num_descriptors,
-   int data_direction)
+   enum dma_data_direction data_direction)
 {
int i;
 
-   if (data_direction == PCI_DMA_NONE)
+   if (data_direction == DMA_NONE)
return;
 
for (i = 0; i < num_descriptors; i++)
-   pci_unmap_single(pci_dev,
+   dma_unmap_single(&pci_dev->dev,
(dma_addr_t)get_unaligned_le64(&descriptors[i].address),
get_unaligned_le32(&descriptors[i].length),
data_direction);
@@ -387,10 +387,9 @@ static void pqi_pci_unmap(struct pci_dev *pci_dev,
 static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info,
struct pqi_raid_path_request *request, u8 cmd,
u8 *scsi3addr, void *buffer, size_t buffer_length,
-   u16 vpd_page, int *pci_direction)
+   u16 vpd_page, enum dma_data_direction *dir)
 {
u8 *cdb;
-   int pci_dir;
 
memset(request, 0, sizeof(*request));
 
@@ -458,23 +457,21 @@ static int pqi_build_raid_path_request(struct 
pqi_ctrl_info *ctrl_info,
 
switch (request->data_direction) {
case SOP_READ_FLAG:
-   pci_dir = PCI_DMA_FROMDEVICE;
+   *dir = DMA_FROM_DEVICE;
break;
case SOP_WRITE_FLAG:
-   pci_dir = PCI_DMA_TODEVICE;
+   *dir = DMA_TO_DEVICE;
break;
case SOP_NO_DIRECTION_FLAG:
-   pci_dir = PCI_DMA_NONE;
+   *dir = DMA_NONE;
break;
default:
-   pci_dir = PCI_DMA_BIDIRECTIONAL;
+   *dir = DMA_BIDIRECTIONAL;
break;
}
 
-   *pci_direction = pci_dir;
-
return pqi_map_single(ctrl_info->pci_dev, &request->sg_descriptors[0],
-   buffer, buffer_length, pci_dir);
+   buffer, buffer_length, *dir);
 }
 
 static inline void pqi_reinit_io_request(struct pqi_io_request *io_request)
@@ -516,21 +513,19 @@ static int pqi_identify_controller(struct pqi_ctrl_info 
*ctrl_info,
struct bmic_identify_controller *buffer)
 {
int rc;
-   int pci_direction;
+   enum dma_data_direction dir;
struct pqi_raid_path_request request;
 
rc = pqi_build_raid_path_request(ctrl_info, &request,
BMIC_IDENTIFY_CONTROLLER, RAID_CTLR_LUNID, buffer,
-   sizeof(*buffer), 0, &pci_direction);
+   sizeof(*buffer), 0, &dir);
if (rc)
return rc;
 
rc = pqi_submit_raid_request_synchronous(ctrl_info, &request, 0,
NULL, NO_TIMEOUT);
 
-   pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1,
-   pci_direction);
-
+   pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir);
return rc;
 }
 
@@ -538,21 +533,19 @@ static int pqi_scsi_inquiry(struct pqi_ctrl_info 
*ctrl_info,
u8 *scsi3addr, u16 vpd_page, void *buffer, size_t buffer_length)
 {
int rc;
-   int pci_direction;
+   enum dma_data_direction dir;
struct pqi_raid_path_request request;
 
rc = pqi_build_raid_path_request(ctrl_info, &request,
INQUIRY, scsi3addr, buffer, buffer_length, vpd_page,
-   &pci_direction);
+  
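
[Editor's note: the reason this conversion is almost purely a type change is
that the legacy PCI_DMA_* constants and enum dma_data_direction share the same
numeric values (per <linux/dma-direction.h>), so passing the enum where an int
used to be changes nothing on the wire:]

/*
 * DMA_BIDIRECTIONAL = 0   (PCI_DMA_BIDIRECTIONAL)
 * DMA_TO_DEVICE     = 1   (PCI_DMA_TODEVICE)
 * DMA_FROM_DEVICE   = 2   (PCI_DMA_FROMDEVICE)
 * DMA_NONE          = 3   (PCI_DMA_NONE)
 */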

[PATCH 28/28] mesh: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/mesh.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c
index 82e01dbe90af..ec6940f2fcb3 100644
--- a/drivers/scsi/mesh.c
+++ b/drivers/scsi/mesh.c
@@ -1915,8 +1915,8 @@ static int mesh_probe(struct macio_dev *mdev, const 
struct of_device_id *match)
/* We use the PCI APIs for now until the generic one gets fixed
 * enough or until we get some macio-specific versions
 */
-   dma_cmd_space = pci_zalloc_consistent(macio_get_pci_dev(mdev),
- ms->dma_cmd_size, _cmd_bus);
+   dma_cmd_space = dma_zalloc_coherent(_get_pci_dev(mdev)->dev,
+   ms->dma_cmd_size, _cmd_bus, GFP_KERNEL);
if (dma_cmd_space == NULL) {
printk(KERN_ERR "mesh: can't allocate DMA table\n");
goto out_unmap;
@@ -1974,7 +1974,7 @@ static int mesh_probe(struct macio_dev *mdev, const 
struct of_device_id *match)
 */
mesh_shutdown(mdev);
set_mesh_power(ms, 0);
-   pci_free_consistent(macio_get_pci_dev(mdev), ms->dma_cmd_size,
+   dma_free_coherent(_get_pci_dev(mdev)->dev, ms->dma_cmd_size,
ms->dma_cmd_space, ms->dma_cmd_bus);
  out_unmap:
iounmap(ms->dma);
@@ -2007,7 +2007,7 @@ static int mesh_remove(struct macio_dev *mdev)
iounmap(ms->dma);
 
/* Free DMA commands memory */
-   pci_free_consistent(macio_get_pci_dev(mdev), ms->dma_cmd_size,
+   dma_free_coherent(_get_pci_dev(mdev)->dev, ms->dma_cmd_size,
ms->dma_cmd_space, ms->dma_cmd_bus);
 
/* Release memory resources */
-- 
2.19.1



[PATCH 27/28] ips: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/ips.c | 80 --
 1 file changed, 41 insertions(+), 39 deletions(-)

diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index bd6ac6b5980a..378b6f37b613 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -208,7 +208,7 @@ module_param(ips, charp, 0);
 
 #define IPS_DMA_DIR(scb) ((!scb->scsi_cmd || ips_is_passthru(scb->scsi_cmd) || 
\
  DMA_NONE == scb->scsi_cmd->sc_data_direction) ? \
- PCI_DMA_BIDIRECTIONAL : \
+ DMA_BIDIRECTIONAL : \
  scb->scsi_cmd->sc_data_direction)
 
 #ifdef IPS_DEBUG
@@ -1529,11 +1529,12 @@ ips_alloc_passthru_buffer(ips_ha_t * ha, int length)
if (ha->ioctl_data && length <= ha->ioctl_len)
return 0;
/* there is no buffer or it's not big enough, allocate a new one */
-   bigger_buf = pci_alloc_consistent(ha->pcidev, length, _busaddr);
+   bigger_buf = dma_alloc_coherent(>pcidev->dev, length, _busaddr,
+   GFP_KERNEL);
if (bigger_buf) {
/* free the old memory */
-   pci_free_consistent(ha->pcidev, ha->ioctl_len, ha->ioctl_data,
-   ha->ioctl_busaddr);
+   dma_free_coherent(>pcidev->dev, ha->ioctl_len,
+ ha->ioctl_data, ha->ioctl_busaddr);
/* use the new memory */
ha->ioctl_data = (char *) bigger_buf;
ha->ioctl_len = length;
@@ -1678,9 +1679,8 @@ ips_flash_copperhead(ips_ha_t * ha, ips_passthru_t * pt, 
ips_scb_t * scb)
} else if (!ha->flash_data) {
datasize = pt->CoppCP.cmd.flashfw.total_packets *
pt->CoppCP.cmd.flashfw.count;
-   ha->flash_data = pci_alloc_consistent(ha->pcidev,
- datasize,
- 
>flash_busaddr);
+   ha->flash_data = dma_alloc_coherent(>pcidev->dev,
+   datasize, >flash_busaddr, 
GFP_KERNEL);
if (!ha->flash_data){
printk(KERN_WARNING "Unable to allocate a flash 
buffer\n");
return IPS_FAILURE;
@@ -1858,7 +1858,7 @@ ips_flash_firmware(ips_ha_t * ha, ips_passthru_t * pt, 
ips_scb_t * scb)
 
scb->data_len = ha->flash_datasize;
scb->data_busaddr =
-   pci_map_single(ha->pcidev, ha->flash_data, scb->data_len,
+   dma_map_single(>pcidev->dev, ha->flash_data, scb->data_len,
   IPS_DMA_DIR(scb));
scb->flags |= IPS_SCB_MAP_SINGLE;
scb->cmd.flashfw.command_id = IPS_COMMAND_ID(ha, scb);
@@ -1880,8 +1880,8 @@ ips_free_flash_copperhead(ips_ha_t * ha)
if (ha->flash_data == ips_FlashData)
test_and_clear_bit(0, _FlashDataInUse);
else if (ha->flash_data)
-   pci_free_consistent(ha->pcidev, ha->flash_len, ha->flash_data,
-   ha->flash_busaddr);
+   dma_free_coherent(>pcidev->dev, ha->flash_len,
+ ha->flash_data, ha->flash_busaddr);
ha->flash_data = NULL;
 }
 
@@ -4212,7 +4212,7 @@ ips_free(ips_ha_t * ha)
 
if (ha) {
if (ha->enq) {
-   pci_free_consistent(ha->pcidev, sizeof(IPS_ENQ),
+   dma_free_coherent(>pcidev->dev, sizeof(IPS_ENQ),
ha->enq, ha->enq_busaddr);
ha->enq = NULL;
}
@@ -4221,7 +4221,7 @@ ips_free(ips_ha_t * ha)
ha->conf = NULL;
 
if (ha->adapt) {
-   pci_free_consistent(ha->pcidev,
+   dma_free_coherent(>pcidev->dev,
sizeof (IPS_ADAPTER) +
sizeof (IPS_IO_CMD), ha->adapt,
ha->adapt->hw_status_start);
@@ -4229,7 +4229,7 @@ ips_free(ips_ha_t * ha)
}
 
if (ha->logical_drive_info) {
-   pci_free_consistent(ha->pcidev,
+   dma_free_coherent(>pcidev->dev,
sizeof (IPS_LD_INFO),
ha->logical_drive_info,
ha->logical_drive_info_dma_addr);
@@ -42

[PATCH 23/28] qla4xxx: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/qla4xxx/ql4_os.c | 25 -
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
index 0e13349dce57..662c033f428d 100644
--- a/drivers/scsi/qla4xxx/ql4_os.c
+++ b/drivers/scsi/qla4xxx/ql4_os.c
@@ -3382,7 +3382,7 @@ static int qla4xxx_alloc_pdu(struct iscsi_task *task, 
uint8_t opcode)
if (task->data_count) {
task_data->data_dma = dma_map_single(>pdev->dev, task->data,
 task->data_count,
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
}
 
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
@@ -3437,7 +3437,7 @@ static void qla4xxx_task_cleanup(struct iscsi_task *task)
 
if (task->data_count) {
dma_unmap_single(>pdev->dev, task_data->data_dma,
-task->data_count, PCI_DMA_TODEVICE);
+task->data_count, DMA_TO_DEVICE);
}
 
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
@@ -9020,25 +9020,16 @@ static void qla4xxx_remove_adapter(struct pci_dev *pdev)
 /**
  * qla4xxx_config_dma_addressing() - Configure OS DMA addressing method.
  * @ha: HA context
- *
- * At exit, the @ha's flags.enable_64bit_addressing set to indicated
- * supported addressing method.
  */
 static void qla4xxx_config_dma_addressing(struct scsi_qla_host *ha)
 {
-   int retval;
-
/* Update our PCI device dma_mask for full 64 bit mask */
-   if (pci_set_dma_mask(ha->pdev, DMA_BIT_MASK(64)) == 0) {
-   if (pci_set_consistent_dma_mask(ha->pdev, DMA_BIT_MASK(64))) {
-   dev_dbg(>pdev->dev,
- "Failed to set 64 bit PCI consistent mask; "
-  "using 32 bit.\n");
-   retval = pci_set_consistent_dma_mask(ha->pdev,
-DMA_BIT_MASK(32));
-   }
-   } else
-   retval = pci_set_dma_mask(ha->pdev, DMA_BIT_MASK(32));
+   if (dma_set_mask_and_coherent(>pdev->dev, DMA_BIT_MASK(64))) {
+   dev_dbg(>pdev->dev,
+ "Failed to set 64 bit PCI consistent mask; "
+  "using 32 bit.\n");
+   dma_set_mask_and_coherent(>pdev->dev, DMA_BIT_MASK(32));
+   }
 }
 
 static int qla4xxx_slave_alloc(struct scsi_device *sdev)
-- 
2.19.1



[PATCH 22/28] qla2xxx: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/qla2xxx/qla_target.c  | 8 
 drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_target.c 
b/drivers/scsi/qla2xxx/qla_target.c
index 3015f1bbcf1a..39828207bc1d 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -2425,7 +2425,7 @@ static int qlt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
BUG_ON(cmd->sg_cnt == 0);
 
prm->sg = (struct scatterlist *)cmd->sg;
-   prm->seg_cnt = pci_map_sg(cmd->qpair->pdev, cmd->sg,
+   prm->seg_cnt = dma_map_sg(>qpair->pdev->dev, cmd->sg,
cmd->sg_cnt, cmd->dma_data_direction);
if (unlikely(prm->seg_cnt == 0))
goto out_err;
@@ -2452,7 +2452,7 @@ static int qlt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
 
if (cmd->prot_sg_cnt) {
prm->prot_sg  = cmd->prot_sg;
-   prm->prot_seg_cnt = pci_map_sg(cmd->qpair->pdev,
+   prm->prot_seg_cnt = dma_map_sg(>qpair->pdev->dev,
cmd->prot_sg, cmd->prot_sg_cnt,
cmd->dma_data_direction);
if (unlikely(prm->prot_seg_cnt == 0))
@@ -2487,12 +2487,12 @@ static void qlt_unmap_sg(struct scsi_qla_host *vha, 
struct qla_tgt_cmd *cmd)
 
qpair = cmd->qpair;
 
-   pci_unmap_sg(qpair->pdev, cmd->sg, cmd->sg_cnt,
+   dma_unmap_sg(>pdev->dev, cmd->sg, cmd->sg_cnt,
cmd->dma_data_direction);
cmd->sg_mapped = 0;
 
if (cmd->prot_sg_cnt)
-   pci_unmap_sg(qpair->pdev, cmd->prot_sg, cmd->prot_sg_cnt,
+   dma_unmap_sg(>pdev->dev, cmd->prot_sg, cmd->prot_sg_cnt,
cmd->dma_data_direction);
 
if (!cmd->ctx)
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c 
b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index 731a094d2386..65053c066680 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -424,7 +424,7 @@ static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
se_cmd->pi_err = 0;
 
/*
-* qla_target.c:qlt_rdy_to_xfer() will call pci_map_sg() to setup
+* qla_target.c:qlt_rdy_to_xfer() will call dma_map_sg() to setup
 * the SGL mappings into PCIe memory for incoming FCP WRITE data.
 */
return qlt_rdy_to_xfer(cmd);
-- 
2.19.1



[PATCH 25/28] vmw_pscsi: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/vmw_pvscsi.c | 77 +++
 1 file changed, 38 insertions(+), 39 deletions(-)

diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
index 0cd947f78b5b..6e491023fdd8 100644
--- a/drivers/scsi/vmw_pvscsi.c
+++ b/drivers/scsi/vmw_pvscsi.c
@@ -372,9 +372,9 @@ static int pvscsi_map_buffers(struct pvscsi_adapter 
*adapter,
pvscsi_create_sg(ctx, sg, segs);
 
e->flags |= PVSCSI_FLAG_CMD_WITH_SG_LIST;
-   ctx->sglPA = pci_map_single(adapter->dev, ctx->sgl,
-   SGL_SIZE, PCI_DMA_TODEVICE);
-   if (pci_dma_mapping_error(adapter->dev, ctx->sglPA)) {
+   ctx->sglPA = dma_map_single(>dev->dev,
+   ctx->sgl, SGL_SIZE, DMA_TO_DEVICE);
+   if (dma_mapping_error(>dev->dev, ctx->sglPA)) {
scmd_printk(KERN_ERR, cmd,
"vmw_pvscsi: Failed to map ctx 
sglist for DMA.\n");
scsi_dma_unmap(cmd);
@@ -389,9 +389,9 @@ static int pvscsi_map_buffers(struct pvscsi_adapter 
*adapter,
 * In case there is no S/G list, scsi_sglist points
 * directly to the buffer.
 */
-   ctx->dataPA = pci_map_single(adapter->dev, sg, bufflen,
+   ctx->dataPA = dma_map_single(>dev->dev, sg, bufflen,
 cmd->sc_data_direction);
-   if (pci_dma_mapping_error(adapter->dev, ctx->dataPA)) {
+   if (dma_mapping_error(>dev->dev, ctx->dataPA)) {
scmd_printk(KERN_ERR, cmd,
"vmw_pvscsi: Failed to map direct data 
buffer for DMA.\n");
return -ENOMEM;
@@ -417,23 +417,23 @@ static void pvscsi_unmap_buffers(const struct 
pvscsi_adapter *adapter,
if (count != 0) {
scsi_dma_unmap(cmd);
if (ctx->sglPA) {
-   pci_unmap_single(adapter->dev, ctx->sglPA,
-SGL_SIZE, PCI_DMA_TODEVICE);
+   dma_unmap_single(>dev->dev, ctx->sglPA,
+SGL_SIZE, DMA_TO_DEVICE);
ctx->sglPA = 0;
}
} else
-   pci_unmap_single(adapter->dev, ctx->dataPA, bufflen,
-cmd->sc_data_direction);
+   dma_unmap_single(>dev->dev, ctx->dataPA,
+bufflen, cmd->sc_data_direction);
}
if (cmd->sense_buffer)
-   pci_unmap_single(adapter->dev, ctx->sensePA,
-SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE);
+   dma_unmap_single(>dev->dev, ctx->sensePA,
+SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
 }
 
 static int pvscsi_allocate_rings(struct pvscsi_adapter *adapter)
 {
-   adapter->rings_state = pci_alloc_consistent(adapter->dev, PAGE_SIZE,
-   >ringStatePA);
+   adapter->rings_state = dma_alloc_coherent(>dev->dev, PAGE_SIZE,
+   >ringStatePA, GFP_KERNEL);
if (!adapter->rings_state)
return -ENOMEM;
 
@@ -441,17 +441,17 @@ static int pvscsi_allocate_rings(struct pvscsi_adapter 
*adapter)
 pvscsi_ring_pages);
adapter->req_depth = adapter->req_pages
* PVSCSI_MAX_NUM_REQ_ENTRIES_PER_PAGE;
-   adapter->req_ring = pci_alloc_consistent(adapter->dev,
-adapter->req_pages * PAGE_SIZE,
->reqRingPA);
+   adapter->req_ring = dma_alloc_coherent(>dev->dev,
+   adapter->req_pages * PAGE_SIZE, >reqRingPA,
+   GFP_KERNEL);
if (!adapter->req_ring)
return -ENOMEM;
 
adapter->cmp_pages = min(PVSCSI_MAX_NUM_PAGES_CMP_RING,
 pvscsi_ring_pages);
-   adapter->cmp_ring = pci_alloc_consistent(adapter->dev,
-adapter->cmp_pages * PAGE_SIZE,
->cmpRingPA);
+   adapter->cmp_ring = dma_alloc_coherent(>dev->dev,
+  

[PATCH 08/28] be2iscsi: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/be2iscsi/be_cmds.c  | 10 ++---
 drivers/scsi/be2iscsi/be_iscsi.c | 13 +++---
 drivers/scsi/be2iscsi/be_main.c  | 72 ++--
 drivers/scsi/be2iscsi/be_mgmt.c  | 27 ++--
 4 files changed, 58 insertions(+), 64 deletions(-)

diff --git a/drivers/scsi/be2iscsi/be_cmds.c b/drivers/scsi/be2iscsi/be_cmds.c
index c10aac4dbc5e..0a6972ee94d7 100644
--- a/drivers/scsi/be2iscsi/be_cmds.c
+++ b/drivers/scsi/be2iscsi/be_cmds.c
@@ -520,7 +520,7 @@ int beiscsi_process_mcc_compl(struct be_ctrl_info *ctrl,
 **/
tag_mem = >ptag_state[tag].tag_mem_state;
if (tag_mem->size) {
-   pci_free_consistent(ctrl->pdev, tag_mem->size,
+   dma_free_coherent(>pdev->dev, tag_mem->size,
tag_mem->va, tag_mem->dma);
tag_mem->size = 0;
}
@@ -1269,12 +1269,12 @@ int beiscsi_check_supported_fw(struct be_ctrl_info 
*ctrl,
struct be_sge *sge = nonembedded_sgl(wrb);
int status = 0;
 
-   nonemb_cmd.va = pci_alloc_consistent(ctrl->pdev,
+   nonemb_cmd.va = dma_alloc_coherent(>pdev->dev,
sizeof(struct be_mgmt_controller_attributes),
-   _cmd.dma);
+   _cmd.dma, GFP_KERNEL);
if (nonemb_cmd.va == NULL) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT,
-   "BG_%d : pci_alloc_consistent failed in %s\n",
+   "BG_%d : dma_alloc_coherent failed in %s\n",
__func__);
return -ENOMEM;
}
@@ -1314,7 +1314,7 @@ int beiscsi_check_supported_fw(struct be_ctrl_info *ctrl,
"BG_%d :  Failed in beiscsi_check_supported_fw\n");
mutex_unlock(>mbox_lock);
if (nonemb_cmd.va)
-   pci_free_consistent(ctrl->pdev, nonemb_cmd.size,
+   dma_free_coherent(>pdev->dev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
 
return status;
diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
index c8f0a2144b44..913290378afb 100644
--- a/drivers/scsi/be2iscsi/be_iscsi.c
+++ b/drivers/scsi/be2iscsi/be_iscsi.c
@@ -1071,9 +1071,9 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
else
req_memsize = sizeof(struct tcp_connect_and_offload_in_v1);
 
-   nonemb_cmd.va = pci_alloc_consistent(phba->ctrl.pdev,
+   nonemb_cmd.va = dma_alloc_coherent(>ctrl.pdev->dev,
req_memsize,
-   _cmd.dma);
+   _cmd.dma, GFP_KERNEL);
if (nonemb_cmd.va == NULL) {
 
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
@@ -1091,7 +1091,7 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
"BS_%d : mgmt_open_connection Failed for cid=%d\n",
beiscsi_ep->ep_cid);
 
-   pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
+   dma_free_coherent(>ctrl.pdev->dev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
beiscsi_free_ep(beiscsi_ep);
return -EAGAIN;
@@ -1104,8 +1104,9 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
"BS_%d : mgmt_open_connection Failed");
 
if (ret != -EBUSY)
-   pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
-   nonemb_cmd.va, nonemb_cmd.dma);
+   dma_free_coherent(>ctrl.pdev->dev,
+   nonemb_cmd.size, nonemb_cmd.va,
+   nonemb_cmd.dma);
 
beiscsi_free_ep(beiscsi_ep);
return ret;
@@ -1118,7 +1119,7 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : mgmt_open_connection Success\n");
 
-   pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
+   dma_free_coherent(>ctrl.pdev->dev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
return 0;
 }
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
index d544453aa466..5278fdc2c52d 100644
--- a/drivers/scsi/be2iscsi/be_main.c
+++ b/drivers/scsi/be2iscsi/be_main.c
@@ -511,18 +511,9 @@ static int beiscsi_enable_pci(struct pci_dev *pcidev)
}
 
pci_set_master(pcidev);
-   ret

[PATCH 17/28] nsp32: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/nsp32.c | 18 ++
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/nsp32.c b/drivers/scsi/nsp32.c
index 8620ac5d6e41..5aac3e801903 100644
--- a/drivers/scsi/nsp32.c
+++ b/drivers/scsi/nsp32.c
@@ -2638,7 +2638,7 @@ static int nsp32_detect(struct pci_dev *pdev)
/*
 * setup DMA 
 */
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
+   if (dma_set_mask(>dev, DMA_BIT_MASK(32)) != 0) {
nsp32_msg (KERN_ERR, "failed to set PCI DMA mask");
goto scsi_unregister;
}
@@ -2646,7 +2646,9 @@ static int nsp32_detect(struct pci_dev *pdev)
/*
 * allocate autoparam DMA resource.
 */
-   data->autoparam = pci_alloc_consistent(pdev, sizeof(nsp32_autoparam), 
&(data->auto_paddr));
+   data->autoparam = dma_alloc_coherent(>dev,
+   sizeof(nsp32_autoparam), &(data->auto_paddr),
+   GFP_KERNEL);
if (data->autoparam == NULL) {
nsp32_msg(KERN_ERR, "failed to allocate DMA memory");
goto scsi_unregister;
@@ -2655,8 +2657,8 @@ static int nsp32_detect(struct pci_dev *pdev)
/*
 * allocate scatter-gather DMA resource.
 */
-   data->sg_list = pci_alloc_consistent(pdev, NSP32_SG_TABLE_SIZE,
-&(data->sg_paddr));
+   data->sg_list = dma_alloc_coherent(>dev, NSP32_SG_TABLE_SIZE,
+   >sg_paddr, GFP_KERNEL);
if (data->sg_list == NULL) {
nsp32_msg(KERN_ERR, "failed to allocate DMA memory");
goto free_autoparam;
@@ -2761,11 +2763,11 @@ static int nsp32_detect(struct pci_dev *pdev)
free_irq(host->irq, data);
 
  free_sg_list:
-   pci_free_consistent(pdev, NSP32_SG_TABLE_SIZE,
+   dma_free_coherent(>dev, NSP32_SG_TABLE_SIZE,
data->sg_list, data->sg_paddr);
 
  free_autoparam:
-   pci_free_consistent(pdev, sizeof(nsp32_autoparam),
+   dma_free_coherent(>dev, sizeof(nsp32_autoparam),
data->autoparam, data->auto_paddr);

  scsi_unregister:
@@ -2780,12 +2782,12 @@ static int nsp32_release(struct Scsi_Host *host)
nsp32_hw_data *data = (nsp32_hw_data *)host->hostdata;
 
if (data->autoparam) {
-   pci_free_consistent(data->Pci, sizeof(nsp32_autoparam),
+   dma_free_coherent(>Pci->dev, sizeof(nsp32_autoparam),
data->autoparam, data->auto_paddr);
}
 
if (data->sg_list) {
-   pci_free_consistent(data->Pci, NSP32_SG_TABLE_SIZE,
+   dma_free_coherent(>Pci->dev, NSP32_SG_TABLE_SIZE,
data->sg_list, data->sg_paddr);
}
 
-- 
2.19.1



[PATCH 12/28] megaraid_mbox: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/megaraid/megaraid_mbox.c | 51 +--
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/scsi/megaraid/megaraid_mbox.c 
b/drivers/scsi/megaraid/megaraid_mbox.c
index 89c85a5a47af..fbae1d88a29b 100644
--- a/drivers/scsi/megaraid/megaraid_mbox.c
+++ b/drivers/scsi/megaraid/megaraid_mbox.c
@@ -457,10 +457,9 @@ megaraid_probe_one(struct pci_dev *pdev, const struct 
pci_device_id *id)
 
// Setup the default DMA mask. This would be changed later on
// depending on hardware capabilities
-   if (pci_set_dma_mask(adapter->pdev, DMA_BIT_MASK(32)) != 0) {
-
+   if (dma_set_mask(>pdev->dev, DMA_BIT_MASK(32))) {
con_log(CL_ANN, (KERN_WARNING
-   "megaraid: pci_set_dma_mask failed:%d\n", __LINE__));
+   "megaraid: dma_set_mask failed:%d\n", __LINE__));
 
goto out_free_adapter;
}
@@ -878,11 +877,12 @@ megaraid_init_mbox(adapter_t *adapter)
adapter->pdev->device == PCI_DEVICE_ID_PERC4_DI_EVERGLADES) ||
(adapter->pdev->vendor == PCI_VENDOR_ID_DELL &&
adapter->pdev->device == PCI_DEVICE_ID_PERC4E_DI_KOBUK)) {
-   if (pci_set_dma_mask(adapter->pdev, DMA_BIT_MASK(64))) {
+   if (dma_set_mask(>pdev->dev, DMA_BIT_MASK(64))) {
con_log(CL_ANN, (KERN_WARNING
"megaraid: DMA mask for 64-bit failed\n"));
 
-   if (pci_set_dma_mask (adapter->pdev, DMA_BIT_MASK(32))) 
{
+   if (dma_set_mask(>pdev->dev,
+   DMA_BIT_MASK(32))) {
con_log(CL_ANN, (KERN_WARNING
"megaraid: 32-bit DMA mask failed\n"));
goto out_free_sysfs_res;
@@ -975,9 +975,9 @@ megaraid_alloc_cmd_packets(adapter_t *adapter)
 * Allocate the common 16-byte aligned memory for the handshake
 * mailbox.
 */
-   raid_dev->una_mbox64 = pci_zalloc_consistent(adapter->pdev,
-sizeof(mbox64_t),
-_dev->una_mbox64_dma);
+   raid_dev->una_mbox64 = dma_zalloc_coherent(>pdev->dev,
+   sizeof(mbox64_t), _dev->una_mbox64_dma,
+   GFP_KERNEL);
 
if (!raid_dev->una_mbox64) {
con_log(CL_ANN, (KERN_WARNING
@@ -1003,8 +1003,8 @@ megaraid_alloc_cmd_packets(adapter_t *adapter)
align;
 
// Allocate memory for commands issued internally
-   adapter->ibuf = pci_zalloc_consistent(pdev, MBOX_IBUF_SIZE,
- >ibuf_dma_h);
+   adapter->ibuf = dma_zalloc_coherent(>dev, MBOX_IBUF_SIZE,
+   >ibuf_dma_h, GFP_KERNEL);
if (!adapter->ibuf) {
 
con_log(CL_ANN, (KERN_WARNING
@@ -1082,7 +1082,7 @@ megaraid_alloc_cmd_packets(adapter_t *adapter)
 
scb->scp= NULL;
scb->state  = SCB_FREE;
-   scb->dma_direction  = PCI_DMA_NONE;
+   scb->dma_direction  = DMA_NONE;
scb->dma_type   = MRAID_DMA_NONE;
scb->dev_channel= -1;
scb->dev_target = -1;
@@ -1098,10 +1098,10 @@ megaraid_alloc_cmd_packets(adapter_t *adapter)
 out_free_scb_list:
kfree(adapter->kscb_list);
 out_free_ibuf:
-   pci_free_consistent(pdev, MBOX_IBUF_SIZE, (void *)adapter->ibuf,
+   dma_free_coherent(>dev, MBOX_IBUF_SIZE, (void *)adapter->ibuf,
adapter->ibuf_dma_h);
 out_free_common_mbox:
-   pci_free_consistent(adapter->pdev, sizeof(mbox64_t),
+   dma_free_coherent(>pdev->dev, sizeof(mbox64_t),
(caddr_t)raid_dev->una_mbox64, raid_dev->una_mbox64_dma);
 
return -1;
@@ -1123,10 +1123,10 @@ megaraid_free_cmd_packets(adapter_t *adapter)
 
kfree(adapter->kscb_list);
 
-   pci_free_consistent(adapter->pdev, MBOX_IBUF_SIZE,
+   dma_free_coherent(>pdev->dev, MBOX_IBUF_SIZE,
(void *)adapter->ibuf, adapter->ibuf_dma_h);
 
-   pci_free_consistent(adapter->pdev, sizeof(mbox64_t),
+   dma_free_coherent(>pdev->dev, sizeof(mbox64_t),
(caddr_t)raid_dev->una_mbox64, raid_dev->una_mbox64_dma);
return;
 }
@@ -2915,9 +2915,8 @@ megaraid_mbox_product_info(adapter_t *adapter)
 * Issue an ENQUIRY3 command to find out certain adapter parameters,

[PATCH 13/28] megaraid_sas: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/megaraid/megaraid_sas_base.c   | 150 ++--
 drivers/scsi/megaraid/megaraid_sas_fusion.c |  16 +--
 2 files changed, 83 insertions(+), 83 deletions(-)

diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c 
b/drivers/scsi/megaraid/megaraid_sas_base.c
index 9aa9590c5373..f35deb109905 100644
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -1330,11 +1330,11 @@ megasas_build_dcdb(struct megasas_instance *instance, 
struct scsi_cmnd *scp,
device_id = MEGASAS_DEV_INDEX(scp);
pthru = (struct megasas_pthru_frame *)cmd->frame;
 
-   if (scp->sc_data_direction == PCI_DMA_TODEVICE)
+   if (scp->sc_data_direction == DMA_TO_DEVICE)
flags = MFI_FRAME_DIR_WRITE;
-   else if (scp->sc_data_direction == PCI_DMA_FROMDEVICE)
+   else if (scp->sc_data_direction == DMA_FROM_DEVICE)
flags = MFI_FRAME_DIR_READ;
-   else if (scp->sc_data_direction == PCI_DMA_NONE)
+   else if (scp->sc_data_direction == DMA_NONE)
flags = MFI_FRAME_DIR_NONE;
 
if (instance->flag_ieee == 1) {
@@ -1428,9 +1428,9 @@ megasas_build_ldio(struct megasas_instance *instance, 
struct scsi_cmnd *scp,
device_id = MEGASAS_DEV_INDEX(scp);
ldio = (struct megasas_io_frame *)cmd->frame;
 
-   if (scp->sc_data_direction == PCI_DMA_TODEVICE)
+   if (scp->sc_data_direction == DMA_TO_DEVICE)
flags = MFI_FRAME_DIR_WRITE;
-   else if (scp->sc_data_direction == PCI_DMA_FROMDEVICE)
+   else if (scp->sc_data_direction == DMA_FROM_DEVICE)
flags = MFI_FRAME_DIR_READ;
 
if (instance->flag_ieee == 1) {
@@ -2240,9 +2240,9 @@ static int megasas_get_ld_vf_affiliation_111(struct 
megasas_instance *instance,
   sizeof(struct MR_LD_VF_AFFILIATION_111));
else {
new_affiliation_111 =
-   pci_zalloc_consistent(instance->pdev,
+   dma_zalloc_coherent(>pdev->dev,
  sizeof(struct 
MR_LD_VF_AFFILIATION_111),
- _affiliation_111_h);
+ _affiliation_111_h, 
GFP_KERNEL);
if (!new_affiliation_111) {
dev_printk(KERN_DEBUG, >pdev->dev, "SR-IOV: 
Couldn't allocate "
   "memory for new affiliation for scsi%d\n",
@@ -2302,7 +2302,7 @@ static int megasas_get_ld_vf_affiliation_111(struct 
megasas_instance *instance,
}
 out:
if (new_affiliation_111) {
-   pci_free_consistent(instance->pdev,
+   dma_free_coherent(>pdev->dev,
sizeof(struct MR_LD_VF_AFFILIATION_111),
new_affiliation_111,
new_affiliation_111_h);
@@ -2347,10 +2347,10 @@ static int megasas_get_ld_vf_affiliation_12(struct 
megasas_instance *instance,
   sizeof(struct MR_LD_VF_AFFILIATION));
else {
new_affiliation =
-   pci_zalloc_consistent(instance->pdev,
+   dma_zalloc_coherent(>pdev->dev,
  (MAX_LOGICAL_DRIVES + 1) *
  sizeof(struct 
MR_LD_VF_AFFILIATION),
- _affiliation_h);
+ _affiliation_h, GFP_KERNEL);
if (!new_affiliation) {
dev_printk(KERN_DEBUG, >pdev->dev, "SR-IOV: 
Couldn't allocate "
   "memory for new affiliation for scsi%d\n",
@@ -2470,7 +2470,7 @@ static int megasas_get_ld_vf_affiliation_12(struct 
megasas_instance *instance,
}
 
if (new_affiliation)
-   pci_free_consistent(instance->pdev,
+   dma_free_coherent(>pdev->dev,
(MAX_LOGICAL_DRIVES + 1) *
sizeof(struct MR_LD_VF_AFFILIATION),
new_affiliation, new_affiliation_h);
@@ -2513,9 +2513,9 @@ int megasas_sriov_start_heartbeat(struct megasas_instance 
*instance,
 
if (initial) {
instance->hb_host_mem =
-   pci_zalloc_consistent(instance->pdev,
+   dma_zalloc_coherent(>pdev->dev,
  sizeof(struct 
MR_CTRL_HB_HOST_MEM),
- >hb_host_mem_h);
+ >hb_host_mem_h

[PATCH 14/28] mpt3sas: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Also simplify setting the DMA mask a bit.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/mpt3sas/mpt3sas_base.c  | 67 
 drivers/scsi/mpt3sas/mpt3sas_ctl.c   | 34 ++--
 drivers/scsi/mpt3sas/mpt3sas_transport.c | 18 ---
 3 files changed, 61 insertions(+), 58 deletions(-)

diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c 
b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 166b607690a1..2500377d0723 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -2259,7 +2259,7 @@ _base_build_sg_scmd(struct MPT3SAS_ADAPTER *ioc,
sges_left = scsi_dma_map(scmd);
if (sges_left < 0) {
sdev_printk(KERN_ERR, scmd->device,
-"pci_map_sg failed: request for %d bytes!\n",
+"scsi_dma_map failed: request for %d bytes!\n",
 scsi_bufflen(scmd));
return -ENOMEM;
}
@@ -2407,7 +2407,7 @@ _base_build_sg_scmd_ieee(struct MPT3SAS_ADAPTER *ioc,
sges_left = scsi_dma_map(scmd);
if (sges_left < 0) {
sdev_printk(KERN_ERR, scmd->device,
-   "pci_map_sg failed: request for %d bytes!\n",
+   "scsi_dma_map failed: request for %d bytes!\n",
scsi_bufflen(scmd));
return -ENOMEM;
}
@@ -2552,39 +2552,37 @@ _base_build_sg_ieee(struct MPT3SAS_ADAPTER *ioc, void 
*psge,
 static int
 _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
 {
+   u64 required_mask, coherent_mask;
struct sysinfo s;
-   u64 consistent_dma_mask;
 
if (ioc->is_mcpu_endpoint)
goto try_32bit;
 
+   required_mask = dma_get_required_mask(>dev);
+   if (sizeof(dma_addr_t) == 4 || required_mask == 32)
+   goto try_32bit;
+
if (ioc->dma_mask)
-   consistent_dma_mask = DMA_BIT_MASK(64);
+   coherent_mask = DMA_BIT_MASK(64);
else
-   consistent_dma_mask = DMA_BIT_MASK(32);
-
-   if (sizeof(dma_addr_t) > 4) {
-   const uint64_t required_mask =
-   dma_get_required_mask(>dev);
-   if ((required_mask > DMA_BIT_MASK(32)) &&
-   !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) &&
-   !pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) {
-   ioc->base_add_sg_single = &_base_add_sg_single_64;
-   ioc->sge_size = sizeof(Mpi2SGESimple64_t);
-   ioc->dma_mask = 64;
-   goto out;
-   }
-   }
+   coherent_mask = DMA_BIT_MASK(32);
+
+   if (dma_set_mask(>dev, DMA_BIT_MASK(64)) ||
+   dma_set_coherent_mask(>dev, coherent_mask))
+   goto try_32bit;
+
+   ioc->base_add_sg_single = &_base_add_sg_single_64;
+   ioc->sge_size = sizeof(Mpi2SGESimple64_t);
+   ioc->dma_mask = 64;
+   goto out;
 
  try_32bit:
-   if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))
-   && !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   ioc->base_add_sg_single = &_base_add_sg_single_32;
-   ioc->sge_size = sizeof(Mpi2SGESimple32_t);
-   ioc->dma_mask = 32;
-   } else
+   if (dma_set_mask_and_coherent(>dev, DMA_BIT_MASK(32)))
return -ENODEV;
 
+   ioc->base_add_sg_single = &_base_add_sg_single_32;
+   ioc->sge_size = sizeof(Mpi2SGESimple32_t);
+   ioc->dma_mask = 32;
  out:
si_meminfo();
ioc_info(ioc, "%d BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld 
kB)\n",
@@ -3777,8 +3775,8 @@ _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc)
}
 
data_length = sizeof(Mpi2FWImageHeader_t);
-   fwpkg_data = pci_alloc_consistent(ioc->pdev, data_length,
-   _data_dma);
+   fwpkg_data = dma_alloc_coherent(>pdev->dev, data_length,
+   _data_dma, GFP_KERNEL);
if (!fwpkg_data) {
ioc_err(ioc, "failure at %s:%d/%s()!\n",
__FILE__, __LINE__, __func__);
@@ -3837,7 +3835,7 @@ _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc)
ioc->base_cmds.status = MPT3_CMD_NOT_USED;
 out:
if (fwpkg_data)
-   pci_free_consistent(ioc->pdev, data_length, fwpkg_data,
+   dma_free_coherent(>pdev->dev, data_length, fwpkg_data,
fwpkg_data_dma);
return r;
 }
@@ -4146,7 +4144,7 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
dexitprintk(ioc, ioc_info(ioc, "%s\n", __func__));
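
[Editor's paraphrase of the simplified mask setup in this patch, hedged and
condensed (it glosses over the driver's separate coherent-mask handling and
ioc bookkeeping): prefer 64-bit addressing only when dma_addr_t can hold it and
the platform actually needs more than 32 bits, otherwise fall back to 32-bit.]

if (sizeof(dma_addr_t) > 4 &&
    dma_get_required_mask(&pdev->dev) > DMA_BIT_MASK(32) &&
    !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
        ioc->dma_mask = 64;     /* use 64-bit SGEs */
else if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
        ioc->dma_mask = 32;     /* fall back to 32-bit SGEs */
else
        return -ENODEV;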
 

[PATCH 19/28] qedf: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/qedf/qedf_main.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 0a5dd5595dd3..d5a4f17fce51 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -2855,12 +2855,12 @@ static int qedf_set_fcoe_pf_param(struct qedf_ctx *qedf)
QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC, "Number of CQs is %d.\n",
   qedf->num_queues);
 
-   qedf->p_cpuq = pci_alloc_consistent(qedf->pdev,
+   qedf->p_cpuq = dma_alloc_coherent(>pdev->dev,
qedf->num_queues * sizeof(struct qedf_glbl_q_params),
-   >hw_p_cpuq);
+   >hw_p_cpuq, GFP_KERNEL);
 
if (!qedf->p_cpuq) {
-   QEDF_ERR(&(qedf->dbg_ctx), "pci_alloc_consistent failed.\n");
+   QEDF_ERR(&(qedf->dbg_ctx), "dma_alloc_coherent failed.\n");
return 1;
}
 
@@ -2929,7 +2929,7 @@ static void qedf_free_fcoe_pf_param(struct qedf_ctx *qedf)
 
if (qedf->p_cpuq) {
size = qedf->num_queues * sizeof(struct qedf_glbl_q_params);
-   pci_free_consistent(qedf->pdev, size, qedf->p_cpuq,
+   dma_free_coherent(&qedf->pdev->dev, size, qedf->p_cpuq,
qedf->hw_p_cpuq);
}
 
-- 
2.19.1



[PATCH 09/28] csiostor: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/csiostor/csio_init.c  |  7 ++-
 drivers/scsi/csiostor/csio_lnode.c |  6 +++---
 drivers/scsi/csiostor/csio_scsi.c  | 12 ++--
 drivers/scsi/csiostor/csio_wr.c| 17 +
 4 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/scsi/csiostor/csio_init.c 
b/drivers/scsi/csiostor/csio_init.c
index ed2dae657964..aa04e4a7aed5 100644
--- a/drivers/scsi/csiostor/csio_init.c
+++ b/drivers/scsi/csiostor/csio_init.c
@@ -210,11 +210,8 @@ csio_pci_init(struct pci_dev *pdev, int *bars)
pci_set_master(pdev);
pci_try_set_mwi(pdev);
 
-   if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
-   pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
-   } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
-   } else {
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
dev_err(>dev, "No suitable DMA available.\n");
goto err_release_regions;
}
diff --git a/drivers/scsi/csiostor/csio_lnode.c 
b/drivers/scsi/csiostor/csio_lnode.c
index cc5611efc7a9..66e58f0a75dc 100644
--- a/drivers/scsi/csiostor/csio_lnode.c
+++ b/drivers/scsi/csiostor/csio_lnode.c
@@ -1845,8 +1845,8 @@ csio_ln_fdmi_init(struct csio_lnode *ln)
/* Allocate Dma buffers for FDMI response Payload */
dma_buf = &ln->mgmt_req->dma_buf;
dma_buf->len = 2048;
-   dma_buf->vaddr = pci_alloc_consistent(hw->pdev, dma_buf->len,
-   &dma_buf->paddr);
+   dma_buf->vaddr = dma_alloc_coherent(&hw->pdev->dev, dma_buf->len,
+   &dma_buf->paddr, GFP_KERNEL);
if (!dma_buf->vaddr) {
csio_err(hw, "Failed to alloc DMA buffer for FDMI!\n");
kfree(ln->mgmt_req);
@@ -1873,7 +1873,7 @@ csio_ln_fdmi_exit(struct csio_lnode *ln)
 
dma_buf = &ln->mgmt_req->dma_buf;
if (dma_buf->vaddr)
-   pci_free_consistent(hw->pdev, dma_buf->len, dma_buf->vaddr,
+   dma_free_coherent(&hw->pdev->dev, dma_buf->len, dma_buf->vaddr,
dma_buf->paddr);
 
kfree(ln->mgmt_req);
diff --git a/drivers/scsi/csiostor/csio_scsi.c 
b/drivers/scsi/csiostor/csio_scsi.c
index dab0d3f9bee1..8c15b7acb4b7 100644
--- a/drivers/scsi/csiostor/csio_scsi.c
+++ b/drivers/scsi/csiostor/csio_scsi.c
@@ -2349,8 +2349,8 @@ csio_scsi_alloc_ddp_bufs(struct csio_scsim *scm, struct 
csio_hw *hw,
}
 
/* Allocate Dma buffers for DDP */
-   ddp_desc->vaddr = pci_alloc_consistent(hw->pdev, unit_size,
-   &ddp_desc->paddr);
+   ddp_desc->vaddr = dma_alloc_coherent(&hw->pdev->dev, unit_size,
+   &ddp_desc->paddr, GFP_KERNEL);
if (!ddp_desc->vaddr) {
csio_err(hw,
 "SCSI response DMA buffer (ddp) allocation"
@@ -2372,8 +2372,8 @@ csio_scsi_alloc_ddp_bufs(struct csio_scsim *scm, struct 
csio_hw *hw,
list_for_each(tmp, &scm->ddp_freelist) {
ddp_desc = (struct csio_dma_buf *) tmp;
tmp = csio_list_prev(tmp);
-   pci_free_consistent(hw->pdev, ddp_desc->len, ddp_desc->vaddr,
-   ddp_desc->paddr);
+   dma_free_coherent(&hw->pdev->dev, ddp_desc->len,
+ ddp_desc->vaddr, ddp_desc->paddr);
list_del_init(&ddp_desc->list);
kfree(ddp_desc);
}
@@ -2399,8 +2399,8 @@ csio_scsi_free_ddp_bufs(struct csio_scsim *scm, struct 
csio_hw *hw)
list_for_each(tmp, &scm->ddp_freelist) {
ddp_desc = (struct csio_dma_buf *) tmp;
tmp = csio_list_prev(tmp);
-   pci_free_consistent(hw->pdev, ddp_desc->len, ddp_desc->vaddr,
-   ddp_desc->paddr);
+   dma_free_coherent(&hw->pdev->dev, ddp_desc->len,
+ ddp_desc->vaddr, ddp_desc->paddr);
list_del_init(&ddp_desc->list);
kfree(ddp_desc);
}
diff --git a/drivers/scsi/csiostor/csio_wr.c b/drivers/scsi/csiostor/csio_wr.c
index 5022e82ccc4f..dc12933533d5 100644
--- a/drivers/scsi/csiostor/csio_wr.c
+++ b/drivers/scsi/csiostor/csio_wr.c
@@ -124,8 +124,8 @@ csio_wr_fill_fl(struct csio_hw *hw, struct csio_q *flq)
 
while (n--) {
buf->len = sge->sge_fl_

[PATCH 11/28] hpsa: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Tested-by: Don Brace 
Acked-by: Don Brace 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/hpsa.c | 136 ++--
 1 file changed, 69 insertions(+), 67 deletions(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 666ba09e5f42..758ffd6419b4 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -2240,8 +2240,8 @@ static int hpsa_map_ioaccel2_sg_chain_block(struct 
ctlr_info *h,
 
chain_block = h->ioaccel2_cmd_sg_list[c->cmdindex];
chain_size = le32_to_cpu(cp->sg[0].length);
-   temp64 = pci_map_single(h->pdev, chain_block, chain_size,
-   PCI_DMA_TODEVICE);
+   temp64 = dma_map_single(&h->pdev->dev, chain_block, chain_size,
+   DMA_TO_DEVICE);
if (dma_mapping_error(&h->pdev->dev, temp64)) {
/* prevent subsequent unmapping */
cp->sg->address = 0;
@@ -2261,7 +2261,7 @@ static void hpsa_unmap_ioaccel2_sg_chain_block(struct 
ctlr_info *h,
chain_sg = cp->sg;
temp64 = le64_to_cpu(chain_sg->address);
chain_size = le32_to_cpu(cp->sg[0].length);
-   pci_unmap_single(h->pdev, temp64, chain_size, PCI_DMA_TODEVICE);
+   dma_unmap_single(&h->pdev->dev, temp64, chain_size, DMA_TO_DEVICE);
 }
 
 static int hpsa_map_sg_chain_block(struct ctlr_info *h,
@@ -2277,8 +2277,8 @@ static int hpsa_map_sg_chain_block(struct ctlr_info *h,
chain_len = sizeof(*chain_sg) *
(le16_to_cpu(c->Header.SGTotal) - h->max_cmd_sg_entries);
chain_sg->Len = cpu_to_le32(chain_len);
-   temp64 = pci_map_single(h->pdev, chain_block, chain_len,
-   PCI_DMA_TODEVICE);
+   temp64 = dma_map_single(&h->pdev->dev, chain_block, chain_len,
+   DMA_TO_DEVICE);
if (dma_mapping_error(&h->pdev->dev, temp64)) {
/* prevent subsequent unmapping */
chain_sg->Addr = cpu_to_le64(0);
@@ -2297,8 +2297,8 @@ static void hpsa_unmap_sg_chain_block(struct ctlr_info *h,
return;
 
chain_sg = &c->SG[h->max_cmd_sg_entries - 1];
-   pci_unmap_single(h->pdev, le64_to_cpu(chain_sg->Addr),
-   le32_to_cpu(chain_sg->Len), PCI_DMA_TODEVICE);
+   dma_unmap_single(&h->pdev->dev, le64_to_cpu(chain_sg->Addr),
+   le32_to_cpu(chain_sg->Len), DMA_TO_DEVICE);
 }
 
 
@@ -2759,13 +2759,13 @@ static void complete_scsi_command(struct CommandList 
*cp)
return hpsa_cmd_free_and_done(h, cp, cmd);
 }
 
-static void hpsa_pci_unmap(struct pci_dev *pdev,
-   struct CommandList *c, int sg_used, int data_direction)
+static void hpsa_pci_unmap(struct pci_dev *pdev, struct CommandList *c,
+   int sg_used, enum dma_data_direction data_direction)
 {
int i;
 
for (i = 0; i < sg_used; i++)
-   pci_unmap_single(pdev, (dma_addr_t) le64_to_cpu(c->SG[i].Addr),
+   dma_unmap_single(&pdev->dev, le64_to_cpu(c->SG[i].Addr),
le32_to_cpu(c->SG[i].Len),
data_direction);
 }
@@ -2774,17 +2774,17 @@ static int hpsa_map_one(struct pci_dev *pdev,
struct CommandList *cp,
unsigned char *buf,
size_t buflen,
-   int data_direction)
+   enum dma_data_direction data_direction)
 {
u64 addr64;
 
-   if (buflen == 0 || data_direction == PCI_DMA_NONE) {
+   if (buflen == 0 || data_direction == DMA_NONE) {
cp->Header.SGList = 0;
cp->Header.SGTotal = cpu_to_le16(0);
return 0;
}
 
-   addr64 = pci_map_single(pdev, buf, buflen, data_direction);
+   addr64 = dma_map_single(&pdev->dev, buf, buflen, data_direction);
if (dma_mapping_error(&pdev->dev, addr64)) {
/* Prevent subsequent unmap of something never mapped */
cp->Header.SGList = 0;
@@ -2845,7 +2845,8 @@ static u32 lockup_detected(struct ctlr_info *h)
 
 #define MAX_DRIVER_CMD_RETRIES 25
 static int hpsa_scsi_do_simple_cmd_with_retry(struct ctlr_info *h,
-   struct CommandList *c, int data_direction, unsigned long timeout_msecs)
+   struct CommandList *c, enum dma_data_direction data_direction,
+   unsigned long timeout_msecs)
 {
int backoff_time = 10, retry_count = 0;
int rc;
@@ -2969,8 +2970,8 @@ static int hpsa_do_receive_diagnostic(struct ctlr_info 
*h, u8 *scsi3addr,
rc = -1;
goto out;
}
-   rc = hpsa_scsi_do_simple_cmd_with_retry(h, c,
-   PCI_DMA_FROMDEVICE, NO_TIMEOUT);
+   rc = hpsa_scsi_do_simple_cmd_with_retry(h, c, DMA_FROM_DEVICE,
+   NO_TIMEOU

[PATCH 03/28] 3w-xxx: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Acked-by: Adam Radford 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/3w-xxxx.c | 20 ++--
 drivers/scsi/3w-xxxx.h |  1 -
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
index 471366945bd4..a58257645e94 100644
--- a/drivers/scsi/3w-xxxx.c
+++ b/drivers/scsi/3w-xxxx.c
@@ -834,15 +834,17 @@ static int tw_allocate_memory(TW_Device_Extension 
*tw_dev, int size, int which)
 
dprintk(KERN_NOTICE "3w-: tw_allocate_memory()\n");
 
-   cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, 
_handle);
+   cpu_addr = dma_alloc_coherent(_dev->tw_pci_dev->dev,
+   size * TW_Q_LENGTH, _handle, GFP_KERNEL);
if (cpu_addr == NULL) {
-   printk(KERN_WARNING "3w-: pci_alloc_consistent() 
failed.\n");
+   printk(KERN_WARNING "3w-: dma_alloc_coherent() failed.\n");
return 1;
}
 
if ((unsigned long)cpu_addr % (tw_dev->tw_pci_dev->device == 
TW_DEVICE_ID ? TW_ALIGNMENT_6000 : TW_ALIGNMENT_7000)) {
printk(KERN_WARNING "3w-: Couldn't allocate correctly 
aligned memory.\n");
-   pci_free_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, 
cpu_addr, dma_handle);
+   dma_free_coherent(_dev->tw_pci_dev->dev, size * TW_Q_LENGTH,
+   cpu_addr, dma_handle);
return 1;
}
 
@@ -1062,10 +1064,16 @@ static void 
tw_free_device_extension(TW_Device_Extension *tw_dev)
 
/* Free command packet and generic buffer memory */
if (tw_dev->command_packet_virtual_address[0])
-   pci_free_consistent(tw_dev->tw_pci_dev, 
sizeof(TW_Command)*TW_Q_LENGTH, tw_dev->command_packet_virtual_address[0], 
tw_dev->command_packet_physical_address[0]);
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
+   sizeof(TW_Command) * TW_Q_LENGTH,
+   tw_dev->command_packet_virtual_address[0],
+   tw_dev->command_packet_physical_address[0]);
 
if (tw_dev->alignment_virtual_address[0])
-   pci_free_consistent(tw_dev->tw_pci_dev, 
sizeof(TW_Sector)*TW_Q_LENGTH, tw_dev->alignment_virtual_address[0], 
tw_dev->alignment_physical_address[0]);
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
+   sizeof(TW_Sector) * TW_Q_LENGTH,
+   tw_dev->alignment_virtual_address[0],
+   tw_dev->alignment_physical_address[0]);
 } /* End tw_free_device_extension() */
 
 /* This function will send an initconnection command to controller */
@@ -2260,7 +2268,7 @@ static int tw_probe(struct pci_dev *pdev, const struct 
pci_device_id *dev_id)
 
pci_set_master(pdev);
 
-   retval = pci_set_dma_mask(pdev, TW_DMA_MASK);
+   retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (retval) {
printk(KERN_WARNING "3w-xxxx: Failed to set dma mask.");
goto out_disable_device;
diff --git a/drivers/scsi/3w-xxxx.h b/drivers/scsi/3w-xxxx.h
index 69e80c1ed1ca..bd87fbacfbc7 100644
--- a/drivers/scsi/3w-xxxx.h
+++ b/drivers/scsi/3w-xxxx.h
@@ -230,7 +230,6 @@ static unsigned char tw_sense_table[][4] =
 #define TW_IOCTL_TIMEOUT  25 /* 25 seconds */
 #define TW_IOCTL_CHRDEV_TIMEOUT   60 /* 60 seconds */
 #define TW_IOCTL_CHRDEV_FREE  -1
-#define TW_DMA_MASK  DMA_BIT_MASK(32)
 #define TW_MAX_CDB_LEN   16
 
 /* Bitmask macros to eliminate bitfields */
-- 
2.19.1



[PATCH 07/28] atp870u: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/atp870u.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c
index 8996d2329e11..802d15018ec0 100644
--- a/drivers/scsi/atp870u.c
+++ b/drivers/scsi/atp870u.c
@@ -1193,7 +1193,7 @@ static void atp870u_free_tables(struct Scsi_Host *host)
for (k = 0; k < 16; k++) {
if (!atp_dev->id[j][k].prd_table)
continue;
-   pci_free_consistent(atp_dev->pdev, 1024, atp_dev->id[j][k].prd_table, atp_dev->id[j][k].prd_bus);
+   dma_free_coherent(&atp_dev->pdev->dev, 1024, atp_dev->id[j][k].prd_table, atp_dev->id[j][k].prd_bus);
atp_dev->id[j][k].prd_table = NULL;
}
}
@@ -1205,7 +1205,7 @@ static int atp870u_init_tables(struct Scsi_Host *host)
int c,k;
for(c=0;c < 2;c++) {
for(k=0;k<16;k++) {
-   atp_dev->id[c][k].prd_table = pci_alloc_consistent(atp_dev->pdev, 1024, &(atp_dev->id[c][k].prd_bus));
+   atp_dev->id[c][k].prd_table = dma_alloc_coherent(&atp_dev->pdev->dev, 1024, &(atp_dev->id[c][k].prd_bus), GFP_KERNEL);
if (!atp_dev->id[c][k].prd_table) {
printk("atp870u_init_tables fail\n");
atp870u_free_tables(host);
@@ -1509,7 +1509,7 @@ static int atp870u_probe(struct pci_dev *pdev, const 
struct pci_device_id *ent)
if (err)
goto fail;
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
 printk(KERN_ERR "atp870u: DMA mask required but not 
available.\n");
 err = -EIO;
 goto disable_device;
-- 
2.19.1



[PATCH 01/28] aic94xx: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/aic94xx/aic94xx_init.c |  9 ++
 drivers/scsi/aic94xx/aic94xx_task.c | 46 ++---
 2 files changed, 25 insertions(+), 30 deletions(-)

diff --git a/drivers/scsi/aic94xx/aic94xx_init.c 
b/drivers/scsi/aic94xx/aic94xx_init.c
index 1391e5f35918..41c4d8abdd4a 100644
--- a/drivers/scsi/aic94xx/aic94xx_init.c
+++ b/drivers/scsi/aic94xx/aic94xx_init.c
@@ -771,13 +771,8 @@ static int asd_pci_probe(struct pci_dev *dev, const struct 
pci_device_id *id)
goto Err_remove;
 
err = -ENODEV;
-   if (!pci_set_dma_mask(dev, DMA_BIT_MASK(64))
-   && !pci_set_consistent_dma_mask(dev, DMA_BIT_MASK(64)))
-   ;
-   else if (!pci_set_dma_mask(dev, DMA_BIT_MASK(32))
-&& !pci_set_consistent_dma_mask(dev, DMA_BIT_MASK(32)))
-   ;
-   else {
+   if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) {
asd_printk("no suitable DMA mask for %s\n", pci_name(dev));
goto Err_remove;
}
diff --git a/drivers/scsi/aic94xx/aic94xx_task.c 
b/drivers/scsi/aic94xx/aic94xx_task.c
index cdd4ab683be9..7fea344531f6 100644
--- a/drivers/scsi/aic94xx/aic94xx_task.c
+++ b/drivers/scsi/aic94xx/aic94xx_task.c
@@ -42,13 +42,13 @@ static void asd_can_dequeue(struct asd_ha_struct *asd_ha, 
int num)
spin_unlock_irqrestore(&asd_ha->seq.pend_q_lock, flags);
 }
 
-/* PCI_DMA_... to our direction translation.
+/* DMA_... to our direction translation.
  */
 static const u8 data_dir_flags[] = {
-   [PCI_DMA_BIDIRECTIONAL] = DATA_DIR_BYRECIPIENT, /* UNSPECIFIED */
-   [PCI_DMA_TODEVICE]  = DATA_DIR_OUT, /* OUTBOUND */
-   [PCI_DMA_FROMDEVICE]= DATA_DIR_IN, /* INBOUND */
-   [PCI_DMA_NONE]  = DATA_DIR_NONE, /* NO TRANSFER */
+   [DMA_BIDIRECTIONAL] = DATA_DIR_BYRECIPIENT, /* UNSPECIFIED */
+   [DMA_TO_DEVICE] = DATA_DIR_OUT, /* OUTBOUND */
+   [DMA_FROM_DEVICE]   = DATA_DIR_IN,  /* INBOUND */
+   [DMA_NONE]  = DATA_DIR_NONE,/* NO TRANSFER */
 };
 
 static int asd_map_scatterlist(struct sas_task *task,
@@ -60,12 +60,12 @@ static int asd_map_scatterlist(struct sas_task *task,
struct scatterlist *sc;
int num_sg, res;
 
-   if (task->data_dir == PCI_DMA_NONE)
+   if (task->data_dir == DMA_NONE)
return 0;
 
if (task->num_scatter == 0) {
void *p = task->scatter;
-   dma_addr_t dma = pci_map_single(asd_ha->pcidev, p,
+   dma_addr_t dma = dma_map_single(&asd_ha->pcidev->dev, p,
task->total_xfer_len,
task->data_dir);
sg_arr[0].bus_addr = cpu_to_le64((u64)dma);
@@ -79,7 +79,7 @@ static int asd_map_scatterlist(struct sas_task *task,
if (sas_protocol_ata(task->task_proto))
num_sg = task->num_scatter;
else
-   num_sg = pci_map_sg(asd_ha->pcidev, task->scatter,
+   num_sg = dma_map_sg(&asd_ha->pcidev->dev, task->scatter,
task->num_scatter, task->data_dir);
if (num_sg == 0)
return -ENOMEM;
@@ -126,8 +126,8 @@ static int asd_map_scatterlist(struct sas_task *task,
return 0;
 err_unmap:
if (sas_protocol_ata(task->task_proto))
-   pci_unmap_sg(asd_ha->pcidev, task->scatter, task->num_scatter,
-task->data_dir);
+   dma_unmap_sg(&asd_ha->pcidev->dev, task->scatter,
+task->num_scatter, task->data_dir);
return res;
 }
 
@@ -136,21 +136,21 @@ static void asd_unmap_scatterlist(struct asd_ascb *ascb)
struct asd_ha_struct *asd_ha = ascb->ha;
struct sas_task *task = ascb->uldd_task;
 
-   if (task->data_dir == PCI_DMA_NONE)
+   if (task->data_dir == DMA_NONE)
return;
 
if (task->num_scatter == 0) {
dma_addr_t dma = (dma_addr_t)
   le64_to_cpu(ascb->scb->ssp_task.sg_element[0].bus_addr);
-   pci_unmap_single(ascb->ha->pcidev, dma, task->total_xfer_len,
-task->data_dir);
+   dma_unmap_single(&ascb->ha->pcidev->dev, dma,
+task->total_xfer_len, task->data_dir);
return;
}
 
asd_free_coherent(asd_ha, ascb->sg_arr);
if (task->task_proto != SAS_PROTOCOL_STP)
-   

[PATCH 04/28] 3w-sas: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Acked-by: Adam Radford 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/3w-sas.c | 38 +-
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
index 40c1e6e64f58..266bdac75304 100644
--- a/drivers/scsi/3w-sas.c
+++ b/drivers/scsi/3w-sas.c
@@ -644,8 +644,8 @@ static int twl_allocate_memory(TW_Device_Extension *tw_dev, 
int size, int which)
unsigned long *cpu_addr;
int retval = 1;
 
-   cpu_addr = pci_zalloc_consistent(tw_dev->tw_pci_dev, size * TW_Q_LENGTH,
-&dma_handle);
+   cpu_addr = dma_zalloc_coherent(&tw_dev->tw_pci_dev->dev,
+   size * TW_Q_LENGTH, &dma_handle, GFP_KERNEL);
if (!cpu_addr) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x5, "Memory allocation 
failed");
goto out;
@@ -899,19 +899,19 @@ static int twl_fill_sense(TW_Device_Extension *tw_dev, 
int i, int request_id, in
 static void twl_free_device_extension(TW_Device_Extension *tw_dev)
 {
if (tw_dev->command_packet_virt[0])
-   pci_free_consistent(tw_dev->tw_pci_dev,
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
sizeof(TW_Command_Full)*TW_Q_LENGTH,
tw_dev->command_packet_virt[0],
tw_dev->command_packet_phys[0]);
 
if (tw_dev->generic_buffer_virt[0])
-   pci_free_consistent(tw_dev->tw_pci_dev,
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
TW_SECTOR_SIZE*TW_Q_LENGTH,
tw_dev->generic_buffer_virt[0],
tw_dev->generic_buffer_phys[0]);
 
if (tw_dev->sense_buffer_virt[0])
-   pci_free_consistent(tw_dev->tw_pci_dev,
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
sizeof(TW_Command_Apache_Header)*
TW_Q_LENGTH,
tw_dev->sense_buffer_virt[0],
@@ -1571,14 +1571,12 @@ static int twl_probe(struct pci_dev *pdev, const struct 
pci_device_id *dev_id)
pci_set_master(pdev);
pci_try_set_mwi(pdev);
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   TW_PRINTK(host, TW_DRIVER, 0x18, "Failed to set dma 
mask");
-   retval = -ENODEV;
-   goto out_disable_device;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   TW_PRINTK(host, TW_DRIVER, 0x18, "Failed to set dma mask");
+   retval = -ENODEV;
+   goto out_disable_device;
+   }
 
host = scsi_host_alloc(&driver_template, sizeof(TW_Device_Extension));
if (!host) {
@@ -1805,14 +1803,12 @@ static int twl_resume(struct pci_dev *pdev)
pci_set_master(pdev);
pci_try_set_mwi(pdev);
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   TW_PRINTK(host, TW_DRIVER, 0x25, "Failed to set dma 
mask during resume");
-   retval = -ENODEV;
-   goto out_disable_device;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   TW_PRINTK(host, TW_DRIVER, 0x25, "Failed to set dma mask during 
resume");
+   retval = -ENODEV;
+   goto out_disable_device;
+   }
 
/* Initialize the card */
if (twl_reset_sequence(tw_dev, 0)) {
-- 
2.19.1



[PATCH 06/28] a100u2w: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/a100u2w.c | 20 +---
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/a100u2w.c b/drivers/scsi/a100u2w.c
index 23b17621b6d2..00072ed9540b 100644
--- a/drivers/scsi/a100u2w.c
+++ b/drivers/scsi/a100u2w.c
@@ -1094,7 +1094,7 @@ static int inia100_probe_one(struct pci_dev *pdev,
 
if (pci_enable_device(pdev))
goto out;
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+   if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING "Unable to set 32bit DMA "
"on inia100 adapter, ignoring.\n");
goto out_disable_device;
@@ -1124,7 +1124,8 @@ static int inia100_probe_one(struct pci_dev *pdev,
 
/* Get total memory needed for SCB */
sz = ORC_MAXQUEUE * sizeof(struct orc_scb);
-   host->scb_virt = pci_zalloc_consistent(pdev, sz, &host->scb_phys);
+   host->scb_virt = dma_zalloc_coherent(&pdev->dev, sz, &host->scb_phys,
+GFP_KERNEL);
if (!host->scb_virt) {
printk("inia100: SCB memory allocation error\n");
goto out_host_put;
@@ -1132,7 +1133,8 @@ static int inia100_probe_one(struct pci_dev *pdev,
 
/* Get total memory needed for ESCB */
sz = ORC_MAXQUEUE * sizeof(struct orc_extended_scb);
-   host->escb_virt = pci_zalloc_consistent(pdev, sz, &host->escb_phys);
+   host->escb_virt = dma_zalloc_coherent(&pdev->dev, sz, &host->escb_phys,
+ GFP_KERNEL);
if (!host->escb_virt) {
printk("inia100: ESCB memory allocation error\n");
goto out_free_scb_array;
@@ -1177,10 +1179,12 @@ static int inia100_probe_one(struct pci_dev *pdev,
 out_free_irq:
 free_irq(shost->irq, shost);
 out_free_escb_array:
-   pci_free_consistent(pdev, ORC_MAXQUEUE * sizeof(struct 
orc_extended_scb),
+   dma_free_coherent(&pdev->dev,
+   ORC_MAXQUEUE * sizeof(struct orc_extended_scb),
host->escb_virt, host->escb_phys);
 out_free_scb_array:
-   pci_free_consistent(pdev, ORC_MAXQUEUE * sizeof(struct orc_scb),
+   dma_free_coherent(&pdev->dev,
+   ORC_MAXQUEUE * sizeof(struct orc_scb),
host->scb_virt, host->scb_phys);
 out_host_put:
scsi_host_put(shost);
@@ -1200,9 +1204,11 @@ static void inia100_remove_one(struct pci_dev *pdev)
scsi_remove_host(shost);
 
 free_irq(shost->irq, shost);
-   pci_free_consistent(pdev, ORC_MAXQUEUE * sizeof(struct 
orc_extended_scb),
+   dma_free_coherent(&pdev->dev,
+   ORC_MAXQUEUE * sizeof(struct orc_extended_scb),
host->escb_virt, host->escb_phys);
-   pci_free_consistent(pdev, ORC_MAXQUEUE * sizeof(struct orc_scb),
+   dma_free_coherent(&pdev->dev,
+   ORC_MAXQUEUE * sizeof(struct orc_scb),
host->scb_virt, host->scb_phys);
 release_region(shost->io_port, 256);
 
-- 
2.19.1



[PATCH 18/28] pm8001: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
Reviewed-by: Jack Wang 
---
 drivers/scsi/pm8001/pm8001_hwi.c  | 22 +++---
 drivers/scsi/pm8001/pm8001_init.c | 28 +---
 drivers/scsi/pm8001/pm8001_sas.c  |  8 
 drivers/scsi/pm8001/pm80xx_hwi.c  | 22 +++---
 4 files changed, 31 insertions(+), 49 deletions(-)

diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
index e37ab9789ba6..d0bb357034d8 100644
--- a/drivers/scsi/pm8001/pm8001_hwi.c
+++ b/drivers/scsi/pm8001/pm8001_hwi.c
@@ -2420,7 +2420,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, 
void *piomb)
sata_resp = >sata_resp[0];
resp = (struct ata_task_resp *)ts->buf;
if (t->ata_task.dma_xfer == 0 &&
-   t->data_dir == PCI_DMA_FROMDEVICE) {
+   t->data_dir == DMA_FROM_DEVICE) {
len = sizeof(struct pio_setup_fis);
PM8001_IO_DBG(pm8001_ha,
pm8001_printk("PIO read len = %d\n", len));
@@ -4203,12 +4203,12 @@ static int process_oq(struct pm8001_hba_info 
*pm8001_ha, u8 vec)
return ret;
 }
 
-/* PCI_DMA_... to our direction translation. */
+/* DMA_... to our direction translation. */
 static const u8 data_dir_flags[] = {
-   [PCI_DMA_BIDIRECTIONAL] = DATA_DIR_BYRECIPIENT,/* UNSPECIFIED */
-   [PCI_DMA_TODEVICE]  = DATA_DIR_OUT,/* OUTBOUND */
-   [PCI_DMA_FROMDEVICE]= DATA_DIR_IN,/* INBOUND */
-   [PCI_DMA_NONE]  = DATA_DIR_NONE,/* NO TRANSFER */
+   [DMA_BIDIRECTIONAL] = DATA_DIR_BYRECIPIENT, /* UNSPECIFIED */
+   [DMA_TO_DEVICE] = DATA_DIR_OUT, /* OUTBOUND */
+   [DMA_FROM_DEVICE]   = DATA_DIR_IN,  /* INBOUND */
+   [DMA_NONE]  = DATA_DIR_NONE,/* NO TRANSFER */
 };
 void
 pm8001_chip_make_sg(struct scatterlist *scatter, int nr, void *prd)
@@ -4255,13 +4255,13 @@ static int pm8001_chip_smp_req(struct pm8001_hba_info 
*pm8001_ha,
 * DMA-map SMP request, response buffers
 */
sg_req = &task->smp_task.smp_req;
-   elem = dma_map_sg(pm8001_ha->dev, sg_req, 1, PCI_DMA_TODEVICE);
+   elem = dma_map_sg(pm8001_ha->dev, sg_req, 1, DMA_TO_DEVICE);
if (!elem)
return -ENOMEM;
req_len = sg_dma_len(sg_req);
 
sg_resp = &task->smp_task.smp_resp;
-   elem = dma_map_sg(pm8001_ha->dev, sg_resp, 1, PCI_DMA_FROMDEVICE);
+   elem = dma_map_sg(pm8001_ha->dev, sg_resp, 1, DMA_FROM_DEVICE);
if (!elem) {
rc = -ENOMEM;
goto err_out;
@@ -4294,10 +4294,10 @@ static int pm8001_chip_smp_req(struct pm8001_hba_info 
*pm8001_ha,
 
 err_out_2:
dma_unmap_sg(pm8001_ha->dev, &ccb->task->smp_task.smp_resp, 1,
-   PCI_DMA_FROMDEVICE);
+   DMA_FROM_DEVICE);
 err_out:
dma_unmap_sg(pm8001_ha->dev, &ccb->task->smp_task.smp_req, 1,
-   PCI_DMA_TODEVICE);
+   DMA_TO_DEVICE);
return rc;
 }
 
@@ -4376,7 +4376,7 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info 
*pm8001_ha,
u32  opc = OPC_INB_SATA_HOST_OPSTART;
memset(&sata_cmd, 0, sizeof(sata_cmd));
circularQ = &pm8001_ha->inbnd_q_tbl[0];
-   if (task->data_dir == PCI_DMA_NONE) {
+   if (task->data_dir == DMA_NONE) {
ATAP = 0x04;  /* no data*/
PM8001_IO_DBG(pm8001_ha, pm8001_printk("no data\n"));
} else if (likely(!task->ata_task.device_control_reg_update)) {
diff --git a/drivers/scsi/pm8001/pm8001_init.c 
b/drivers/scsi/pm8001/pm8001_init.c
index 501830caba21..d71e7e4ec29c 100644
--- a/drivers/scsi/pm8001/pm8001_init.c
+++ b/drivers/scsi/pm8001/pm8001_init.c
@@ -152,7 +152,7 @@ static void pm8001_free(struct pm8001_hba_info *pm8001_ha)
 
for (i = 0; i < USI_MAX_MEMCNT; i++) {
if (pm8001_ha->memoryMap.region[i].virt_ptr != NULL) {
-   pci_free_consistent(pm8001_ha->pdev,
+   dma_free_coherent(&pm8001_ha->pdev->dev,
(pm8001_ha->memoryMap.region[i].total_len +
pm8001_ha->memoryMap.region[i].alignment),
pm8001_ha->memoryMap.region[i].virt_ptr,
@@ -501,30 +501,12 @@ static int pci_go_44(struct pci_dev *pdev)
 {
int rc;
 
-   if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(44))) {
-   rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(44));
-   if (rc) {
-   rc = pci_set_consistent_dma_mask(pdev,
-   DMA_BIT_MASK(32));
-   if (rc) {
-  

[PATCH 05/28] BusLogic: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/BusLogic.c | 36 +++-
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
index 0d4ffe0ae306..9cee941f97d6 100644
--- a/drivers/scsi/BusLogic.c
+++ b/drivers/scsi/BusLogic.c
@@ -201,8 +201,8 @@ static bool __init blogic_create_initccbs(struct 
blogic_adapter *adapter)
dma_addr_t blkp;
 
while (adapter->alloc_ccbs < adapter->initccbs) {
-   blk_pointer = pci_alloc_consistent(adapter->pci_device,
-   blk_size, &blkp);
+   blk_pointer = dma_alloc_coherent(&adapter->pci_device->dev,
+   blk_size, &blkp, GFP_KERNEL);
if (blk_pointer == NULL) {
blogic_err("UNABLE TO ALLOCATE CCB GROUP - DETACHING\n",
adapter);
@@ -227,15 +227,16 @@ static void blogic_destroy_ccbs(struct blogic_adapter 
*adapter)
next_ccb = ccb->next_all;
if (ccb->allocgrp_head) {
if (lastccb)
-   pci_free_consistent(adapter->pci_device,
+   dma_free_coherent(&adapter->pci_device->dev,
lastccb->allocgrp_size, lastccb,
lastccb->allocgrp_head);
lastccb = ccb;
}
}
if (lastccb)
-   pci_free_consistent(adapter->pci_device, lastccb->allocgrp_size,
-   lastccb, lastccb->allocgrp_head);
+   dma_free_coherent(&adapter->pci_device->dev,
+   lastccb->allocgrp_size, lastccb,
+   lastccb->allocgrp_head);
 }
 
 
@@ -256,8 +257,8 @@ static void blogic_create_addlccbs(struct blogic_adapter 
*adapter,
if (addl_ccbs <= 0)
return;
while (adapter->alloc_ccbs - prev_alloc < addl_ccbs) {
-   blk_pointer = pci_alloc_consistent(adapter->pci_device,
-   blk_size, &blkp);
+   blk_pointer = dma_alloc_coherent(&adapter->pci_device->dev,
+   blk_size, &blkp, GFP_KERNEL);
if (blk_pointer == NULL)
break;
blogic_init_ccbs(adapter, blk_pointer, blk_size, blkp);
@@ -318,8 +319,8 @@ static void blogic_dealloc_ccb(struct blogic_ccb *ccb, int 
dma_unmap)
if (ccb->command != NULL)
scsi_dma_unmap(ccb->command);
if (dma_unmap)
-   pci_unmap_single(adapter->pci_device, ccb->sensedata,
-ccb->sense_datalen, PCI_DMA_FROMDEVICE);
+   dma_unmap_single(&adapter->pci_device->dev, ccb->sensedata,
+ccb->sense_datalen, DMA_FROM_DEVICE);
 
ccb->command = NULL;
ccb->status = BLOGIC_CCB_FREE;
@@ -712,7 +713,7 @@ static int __init blogic_init_mm_probeinfo(struct 
blogic_adapter *adapter)
if (pci_enable_device(pci_device))
continue;
 
-   if (pci_set_dma_mask(pci_device, DMA_BIT_MASK(32)))
+   if (dma_set_mask(&pci_device->dev, DMA_BIT_MASK(32)))
continue;
 
bus = pci_device->bus->number;
@@ -895,7 +896,7 @@ static int __init blogic_init_mm_probeinfo(struct 
blogic_adapter *adapter)
if (pci_enable_device(pci_device))
continue;
 
-   if (pci_set_dma_mask(pci_device, DMA_BIT_MASK(32)))
+   if (dma_set_mask(&pci_device->dev, DMA_BIT_MASK(32)))
continue;
 
bus = pci_device->bus->number;
@@ -952,7 +953,7 @@ static int __init blogic_init_fp_probeinfo(struct 
blogic_adapter *adapter)
if (pci_enable_device(pci_device))
continue;
 
-   if (pci_set_dma_mask(pci_device, DMA_BIT_MASK(32)))
+   if (dma_set_mask(&pci_device->dev, DMA_BIT_MASK(32)))
continue;
 
bus = pci_device->bus->number;
@@ -2040,7 +2041,7 @@ static void blogic_relres(struct blogic_adapter *adapter)
   Release any allocated memory structs not released elsewhere
 */
if (adapter->mbox_space)
-   pci_free_consistent(adapter->pci_device, adapter->mbox_sz,
+   dma_free_coherent(&adapter->pci_device->dev, adapter->mbox_sz,
adapter->mbox_space, adapter->mbox_space_handle);
pci_dev_put(adapter->pci_device);
adapter->mbox_space = NULL;
@@ -2092,8 +2093,9 @

[PATCH 15/28] mvumi: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Also reuse an existing helper (after fixing the error return) to set the
DMA mask instead of having three copies of the code.
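
For readers following along, a rough sketch of what such a consolidated helper
can look like is below; the helper name mvumi_pci_set_master() and the
IS_DMA64 macro are taken from the driver on a best-effort basis and the body
is illustrative only, not a quote of this patch:

	/* Illustrative sketch: set bus mastering and both DMA masks in one
	 * place, and return the error instead of swallowing it. */
	static int mvumi_pci_set_master(struct pci_dev *pdev)
	{
		int ret;

		pci_set_master(pdev);

		if (IS_DMA64) {
			/* Prefer a 64-bit mask, fall back to 32-bit. */
			ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
			if (ret)
				ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
		} else {
			ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
		}

		return ret;
	}

The probe and resume paths can then both call this helper and bail out when it
returns an error, instead of open-coding the mask selection three times.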

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/mvumi.c | 89 ++--
 1 file changed, 36 insertions(+), 53 deletions(-)

diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index b3cd9a6b1d30..2458974d1af6 100644
--- a/drivers/scsi/mvumi.c
+++ b/drivers/scsi/mvumi.c
@@ -143,8 +143,8 @@ static struct mvumi_res *mvumi_alloc_mem_resource(struct 
mvumi_hba *mhba,
 
case RESOURCE_UNCACHED_MEMORY:
size = round_up(size, 8);
-   res->virt_addr = pci_zalloc_consistent(mhba->pdev, size,
-  &res->bus_addr);
+   res->virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size,
+   &res->bus_addr, GFP_KERNEL);
if (!res->virt_addr) {
dev_err(&mhba->pdev->dev,
"unable to allocate consistent mem,"
@@ -175,7 +175,7 @@ static void mvumi_release_mem_resource(struct mvumi_hba 
*mhba)
list_for_each_entry_safe(res, tmp, &mhba->res_list, entry) {
switch (res->type) {
case RESOURCE_UNCACHED_MEMORY:
-   pci_free_consistent(mhba->pdev, res->size,
+   dma_free_coherent(&mhba->pdev->dev, res->size,
res->virt_addr, res->bus_addr);
break;
case RESOURCE_CACHED_MEMORY:
@@ -211,14 +211,14 @@ static int mvumi_make_sgl(struct mvumi_hba *mhba, struct 
scsi_cmnd *scmd,
dma_addr_t busaddr;
 
sg = scsi_sglist(scmd);
-   *sg_count = pci_map_sg(mhba->pdev, sg, sgnum,
-  (int) scmd->sc_data_direction);
+   *sg_count = dma_map_sg(&mhba->pdev->dev, sg, sgnum,
+  scmd->sc_data_direction);
if (*sg_count > mhba->max_sge) {
dev_err(&mhba->pdev->dev,
"sg count[0x%x] is bigger than max sg[0x%x].\n",
*sg_count, mhba->max_sge);
-   pci_unmap_sg(mhba->pdev, sg, sgnum,
-(int) scmd->sc_data_direction);
+   dma_unmap_sg(&mhba->pdev->dev, sg, sgnum,
+scmd->sc_data_direction);
return -1;
}
for (i = 0; i < *sg_count; i++) {
@@ -246,7 +246,8 @@ static int mvumi_internal_cmd_sgl(struct mvumi_hba *mhba, 
struct mvumi_cmd *cmd,
if (size == 0)
return 0;
 
-   virt_addr = pci_zalloc_consistent(mhba->pdev, size, &phy_addr);
+   virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size, &phy_addr,
+   GFP_KERNEL);
if (!virt_addr)
return -1;
 
@@ -274,8 +275,8 @@ static struct mvumi_cmd *mvumi_create_internal_cmd(struct 
mvumi_hba *mhba,
}
INIT_LIST_HEAD(&cmd->queue_pointer);
 
-   cmd->frame = pci_alloc_consistent(mhba->pdev,
-   mhba->ib_max_size, &cmd->frame_phys);
+   cmd->frame = dma_alloc_coherent(&mhba->pdev->dev, mhba->ib_max_size,
+   &cmd->frame_phys, GFP_KERNEL);
if (!cmd->frame) {
dev_err(>pdev->dev, "failed to allocate memory for FW"
" frame,size = %d.\n", mhba->ib_max_size);
@@ -287,7 +288,7 @@ static struct mvumi_cmd *mvumi_create_internal_cmd(struct 
mvumi_hba *mhba,
if (mvumi_internal_cmd_sgl(mhba, cmd, buf_size)) {
dev_err(>pdev->dev, "failed to allocate memory"
" for internal frame\n");
-   pci_free_consistent(mhba->pdev, mhba->ib_max_size,
+   dma_free_coherent(&mhba->pdev->dev, mhba->ib_max_size,
cmd->frame, cmd->frame_phys);
kfree(cmd);
return NULL;
@@ -313,10 +314,10 @@ static void mvumi_delete_internal_cmd(struct mvumi_hba 
*mhba,
phy_addr = (dma_addr_t) m_sg->baseaddr_l |
(dma_addr_t) ((m_sg->baseaddr_h << 16) << 16);
 
-   pci_free_consistent(mhba->pdev, size, cmd->data_buf,
+   dma_free_coherent(&mhba->pdev->dev, size, cmd->data_buf,
phy_addr);
}
-   pci_free_consistent(mhba->pdev, mhba->ib_max_size,
+   dma_free_coherent(&mhba->pdev->dev, mhba->ib_max_size,
  

[PATCH 02/28] 3w-9xxx: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Acked-by: Adam Radford 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/3w-9xxx.c | 50 --
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
index 27521fc3ef5a..05293babb031 100644
--- a/drivers/scsi/3w-9xxx.c
+++ b/drivers/scsi/3w-9xxx.c
@@ -518,7 +518,8 @@ static int twa_allocate_memory(TW_Device_Extension *tw_dev, 
int size, int which)
unsigned long *cpu_addr;
int retval = 1;
 
-   cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, &dma_handle);
+   cpu_addr = dma_alloc_coherent(&tw_dev->tw_pci_dev->dev,
+   size * TW_Q_LENGTH, &dma_handle, GFP_KERNEL);
if (!cpu_addr) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x5, "Memory allocation 
failed");
goto out;
@@ -526,7 +527,8 @@ static int twa_allocate_memory(TW_Device_Extension *tw_dev, 
int size, int which)
 
if ((unsigned long)cpu_addr % (TW_ALIGNMENT_9000)) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x6, "Failed to allocate 
correctly aligned memory");
-   pci_free_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, 
cpu_addr, dma_handle);
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev, size * TW_Q_LENGTH,
+   cpu_addr, dma_handle);
goto out;
}
 
@@ -1027,16 +1029,16 @@ static int twa_fill_sense(TW_Device_Extension *tw_dev, 
int request_id, int copy_
 static void twa_free_device_extension(TW_Device_Extension *tw_dev)
 {
if (tw_dev->command_packet_virt[0])
-   pci_free_consistent(tw_dev->tw_pci_dev,
-   sizeof(TW_Command_Full)*TW_Q_LENGTH,
-   tw_dev->command_packet_virt[0],
-   tw_dev->command_packet_phys[0]);
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
+   sizeof(TW_Command_Full) * TW_Q_LENGTH,
+   tw_dev->command_packet_virt[0],
+   tw_dev->command_packet_phys[0]);
 
if (tw_dev->generic_buffer_virt[0])
-   pci_free_consistent(tw_dev->tw_pci_dev,
-   TW_SECTOR_SIZE*TW_Q_LENGTH,
-   tw_dev->generic_buffer_virt[0],
-   tw_dev->generic_buffer_phys[0]);
+   dma_free_coherent(&tw_dev->tw_pci_dev->dev,
+   TW_SECTOR_SIZE * TW_Q_LENGTH,
+   tw_dev->generic_buffer_virt[0],
+   tw_dev->generic_buffer_phys[0]);
 
kfree(tw_dev->event_queue[0]);
 } /* End twa_free_device_extension() */
@@ -2015,14 +2017,12 @@ static int twa_probe(struct pci_dev *pdev, const struct 
pci_device_id *dev_id)
pci_set_master(pdev);
pci_try_set_mwi(pdev);
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   TW_PRINTK(host, TW_DRIVER, 0x23, "Failed to set dma 
mask");
-   retval = -ENODEV;
-   goto out_disable_device;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   TW_PRINTK(host, TW_DRIVER, 0x23, "Failed to set dma mask");
+   retval = -ENODEV;
+   goto out_disable_device;
+   }
 
host = scsi_host_alloc(&driver_template, sizeof(TW_Device_Extension));
if (!host) {
@@ -2237,14 +2237,12 @@ static int twa_resume(struct pci_dev *pdev)
pci_set_master(pdev);
pci_try_set_mwi(pdev);
 
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
-   if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))
-   || pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
-   TW_PRINTK(host, TW_DRIVER, 0x40, "Failed to set dma 
mask during resume");
-   retval = -ENODEV;
-   goto out_disable_device;
-   }
+   if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
+   dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+   TW_PRINTK(host, TW_DRIVER, 0x40, "Failed to set dma mask during 
resume");
+   retval = -ENODEV;
+   goto out_disable_device;
+   }
 
/* Initialize the card */
if (twa_reset_sequence(tw_dev, 0)) {
-- 
2.19.1



[PATCH 16/28] mvsas: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/mvsas/mv_init.c | 21 +++--
 drivers/scsi/mvsas/mv_sas.c  | 12 ++--
 2 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
index 8c91637cd598..3ac34373746c 100644
--- a/drivers/scsi/mvsas/mv_init.c
+++ b/drivers/scsi/mvsas/mv_init.c
@@ -403,29 +403,14 @@ static int pci_go_64(struct pci_dev *pdev)
 {
int rc;
 
-   if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
-   rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
-   if (rc) {
-   rc = pci_set_consistent_dma_mask(pdev, 
DMA_BIT_MASK(32));
-   if (rc) {
-   dev_printk(KERN_ERR, &pdev->dev,
-  "64-bit DMA enable failed\n");
-   return rc;
-   }
-   }
-   } else {
-   rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+   rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+   if (rc) {
+   rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (rc) {
dev_printk(KERN_ERR, &pdev->dev,
   "32-bit DMA enable failed\n");
return rc;
}
-   rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
-   if (rc) {
-   dev_printk(KERN_ERR, &pdev->dev,
-  "32-bit consistent DMA enable failed\n");
-   return rc;
-   }
}
 
return rc;
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index cff43bd9f675..3df1428df317 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -336,13 +336,13 @@ static int mvs_task_prep_smp(struct mvs_info *mvi,
 * DMA-map SMP request, response buffers
 */
sg_req = &task->smp_task.smp_req;
-   elem = dma_map_sg(mvi->dev, sg_req, 1, PCI_DMA_TODEVICE);
+   elem = dma_map_sg(mvi->dev, sg_req, 1, DMA_TO_DEVICE);
if (!elem)
return -ENOMEM;
req_len = sg_dma_len(sg_req);
 
sg_resp = &task->smp_task.smp_resp;
-   elem = dma_map_sg(mvi->dev, sg_resp, 1, PCI_DMA_FROMDEVICE);
+   elem = dma_map_sg(mvi->dev, sg_resp, 1, DMA_FROM_DEVICE);
if (!elem) {
rc = -ENOMEM;
goto err_out;
@@ -416,10 +416,10 @@ static int mvs_task_prep_smp(struct mvs_info *mvi,
 
 err_out_2:
dma_unmap_sg(mvi->dev, &tei->task->smp_task.smp_resp, 1,
-PCI_DMA_FROMDEVICE);
+DMA_FROM_DEVICE);
 err_out:
dma_unmap_sg(mvi->dev, &tei->task->smp_task.smp_req, 1,
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
return rc;
 }
 
@@ -904,9 +904,9 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct 
sas_task *task,
switch (task->task_proto) {
case SAS_PROTOCOL_SMP:
dma_unmap_sg(mvi->dev, &task->smp_task.smp_resp, 1,
-PCI_DMA_FROMDEVICE);
+DMA_FROM_DEVICE);
dma_unmap_sg(mvi->dev, &task->smp_task.smp_req, 1,
-PCI_DMA_TODEVICE);
+DMA_TO_DEVICE);
break;
 
case SAS_PROTOCOL_SATA:
-- 
2.19.1



[PATCH 10/28] fnic: switch to generic DMA API

2018-10-14 Thread Christoph Hellwig
Switch from the legacy PCI DMA API to the generic DMA API.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/fnic/fnic_fcs.c  | 41 +++
 drivers/scsi/fnic/fnic_main.c | 19 ++--
 drivers/scsi/fnic/fnic_scsi.c | 38 ++--
 drivers/scsi/fnic/vnic_dev.c  | 26 +++---
 4 files changed, 49 insertions(+), 75 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c
index c7bf316d8e83..844ef688fa91 100644
--- a/drivers/scsi/fnic/fnic_fcs.c
+++ b/drivers/scsi/fnic/fnic_fcs.c
@@ -836,8 +836,8 @@ static void fnic_rq_cmpl_frame_recv(struct vnic_rq *rq, 
struct cq_desc
u32 fcp_bytes_written = 0;
unsigned long flags;
 
-   pci_unmap_single(fnic->pdev, buf->dma_addr, buf->len,
-PCI_DMA_FROMDEVICE);
+   dma_unmap_single(&fnic->pdev->dev, buf->dma_addr, buf->len,
+DMA_FROM_DEVICE);
skb = buf->os_buf;
fp = (struct fc_frame *)skb;
buf->os_buf = NULL;
@@ -977,9 +977,8 @@ int fnic_alloc_rq_frame(struct vnic_rq *rq)
skb_reset_transport_header(skb);
skb_reset_network_header(skb);
skb_put(skb, len);
-   pa = pci_map_single(fnic->pdev, skb->data, len, PCI_DMA_FROMDEVICE);
-
-   if (pci_dma_mapping_error(fnic->pdev, pa)) {
+   pa = dma_map_single(&fnic->pdev->dev, skb->data, len, DMA_FROM_DEVICE);
+   if (dma_mapping_error(&fnic->pdev->dev, pa)) {
r = -ENOMEM;
printk(KERN_ERR "PCI mapping failed with error %d\n", r);
goto free_skb;
@@ -998,8 +997,8 @@ void fnic_free_rq_buf(struct vnic_rq *rq, struct 
vnic_rq_buf *buf)
struct fc_frame *fp = buf->os_buf;
struct fnic *fnic = vnic_dev_priv(rq->vdev);
 
-   pci_unmap_single(fnic->pdev, buf->dma_addr, buf->len,
-PCI_DMA_FROMDEVICE);
+   dma_unmap_single(&fnic->pdev->dev, buf->dma_addr, buf->len,
+DMA_FROM_DEVICE);
 
dev_kfree_skb(fp_skb(fp));
buf->os_buf = NULL;
@@ -1018,7 +1017,6 @@ void fnic_eth_send(struct fcoe_ctlr *fip, struct sk_buff 
*skb)
struct ethhdr *eth_hdr;
struct vlan_ethhdr *vlan_hdr;
unsigned long flags;
-   int r;
 
if (!fnic->vlan_hw_insert) {
eth_hdr = (struct ethhdr *)skb_mac_header(skb);
@@ -1038,11 +1036,10 @@ void fnic_eth_send(struct fcoe_ctlr *fip, struct 
sk_buff *skb)
}
}
 
-   pa = pci_map_single(fnic->pdev, skb->data, skb->len, PCI_DMA_TODEVICE);
-
-   r = pci_dma_mapping_error(fnic->pdev, pa);
-   if (r) {
-   printk(KERN_ERR "PCI mapping failed with error %d\n", r);
+   pa = dma_map_single(&fnic->pdev->dev, skb->data, skb->len,
+   DMA_TO_DEVICE);
+   if (dma_mapping_error(&fnic->pdev->dev, pa)) {
+   printk(KERN_ERR "DMA mapping failed\n");
goto free_skb;
}
 
@@ -1058,7 +1055,7 @@ void fnic_eth_send(struct fcoe_ctlr *fip, struct sk_buff 
*skb)
 
 irq_restore:
spin_unlock_irqrestore(&fnic->wq_lock[0], flags);
-   pci_unmap_single(fnic->pdev, pa, skb->len, PCI_DMA_TODEVICE);
+   dma_unmap_single(&fnic->pdev->dev, pa, skb->len, DMA_TO_DEVICE);
 free_skb:
kfree_skb(skb);
 }
@@ -1115,9 +1112,8 @@ static int fnic_send_frame(struct fnic *fnic, struct 
fc_frame *fp)
if (FC_FCOE_VER)
FC_FCOE_ENCAPS_VER(fcoe_hdr, FC_FCOE_VER);
 
-   pa = pci_map_single(fnic->pdev, eth_hdr, tot_len, PCI_DMA_TODEVICE);
-
-   if (pci_dma_mapping_error(fnic->pdev, pa)) {
+   pa = dma_map_single(&fnic->pdev->dev, eth_hdr, tot_len, DMA_TO_DEVICE);
+   if (dma_mapping_error(&fnic->pdev->dev, pa)) {
ret = -ENOMEM;
printk(KERN_ERR "DMA map failed with error %d\n", ret);
goto free_skb_on_err;
@@ -1131,8 +1127,7 @@ static int fnic_send_frame(struct fnic *fnic, struct 
fc_frame *fp)
spin_lock_irqsave(&fnic->wq_lock[0], flags);
 
if (!vnic_wq_desc_avail(wq)) {
-   pci_unmap_single(fnic->pdev, pa,
-tot_len, PCI_DMA_TODEVICE);
+   dma_unmap_single(&fnic->pdev->dev, pa, tot_len, DMA_TO_DEVICE);
ret = -1;
goto irq_restore;
}
@@ -1247,8 +1242,8 @@ static void fnic_wq_complete_frame_send(struct vnic_wq 
*wq,
struct fc_frame *fp = (struct fc_frame *)skb;
struct fnic *fnic = vnic_dev_priv(wq->vdev);
 
-   pci_unmap_single(fnic->pdev, buf->dma_addr,
-buf->len, PCI_DMA_TODEVICE);
+   dma_unmap_single(&fnic->pdev->dev, buf->dma_addr, buf->len,
+DMA_TO_DEVICE);
dev_kfree_skb_irq(f

[PATCH 20/28] qedi: fully convert to the generic DMA API

2018-10-14 Thread Christoph Hellwig
The driver is currently using an odd mix of legacy PCI DMA API and
generic DMA API calls, switch it over to the generic API entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 drivers/scsi/qedi/qedi_main.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index aa96bccb5a96..95d3fce994f6 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -806,11 +806,11 @@ static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi)
memset(&qedi->pf_params.iscsi_pf_params, 0,
   sizeof(qedi->pf_params.iscsi_pf_params));
 
-   qedi->p_cpuq = pci_alloc_consistent(qedi->pdev,
+   qedi->p_cpuq = dma_alloc_coherent(&qedi->pdev->dev,
	qedi->num_queues * sizeof(struct qedi_glbl_q_params),
-   &qedi->hw_p_cpuq);
+   &qedi->hw_p_cpuq, GFP_KERNEL);
if (!qedi->p_cpuq) {
-   QEDI_ERR(>dbg_ctx, "pci_alloc_consistent fail\n");
+   QEDI_ERR(>dbg_ctx, "pci_alloc_coherent fail\n");
rval = -1;
goto err_alloc_mem;
}
@@ -871,7 +871,7 @@ static void qedi_free_iscsi_pf_param(struct qedi_ctx *qedi)
 
if (qedi->p_cpuq) {
size = qedi->num_queues * sizeof(struct qedi_glbl_q_params);
-   pci_free_consistent(qedi->pdev, size, qedi->p_cpuq,
+   dma_free_coherent(&qedi->pdev->dev, size, qedi->p_cpuq,
qedi->hw_p_cpuq);
}
 
-- 
2.19.1



switch most scsi drivers to the generic DMA API v2

2018-10-14 Thread Christoph Hellwig
A lot of SCSI drivers still use the legacy PCI DMA API.  While a few of
them have various oddities that should be dealt with separately, most of
them can be very trivially converted over.

Two interesting things to look out for:

  - pci_(z)alloc_consistent forced GFP_ATOMIC allocations, which is a bad
idea almost all the time.  All these patches switch over to GFP_KERNEL.
The few drivers where we can't do this will be dealt with separately.
  - a lot of odd things were going on when setting the dma mask.  This
series switches to use dma_set_mask_and_coherent where possible.

Changes since v1:
 - minor indentation and comment fixups
 - collected various reviewed-by/acked-by/tested-by tags
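
For reference, the recurring shape of the conversion is roughly the following
sketch; the function and variable names are made up for the example and it is
not a quote from any single patch:

	#include <linux/dma-mapping.h>
	#include <linux/pci.h>

	static int example_setup_dma(struct pci_dev *pdev, size_t size,
				     void **vaddr, dma_addr_t *dma_handle)
	{
		/* One call now sets the streaming and coherent masks together. */
		if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
		    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
			return -ENODEV;

		/*
		 * pci_alloc_consistent() implied GFP_ATOMIC; the generic API
		 * takes the gfp mask explicitly, so probe-time allocations
		 * can use GFP_KERNEL.
		 */
		*vaddr = dma_alloc_coherent(&pdev->dev, size, dma_handle,
					    GFP_KERNEL);
		if (!*vaddr)
			return -ENOMEM;
		return 0;
	}

The matching teardown is dma_free_coherent(&pdev->dev, size, *vaddr,
*dma_handle) wherever pci_free_consistent() was used before.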


Re: [PATCH 02/19] megaraid_sas: Add support for FW snap dump

2018-10-14 Thread Christoph Hellwig
> +
> + instance->snapdump_prop =
> + pci_alloc_consistent(pdev,
> +  sizeof(struct MR_SNAPDUMP_PROPERTIES),
> +  &instance->snapdump_prop_h);

No new calls to the PCI DMA API please.

Please also review my patch titled
"[PATCH 13/28] megaraid_sas: switch to generic DMA API" that I sent to
the list earlier this week and preferably rebase on top of it.
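
For illustration, the same allocation written against the generic DMA API
would look roughly like the sketch below; the struct and field names come from
the quoted hunk, and the surrounding context is assumed rather than a proposed
replacement:

	/* Sketch only: generic-API equivalent of the quoted allocation. */
	instance->snapdump_prop =
		dma_alloc_coherent(&pdev->dev,
				   sizeof(struct MR_SNAPDUMP_PROPERTIES),
				   &instance->snapdump_prop_h, GFP_KERNEL);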

