On Fri, Mar 09, 2018 at 08:00:52AM +0100, Hannes Reinecke wrote:
> On 03/09/2018 04:32 AM, Ming Lei wrote:
> > Hi All,
> >
> > The patches fix reply queue (virt-queue on virtio-scsi) selection in hpsa,
> > megaraid_sas and virtio-scsi; an IO hang can easily be caused by this issue.
> >
> > This
On 03/09/2018 04:32 AM, Ming Lei wrote:
> Hi All,
>
> The patches fix reply queue (virt-queue on virtio-scsi) selection in hpsa,
> megaraid_sas and virtio-scsi; an IO hang can easily be caused by this issue.
>
> This issue is triggered by 84676c1f21e8 ("genirq/affinity: assign vectors
> to all
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, March 8, 2018 4:54 PM
> To: Kashyap Desai
> Cc: Jens Axboe; linux-bl...@vger.kernel.org; Christoph Hellwig; Mike
Snitzer;
> linux-scsi@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sandoval;
> Martin K
On 07/03/18 05:01, Martin K. Petersen wrote:
This patch series adds OCXL support to the cxlflash driver. With this
support, new devices using the OCXL transport will be supported by the
cxlflash driver along with the existing CXL devices. An effort is made
to keep this transport specific
On 27/02/18 09:21, Uma Krishnan wrote:
Per the OCXL specification, the maximum PASID supported by the AFU is
indicated by a field within the configuration space. Similar to acTags,
implementations can choose to use any sub-range of PASID within their
assigned range. For cxlflash, the entire
On 27/02/18 09:21, Uma Krishnan wrote:
The OCXL specification supports distributing acTags amongst different
AFUs and functions on the link. As cxlflash devices are expected to only
support a single AFU and function, the entire range that was assigned to
the function is also assigned to the AFU.
On 27/02/18 09:21, Uma Krishnan wrote:
The host AFU configuration is read on the initialization path to identify
the features and configuration of the AFU. This data is cached for use in
later configuration steps.
Signed-off-by: Uma Krishnan
Acked-by: Matthew R.
On 27/02/18 09:20, Uma Krishnan wrote:
The OCXL specification supports distributing acTags amongst different
AFUs and functions on the link. The platform-specific acTag range for the
link is obtained using the OCXL provider services and then assigned to the
host function based on implementation.
Now 84676c1f21e8ff5 ("genirq/affinity: assign vectors to all possible CPUs")
has been merged into v4.16-rc, and it is easy for some irq vectors to be
allocated only offline CPUs; this can't be avoided even though the
allocation is improved.
For example, on an 8-core VM where CPUs 4-7 are not present/offline, 4 queues
From 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs"),
one msix vector can be created without any online CPU mapped; then a
command may be queued, and won't be notified after its completion.
This patch sets up the mapping between CPU and reply queue according to the irq
affinity info
From 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs"),
one msix vector can be created without any online CPU mapped; then a
command's completion may not be notified.
This patch sets up the mapping between CPU and reply queue according to the irq
affinity info retrieved by
Hi All,
The patches fix reply queue (virt-queue on virtio-scsi) selection in hpsa,
megaraid_sas and virtio-scsi; an IO hang can easily be caused by this issue.
This issue is triggered by 84676c1f21e8 ("genirq/affinity: assign vectors
to all possible CPUs"). After 84676c1f21e8, it is easy to see
On Thu, 8 Mar 2018 11:52:25 -0800, Kees Cook wrote:
> On Thu, Mar 8, 2018 at 5:22 AM, Stephen Kitt wrote:
> > -static const int num_critical_sections = sizeof(critical_sections)
> > - / sizeof(*critical_sections);
> >
On 03/08/2018 07:11 AM, Souptick Joarder wrote:
> Use dma_pool_zalloc() instead of dma_pool_alloc + memset
>
> Signed-off-by: Souptick Joarder
> ---
> drivers/scsi/ipr.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/ipr.c
On Thu, Mar 8, 2018 at 1:38 PM, Stephen Kitt wrote:
> In preparation to enabling -Wvla, remove VLAs and replace them with
> fixed-length arrays instead.
>
> bfad_bsg.c uses a variable-length array declaration to measure the
> size of a putative array; this can be replaced by the
In preparation to enabling -Wvla, remove VLAs and replace them with
fixed-length arrays instead.
bfad_bsg.c uses a variable-length array declaration to measure the
size of a putative array; this can be replaced by the product of the
size of an element and the number of elements, avoiding the VLA
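The replacement the patch describes can be sketched in a few lines of plain C; the struct name here is hypothetical, standing in for whatever element type bfad_bsg.c actually measures:

```c
#include <stddef.h>

/* Hypothetical element type standing in for the driver's real struct. */
struct bfa_port_stats {
	unsigned long long tx_frames;
	unsigned long long rx_frames;
};

/* Before: a variable-length array declared only to measure a buffer:
 *   struct bfa_port_stats tmp[nports];        (a VLA)
 *   size_t len = sizeof(tmp);
 *
 * After: the same size as a plain product, with no VLA involved. */
static size_t stats_buf_len(size_t nports)
{
	return sizeof(struct bfa_port_stats) * nports;
}
```

The product form is a constant expression whenever `nports` is, so it also survives `-Wvla` cleanly in callers with fixed counts.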
On Thu, Mar 8, 2018 at 12:51 PM, Stephen Kitt wrote:
> In preparation to enabling -Wvla, remove VLAs and replace them with
> fixed-length arrays instead.
>
> The arrays fixed here, using the number of constant sections, aren't
> really VLAs, but they appear so to the compiler.
On Thu, 2018-03-08 at 21:42 +0800, Ming Lei wrote:
> On Wed, Mar 07, 2018 at 09:11:37AM -0500, Laurence Oberman wrote:
> > On Tue, 2018-03-06 at 14:24 -0500, Martin K. Petersen wrote:
> > > Ming,
> > >
> > > > Given both Don and Laurence have verified that patch 1 and
> > > > patch 2
> > > > does
In preparation to enabling -Wvla, remove VLAs and replace them with
fixed-length arrays instead.
The arrays fixed here, using the number of constant sections, aren't
really VLAs, but they appear so to the compiler. Replace the array
sizes with a pre-processor-level constant instead using
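A minimal sketch of why the compiler flags these arrays and how a preprocessor-level constant fixes it; the table contents are made up for illustration, and the macro expands to the same expression as the kernel's `ARRAY_SIZE()`:

```c
#include <stddef.h>

/* Hypothetical table standing in for the driver's critical_sections[]. */
static const int critical_sections[] = { 1, 2, 3, 5, 8 };

/* Before: a 'static const int' count is not a constant expression in C,
 * so later array declarations look like VLAs under -Wvla:
 *   static const int num_critical_sections =
 *           sizeof(critical_sections) / sizeof(*critical_sections);
 *   int flags[num_critical_sections];          (flagged as a VLA)
 *
 * After: a preprocessor-level constant keeps the size a true constant
 * expression. */
#define NUM_CRITICAL_SECTIONS \
	(sizeof(critical_sections) / sizeof(*critical_sections))

static size_t count_critical_sections(void)
{
	int flags[NUM_CRITICAL_SECTIONS]; /* fixed-length, not a VLA */

	(void)flags;
	return NUM_CRITICAL_SECTIONS;
}
```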
> Hi Meelis,
>
> This issue should already be addressed by a very recent commit:
>
> 6a2cf8d3663e13e1 scsi: qla2xxx: Fix crashes in qla2x00_probe_one on probe
> failure
What tree is that commit in?
--
Meelis Roos (mr...@linux.ee)
On Thu, Mar 8, 2018 at 5:22 AM, Stephen Kitt wrote:
> In preparation to enabling -Wvla, remove VLAs and replace them with
> fixed-length arrays instead.
>
> The arrays fixed here, using the number of constant sections, aren't
> really VLAs, but they appear so to the compiler. Since
Menion,
> So, assuming that there is no disconnection at USB level (and there is
> not, since I don't get any log of it), the question is: what can trigger
> a probe or call sd_revalidate_disk? Can it be the filesystem?
revalidate is a function of either device discovery following a
> Hi Meelis,
>
> This issue should already be addressed by a very recent commit:
>
> 6a2cf8d3663e13e1 scsi: qla2xxx: Fix crashes in qla2x00_probe_one on probe
> failure
Good, will test.
> Furthermore, the additions in qla2x00_remove_one of:
>
> + qla2x00_mem_free(ha);
> +
> +
On 08/03/18 15:56, Bart Van Assche wrote:
On Thu, 2018-03-08 at 07:59 +, Tvrtko Ursulin wrote:
However there is a different bug in my patch relating to the last entry,
which can have a shorter length than the rest. So get_order on the last
entry is incorrect - I have to store the deduced
Hi Li Wei,
On 2018/2/13 10:14, Li Wei wrote:
> arm64: dts: add ufs node for Hisilicon.
>
> Signed-off-by: Li Wei
Fine to me. Thanks!
Acked-by: Wei Xu
Best Regards,
Wei
> ---
> arch/arm64/boot/dts/hisilicon/hi3660.dtsi | 19 +++
>
On Thu, Mar 08, 2018 at 08:45:25AM, Meelis Roos wrote:
> When firmware init fails, qla2x00_probe_one() does a double free of the req and rsp
> queues and possibly other structures allocated by qla2x00_mem_alloc().
> Fix it by pulling out the qla2x00_mem_free() and qla2x00_free_queues() invocations
> from
On Thu, 2018-03-08 at 07:59 +, Tvrtko Ursulin wrote:
> However there is a different bug in my patch relating to the last entry,
> which can have a shorter length than the rest. So get_order on the last
> entry is incorrect - I have to store the deduced order and carry it over.
Will that work
On Thu, 2018-03-08 at 09:41 +0100, Hannes Reinecke wrote:
> I.e. the _entire_ request set is allocated as _one_ array, making it quite
> hard to handle from the lower-level CPU caches.
> Also the 'node' indicator doesn't really help us here, as the requests
> have to be accessed by all CPUs in the
When firmware init fails, qla2x00_probe_one() does a double free of the req and rsp
queues and possibly other structures allocated by qla2x00_mem_alloc().
Fix it by pulling the qla2x00_mem_free() and qla2x00_free_queues() invocations
out of qla2x00_free_device() and calling them manually where needed, and
Fix an obvious copy-paste error in freeing the QLAFX00 response queue - the code
checked for rsp->ring but freed rsp->ring_fx00.
Signed-off-by: Meelis Roos
---
drivers/scsi/qla2xxx/qla_os.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On Wed, Mar 07, 2018 at 09:11:37AM -0500, Laurence Oberman wrote:
> On Tue, 2018-03-06 at 14:24 -0500, Martin K. Petersen wrote:
> > Ming,
> >
> > > Given both Don and Laurence have verified that patch 1 and patch 2
> > > does fix IO hang, could you consider to merge the two first?
> >
> > Oh,
In preparation to enabling -Wvla, remove VLAs and replace them with
fixed-length arrays instead.
The arrays fixed here, using the number of constant sections, aren't
really VLAs, but they appear so to the compiler. Since we know at
build-time how many critical sections there are, we might as well
Use dma_pool_zalloc() instead of dma_pool_alloc + memset
Signed-off-by: Souptick Joarder
---
drivers/scsi/ipr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index e07dd99..97387be 100644
---
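The consolidation this patch applies can be sketched outside the kernel with a toy allocator; `pool_alloc()`/`pool_zalloc()` are hypothetical userspace stand-ins for `dma_pool_alloc()`/`dma_pool_zalloc()`:

```c
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for dma_pool_alloc(): returns uninitialized memory. */
static void *pool_alloc(size_t size)
{
	return malloc(size);
}

/* Before the patch, callers open-coded pool_alloc() followed by a
 * memset(). After: one call that returns already-zeroed memory,
 * mirroring what dma_pool_zalloc() does for the ipr driver. */
static void *pool_zalloc(size_t size)
{
	void *p = pool_alloc(size);

	if (p)
		memset(p, 0, size);
	return p;
}
```

Beyond brevity, the combined call keeps the zeroing next to the allocation, so a later refactor cannot accidentally drop the memset.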
On Thu, Mar 08, 2018 at 07:06:25PM +0800, Ming Lei wrote:
> On Thu, Mar 08, 2018 at 03:34:31PM +0530, Kashyap Desai wrote:
> > > -Original Message-
> > > From: Ming Lei [mailto:ming@redhat.com]
> > > Sent: Thursday, March 8, 2018 6:46 AM
> > > To: Kashyap Desai
> > > Cc: Jens Axboe;
Neither; there are no dbg packages for kernel PPAs in Ubuntu :(
2018-03-08 12:10 GMT+01:00 Steffen Maier :
>
> On 03/08/2018 12:07 PM, Menion wrote:
>>
>> Unfortunately the Ubuntu kernel is not configured for ftrace or
>> kprobe, and I am operating this server so I am not sure if
On 03/08/2018 12:07 PM, Menion wrote:
Unfortunately the Ubuntu kernel is not configured for ftrace or
kprobe, and I am operating this server so I am not sure if I will
eventually find the time and the risk to install a self-compiled
kernel
systemtap?
Currently the DMA mask for UFS HCI is set by reading the CAP register's
[64AS] bit. Some HCI controllers, like Exynos, support 36-bit bus addresses.
This works perfectly fine with the DMA mask set to 64 in case there is no
IOMMU attached to the HCI.
If the HCI is behind an IOMMU, setting the DMA mask to 64 bit won't
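The width selection described above can be sketched as a pure function. The 36-bit cap for Exynos-style controllers is an assumption for illustration; in the real driver the chosen mask would be handed to `dma_set_mask_and_coherent()`:

```c
#include <stdint.h>

/* Same shape as the kernel's DMA_BIT_MASK() macro. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* 64AS bit of the UFSHCI CAP register (bit position assumed here). */
#define CAP_64AS (1U << 24)

/* Pick the addressing width from CAP[64AS], optionally capped at a
 * platform limit (e.g. 36 for controllers that only drive 36 address
 * lines); max_addr_bits == 0 means "no platform cap". */
static uint64_t ufs_pick_dma_mask(uint32_t cap, int max_addr_bits)
{
	int bits = (cap & CAP_64AS) ? 64 : 32;

	if (max_addr_bits && max_addr_bits < bits)
		bits = max_addr_bits;
	return DMA_BIT_MASK(bits);
}
```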
Unfortunately the Ubuntu kernel is not configured for ftrace or
kprobe, and I am operating this server so I am not sure if I will
eventually find the time and the risk to install a self-compiled
kernel
2018-03-08 11:53 GMT+01:00 Steffen Maier :
>
> On 03/08/2018 11:34
On Thu, Mar 08, 2018 at 03:34:31PM +0530, Kashyap Desai wrote:
> > -Original Message-
> > From: Ming Lei [mailto:ming@redhat.com]
> > Sent: Thursday, March 8, 2018 6:46 AM
> > To: Kashyap Desai
> > Cc: Jens Axboe; linux-bl...@vger.kernel.org; Christoph Hellwig; Mike
> Snitzer;
> >
On Thu, Mar 08, 2018 at 08:54:43AM +0100, Christoph Hellwig wrote:
> > + /* 256 tags should be high enough to saturate device */
> > + int max_queues = DIV_ROUND_UP(h->scsi_host->can_queue, 256);
> > +
> > + /* per NUMA node hw queue */
> > + h->scsi_host->nr_hw_queues = min_t(int,
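The queue-count heuristic in the quoted hunk can be sketched as follows; since the second `min_t()` operand is cut off in the quote, it is assumed here to be the NUMA node count:

```c
/* Same shape as the kernel's DIV_ROUND_UP() macro. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* One hw queue per NUMA node, but no more queues than needed to give
 * each queue ~256 tags (enough to saturate the device, per the quoted
 * comment). */
static int pick_nr_hw_queues(int can_queue, int nr_numa_nodes)
{
	int max_queues = DIV_ROUND_UP(can_queue, 256);

	return min_int(nr_numa_nodes, max_queues);
}
```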
On 03/08/2018 11:34 AM, Menion wrote:
I did some more tests.
This log comes specifically from the functions sd_read_capacity / sd_revalidate_disk.
From what I can see, it seems that it is called only when probing
newly attached devices.
A quick look at the code shows that it is called by sd_revalidate_disk.
Anyhow, I checked something that I should have checked from the beginning.
I have stopped smartd and I still get this log, so it is something
else doing it. Does anyone have an idea how to find out which
subsystem is calling read_capacity_10 again and again?
2018-03-08 10:16 GMT+01:00
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, March 8, 2018 6:46 AM
> To: Kashyap Desai
> Cc: Jens Axboe; linux-bl...@vger.kernel.org; Christoph Hellwig; Mike
Snitzer;
> linux-scsi@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sandoval;
> Martin K
On Thu, Mar 08, 2018 at 08:52:52AM +0100, Christoph Hellwig wrote:
> On Tue, Feb 27, 2018 at 06:07:46PM +0800, Ming Lei wrote:
> > This patch can support to partition host-wide tags to multiple hw queues,
> > so each hw queue related data structures(tags, hctx) can be accessed in
> > NUMA locality
On Thu, Mar 08, 2018 at 09:41:16AM +0100, Hannes Reinecke wrote:
> On 03/08/2018 09:15 AM, Ming Lei wrote:
> > On Thu, Mar 08, 2018 at 08:50:35AM +0100, Christoph Hellwig wrote:
> >>> +static void hpsa_setup_reply_map(struct ctlr_info *h)
> >>> +{
> >>> + const struct cpumask *mask;
> >>> +
Hi
I have tried it, but it does not work:
[   39.230095] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
[   39.338032] sd 0:0:0:1: [sdb] Very big device. Trying to use READ CAPACITY(16).
[   39.618268] sd 0:0:0:2: [sdc] Very big device. Trying to use READ CAPACITY(16).
[
Currently the DMA mask for UFS HCI is set by reading the CAP register's
[64AS] bit. Some HCI controllers, like Exynos, support 36-bit bus addresses.
This works perfectly fine with the DMA mask set to 64 in case there is no
IOMMU attached to the HCI.
If the HCI is behind an IOMMU, setting the DMA mask to 64 bit won't
When SCSI disks go wrong frequently, and with a serial console
attached, tasks may be blocked in the following flow for more than 10s:
[  557.369580] <> [] blkcg_print_blkgs+0x76/0xf0
  -> wait for blkg->q->queue_lock
[  557.369581] [] cfqg_print_rwstat_recursive+0x36/0x40
[ 557.369583]
On 03/08/2018 09:15 AM, Ming Lei wrote:
> On Thu, Mar 08, 2018 at 08:50:35AM +0100, Christoph Hellwig wrote:
>>> +static void hpsa_setup_reply_map(struct ctlr_info *h)
>>> +{
>>> + const struct cpumask *mask;
>>> + unsigned int queue, cpu;
>>> +
>>> + for (queue = 0; queue < h->msix_vectors;
I tried a newer kernel (vmlinuz-4.16.0-rc3-00203-g8da5db7) on my Sun
E280R with onboard QLA2200 and it did not work. I fixed some
lower-hanging fruit (the error handler assuming qla2400 and up, a double
dma free in the error path, and a copy-paste typo in freeing some data
that I did not hit - will
On Thu, Mar 08, 2018 at 08:50:35AM +0100, Christoph Hellwig wrote:
> > +static void hpsa_setup_reply_map(struct ctlr_info *h)
> > +{
> > + const struct cpumask *mask;
> > + unsigned int queue, cpu;
> > +
> > + for (queue = 0; queue < h->msix_vectors; queue++) {
> > + mask =
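The truncated hunk above builds a per-CPU reply-queue map from each MSI-X vector's IRQ affinity mask. A minimal userspace sketch of the same idea, where `masks[]` is a hypothetical stand-in for `pci_irq_get_affinity()` and `NR_CPUS` is fixed at 8 for illustration:

```c
#define NR_CPUS 8

/* Walk each vector's affinity mask and record, per CPU, which reply
 * queue serves it; masks[queue][cpu] != 0 means the vector for that
 * queue is affine to that CPU. */
static void setup_reply_map(int reply_map[NR_CPUS],
			    const unsigned char masks[][NR_CPUS],
			    int nr_vectors)
{
	for (int queue = 0; queue < nr_vectors; queue++)
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if (masks[queue][cpu])
				reply_map[cpu] = queue;
}
```

With such a map, the submission path can pick the reply queue for the current CPU instead of one whose vector may have no online CPU mapped, which is the hang the cover letter describes.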
On 03/07/2018 07:49 PM, Himanshu Madhani wrote:
> Commit 7d64c39e64310 fixed a regression in FCP discovery when the
> Nport Handle is in use and relogin is triggered. However,
> during FCP and FC-NVMe discovery this resulted in only
> discovering NVMe LUNs.
>
> This patch fixes an issue where FCP and
Hi,
On 07/03/18 18:30, James Bottomley wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Firstly, I don't see any justifiable benefit to churning this API, so
why bother? But secondly, this:
Primarily because I wanted to extend