[PATCH v2] base: dd: fix error return code of driver_sysfs_add()

2021-03-23 Thread Jia-Ju Bai
When device_create_file() fails and returns a non-zero value, no error return code of driver_sysfs_add() is assigned. To fix this bug, ret is assigned with the return value of device_create_file(), and then ret is checked. Reported-by: TOTE Robot Signed-off-by: Jia-Ju Bai --- v2: * Fix a

Re: [PATCH] base: dd: fix error return code of driver_sysfs_add()

2021-03-23 Thread Jia-Ju Bai
On 2021/3/23 21:57, Greg KH wrote: On Fri, Mar 05, 2021 at 02:24:05AM -0800, Jia-Ju Bai wrote: When device_create_file() fails and returns a non-zero value, no error return code of driver_sysfs_add() is assigned. To fix this bug, ret is assigned with the return value of device_create_file(),

Re: [PATCH] base: dd: fix error return code of driver_sysfs_add()

2021-03-23 Thread Greg KH
On Fri, Mar 05, 2021 at 02:24:05AM -0800, Jia-Ju Bai wrote: > When device_create_file() fails and returns a non-zero value, > no error return code of driver_sysfs_add() is assigned. > To fix this bug, ret is assigned with the return value of > device_create_file(), and then ret is checked. > >

Re: [PATCH] base: dd: fix error return code of driver_sysfs_add()

2021-03-05 Thread Rafael J. Wysocki
On Fri, Mar 5, 2021 at 11:24 AM Jia-Ju Bai wrote: > > When device_create_file() fails and returns a non-zero value, > no error return code of driver_sysfs_add() is assigned. > To fix this bug, ret is assigned with the return value of > device_create_file(), and then ret is checked. > >

[PATCH] base: dd: fix error return code of driver_sysfs_add()

2021-03-05 Thread Jia-Ju Bai
When device_create_file() fails and returns a non-zero value, no error return code of driver_sysfs_add() is assigned. To fix this bug, ret is assigned with the return value of device_create_file(), and then ret is checked. Reported-by: TOTE Robot Signed-off-by: Jia-Ju Bai ---
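The pattern the patch describes is small enough to sketch directly. A minimal illustration, assuming a placeholder attribute (dev_attr_example is not the attribute actually handled in drivers/base/dd.c):

    /* Before: the return value of device_create_file() was dropped, so a
     * failure here was silently reported as success. */
    static int driver_sysfs_add_sketch(struct device *dev)
    {
            int ret;

            ret = device_create_file(dev, &dev_attr_example);
            if (ret)
                    return ret;     /* after the fix: propagate the error */

            return 0;
    }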

Re: [PATCH 2/2] driver core: dd: remove deferred_devices variable

2021-02-16 Thread Greg Kroah-Hartman
On Tue, Feb 16, 2021 at 03:34:27PM +0100, Rafael J. Wysocki wrote: > On Tue, Feb 16, 2021 at 3:24 PM Greg Kroah-Hartman > wrote: > > > > No need to save the debugfs dentry for the "devices_deferred" debugfs > > file (gotta love the juxtaposition), if we need to remove it we can look > > it up

Re: [PATCH 2/2] driver core: dd: remove deferred_devices variable

2021-02-16 Thread Rafael J. Wysocki
On Tue, Feb 16, 2021 at 3:24 PM Greg Kroah-Hartman wrote: > > No need to save the debugfs dentry for the "devices_deferred" debugfs > file (gotta love the juxtaposition), if we need to remove it we can look > it up from debugfs itself. > > Cc: "Rafael J. Wysocki" > Cc:

[PATCH 2/2] driver core: dd: remove deferred_devices variable

2021-02-16 Thread Greg Kroah-Hartman
No need to save the debugfs dentry for the "devices_deferred" debugfs file (gotta love the juxtaposition), if we need to remove it we can look it up from debugfs itself. Cc: "Rafael J. Wysocki" Cc: linux-kernel@vger.kernel.org Signed-off-by: Greg Kroah-Hartman --- drivers/base/dd.c | 7 +++
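The resulting create/remove pair, roughly as the changelog describes it (deferred_devs_fops follows the naming in drivers/base/dd.c of that era; treat the details as a sketch):

    /* Creation: the dentry debugfs returns no longer needs to be saved. */
    debugfs_create_file("devices_deferred", 0444, NULL, NULL,
                        &deferred_devs_fops);

    /* Removal: look the file up from debugfs itself at teardown time. */
    debugfs_remove(debugfs_lookup("devices_deferred", NULL));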

[PATCH 10/20] RDMA/hw/hfi1/pio_copy: Provide entry for 'pio_copy()'s 'dd' param

2021-01-26 Thread Lee Jones
Fixes the following W=1 kernel build warning(s): drivers/infiniband/hw/hfi1/pio_copy.c:73: warning: Function parameter or member 'dd' not described in 'pio_copy' Cc: Mike Marciniszyn Cc: Dennis Dalessandro Cc: Doug Ledford Cc: Jason Gunthorpe Cc: linux-r...@vger.kernel.org Signed-off
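Warnings like this are cleared by describing every parameter in the kernel-doc block; a sketch of the shape of the fix (the signature matches pio_copy.c, but the description wording is illustrative, not the exact text of the patch):

    /**
     * pio_copy - copy data block to MMIO space
     * @dd: hfi1 dev data (the @dd line is what the W=1 warning asked for)
     * @pbuf: pio buffer to copy into
     * @pbc: PBC control word sent ahead of the data
     * @from: source buffer
     * @count: number of quantities to copy
     */
    void pio_copy(struct hfi1_devdata *dd, struct pio_buf *pbuf, u64 pbc,
                  const void *from, size_t count);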

Re: [question] What happens when dd writes data to a missing device?

2020-10-11 Thread Douglas Gilbert
On 2020-10-11 3:46 p.m., Mikhail Gavrilov wrote: Hi folks! I have a question. What happens when dd writes data to a missing device? For example: # dd if=/home/mikhail/Downloads/Fedora-Workstation-Live-x86_64-Rawhide-20201010.n.0.iso of=/dev/adb Today I wrongly entered /dev/adb instead

Re: [question] What happens when dd writes data to a missing device?

2020-10-11 Thread Al Viro
On Mon, Oct 12, 2020 at 12:46:03AM +0500, Mikhail Gavrilov wrote: > Hi folks! > I have a question. > What happens when dd writes data to a missing device? > > For example: > # dd > if=/home/mikhail/Downloads/Fedora-Workstation-Live-x86_64-Rawhide-20201010.n.0.iso >

Re: [question] What happens when dd writes data to a missing device?

2020-10-11 Thread Sven-Haegar Koch
On Mon, 12 Oct 2020, Mikhail Gavrilov wrote: > I have a question. > What happens when dd writes data to a missing device? > > For example: > # dd > if=/home/mikhail/Downloads/Fedora-Workstation-Live-x86_64-Rawhide-20201010.n.0.iso > of=/dev/adb > > Today I wrongly

[question] What happens when dd writes data to a missing device?

2020-10-11 Thread Mikhail Gavrilov
Hi folks! I have a question. What happens when dd writes data to a missing device? For example: # dd if=/home/mikhail/Downloads/Fedora-Workstation-Live-x86_64-Rawhide-20201010.n.0.iso of=/dev/adb Today I wrongly entered /dev/adb instead of /dev/sdb, and what a surprise it was when the data
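The surprise has a mundane cause: dd opens its of= target with O_CREAT, so a mistyped device name simply creates an ordinary file on devtmpfs and the image lands there, not on any disk. A self-contained illustration (dd's exact open flags vary with its conv= options):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
            /* No node named /dev/adb exists, so O_CREAT quietly creates a
             * regular file in /dev instead of failing. */
            int fd = open("/dev/adb", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            struct stat st;

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (fstat(fd, &st) == 0)
                    printf("regular file? %s\n",
                           S_ISREG(st.st_mode) ? "yes" : "no");
            close(fd);
            return 0;
    }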

Re: [PATCH v7 0/2] Add support for Tianma nt36672a video mode panel

2020-09-22 Thread Sumit Semwal
Hi Sam / Thierry, On Wed, 2 Sep 2020 at 12:14, Sumit Semwal wrote: > > Some Poco F1 phones from Xiaomi have a FHD+ video mode panel based on the > Novatek NT36672A display controller; Add support for the same. This latest version (v7) had all the outstanding comments fixed, and Bjorn has also

[PATCH v7 0/2] Add support for Tianma nt36672a video mode panel

2020-09-02 Thread Sumit Semwal
Some Poco F1 phones from Xiaomi have a FHD+ video mode panel based on the Novatek NT36672A display controller; Add support for the same. Most of the panel data is taken from the downstream panel dts, and was converted to a drm-panel based driver by me. It has been validated with v5.9-rc1 based
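The shape of such a driver is standard drm_panel boilerplate; a condensed sketch of a DSI video-mode panel registration (the nt36672a_* identifiers here are illustrative rather than the driver's actual names, and drm_panel_add()'s return convention has varied across kernel versions):

    #include <drm/drm_mipi_dsi.h>
    #include <drm/drm_panel.h>

    struct nt36672a_ctx {
            struct drm_panel panel;
            struct mipi_dsi_device *dsi;
    };

    static const struct drm_panel_funcs nt36672a_funcs = {
            /* .prepare/.enable/.disable/.unprepare would replay the
             * NT36672A init sequence taken from the downstream dts */
    };

    static int nt36672a_probe(struct mipi_dsi_device *dsi)
    {
            struct nt36672a_ctx *ctx;
            int ret;

            ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
            if (!ctx)
                    return -ENOMEM;

            ctx->dsi = dsi;
            mipi_dsi_set_drvdata(dsi, ctx);

            drm_panel_init(&ctx->panel, &dsi->dev, &nt36672a_funcs,
                           DRM_MODE_CONNECTOR_DSI);
            drm_panel_add(&ctx->panel);

            ret = mipi_dsi_attach(dsi);
            if (ret < 0)
                    drm_panel_remove(&ctx->panel);

            return ret;
    }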

Re: [PATCH v1] dd: Invoke one probe retry cycle after some initcall levels

2018-09-09 Thread Rafael J. Wysocki
On Mon, Aug 13, 2018 at 7:40 PM Rishabh Bhatnagar wrote: > > From: Rishabh Bhatnagar > > Drivers that are registered at an initcall level may have to > wait until late_init before the probe deferral mechanism can > retry their probe functions. It is possible that their > dependencies were

Re: [PATCH v1] dd: Invoke one probe retry cycle after some initcall levels

2018-09-07 Thread rishabhb
On 2018-08-13 10:39, Rishabh Bhatnagar wrote: From: Rishabh Bhatnagar Drivers that are registered at an initcall level may have to wait until late_init before the probe deferral mechanism can retry their probe functions. It is possible that their dependencies were resolved much earlier, in

[PATCH v1] dd: Invoke one probe retry cycle after some initcall levels

2018-08-13 Thread Rishabh Bhatnagar
From: Rishabh Bhatnagar Drivers that are registered at an initcall level may have to wait until late_init before the probe deferral mechanism can retry their probe functions. It is possible that their dependencies were resolved much earlier, in some cases even before the next initcall level.
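The mechanism proposed here is small; a sketch of one retry cycle hooked onto initcall levels (driver_deferred_probe_trigger() is static in drivers/base/dd.c, so a real patch would live in that file; the hook below is illustrative only):

    #include <linux/init.h>

    static int __init deferred_probe_retry(void)
    {
            /* Requeue every deferred device for one more probe attempt,
             * instead of waiting for the late_initcall retry. */
            driver_deferred_probe_trigger();
            return 0;
    }
    /* One retry pass after, e.g., the subsys and device levels. */
    subsys_initcall_sync(deferred_probe_retry);
    device_initcall_sync(deferred_probe_retry);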

Re: [PATCH v1] dd: Invoke one probe retry cycle after some initcall levels

2018-08-12 Thread Rafael J. Wysocki
On Fri, Aug 10, 2018 at 11:52 PM Rishabh Bhatnagar wrote: > > Drivers that are registered at an initcall level may have to > wait until late_init before the probe deferral mechanism can > retry their probe functions. It is possible that their > dependencies were resolved much earlier, in some

[PATCH v1] dd: Invoke one probe retry cycle after some initcall levels

2018-08-10 Thread Rishabh Bhatnagar
Drivers that are registered at an initcall level may have to wait until late_init before the probe deferral mechanism can retry their probe functions. It is possible that their dependencies were resolved much earlier, in some cases even before the next initcall level. Invoke one probe retry cycle

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-10 Thread Sodagudi Prasad
On 2018-08-10 00:10, Rafael J. Wysocki wrote: On Fri, Aug 10, 2018 at 12:30 AM, wrote: On 2018-08-06 01:53, Rafael J. Wysocki wrote: On Fri, Aug 3, 2018 at 12:20 AM, Sodagudi Prasad wrote: From: RAFAEL J. WYSOCKI Date: Wed, Aug 1, 2018 at 2:21 PM Subject: Re: [PATCH] dd: Invoke one

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-10 Thread Rafael J. Wysocki
On Fri, Aug 10, 2018 at 12:30 AM, wrote: > On 2018-08-06 01:53, Rafael J. Wysocki wrote: >> >> On Fri, Aug 3, 2018 at 12:20 AM, Sodagudi Prasad >> wrote: >>>> >>>> From: RAFAEL J. WYSOCKI >>>> Date: Wed, Aug 1, 2018 at 2:21 PM >>

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-09 Thread rishabhb
On 2018-08-06 01:53, Rafael J. Wysocki wrote: On Fri, Aug 3, 2018 at 12:20 AM, Sodagudi Prasad wrote: From: RAFAEL J. WYSOCKI Date: Wed, Aug 1, 2018 at 2:21 PM Subject: Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level To: Rishabh Bhatnagar Cc: "Rafael J. Wy

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-06 Thread Rafael J. Wysocki
On Fri, Aug 3, 2018 at 12:20 AM, Sodagudi Prasad wrote: >> From: RAFAEL J. WYSOCKI >> Date: Wed, Aug 1, 2018 at 2:21 PM >> Subject: Re: [PATCH] dd: Invoke one probe retry cycle after every >> initcall level >> To: Rishabh Bhatnagar >> Cc: "Rafael J.

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-02 Thread Sodagudi Prasad
From: RAFAEL J. WYSOCKI Date: Wed, Aug 1, 2018 at 2:21 PM Subject: Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level To: Rishabh Bhatnagar Cc: "Rafael J. Wysocki" , Greg Kroah-Hartman , Linux Kernel Mailing List , ckad...@codeaurora.org, ts...@codeaurora.o

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-01 Thread Rafael J. Wysocki
On Wed, Aug 1, 2018 at 11:18 PM, wrote: > On 2018-07-24 01:24, Rafael J. Wysocki wrote: >> >> On Mon, Jul 23, 2018 at 10:22 PM, wrote: >>> >>> On 2018-07-23 04:17, Rafael J. Wysocki wrote: On Thu, Jul 19, 2018 at 11:24 PM, Rishabh Bhatnagar wrote: > > > Drivers

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-08-01 Thread rishabhb
On 2018-07-24 01:24, Rafael J. Wysocki wrote: On Mon, Jul 23, 2018 at 10:22 PM, wrote: On 2018-07-23 04:17, Rafael J. Wysocki wrote: On Thu, Jul 19, 2018 at 11:24 PM, Rishabh Bhatnagar wrote: Drivers that are registered at an initcall level may have to wait until late_init before the

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-07-24 Thread Rafael J. Wysocki
On Mon, Jul 23, 2018 at 10:22 PM, wrote: > On 2018-07-23 04:17, Rafael J. Wysocki wrote: >> >> On Thu, Jul 19, 2018 at 11:24 PM, Rishabh Bhatnagar >> wrote: >>> >>> Drivers that are registered at an initcall level may have to >>> wait until late_init before the probe deferral mechanism can >>>

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-07-23 Thread rishabhb
On 2018-07-23 04:17, Rafael J. Wysocki wrote: On Thu, Jul 19, 2018 at 11:24 PM, Rishabh Bhatnagar wrote: Drivers that are registered at an initcall level may have to wait until late_init before the probe deferral mechanism can retry their probe functions. It is possible that their dependencies

Re: [PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-07-23 Thread Rafael J. Wysocki
On Thu, Jul 19, 2018 at 11:24 PM, Rishabh Bhatnagar wrote: > Drivers that are registered at an initcall level may have to > wait until late_init before the probe deferral mechanism can > retry their probe functions. It is possible that their > dependencies were resolved much earlier, in some

[PATCH] dd: Invoke one probe retry cycle after every initcall level

2018-07-19 Thread Rishabh Bhatnagar
Drivers that are registered at an initcall level may have to wait until late_init before the probe deferral mechanism can retry their probe functions. It is possible that their dependencies were resolved much earlier, in some cases even before the next initcall level. Invoke one probe retry cycle

[PATCH 4.4 021/190] blk-mq: NVMe 512B/4K+T10 DIF/DIX format returns I/O error on dd with split op

2018-04-11 Thread Greg Kroah-Hartman
4.4-stable review patch. If anyone has any objections, please let me know. -- From: Wen Xiong [ Upstream commit f36ea50ca0043e7b1204feaf1d2ba6bd68c08d36 ] When formatting NVMe to 512B/4K + T10 DIF/DIX, dd with split op returns "Input/output error". Looks like the block l

[PATCH 4.9 029/310] blk-mq: NVMe 512B/4K+T10 DIF/DIX format returns I/O error on dd with split op

2018-04-11 Thread Greg Kroah-Hartman
4.9-stable review patch. If anyone has any objections, please let me know. -- From: Wen Xiong [ Upstream commit f36ea50ca0043e7b1204feaf1d2ba6bd68c08d36 ] When formatting NVMe to 512B/4K + T10 DIF/DIX, dd with split op returns "Input/output error". Looks like the block l

[PATCH AUTOSEL for 4.9 029/281] blk-mq: NVMe 512B/4K+T10 DIF/DIX format returns I/O error on dd with split op

2018-03-19 Thread Sasha Levin
From: Wen Xiong [ Upstream commit f36ea50ca0043e7b1204feaf1d2ba6bd68c08d36 ] When formatting NVMe to 512B/4K + T10 DIF/DIX, dd with split op returns "Input/output error". Looks like the block layer splits the bio after calling bio_integrity_prep(bio). This patch fixes the issue. Below is ho

[PATCH AUTOSEL for 4.4 021/167] blk-mq: NVMe 512B/4K+T10 DIF/DIX format returns I/O error on dd with split op

2018-03-19 Thread Sasha Levin
From: Wen Xiong [ Upstream commit f36ea50ca0043e7b1204feaf1d2ba6bd68c08d36 ] When formatting NVMe to 512B/4K + T10 DIF/DIX, dd with split op returns "Input/output error". Looks like the block layer splits the bio after calling bio_integrity_prep(bio). This patch fixes the issue. Below is ho
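The upstream commit these stable backports carry reorders two steps of request submission; a condensed sketch of the shape of the fix (using roughly v4.13-era block-layer signatures, not the verbatim diff):

    static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
    {
            blk_queue_split(q, &bio);       /* split first; previously done
                                             * after integrity prep */

            /* prep now sees the split bio, so the protection information
             * matches what is actually submitted */
            if (!bio_integrity_prep(bio))
                    return BLK_QC_T_NONE;

            /* ... request mapping and submission continue as before ... */
    }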

Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)

2017-11-30 Thread Michal Hocko
On Thu 30-11-17 22:01:03, Wu Fengguang wrote: > On Thu, Nov 30, 2017 at 02:50:16PM +0100, Michal Hocko wrote: > > On Thu 30-11-17 21:38:40, Wu Fengguang wrote: > > > Hello, > > > > > > It looks like a regression in 4.15.0-rc1 -- the test case simply run a > > > set of parallel dd's and there
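For context, GFP_ATOMIC callers cannot sleep or enter reclaim, so they are expected to tolerate NULL under memory pressure; a hypothetical driver helper showing the survivable-failure pattern (my_ring and my_ring_push are invented for illustration):

    #include <linux/slab.h>

    static bool my_rx_refill(struct my_ring *ring)
    {
            void *buf = kmalloc(2048, GFP_ATOMIC); /* may fail; cannot reclaim */

            if (!buf)
                    return false;   /* drop and retry later; never block here */

            my_ring_push(ring, buf);
            return true;
    }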

Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)

2017-11-30 Thread Fengguang Wu
On Thu, Nov 30, 2017 at 10:08:04PM +0800, Fengguang Wu wrote: [ 78.848629] dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null) [ 78.857841] dd cpuset=/ mems_allowed=0-1 [ 78.862502] CPU: 0 PID: 6131 Comm: dd Tainted: G O 4.15.0-rc1 #1

Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)

2017-11-30 Thread Fengguang Wu
[ 78.848629] dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null) [ 78.857841] dd cpuset=/ mems_allowed=0-1 [ 78.862502] CPU: 0 PID: 6131 Comm: dd Tainted: G O 4.15.0-rc1 #1 [ 78.870437] Call Trace: [ 78.873610] [ 78.876342] dump_stack

Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)

2017-11-30 Thread Fengguang Wu
rely on atomic allocations. I just wonder if any changes make the pressure tighter than before. It may not even be an MM change -- in theory drivers might also use atomic allocations more aggressively than before. [...] [ 71.088242] dd: page allocation failure: order:0, mode:0x1080020

Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)

2017-11-30 Thread Michal Hocko
the failure really depends on the state of the free memory and that can vary between runs depending on timing I guess. So I am not really sure this is a regression. But maybe there is something reclaim related going on here. [...] > [ 71.088242] dd: page allocation failure: order:0, > m

Re: [PATCH] base: dd: don't remove driver_data in -EPROBE_DEFER case

2016-05-03 Thread Thierry Reding
On Tue, May 03, 2016 at 08:07:39AM -0700, Greg KH wrote: > On Tue, May 03, 2016 at 03:11:26PM +0200, Thierry Reding wrote: > > On Tue, Mar 08, 2016 at 11:15:22PM +0800, Yi Zhang wrote: > > > the driver_data may be used for sanity check, it fails the > > > probe() if driver_data is NULL after it is

Re: [PATCH] base: dd: don't remove driver_data in -EPROBE_DEFER case

2016-05-03 Thread Greg KH
On Tue, May 03, 2016 at 03:11:26PM +0200, Thierry Reding wrote: > On Tue, Mar 08, 2016 at 11:15:22PM +0800, Yi Zhang wrote: > > the driver_data may be used for sanity check, it fails the > > probe() if driver_data is NULL after it is re-triggered. > > for example, soc_probe() in

Re: [PATCH] base: dd: don't remove driver_data in -EPROBE_DEFER case

2016-05-03 Thread Thierry Reding
On Tue, Mar 08, 2016 at 11:15:22PM +0800, Yi Zhang wrote: > the driver_data may be used for sanity check, it fails the > probe() if driver_data is NULL after it is re-triggered. > for example, soc_probe() in sound/soc/soc-core.c > > Signed-off-by: Yi Zhang > --- > drivers/base/dd.c | 3 ++- > 1

[PATCH] base: dd: don't remove driver_data in -EPROBE_DEFER case

2016-03-08 Thread Yi Zhang
driver_data may be used for sanity checks; probe() fails if driver_data is NULL after it is re-triggered, for example soc_probe() in sound/soc/soc-core.c. Signed-off-by: Yi Zhang --- drivers/base/dd.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff
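The shape of the proposed change, condensed from really_probe()'s failure path in drivers/base/dd.c of that era (illustrative, not the verbatim diff):

    probe_failed:
            devres_release_all(dev);
            driver_sysfs_remove(dev);
            dev->driver = NULL;
            if (ret != -EPROBE_DEFER)       /* the one-line behavioural change */
                    dev_set_drvdata(dev, NULL);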

[PATCH v2 0/2] Add regulator support for pmbus and ltc2978

2014-08-22 Thread atull
From: Alan Tull Version 2 : * Get regulator_init_data from platform data or device tree. * One regulator per pmbus part. * Clean up Kconfig and #includes. This set of patches adds regulator support to pmbus_core.c and to ltc2978.c. Other pmbus parts can use this to add their own
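A condensed sketch of the one-regulator-per-part registration the series describes (the ltc2978_* identifiers are illustrative; init_data comes from platform data or the device tree, as the changelog says):

    #include <linux/regulator/driver.h>

    static const struct regulator_desc ltc2978_reg_desc = {
            .name   = "vout0",
            .id     = 0,
            .type   = REGULATOR_VOLTAGE,
            .owner  = THIS_MODULE,
            .ops    = &ltc2978_reg_ops, /* on/off via the PMBus OPERATION register */
    };

    static int ltc2978_register_regulator(struct device *dev,
                                          struct regulator_init_data *init_data)
    {
            struct regulator_config config = {
                    .dev       = dev,
                    .init_data = init_data,
            };
            struct regulator_dev *rdev;

            rdev = devm_regulator_register(dev, &ltc2978_reg_desc, &config);
            return PTR_ERR_OR_ZERO(rdev);
    }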

[PATCH v2 1/2] drivercore: dd: Move driver_deferred_probe_add/del function down in the code

2014-04-09 Thread Peter Ujfalusi
Move both functions after driver_deferred_probe_trigger() in the source file, since an upcoming patch will need to call the _trigger from _add. Also move the _del so the functions are kept together. Signed-off-by: Peter Ujfalusi --- drivers/base/dd.c | 40

Re: Can't umount /mnt/dev after calling dd(1) and with /mnt/dev is a bind mount

2014-03-30 Thread Francis Moreau
up On 03/22/2014 05:52 PM, Francis Moreau wrote: > Hello, > > I'm posting here because it might be a behaviour related to the kernel > internals that I can't explain from my user point of view :) > > Basically I'm doing this: > > mount -o bind /dev/ /mnt/dev &&

Can't umount /mnt/dev after calling dd(1) and with /mnt/dev is a bind mount

2014-03-22 Thread Francis Moreau
Hello, I'm posting here because it might be a behaviour related to the kernel internals that I can't explain from my user point of view :) Basically I'm doing this: mount -o bind /dev/ /mnt/dev && chroot /mnt dd bs=440 conv=notrunc count=1 if=gptmbr.bin of=/dev/loop0 umount

Re: Can't umount /mnt/dev after calling dd(1) and with /mnt/dev is a bind mount

2014-03-22 Thread Al Viro
On Sat, Mar 22, 2014 at 05:52:24PM +0100, Francis Moreau wrote: I'm posting here because it might be a behaviour related to the kernel internals that I can't explain from my user point of view :) Basically I'm doing this: mount -o bind /dev/ /mnt/dev chroot /mnt dd bs=440 conv

Re: Can't umount /mnt/dev after calling dd(1) and with /mnt/dev is a bind mount

2014-03-22 Thread Francis Moreau
mount -o bind /dev/ /mnt/dev && chroot /mnt dd bs=440 conv=notrunc count=1 if=gptmbr.bin of=/dev/loop0 umount /mnt/dev, but umount gives the following error: "umount: /mnt/dev: target is busy" What do you have in /proc/self/mountinfo before all that? here it is: $ cat /proc/self/mountinfo 15 19 0:3 / /proc

Re: [sched] perf-stat.cpu-migrations changes in dd write test

2013-12-24 Thread Alex Shi
system and how many dd task running? For the case kbuildx/micro/dd-write/4HDD-JBOD-cfq-ext4-10dd It's a KVM with 4 CPUs, 16G memory. There are 4 virtual disks, on top of each disk there runs 10 dd tasks. I don't know much of kvm. Where the data get from, VM guest or VM host? Looks

Re: [sched] perf-stat.cpu-migrations changes in dd write test

2013-12-24 Thread Fengguang Wu
Thanks for your wonder testing, Fengguang! How many CPU in your system and how many dd task running? For the case kbuildx/micro/dd-write/4HDD-JBOD-cfq-ext4-10dd It's a KVM with 4 CPUs, 16G memory. There are 4 virtual disks, on top of each disk there runs 10 dd tasks. I

Re: [sched] perf-stat.cpu-migrations changes in dd write test

2013-12-24 Thread Alex Shi
-stat.cpu-migrations changes by your commit Thanks for your wonder testing, Fengguang! How many CPU in your system and how many dd task running? For the case kbuildx/micro/dd-write/4HDD-JBOD-cfq-ext4-10dd It's a KVM with 4 CPUs, 16G memory. There are 4 virtual disks, on top of each disk

Re: [sched] perf-stat.cpu-migrations changes in dd write test

2013-12-22 Thread Alex Shi
On 12/22/2013 08:30 AM, fengguang...@intel.com wrote: Alex, We noticed some perf-stat.cpu-migrations changes by your commit Thanks for your wonder testing, Fengguang! How many CPU in your system and how many dd task running? 6eb8b571283e64e96ca1a91aad085f9e08f857f0 is the first bad commit

Re: [sched] perf-stat.cpu-migrations changes in dd write test

2013-12-22 Thread Fengguang Wu
On Mon, Dec 23, 2013 at 01:19:04PM +0800, Alex Shi wrote: On 12/22/2013 08:30 AM, fengguang...@intel.com wrote: Alex, We noticed some perf-stat.cpu-migrations changes by your commit Thanks for your wonder testing, Fengguang! How many CPU in your system and how many dd task running

[sched] perf-stat.cpu-migrations changes in dd write test

2013-12-21 Thread fengguang . wu
Comparison of commits 1de157d718f6494 and 6eb8b571283e64e96ca1 for the kbuildx/micro/dd-write/4HDD-JBOD-cfq-ext4-10dd test case: proc-vmstat.pgscan_direct_normal rose from 37779 (~40%) to 81288 (~50%), a +115.2% change.

Re: dd/mke2fs on loopback hangs

2007-11-13 Thread Petar Bogdanovic
On Tue, Nov 06, 2007 at 10:04:51PM +0100, Milan Broz wrote: Petar Bogdanovic wrote: I experience some strange problems with my loopback-on-ext3-setup. After creating a plain `zeroed' dummy-file and doing a /dev/loop/0 on it, every dd or mke2fs hangs while doing certain write()s. Here

Re: dd/mke2fs on loopback hangs

2007-11-07 Thread Petar Bogdanovic
On Tue, Nov 06, 2007 at 10:04:51PM +0100, Milan Broz wrote: > Petar Bogdanovic wrote: > > I experience some strange problems with my loopback-on-ext3-setup. > > After creating a plain `zeroed' dummy-file and doing a /dev/loop/0 on > > it, every dd or mke2fs hangs while doing c
